Emotional Intelligence

Reading Facial Expressions in Video Calls: What Science Tells Us

Video calls transmit facial expressions but can also obscure them. Learn what science says about reading faces remotely and how AI can augment human perception.

The Challenge of Digital Face Reading

Humans are wired to read faces. We process facial expressions automatically, often without conscious awareness. But video calls introduce challenges:

  • Reduced visual information: Lower resolution, compression artifacts, poor lighting

  • Eye contact disconnect: Looking at the camera is not the same as mutual gaze

  • Display limitations: Small windows, grid views, self-view distraction

  • Latency: Micro-delays disrupt natural conversational rhythm

With 18,100 monthly searches for "facial expressions," people clearly want to understand this skill better, and for good reason.

The Science of Facial Expressions

Universal Expressions

Research by Paul Ekman and others identified facial expressions that appear universal across cultures:

  • Happiness: Raised cheeks, crow's feet around eyes

  • Sadness: Inner eyebrow raise, lip corner depression

  • Anger: Lowered brows, tightened jaw

  • Fear: Raised eyebrows, widened eyes, tense lower eyelids

  • Disgust: Wrinkled nose, raised upper lip

  • Surprise: Raised eyebrows, dropped jaw

  • Contempt: One-sided lip raise

Microexpressions

These are brief (1/25th to 1/5th of a second), involuntary expressions that reveal concealed emotions. They flash across the face before conscious control can suppress them.

In video calls, microexpressions are harder to catch due to frame rate limitations and attention fragmentation.
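To make the timing concrete, here is a quick arithmetic sketch of how few frames a microexpression can span. The frame rates are common video-call settings chosen for illustration, not figures from any particular platform:

```python
# How many video frames can capture a microexpression?
# Microexpressions last roughly 1/25 s (40 ms) to 1/5 s (200 ms).

def frames_capturing(duration_s: float, fps: float) -> int:
    """Number of whole frames that fall within an expression's duration."""
    return int(duration_s * fps)

for fps in (15, 30, 60):
    fast = frames_capturing(1 / 25, fps)   # shortest microexpression
    slow = frames_capturing(1 / 5, fps)    # longest microexpression
    print(f"{fps} fps: {fast}-{slow} frames")  # e.g. "30 fps: 1-6 frames"
```

At 15 fps the fastest microexpressions can fall entirely between frames (zero captured frames), which is one reason degraded connections make faces harder to read.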

What Video Calls Reveal and Conceal

What You Can See

  • Overall emotional tone: General happiness, frustration, or engagement

  • Sustained expressions: Emotions held for more than a second

  • Major shifts: Clear transitions from one emotional state to another

  • Deliberate expressions: Smiles, nods, frowns intended to communicate

What Gets Lost

  • Subtle microexpressions: Too fast for typical video frame rates

  • Eye behavior details: Pupil dilation, exact gaze direction

  • Lower face nuance: Often cut off by camera framing

  • Body language integration: Posture, hand gestures, physical orientation

Improving Your Perception

Technical Improvements

  • Improve lighting: Front-facing, diffused light reduces shadows that hide expressions

  • Camera positioning: Eye level, showing full face including mouth

  • Stable connection: Prioritize video quality; compression destroys facial detail

  • Full-screen viewing: Larger images reveal more

Attentional Strategies

  • Focus on the speaker: Avoid multitasking during important conversations

  • Watch for transitions: Changes in expression often matter more than static states

  • Notice incongruence: When words and expressions do not match, pay attention

  • Track over time: Build a baseline for how this person normally presents

The Role of AI in Facial Analysis

Modern emotion AI addresses limitations of human perception in video calls:

Frame-by-Frame Analysis

AI can analyze every frame, catching microexpressions that humans miss due to attention limits or the eye's limited temporal resolution.
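As a minimal sketch of the idea: given a per-frame score for one expression (assumed to come from some upstream facial-expression model; the threshold, frame rate, and scores below are invented for illustration), flagging brief above-threshold runs looks like this:

```python
# Sketch: flag brief expression spikes in per-frame scores.
# Assumes an upstream model produced a score in [0, 1] per frame
# for one expression (e.g. "fear"). Threshold and fps are illustrative.

FPS = 30
THRESHOLD = 0.6

def brief_spikes(scores, max_duration_s=0.2):
    """Return (start_frame, end_frame) runs above THRESHOLD that are
    short enough to be microexpression candidates."""
    runs, start = [], None
    for i, s in enumerate(scores):
        if s >= THRESHOLD and start is None:
            start = i                      # spike begins
        elif s < THRESHOLD and start is not None:
            runs.append((start, i - 1))    # spike ends
            start = None
    if start is not None:
        runs.append((start, len(scores) - 1))
    max_frames = max_duration_s * FPS
    return [(a, b) for a, b in runs if (b - a + 1) <= max_frames]

scores = [0.1] * 10 + [0.8, 0.9, 0.7] + [0.1] * 10  # 3-frame spike (~100 ms)
print(brief_spikes(scores))  # [(10, 12)]
```

A sustained expression produces a long run and is filtered out; only sub-200 ms flashes survive, which is exactly the signal human attention tends to miss.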

Pattern Recognition

Machine learning models trained on millions of faces can identify subtle expression patterns that require expertise to spot manually.

Objective Tracking

AI provides consistent measurement without the biases and attention failures that affect human perception.

Multi-Modal Integration

Advanced systems combine facial analysis with vocal analysis, creating a more complete picture than either channel alone.
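A simple way to picture multi-modal integration is late fusion: average the two channels' per-emotion estimates. The labels, scores, and weights below are made up for illustration; real systems typically learn the fusion rather than hand-weighting it:

```python
# Sketch: late fusion of facial and vocal channel estimates.
# All values here are hypothetical, chosen only to show the mechanics.

def fuse(facial: dict, vocal: dict, w_face: float = 0.6) -> dict:
    """Weighted average of two per-emotion score dicts."""
    w_voice = 1.0 - w_face
    return {k: round(w_face * facial[k] + w_voice * vocal[k], 3)
            for k in facial}

facial = {"engaged": 0.7, "frustrated": 0.2}   # from face analysis
vocal  = {"engaged": 0.3, "frustrated": 0.6}   # from voice analysis
print(fuse(facial, vocal))  # {'engaged': 0.54, 'frustrated': 0.36}
```

When the channels disagree, as above, the fused estimate lands between them, which is precisely the "more complete picture" a single channel cannot give.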

Ethical Considerations

Facial analysis raises important questions:

Consent

People should generally know when their facial expressions are being analyzed by AI.

Cultural Context

While basic expressions appear universal, display rules vary significantly across cultures.

Over-Reliance

Facial expressions are one data point among many. They can be misinterpreted, culturally variable, or intentionally controlled.

Disability and Neurodivergence

Some people naturally present facial expressions differently. Systems must account for individual variation.

Practical Applications

Sales and Customer Conversations

Notice when engagement drops or confusion appears. Adjust your message accordingly.

Job Interviews

Observe candidate comfort levels with different topics. Follow up where you notice tension.

Team Meetings

Track overall team engagement. If faces show disengagement, the meeting approach may need adjustment.

Negotiations

Watch for expressions that contradict stated positions. Confidence and uncertainty have different faces.

Key Takeaways

1. Video calls introduce challenges to natural facial expression reading
2. Basic emotions show in consistent facial patterns across cultures
3. Microexpressions often occur too quickly for conscious human perception
4. Technical setup significantly affects what can be perceived
5. AI provides frame-by-frame analysis that supplements human perception

Reading faces in video calls requires more deliberate attention than in-person interaction, but with the right approach and tools, it remains a valuable source of information.

Pavis Team

Research & Development

The Pavis Team researches conversation intelligence, emotional AI, and behavioral psychology to help professionals communicate more effectively.
