
In A Nutshell
- A simple gaze pattern — looking at an object, making eye contact, then looking back — strongly signals to others that you want to communicate.
- People were up to 96% likely to interpret this sequence as a clear request for help, much higher than other eye contact patterns.
- The effect was the same whether the gaze came from a human-like avatar or a robot, suggesting applications for social robots and AI.
- This study was conducted online with virtual agents; more research is needed to confirm how this works in real-life interactions.
ADELAIDE, South Australia — Someone glances at an object, locks eyes with another person, then looks back at the same object. According to research, that simple sequence carries a powerful message about whether someone—or something—wants to communicate.
Researchers from universities across three countries studied 137 people to decode exactly when eye contact signals real communication intent. Their discovery, published in Royal Society Open Science, could influence how robots interact with humans and help people become better communicators.
The Most Convincing Gaze Pattern Isn’t What Scientists Expected
The study revealed that timing and context matter far more than simply making eye contact. The most convincing communicative signal occurred when agents looked at an object, made eye contact, then looked back at the same object, a pattern researchers called “Intervene-Same.” This sequence made people 96% likely to believe the agent wanted something from them.
“Participants were most likely, and fastest, to perceive a request when eye contact occurred between two averted gaze shifts towards the same object,” the researchers wrote in their paper.
Early eye contact, in which agents looked directly at the participant before turning to any object, convinced people only 41% of the time, ranking among the least convincing signals. The finding runs counter to the expectation that any direct gaze would carry strong communicative weight; timing and context proved decisive.

How Scientists Tested the Trust Factor
The research team tested both human-like avatars and robots resembling the iCub robot used in artificial intelligence research. Participants watched these agents perform different gaze sequences while sitting behind a virtual table with three colored blocks. Their task: decide whether the agent needed help getting one of the blocks or was just looking around.
A clear hierarchy of trust emerged. After the powerful “Intervene-Same” pattern, repeated eye contact (looking directly at the participant, then at an object, then making eye contact again) convinced people 89% of the time. At the bottom sat conditions with no eye contact at all (27% trust rate) and early eye contact scenarios.
Participants responded fastest when they felt most certain about an agent’s intentions. In highly communicative conditions, people quickly decided to help. In scenarios with unclear signals, they hesitated longer before making decisions.
Why Robots and Humans Triggered the Same Response
One striking discovery was that these patterns worked equally well for both human-looking agents and robots. Despite participants rating humans higher on human-likeness and liveliness while scoring robots higher on likability, the eye contact effects remained consistent across both.
The consistency suggests that people read communicative intent from gaze patterns regardless of who, or what, is doing the looking, as long as the eyes appear reasonably human-like.
Nathan Caruana from Flinders University, who led the research, designed the study to understand how people naturally interpret social signals in real-time interactions. The team measured not just whether participants thought agents wanted something, but how quickly they made those decisions, a sign of confidence in their judgments.
Real-World Applications Beyond the Lab
As robots become increasingly common in homes, hospitals, and workplaces, understanding how to program convincing social signals could determine whether people accept or reject these artificial companions. Current social robots often rely on obvious gestures like pointing or speaking to communicate needs. Subtle gaze patterns could create more natural, intuitive interactions that feel less mechanical.
The research also offers practical insights for human communication. People naturally use these gaze patterns, but understanding them consciously could help in situations where clear non-verbal communication matters—from classroom teaching to workplace presentations to social interactions for people who find eye contact challenging.
The study’s methodology was notably rigorous, drawing participants from diverse ethnic backgrounds and using a careful experimental design that reduced potential biases. Researchers tested six different gaze conditions across 288 trials per participant, with each person interacting with both human and robot agents.
However, the research was limited to screen-based interactions rather than face-to-face encounters, and all participants knew they were interacting with artificial agents. Future research needs to test whether these patterns hold up in live, physical interactions and across different cultural contexts, since most participants came from Western backgrounds.
When someone wants to communicate through glances alone, the secret appears to be looking at what matters, making eye contact, then looking back at what matters again.
Disclaimer: This study was conducted entirely online with screen-based avatars and robot simulations. The authors note that real-life, face-to-face interactions may differ, and more research is needed to confirm whether these results hold up in physical, in-person contexts and across different cultures.
Paper Summary
Methodology
Researchers recruited 137 participants through an online platform to complete a screen-based task. Participants watched computer-generated agents (both human-looking avatars and robots) perform different gaze sequences while seated behind a virtual table with three colored blocks. The agents made three gaze movements per trial, and participants had to decide whether the agent wanted one of the blocks or was just looking around. The study tested six conditions: no eye contact, early eye contact, eye contact between looks at different objects, eye contact between looks at the same object, late eye contact, and repeated eye contact. Each participant completed 288 trials across both human and robot agents, with trial orders randomized to prevent bias.
Results
The study found significant differences between all gaze conditions in how communicative they appeared. The “Intervene-Same” condition (eye contact between two looks at the same object) was perceived as most communicative, with 96% of participants choosing to help the agent. This was followed by repeated eye contact (89%), eye contact between looks at different objects (63%), late eye contact (57%), early eye contact (41%), and no eye contact (27%). Participants also responded faster when they were more certain about the agent’s intentions. The same patterns held equally for both human-like and robot agents, despite participants rating them differently on human-likeness and other characteristics.
Limitations
The study was conducted entirely online using screen-based interactions rather than face-to-face encounters. All participants knew they were interacting with artificial agents, not real humans. The sample was ethnically diverse but primarily represented Western cultural backgrounds, limiting generalizability to other cultures. The research focused on brief gaze sequences and didn’t examine longer, more complex communication patterns. Future studies need to test whether these findings hold up in physical, real-world interactions and across different cultural contexts.
Funding and Disclosures
The research was supported by an Experimental Psychology Society small grant. The authors declared no competing interests. All study materials, data, and analysis code were made publicly available on the Open Science Framework, following transparent research practices.
Publication Information
This study was published in Royal Society Open Science on July 16, 2025, authored by Nathan Caruana (Flinders University), Friederike Charlotte Hechler (Macquarie University and Universität Potsdam), Emily S. Cross (ETH Zurich), and Emmanuele Tidoni (University of Leeds). The paper was titled “The temporal context of eye contact influences perceptions of communicative intent” and can be found at doi:10.1098/rsos.250277.