Even AI is not immune to first impressions
Published on 28/04/2026
What does AI need to look like to be welcomed as part of a workplace team? A study co-authored by NEOMA researcher Agata Mirowska reveals that first impressions play a crucial role in determining how much trust we place in these new work partners. Why? Because artificial intelligence isn't just a technical tool; it’s also a social object.
Before long, working with artificial intelligence could become as commonplace as collaborating with a colleague. In company settings, researchers are already talking about “hybrid teams”, where humans and artificial agents work side by side. But one question remains: are these new partners really accepted?
As it turns out, AI — just like any new colleague — doesn't get a second chance to make a good first impression. So, what should AI look like to be readily accepted as part of a team? This question lies at the heart of a study conducted by two researchers, including NEOMA’s Agata Mirowska. The findings suggest that as these systems grow more autonomous, integrating them into the workplace is no longer just a technical issue; it also becomes a social one.
AI as a social actor
In this study, the primary focus is not on invisible algorithms or intangible tools operating in the background. Rather, it examines agents with a distinct physical appearance and personality traits. Imagine avatars, some more realistic than others, that appear on your computer screen and engage with you to help you perform your tasks.
First and foremost, this “teammate 2.0” must fit seamlessly into a human team and interact with its members like any other colleague. The idea is that, in addition to being embedded for its performance value, AI also becomes a fully-fledged social actor.
This means that its initial acceptance depends on how effectively it fits into the workplace environment. Put simply, even before assessing what AI can do, humans judge whether it “belongs” on the team.
The first‑impression test
The researchers carried out a series of experiments to understand how this initial judgment is formed. Participants were asked to assess AI agents presented to them as potential future members of a workplace team.
The AIs varied in their degree of human‑likeness, gender and temperament, which ranged from warm to performance‑oriented. In practice, the volunteers were shown a kind of ID card featuring a photo and a brief description of the AI’s temperament.
They were then asked whether they found the agents odd, whether they trusted them and whether they would agree to work with them. The aim was to try to understand what happens in the first few seconds of meeting an artificial “colleague”.
A question of credibility
The first takeaway is that appearance matters, and it matters a lot. The more realistic the agent’s visual representation, the less strange it seemed and the more willing participants were to work with it. In contrast, an agent with a cartoon-like appearance was more often judged less credible in a professional environment.
But appearance isn't everything. Attitude and context also play a role. The researchers show that the same AI — with identical appearance and personality — may be embraced in one environment while eliciting greater reluctance in another. To take one example: a male agent characterised as warm tends to be less accepted in logistics contexts, while being viewed most positively in workplaces with a strong social dimension.
In fact, what matters most is that the agent’s traits are appropriate to the setting: there must be consistency between the agent’s behaviour, its apparent gender and the professional environment it operates in. If one of these aspects does not align with the stereotypes associated with the role, the AI is more likely to be dismissed.
So, even before an artificial agent gets down to work, it is assessed as a potential member of the group, just as with any new arrival. Does it meet the expected criteria? Does it conform to the stereotypes of the profession? In the first instance, it doesn't matter what it can do: what is important is whether it has the attitude and image that “fits” the job.
Designing acceptable AI agents
These findings point to the often invisible, but very real, barriers to embedding AI agents into human teams. They remind us that integration relies not only on technical prowess, but also on human and organisational factors that are still largely overlooked in the design process.
The researchers, however, underline an important limitation of the study: participants did not engage in actual interactions with the agents. Consequently, their responses cannot be considered reflective of a real work experience. More immersive studies would help improve our understanding of how these initial perceptions evolve over time.
One thing is clear: as AI becomes more integrated into the workplace, the challenge will be not only to make it efficient but also to ensure that people genuinely want to work with it.
Find out more
Mirowska, A., & Arsenyan, J. (2025). The A(I) team: Effects of human‑likeness and conformity to gender stereotypes on initial trust and willingness to work with an AI teammate. Journal of Organizational Behavior, job.70009. https://doi.org/10.1002/job.70009
About the author
Dr. Agata Mirowska is Assistant Professor of Human Resources Management and Organizational Behavior at NEOMA Business School. Her research focuses on the role of technology in the workplace and, in particular, on people’s reactions to artificial intelligence taking on tasks and roles traditionally performed by humans.