Emotion artificial intelligence uses biological signals such as vocal tone, facial expressions and data from wearable devices, as well as text and patterns in how people use their computers, promising to detect and predict how someone is feeling. It is used in contexts both mundane, like entertainment, and high stakes, like the workplace, hiring and health care.
A wide range of industries already use emotion AI, including call centers, finance, banking, nursing and caregiving. Over 50% of large employers in the U.S. use emotion AI aiming to infer employees' internal states, a practice that grew during the COVID-19 pandemic. For example, call centers monitor what their operators say and their tone of voice.
Scholars have raised concerns about emotion AI's scientific validity and its reliance on contested theories about emotion. They have also highlighted emotion AI's potential for invading privacy and exhibiting racial, gender and disability bias.
Some employers use the technology as though it were flawless, while some scholars seek to reduce its bias and improve its validity, others aim to discredit it altogether, and still others suggest banning emotion AI, at least until more is known about its implications.
I study the social implications of technology. I believe that it is crucial to examine emotion AI's implications for people subjected to it, such as workers—especially those marginalized by their race, gender or disability status.
Workers' concerns
To understand where emotion AI use in the workplace is going, my colleague Karen Boyd and I set out to examine inventors' conceptions of emotion AI in the workplace. We analyzed patent applications that proposed emotion AI technologies for the workplace. Benefits claimed by patent applicants included assessing and supporting employee well-being, ensuring workplace safety, increasing productivity and aiding in decision-making, such as in promotions, firings and task assignments.
We wondered what workers think about these technologies. Would they also perceive these benefits? For example, would workers find it beneficial for employers to provide well-being support to them?
My collaborators Shanley Corvite, Kat Roemmich, Tillie Ilana Rosenberg and I conducted a survey partly representative of the U.S. population and partly oversampled for people of color, trans and nonbinary people and people living with mental illness. These groups may be more likely to experience harm from emotion AI. Our study had 289 participants from the representative sample and 106 participants from the oversample. We found that 32% of respondents reported experiencing or expecting no benefit to them from emotion AI use, whether current or anticipated, in their workplace.
While some workers noted potential benefits of emotion AI use in the workplace like increased well-being support and workplace safety, mirroring benefits claimed in patent applications, all also expressed concerns. They were concerned about harm to their well-being and privacy, harm to their work performance and employment status, and bias and mental health stigma against them.
For example, 51% of participants expressed concerns about privacy, 36% noted the potential for incorrect inferences employers would accept at face value, and 33% expressed concern that emotion AI-generated inferences could be used to make unjust employment decisions.
Participants' voices
One participant who had multiple health conditions said, "The awareness that I am being analyzed would ironically have a negative effect on my mental health." In other words, despite emotion AI's claimed goal of inferring and improving workers' well-being in the workplace, its use can lead to the opposite effect: well-being diminished due to a loss of privacy. Indeed, other work by my colleagues Roemmich, Florian Schaub and me suggests that emotion AI-induced privacy loss can span a range of privacy harms, including psychological, autonomy, economic, relationship, physical and discrimination harms.
On concerns that emotional surveillance could jeopardize their job, a participant with a diagnosed mental health condition said, "They could decide that I am no longer a good fit at work and fire me. Decide I'm not capable enough and not give a raise, or think I'm not working enough."
Participants in the study also mentioned the potential for exacerbated power imbalances and said they were afraid of the dynamic they would have with employers if emotion AI were integrated into their workplace, pointing to how emotion AI use could potentially intensify already existing tensions in the employer-worker relationship. For instance, a respondent said, "The amount of control that employers already have over employees suggests there would be few checks on how this information would be used. Any 'consent' [by] employees is largely illusory in this context."
Lastly, participants noted potential harms, such as emotion AI's technical inaccuracies potentially creating false impressions about workers, and emotion AI creating and perpetuating bias and stigma against workers. In describing these concerns, participants highlighted their fear of employers relying on inaccurate and biased emotion AI systems, particularly against people of color, women and trans individuals.
For example, one participant said, "Who is deciding what expressions 'look violent,' and how can one determine people as a threat just from the look on their face? A system can read faces, sure, but not minds. I just cannot see how this could actually be anything but destructive to minorities in the workplace."
Participants noted that they would either refuse to work at a place that uses emotion AI—an option not available to many—or engage in behaviors to make emotion AI read them favorably to protect their privacy. One participant said, "I would exert a massive amount of energy masking even when alone in my office, which would make me very distracted and unproductive," pointing to how emotion AI use would impose additional emotional labor on workers.
Worth the harm?
These findings indicate that emotion AI exacerbates existing challenges experienced by workers in the workplace, despite proponents claiming emotion AI helps solve these problems.
Even if emotion AI works as claimed and measures what it purports to measure, and even if issues with bias are addressed in the future, workers still experience harms, such as the additional emotional labor and the loss of privacy.
If these technologies do not measure what they claim or they are biased, then people are at the mercy of algorithms deemed to be valid and reliable when they are not. Workers would still need to expend the effort to try to reduce the chances of being misread by the algorithm, or to engage in emotional displays that would read favorably to the algorithm.
Either way, these systems function as panopticon-like technologies, creating privacy harms and feelings of being watched.
This article is republished from The Conversation under a Creative Commons license. Read the original article.