The Electronic Privacy Information Center (EPIC) and 27 other organizations have submitted a letter urging the videoconferencing company Zoom to abandon its plans to develop and incorporate emotion-tracking software into its platform. The letter lays out the organizations’ concerns about the planned software.
Emotion Analysis
Zoom’s in-house exploration of an AI for monitoring human behavior in real time follows the likes of Uniphore and Sybill, which already provide similar services. The technology is aimed primarily at sales calls: the hope is that emotion analysis, drawn from facial expressions, body language, and speech, will improve how well virtual sales calls go.
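Zoom has said little about how such a system would work under the hood, but multimodal emotion analysis is often described as scoring each signal (face, voice, language) separately and then combining the scores. The sketch below is purely illustrative: the modality weights, the emotion labels, and the `fuse_emotion_scores` helper are assumptions made for the example, not anything Zoom has disclosed.

```python
# Illustrative sketch of "late fusion" multimodal emotion scoring.
# None of these names, weights, or labels come from Zoom; they are
# assumptions for the sake of example.

MODALITY_WEIGHTS = {"face": 0.4, "voice": 0.35, "language": 0.25}  # assumed weights

def fuse_emotion_scores(scores_by_modality: dict[str, dict[str, float]]) -> dict[str, float]:
    """Weight each modality's per-emotion scores and sum them."""
    fused: dict[str, float] = {}
    for modality, scores in scores_by_modality.items():
        weight = MODALITY_WEIGHTS.get(modality, 0.0)
        for emotion, score in scores.items():
            fused[emotion] = fused.get(emotion, 0.0) + weight * score
    return fused

# Hypothetical per-modality outputs for one moment in a call.
example = {
    "face": {"engaged": 0.7, "bored": 0.3},
    "voice": {"engaged": 0.5, "bored": 0.5},
    "language": {"engaged": 0.9, "bored": 0.1},
}
print(fuse_emotion_scores(example))  # ≈ {'engaged': 0.68, 'bored': 0.32}
```

Even in this toy form, the fragility the letter objects to is visible: the output depends entirely on the assumed weights and on the premise that each signal reliably maps to an inner emotion.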
Josh Dulberger, Head of Product, Data and AI at Zoom, spoke about the new development to Protocol. Currently the feature, called Zoom IQ for Sales, provides a post-meeting report rather than real-time analysis. The report provides a conversation transcript and feedback on the meeting, including whether the potential customer seemed engaged. While AI-based transcription services can make mistakes, Dulberger seems confident in Zoom’s techniques: “We’re looking at things like speaker cadence and other factors in the linguistic approach to try to disentangle one speaker from another”.
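Zoom has not published the models behind Zoom IQ for Sales. As a rough illustration of what post-meeting sentiment feedback over a transcript can look like, the sketch below uses the open-source VADER analyzer as a stand-in, applied to a made-up, already-diarized transcript; the speaker names and utterances are hypothetical.

```python
# Illustrative stand-in for post-meeting sentiment feedback, not Zoom's
# actual pipeline: score each transcript line with the open-source VADER
# analyzer and report a rough per-speaker sentiment summary.
# pip install vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

# Hypothetical diarized transcript: (speaker, utterance) pairs.
transcript = [
    ("Rep", "Thanks for joining! Excited to walk you through the product."),
    ("Prospect", "Sure. I only have fifteen minutes, though."),
    ("Prospect", "Honestly, the pricing page was confusing."),
    ("Rep", "Totally fair, let me clear that up right now."),
]

scores: dict[str, list[float]] = {}
for speaker, utterance in transcript:
    # 'compound' is VADER's normalized sentiment score in [-1, 1].
    compound = analyzer.polarity_scores(utterance)["compound"]
    scores.setdefault(speaker, []).append(compound)

for speaker, values in scores.items():
    avg = sum(values) / len(values)
    print(f"{speaker}: average sentiment {avg:+.2f} over {len(values)} turns")
```

A word-level sentiment score is a long way from "this customer felt engaged", which is precisely the gap the letter's signatories highlight.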
One quick look at Zoom’s website makes it clear that the company presents customer happiness and security as core values. The organizations behind this letter feel this latest development goes against those values.
Letter to Zoom
There are concerns that mining users’ recordings for emotional data points violates privacy and human rights. The letter urges Zoom to halt its plans for this feature. The main concerns are as follows:
- Based on pseudoscience: Experts agree that emotion analysis does not work. Facial expressions are often disconnected from the emotions underneath, and research has found that even humans often cannot accurately read or measure the emotions of others. Developing this tool adds credence to pseudoscience and puts your reputation at stake.
- Discriminatory: Emotion AI, like facial recognition, is inherently biased. It has connections to practices like physiognomy, which have been proven misleading, erroneous, and racist. These tools assume that all people use the same facial expressions, voice patterns, and body language, but that is not true. Adding this feature will discriminate against certain ethnicities and people with disabilities, hardcoding stereotypes into millions of devices.
- Manipulative: You are already marketing sentiment analysis as a way for businesses to capitalize on users and close deals faster. Offering emotion analysis to monitor users and mine them for profit will further expand this manipulative practice.
- Punitive: The use of this bad technology could be dangerous for students, workers, and other users if their employers, academic institutions, or other organizations decide to discipline them for “expressing the wrong emotions” based on the determinations of this AI technology.
- A data security risk: Harvesting deeply personal data could make any entity that deploys this tech a target for snooping government authorities and malicious hackers.
Reputation in the Industry
The end of the letter urges Zoom to consider its influence as a leader in its space, with a responsibility to set the course for other companies, and to make clear that this technology has no place in video communications. Fight for the Future and the other organizations have asked Zoom to respond publicly by May 20, 2022, and commit to not implementing emotion AI.
Are you confused about privacy within your company? Value Privacy’s experts can take care of any privacy concerns you may have. Contact us to find out how we can help.