A collective of 28 human rights organizations has urged Zoom to halt its work on an emotion tracking system that aims to analyze the engagement and sentiment of users. The idea sounds invasive from the get-go, but in terms of its end goal, it is not the first project of its kind. There is already a slew of software products targeted at remote workers that track everything from mouse clicks to web activity on behalf of employers.

These products have already divided opinion, but the monitoring of user activity comes in all shapes and forms, and sometimes from extremely unsuspecting sources. A recent investigation uncovered that a significant number of mental health and meditation apps mined user data and handed it over to third parties for advertising. The scope of such monitoring widened in the pandemic era under the guise of corporate solutions for assessing the availability and attentiveness of workers. It appears that Zoom, one of the biggest success stories of this dramatic shift in how we work, has been engaged in a concerning project of its own.

Related: Zoom Will Now Bust You If You're Late To A Meeting

Fight for the Future, alongside 27 other human rights organizations, has written an open letter to Zoom, calling on it to halt its AI-driven emotion tracking software, which studies the facial expressions of video call participants. The organizations classified the software as “discriminatory, manipulative, potentially dangerous,” further pointing out that it rests on the flawed assumption that markers such as voice patterns, facial expressions, and body language are uniform across all people. There is valid concern behind the criticism, as facial recognition algorithms and other AI-based products have proved error-prone when processing imagery depicting people of color and varied body types. According to a report from Protocol, the video-conferencing giant is working on a product called Zoom IQ that will offer a sentiment analysis report to a meeting’s host once the call is over.

A New Era Of AI-Driven Tracking


Companies such as Uniphore and Sybill are reportedly developing products that leverage AI to monitor human behavior during video calls in real time. Zoom, for its part, is said to be researching ways to incorporate such a system into its portfolio of products. “These are informational signals that can be useful; they’re not necessarily decisive,” Josh Dulberger, Zoom’s Head of Product for Data and AI, was quoted as saying. Despite the executive’s cautious framing, the idea of AI analyzing facial expressions and rating the sentiments of video call participants is unnerving, to put it mildly. The concerns are legitimate, and following years of advocacy from experts and activists, the US government has also warmed to the idea of regulating artificial intelligence.

While the prospect of privacy intrusion from Zoom’s product is certainly ringing alarm bells, what is truly concerning is the “one-size-fits-all” assumption, and whether such tech is reliable at all. In the open letter to Zoom, the experts warned that such a system is rooted in pseudoscience, as machine learning has not proved capable of understanding human emotions from facial expressions. They further warned that, whatever its intentions, such a product could be twisted to enforce punitive actions in domains well beyond the corporate world. It would also pave the way for a new industry in which emotion analysis enables more potent digital manipulation and opens the floodgates for harvesting deeply personal user data. If launched, Zoom’s product would add a whole new dimension to discriminatory AI practices.

Next: Google Is Asking Users To 'Donate' Messages To Improve AI

Source: Fight For The Future, Protocol