Sci-fi novels have often envisioned artificial intelligence going rogue, usually after it concludes that humanity must end for Earth to survive and ascend to the next tier of civilization. But whether an AI can truly feel is a perplexing question. Why should an AI have feelings at all? The possibility seems implausible, yet the appeal is clear: an emotionally aware AI could offer a greater degree of social comfort and more natural human-machine interaction. Imagine Google's Duplex AI on emotional steroids. Whether an AI can develop feelings, however, is inherently linked to its ability to handle emotions. By clinical definitions, emotions can only be felt: they are generated by the subconscious in response to an external stimulus or an internal stir arising from beliefs and desires.

Feelings, on the other hand, accompany emotions and come with a physical manifestation. More importantly, they are a conscious experience. Both feelings and emotions are cultivated gradually and are known to change over time, even toward the same subject. AI algorithms, by contrast, perceive a particular external stimulus based on the data fed to them rather than on experiences that can vary over time. In a nutshell, an AI's approach to harnessing emotions is static, and therefore its response, which should technically manifest in the form of feelings, would remain unchanged. That means even if an AI's emotional expressions, aka feelings, are on point today, they may not be tomorrow if the world around that AI program changes.


The next dilemma is the disconnect between emotions and feelings, even though the latter can't originate without the former. It is all too common for people to be unable to describe accurately what they are going through: there is a tangible difference between what they say they feel and how they actually feel. If an AI program is fed a dataset of human recordings in which even a small portion of people's emotions are out of sync with their physical expressions (body language, vocal tone, and so on), the AI's fundamental training material is flawed from the start of its journey toward understanding true emotions. Take, for example, the numerous public tests of AI programs such as Microsoft's Tay and Delphi, or the recent case of an AI trained on 4chan data, all of which quickly started doling out murderous and racist suggestions. The debate around an AI gaining a sense of feeling heated up again recently when a Google engineer was suspended after going public with claims that the company's LaMDA AI chatbot generator is sentient.

Promising Goals, Over-Ambitious Claims


Of course, progress has been made in the field. The best examples are multimodal emotion AI and natural language processing, which can provide insight into a person's mood using audio-visual cues, and in some cases outperform humans. But being able to sift through hundreds of body language markers to study emotions and perform a detailed sentiment analysis doesn't mean an AI can also reciprocate with the same emotional depth. To put it simply, it cannot express feelings from a personal perspective; it can only quickly parse the millions of responses fed into its database and pick the one deemed best for a specific scenario, as taught (or deemed fit) by its human creators. AI bias is already a topic of hot debate.
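The lookup-based behavior described above can be sketched in a few lines. This is a deliberately simplified toy, not how any production emotion AI works: the word lists and canned replies below are invented for illustration, and real systems use learned models rather than keyword counting. The point is that the program classifies a pattern and retrieves a pre-written reply; nothing in it feels anything.

```python
# Toy sketch of response selection: classify a message's sentiment by
# counting lexicon hits, then look up the reply deemed "best" for it.
# The lexicons and replies are invented for illustration only.

NEGATIVE = {"sad", "angry", "awful", "terrible", "lost"}
POSITIVE = {"happy", "great", "wonderful", "excited", "won"}

CANNED_REPLIES = {
    "negative": "I'm sorry to hear that. Do you want to talk about it?",
    "positive": "That's great news! Tell me more.",
    "neutral": "I see. Go on.",
}

def detect_sentiment(text: str) -> str:
    """Label text by counting word matches -- pattern matching, not emotion."""
    words = set(text.lower().split())
    neg = len(words & NEGATIVE)
    pos = len(words & POSITIVE)
    if neg > pos:
        return "negative"
    if pos > neg:
        return "positive"
    return "neutral"

def respond(text: str) -> str:
    """Retrieve the pre-written reply mapped to the detected sentiment."""
    return CANNED_REPLIES[detect_sentiment(text)]

print(respond("I feel sad and lost today"))
```

However empathetic the printed reply sounds, it is a static table lookup: change the world around the program (new slang, new contexts) and the fixed lexicon quietly misfires, which is exactly the staleness problem described earlier.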

Throwing emotions and feelings into the mix opens a whole new dimension of misfiring synthetic responses that aim to mimic "the human touch" in an interaction. Another key hurdle is that an AI's responses are driven toward achieving a specific objective, while the emotional and social intelligence that humans exhibit is not. With such a goal-driven approach, an AI lacks the reflexive touch, and the unpredictability, that comes with feelings. Experts believe that Artificial General Intelligence (AGI) will eventually be able to replicate the most intrinsically human traits, handling feelings among them, but that future is still far off.

Current forms of AI are trained to study and mimic emotions, with or without a specified level of courtesy and empathy, but they cannot have emotions of their own brimming on a chip or a cloud server. Experts are divided on how far off an AGI is, but in a world where AI continues to struggle with day-to-day multi-tasking, it will likely take at least a decade for such artificial intelligence to break cover, and another decade or more to make a tangible impact on human lives while talking from an Echo speaker on the kitchen shelf.


Source: The Washington Post, Scientific American