Deepfakes, realistic AI-generated faces mostly associated with scammers and hackers, are now being used intensively by companies in an effort to increase sales. Deepfake technology has gotten so good at creating human faces that people can no longer reliably tell when they are talking to a real person. Even more disturbing, a recent study revealed that people trust a fake face more than a real one.

Not everything surrounding deepfakes is bad news. Similar underlying technology is being used to make the world more accessible. For example, deepfakes are expected to benefit education as the use of multimedia and interactive lessons continues to grow in classrooms as part of the hybrid-learning, post-pandemic world. They can also help breathe new life into history or predict weather patterns with greater accuracy.

Related: Meta Creating Human-Level AI For The Metaverse

A new study identified more than 1,000 deepfake profiles on LinkedIn belonging to more than 70 companies. The study, conducted by Stanford Internet Observatory researchers and further investigated by NPR, concluded that deepfakes are being used as a marketing tool. Companies have turned to fake profiles to expand their sales workforce, widen their social media net, and circumvent algorithms that limit visibility. LinkedIn has already removed more than 15 million accounts that included AI-generated profile images, NPR reported.

How To Spot A Deep Fake

Deepfake Detection Using Pupil Shape

According to Norton, deepfakes can be identified in spite of their increasing sophistication. One of the simplest methods is to right-click the image and select "Search the web for image." A search like this can reveal other profiles using the same photo, possibly under a different name. It is also important to check the profile's name against its listed credentials, education, and work experience.
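The reverse-image-search advice above boils down to checking whether the same photo appears elsewhere. One way search engines do this at scale is perceptual hashing, which fingerprints an image so that near-identical copies produce near-identical hashes. Below is a minimal sketch of the "average hash" variant of that idea, run on a toy 8x8 grayscale pixel grid (the pixel data and the idea that a re-encoded copy only shifts brightness slightly are illustrative assumptions, not part of the original article):

```python
def average_hash(pixels):
    """Compute a simple average hash (aHash) for a grayscale image.

    `pixels` is a 2D list of brightness values (0-255), e.g. an image
    already downscaled to 8x8. Each bit of the hash records whether a
    pixel is brighter than the image's mean brightness.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(h1, h2):
    """Count differing bits; a small distance means near-identical images."""
    return bin(h1 ^ h2).count("1")


# Toy example: a downscaled "profile photo" and a slightly altered copy,
# as re-uploading or re-encoding the same photo might produce.
photo = [[10 * r + c for c in range(8)] for r in range(8)]
altered = [row[:] for row in photo]
altered[0][0] += 5  # a tiny brightness tweak

d = hamming_distance(average_hash(photo), average_hash(altered))
print(d)  # → 0: the hashes match, flagging the photos as the same image
```

Because the hash compares each pixel only to the image's own mean, small edits rarely flip any bits, which is what lets a reused profile photo be matched even after cropping artifacts or recompression.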

Other methods of identifying deepfakes require a detailed analysis of the image, looking for elements that seem unnatural: odd coloring, a lack of emotion, or an emotion that appears faked. Hair is one of the most difficult things for a computer to imitate, which makes it a good feature to focus on. Likewise, eyes that appear too centered, or backgrounds that are conveniently blurred out, can indicate that an image was artificially generated. Since companies are using deepfakes in their sales tactics, it can also be useful to pay attention to messages and chats: AI programmed to hold a conversation will typically fail to respond to out-of-script questions.
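The "eyes too centered" cue above comes from the fact that face generators trained on aligned photo datasets tend to place the eyes at nearly fixed positions in the frame. The sketch below turns that observation into a toy heuristic; the canonical eye coordinates, the tolerance, and the sample landmarks are all illustrative assumptions, not tuned detector values:

```python
def eyes_suspiciously_aligned(left_eye, right_eye, width, height, tol=0.05):
    """Return True if both eye landmarks sit close to the near-fixed
    positions that aligned face generators typically produce.

    Eyes are given as (x, y) pixel coordinates; positions are compared
    in normalized [0, 1] image coordinates. The canonical positions and
    the tolerance are assumptions chosen for illustration.
    """
    canonical_left = (0.38, 0.40)   # assumed generator eye positions
    canonical_right = (0.62, 0.40)

    def near(eye, target):
        nx, ny = eye[0] / width, eye[1] / height
        return abs(nx - target[0]) <= tol and abs(ny - target[1]) <= tol

    return near(left_eye, canonical_left) and near(right_eye, canonical_right)


# Landmarks from a hypothetical 1024x1024 profile photo.
print(eyes_suspiciously_aligned((389, 410), (635, 408), 1024, 1024))  # → True
print(eyes_suspiciously_aligned((300, 520), (700, 500), 1024, 1024))  # → False
```

A real detector would obtain the eye landmarks from a face-landmark model rather than hard-coded points, and a hit from a heuristic like this is only one signal to weigh alongside the other cues the article lists.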

Due to the general increase in the use of deepfakes, AI experts are pressing for regulation that would require all deepfake creations to be verified and traceable. Blockchain verification has also been proposed as an effective way to keep AI-generated deepfakes in check.

Next: Who Is QAnon? AI Might Have Just Figured It Out

Source: NPR, Norton