2023 is the year that AI really comes for creative human jobs like writing and drawing, or so claim the tech evangelists, but popular consumer tech website CNET has already quietly pushed out over six dozen AI-written finance articles without announcing the shift. In 2022, AI-based image generation models exploded in popularity, but it was OpenAI that really sparked the chatter with ChatGPT, a conversational chatbot that is a little too good at writing. From academic essays and romantic tales to code and rap songs, ChatGPT will do it all. Some have even used it to write a children's book.

CNET, a media outlet owned by Red Ventures and known for its tech coverage, has quietly taken the "lead" in AI content generation. Online marketing expert Gael Breton was the first to spot it when he came across a Google Search description for one of CNET's articles stating it was "generated using automation technology." The disclosure added that the article was "thoroughly edited and fact-checked by an editor." These AI-generated finance articles were published under the CNET Money byline.

RELATED: Microsoft's VALL-E AI Can Imitate Your Voice Using A 3-Second Audio Sample

Pros, Cons, And The Outlook Ahead

A human talking to a robot, created with DALL-E 2.

What is worrying, however, is that nowhere in the articles is it explicitly disclosed that those words of financial advice were generated by an AI. Only when (and if) the reader taps on the CNET Money byline do they see a drop-down noting that CNET Money is an alias for an AI engine, not some unnamed staffer on the CNET editorial team. So far, CNET has published 73 articles using the undisclosed AI engine, starting in November 2022, with the most recent story coming out on Jan. 9, 2023.

While some of the articles are basic step-by-step guides, a few of them answer nuanced questions such as "Should you break a CD early for a better rate?" and "Does Experian boost your credit score?" One would ideally want such counsel from an expert, preferably a human, instead of an AI-generated article that merely scraped the web for relevant content published elsewhere. CNET has yet to release an official statement regarding its deployment of an AI engine to dole out financial advice. Setting aside the ethical debate, the more pressing questions are how this practice squares with internet search guidelines and whether it will have a real impact on human journalists' jobs.

As spotted by Search Engine Journal, Google clearly sees AI-generated content as spam because it violates the search webmaster guidelines, and its search team is authorized to take action against such content. Unfortunately, Google currently lacks the technical tools to detect AI-generated content in search results. There are, however, third-party options such as GPT-2 Output Detector, GLTR, and GPTZero that try to sniff out AI-generated words, though none of them is foolproof. If deployed at scale, the false positives these tools generate could have serious repercussions for the careers and livelihoods of actual people.
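For readers curious what this kind of detection looks like in practice, here is a minimal sketch using the Hugging Face transformers library and the openly hosted GPT-2 output detector checkpoint. The model name and label meanings are assumptions drawn from that model's public card, not a description of how CNET, Google, or the tools named above work internally.

```python
# Minimal sketch: scoring a passage with an open-source AI-text detector.
# Assumes the Hugging Face "transformers" package and the publicly hosted
# "openai-community/roberta-base-openai-detector" checkpoint; its labels
# ("Real"/"Fake") follow that model's card and may differ for other detectors.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

# Hypothetical sample in the style of an automated finance explainer.
sample = (
    "A certificate of deposit, or CD, is a savings product that locks in "
    "your money for a fixed term in exchange for a higher interest rate."
)

# The pipeline returns the most likely label and a confidence score,
# e.g. [{'label': 'Fake', 'score': 0.97}] for machine-sounding text.
print(detector(sample))
```

Even in a toy example like this, scores can swing wildly for short or lightly edited passages, which is exactly why false positives are such a concern when these detectors are pointed at real writers.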

So the question now is, should AI be banned from the writing job market? That's debatable. Folks are already using it to write rough first drafts when their creative juices are not flowing, find hot keywords for content, gather reader sentiment more efficiently, and optimize copy for search engines. Markets and Markets says AI's usage in social media marketing and related activities will cross the $2 billion mark in 2023. But can AI replace journalists? Business Insider put that question to ChatGPT, and despite the factual errors in its response, the answer amounts to bad news. While CNET claims to have a (human) editor fact-check the AI-written articles on its website, it might only be a matter of time before AI comes for creative jobs.

MORE: AI Streamer Designed To Be Edgy Starts Promoting Holocaust Denial

Source: Gael Breton/Twitter, CNET, Search Engine Journal, Markets and Markets, Business Insider