CNET’s AI-Written Articles Are Riddled With Errors:
The tech media site has been forced to issue multiple major corrections to a post published on CNET, created via ChatGPT, as first reported by Futurism. A single AI-written explainer on compound interest contained at least five significant inaccuracies, which have since been amended.
A few weeks ago, while waiting for a meeting to start, I listened to a group of coworkers talk about how they had used ChatGPT to explain various parts of their jobs. They were clearly wowed (and somewhat intimidated) by the tool, and immediately began speculating about everything it could do for them.
The thing about ChatGPT and the rest of the current batch of “AI” tools is that they are basically doing string prediction. Having been trained on large data sets, ChatGPT takes a sequence of words and then tries to predict what you expect to hear next. That’s it. It does not care about—or even understand—the meaning of the words or the truth of its responses.
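To make the idea concrete, here is a toy sketch of next-word prediction using simple bigram counts. This is my own illustration, not how ChatGPT actually works — real models use large neural networks over tokens rather than word-count tables — but it shows the core point: the output is just the statistically likely continuation, with no notion of meaning or truth.

```python
from collections import Counter, defaultdict

# Tiny toy corpus; real systems train on vastly larger data sets.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: for each word, tally which word follows it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` in the corpus."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # → "cat", because "cat" follows "the" most often here
```

The model will happily emit “the cat” forever; it is confident and fluent, and it has no idea what a cat is.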
In other words, it is a bullshit generator.
I think we need to be very clear about this fact in any conversations about what these tools can do and how they should be used.