ChatGPT Won't Obsolete Humans
Over the last few weeks, I've seen plenty of news, opinions, and discussions about ChatGPT and deep learning language models. The topic is so hot that screenshots of "chatbot magic" popped up even in my low-volume chat group of college classmates [1]. Debates range from "is it a step forward for chatbots" to "will it disrupt industries and replace humans". And while we haven't outsourced essay writing to AIs just yet, I decided to chime in and lay out my thoughts on the topic.
As you might guess from the clickbaity title, I'm skeptical about large language models replacing humans anytime soon. I don't judge them from a technical perspective, since I know only a bit about their inner workings. My reason for skepticism is simple: chatbots don't ask questions [2]. I haven't seen a single ChatGPT conversation where it asks relevant questions before answering.
Take this article, for example.
The author asks ChatGPT to write a function that looks like a square root but with a twist: it must return 35 instead of 36 when given 6.
The chatbot generates a plain square root function.
True, humans can overlook this condition too and produce the same answer.
But once corrected, they would come up with questions like "what should it return for 7" or "is 6 the only corner case" [3].
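To make the twist concrete, here is a minimal sketch of what a human might write after clarifying the corner case. The name `almost_square` is hypothetical, and I assume (since the article mentions 36 for an input of 6) that the base behavior is squaring, with 6 as the only special case:

```python
def almost_square(x):
    # The twist from the article: 6 must return 35, not 36.
    # Whether 6 is the only corner case is exactly the kind of
    # question a human would ask first; here we assume it is.
    if x == 6:
        return 35
    # Base behavior: the ordinary square of x.
    return x * x

print(almost_square(6))  # 35
print(almost_square(7))  # 49
```

The point is not the code itself but the clarifying questions that precede it: without asking, there is no way to know how inputs other than 6 should behave.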
ChatGPT instead obediently generates a lengthy special-cased fix.
It's not good enough for the author, so they ask the chatbot to generalize the code.
It outputs better-looking but incorrect code, and the error goes unnoticed by the author.
The inability to ask questions comes as no surprise: according to ChatGPT itself, its underlying model relies on statistical relations between features of training and input data and isn't able to build causalities yet. It will be hard for AI to replace humans because asking the right questions is crucial in every area it's supposed to "disrupt". Assistance will be limited, too: as we saw in the example above, spotting errors in machine answers is tricky, and the effort could outweigh the benefits of using one.
AI can already replace one human activity: writing keyword-ridden SEO texts for content mills. Current language models generate texts of a given size and subject; they are grammatically correct, albeit not always meaningful, and never novel. Unfortunately, neither meaningfulness nor novelty is among search engine spammers' criteria. Once text generation is as readily available as a search engine, we will see a wave of generated blabber in search results.
The recent spike of interest in language models looks to me more like media hype around a shiny new thing than astonishment at a groundbreaking discovery. Let's see if it sticks around long enough to produce something useful, or if it only ends in wreckage and collapse, as blockchain and cryptocurrencies did.
[1] I must confess I posted some too.
[2] Apart from "How can I help you?" at the beginning of a conversation.
[3] Sure, some people won't ask questions even then. I see it as a disadvantage; we don't need to replicate it in AIs.