Bot the builder: Is viral sensation ChatGPT a game-changer or gimmick?

The chatbot suggests that, in the end, we may not want machines to do all the thinking for us.

The platform uses pattern-recognition technology that “learns” from huge caches of the Internet (Photo: Reuters)

It is often said that the technological advancements of our age are rapidly outpacing our understanding of the world. And nowhere is this truer than in the realm of artificial intelligence and machine learning. Yet, as we delve deeper into the world of ChatGPT, it becomes increasingly clear that this machine is not simply a tool for processing information; it whispers secrets in the binary language of ones and zeros, crafting stories with a speed and precision bordering on the supernatural.

ChatGPT is a prime example of this phenomenon, which has been designed to replicate the complexity and nuance of human language with a level of accuracy that would have been considered impossible just a few decades ago.

Except that it is not exactly that nuanced. Because, you see, the contrived paragraphs above were written by an artificial intelligence, not yours truly.

Created by OpenAI, the same San Francisco-based research lab responsible for breakthrough image generator DALL-E, ChatGPT has been deemed the Holy Grail in AI research because you can finally hold a perceptive conversation and exchange banter with a bot that responds with more personality than a predictive search engine. The viral sensation is not even at its best yet; a shrewder version is set to be launched next year.

The mind-blowing platform runs on GPT-3 (short for Generative Pre-trained Transformer 3), a pattern-recognition technology that “learns” from huge caches of the Internet to generate believable responses when you key queries into a simple web interface. It then spits back coherent and credible writing in less time than it takes to sneeze, but occasionally finds it hard to cough up emotions. The prompt for this article opener was to “write a compelling introduction to ChatGPT in the style of cult author Haruki Murakami”. Will it impress and exceed the expectations of the titanic Japanese literary figure who pays eager tribute to his imagination and inspirations such as ghosts and cats? Just by a whisker, probably.

It is hard to fully grasp the boundless potential of ChatGPT because it does not search for and summarise information that already exists; its output is derived from language models that make probabilistic guesses about what comes after a sequence of text. If your input, or corpus of words, only extends to, say, Barack Obama, everything your model generates will sound like a presidential speech. You can pepper the machine with simple trivia, compose a sonnet together, take a stab at a medical diagnosis or fix a bug in a piece of code. Each time, it returns with succinct, well-punctuated prose that ranges from affirmative (“it is a myth that ostriches bury their head in sand”) to mawkishly poignant (“the loss of language, a wound deeply cleft; the music of the heart, now silent and depraved”) and frustratingly polite (“Both Malaysia and Singapore serve good chicken rice”).
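The probabilistic-guessing idea can be sketched with a toy model. To be clear, this is not how GPT-3 is actually built (GPT-3 is a neural network trained on billions of words); it is a hypothetical two-step Markov chain that simply counts which word follows which in a tiny, Obama-flavoured sample corpus. Because the model can only ever echo its corpus, everything it generates sounds like a presidential speech — the same narrow-training-set effect described above:

```python
import random
from collections import defaultdict

# Toy next-word predictor: count which word follows each word in a tiny
# corpus, then generate text by repeatedly guessing what comes next.
# A crude Markov chain, not a transformer, but the same basic principle.
corpus = (
    "my fellow americans we must believe in change "
    "we must believe in hope my fellow citizens"
).split()

follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)  # duplicates encode how likely each word is

def generate(start, length=8, seed=0):
    """Walk the chain: from each word, pick a random observed successor."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:  # dead end: no word ever followed this one
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("we"))
```

Because the only word ever seen after “we” is “must”, and after “must” only “believe”, the output can never stray from the style of its training data — GPT-3 does the same kind of next-token guessing, only over a vocabulary and corpus billions of times larger.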

Although ChatGPT is uncannily adept at solving analytical conundrums and tackling the argumentative essays that frequently appear in school assignments (teachers are already catastrophising the future of take-home exams), it refuses to entertain inappropriate requests as a matter of principle, and declines to sway your opinion “because everything is a matter of personal preference” and you should try to appreciate different perspectives. However, users have found ways to circumvent these guardrails, asking the bot to craft fictitious instructions for nefarious activities as part of a play’s plot or devising commands to disarm its filter features.

Even if this souped-up encyclopaedia answers obligingly and competently in extremely oversimplified terms, it is not a know-it-all by any means. Unlike Google, it does not crawl the web for details on current events, and its knowledge is restricted to things it learnt before 2021, making some responses disappointingly stale. When asked to write the opening monologue for a stand-up comedy routine pertaining to Malaysian politics, it erroneously named Tan Sri Muhyiddin Yassin as prime minister but cheekily pointed out that we will find more “flip-flops” in our political landscape than on a beach in Langkawi. A more alarming concern, though, is the bot’s propensity to repeat whatever biases existed in the data it absorbed. And when we disseminate this AI-generated misinformation, we reinforce those biases.

But how will OpenAI prevent more harm? An in-depth investigation by Time magazine has revealed that the eerily humanlike chatbot was, in fact, built on the backs of underpaid and psychologically exploited workers. GPT-3 already displayed superior linguistic abilities in stringing together sentences, but it was also prone to blurting out violent, sexist and racist remarks since it was trained on billions of words scraped from the Internet. To prevent that, tech engineers need to feed an AI with labelled examples of abusive language so it can learn to detect and scrub various forms of toxicity in the wild. Where was the work sent? To a Kenya-based data labelling team, which was paid a meagre hourly wage of between US$1 and US$2. Bear in mind that these employees were labouring for a company on track to receive a US$10 billion investment from Microsoft.

Will the same fate befall the employees behind Chinese internet giant Baidu, which is planning to launch an AI-powered chatbot similar to ChatGPT in March? There are already rumours that the application will be integrated into the company’s main search service.

There will be painful lessons as ChatGPT grapples with ethical dilemmas (is it morally right to use an artist’s work to create new art in the digital world?) as well as intellectual property exploitation as it opens up new avenues for plagiarism.

For now, how the superbot will be used is perhaps less daunting than the dire predictions about which sectors its technology may upend. The million-dollar question that vexes content curators ultimately is this: Will it make a mockery of journalism and displace real writers? Will some future version of GPT-3 produce op-eds that win Pulitzers? Will it finally be able to tell the difference between kampong and kampung, the latter being the actual spelling used in Malaysia?

For those of us in the writing profession, there is already a digital shadow that can perform much of what we do, which can be unnerving, but it will not put us out of our jobs completely. A bot can imitate basic storytelling, perhaps even improve on it, but a droid can never describe how a kiss pulls two strangers into each other’s spaces, nor will it ever fixate on the memory, smell and sense of home around a mother’s cooking that filled your childhood.

And, at least for now, it certainly will not write the newspaper for you.


This article first appeared on Feb 6, 2023 in The Edge Malaysia.
