Hi Everyone,
Welcome back, and—as always—thanks for subscribing.
This week I want to talk about ChatGPT.
Until recently, I had used ChatGPT only a few times, and very casually. I asked some good questions and some dumb ones, and I wondered whether it would tell me it wanted to “be alive,” as it did with New York Times reporter Kevin Roose.
Recently, though, I decided to spend more serious time with ChatGPT. I wanted to evaluate whether it could actually help me create great art content, a focus of mine and, again, the focus of this newsletter. In the back of my mind, ever since ChatGPT’s release, I have wondered whether it might be not just a new super-powered customer service rep or personal assistant but something bigger: a tool for truly advancing knowledge.
My takeaway, unfortunately, is no: it is not something I can use for this. In the process, I have also become so concerned about ChatGPT that I signed the recent “Pause Giant AI Experiments” open letter, which calls for “a pause for at least 6 months [on] the training of AI systems more powerful than GPT-4.”
My relatively brief experience experimenting with this technology has made me realize that it represents a dangerous perfect storm of techno-invention, combining corporate irresponsibility, massive adoption, a lack of regulation, and a widespread misunderstanding of the risks and repercussions of this kind of technology.
I tried to use ChatGPT in ways that I have seen people suggest would be relevant for me, or that the tool suggested to me. For example, I tried to use it for…
Editing
Expanding readership by changing the writing style of my work
Brainstorming
Research and source suggestions
Summarization
Translation
If anything about ChatGPT might be useful, it is the top two: editing and changing writing style. Even so, I would not personally recommend it to anyone doing anything beyond very basic, unimaginative writing.
The other uses above (brainstorming through translation) seem promising on the surface. But the major issue underlying all of them is that ChatGPT cannot cite sources or provide any rationale for how it comes up with ideas, and it makes things up: people, books, basically anything, for reasons I am not sure of. You may have heard ChatGPT’s lies referred to as its “hallucinations.”
Some people might say this is not the point of the technology and that I didn’t read the disclaimer (written below the search box on ChatGPT’s site and featured in the image above), and they might be right. ChatGPT is a fascinating tool that mimics human conversation and can output many of the kinds of material we create ourselves, like form emails, marketing copy, and even sub-par poetry, and it does say it “may produce inaccurate information.” Yet from what I have gathered, many people using this technology don’t know what the tool is for, or haven’t read the disclaimer. They generally feel ChatGPT is this cool new “AI” that can answer any question you ask it, that it can do your work for you, that it gives you quicker answers than a Google search, and that it is sometimes “wrong” but mostly “right.”
In this respect, I find the rollout of ChatGPT and the public education about it to be deeply flawed.
ChatGPT should have been released with a bigger (and much more publicized) disclaimer, similar to what writer Stephen Shankland recently said about it in CNET:
“ChatGPT doesn't exactly know anything. It’s an AI that’s trained to recognize patterns in vast swaths of text harvested from the internet, then further trained with human assistance to deliver more useful, better dialog. The answers you get may sound plausible and even authoritative, but they might well be entirely wrong.”
Or OpenAI could append this tweet, from its CEO Sam Altman, to the chatbot:
OpenAI seems to have taken the approach not of a public re-education campaign but of cautious celebration. While the company acknowledges its tech is in beta, the overriding sense is that adoption of the product is its sole focus and that the untruths are not a big issue. As for the numbers, Shankland reports user stats that are already staggering:
“UBS analyst Lloyd Walmsley estimated in February that ChatGPT had reached 100 million monthly users the previous month, accomplishing in two months what took TikTok about nine months and Instagram two and a half years. The New York Times, citing internal sources, said 30 million people use ChatGPT daily.”
Again, what feels so dangerous here is the casual approach OpenAI is taking to truth, and the resulting flood of misinformation and lies the product is outputting at exponential scale.
As a scholar, and someone who cares deeply about truth versus truthiness, I am scared by actions like this. They seem to be yet another example of our broader cultural pull toward fast information over correct information, and of a lack of care for the difficult, complex work of searching for truth. I keep coming back to the question of whether we would let an individual do this online. While the Internet is filled with people like this, including our former president, the usual sense is that we would not purposely create something to do it. (Or maybe we would?)
Where do we go if this trend continues? To a world where populations can marshal any information as truth because no one knows what real history, or histories, are. A world where it is easy to change narratives about the present because people don’t know the past. And a world where we are more likely to repeat past mistakes because we don’t know them well enough, if at all.
The Internet offers a profound opportunity: to share the highest-quality information humanity has gathered, and worked for, over generations, so that we can learn and understand. Our ancestors would have been truly amazed. But it seems most people don’t care about this, and massive corporate efforts that reflect and exponentially fan this attitude make things so much worse, almost instantly, creating an Internet mostly full of noise, where the signals become harder and harder to identify.
As someone who wants to celebrate technological innovation and believes it can do good, I am truly hoping someone replies to tell me I am misinformed, that my argument is flawed, that I am being alarmist, and that I should try other (better) techniques with the tool. I’ll be waiting.
Thanks for your continued readership and see you again soon.
Best,
Matthew