The (sad) story of the chatbot who knows me better than my mother
An early and cursory warning: yes, the person writing to you is one more among many, probably too many, people writing about ChatGPT. If that alone is enough to bore you, you will have my complete sympathy and understanding should you decide to stop here. If, as I hope, it is not enough to quench your curiosity, read on.
A second warning: the author of these words admits to being surprised and impressed by the capacity and quality of what some of these "machines" produce. This feeling did not come only from reading about the subject but, above all, from direct experience with programs like ChatGPT, a text generator, and Midjourney, which creates images from text prompts. In short, I understand the enchantment felt when reading texts or seeing images produced by these systems, and I understand why I feel it myself. But that enchantment quickly mutates into disillusionment, and even disbelief or shame, when I come across comments that betray a dangerous dazzlement.
I'll start with an example of this danger. A few days ago I came across a video in which a university professor and head of a computer science team praised the capabilities of ChatGPT. In the first few minutes, the protagonist presented his experience with the program. He began by feeding it lines of code with intentional errors, hoping that the artificial intelligence (AI) would detect them, an exercise he often set for his students. Even as he ran several iterations, raising the difficulty each time, the program always got it right, with greater accuracy and speed than his human students. He continued with more subjective prompts, asking the AI to help him with his colleagues' schedules and rosters. His growing wonder at these results was plain in his expression. He stubbornly tested the program, prodding it with questions about how he should manage his team, asking it for ideas about organization and leadership. At this point, his face could no longer hide the excitement also present in his words: "It's extraordinary! I learned a lot! The 'tool' knows me really well!" That is when the aforementioned disappointment and vicarious shame set in, and I stop watching the video. What do you mean? How can someone, from the field no less, say such a thing? How can one show such ignorance about how this kind of system works?
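For readers curious about what such an exercise looks like in practice, here is a hypothetical sketch of the kind of deliberately buggy snippet the professor might have submitted (the function names and the specific bug are my own illustration, not taken from the video): a small off-by-one error that a careful reader, human or machine, is expected to spot.

```python
# Hypothetical example of a "find the planted bug" exercise.
# The bug: the loop stops one element early, so the last item
# is silently ignored.

def buggy_sum(numbers):
    """Intended to sum all items, but skips the last one."""
    total = 0
    for i in range(len(numbers) - 1):  # bug: should be range(len(numbers))
        total += numbers[i]
    return total

def fixed_sum(numbers):
    """Corrected version: iterate over every item directly."""
    total = 0
    for x in numbers:
        total += x
    return total

print(buggy_sum([1, 2, 3, 4]))  # 6 — the final 4 is dropped
print(fixed_sum([1, 2, 3, 4]))  # 10
```

Spotting this sort of defect is pattern matching over familiar code shapes, which is exactly the kind of task where such systems shine; it says nothing about whether the system "knows" the person asking.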
It would almost certainly be tedious to list all the references I have consulted on this subject in the last two months. Yet in none of them did I find the following hypothesis formulated: could our delight and amazement at the increasingly impressive results computed by an artificial intelligence be due to the decrease, the decay, of our own intelligence? What follows is an essay toward a possible answer.
The lure of illusion and "salvation" through computers
In the same way that we are drawn to the configuration of the human face - which gives us the extraordinary ability to infer feelings, emotions, and ideas from representations as simple as emojis or rudimentary drawings - there is something poignant, albeit in a sadly parochial and patronizing way, about the excitement of finding such intelligence in an "entity" without eyes, nose, or mouth. It is perhaps equivalent to the excitement (and fear) of discovering that intelligent extraterrestrial beings exist. There is no need to resort to science; popular wisdom is enough to know that too much enthusiasm can make our attention short-sighted and our reflections anemic. Passion, whatever its object, however wonderful it may be to live, does not always contribute to making the best decisions, although it does make it easier to decide and act.
Joseph Weizenbaum, a computer scientist and one of the field's earliest critics, warned as early as 1983 about the "magical" effect that technology has on us: magical in the sense that we perceive the results it presents to us as magic [1]. Steve Jobs' Apple, for example, grew precisely on this idea. "It works like magic" is a common expression at its product launches. Nor is this manifestation unique to that brand. Advertisements for new technologies and for new features of existing ones - cell phones, watches, productivity programs and systems, automobiles - often promise a fluidity of operation that is not always seen in reality. As in other forms of advertising, the aim is to enchant and to convince us to buy, literally and/or figuratively, without regard for the truth, just like the bullshit a scammer tries to foist on us. I will say more about bullshit later on. In practice, setting aside the ethical dimension of the analysis, as in real life, it is not magic; it is illusion. And we human beings cannot help but marvel at a good feat of prestidigitation.
Nor should we underestimate the importance of aesthetics. I remember using SAP in 2007, and what a pain it was. Not only because of the slowness and complexity of the system but, in my case, especially because it was ugly. Purists will say that adding "glitter", fluid animations, and a nicer appearance does not make a system work better. I concede that this is not the fundamental aspect as far as capability is concerned. But it is true that the more enjoyable an experience is, the more we can expect to want to repeat it. Without being an expert, or knowing the tricks of the trade in depth, it is easy to see that adjusting the speed of animations or adding entertaining elements can, for example, soften the feeling of slowness or reduce the impatience caused by waiting for the results of a computation.
Weizenbaum, still at the beginning of the history of our coexistence with computers, warned that "the computer has been a solution in search of problems - the ultimate technological fix that insulates us from having to address problems". Machines have already freed us from many heavy jobs and made our lives easier in so many others. But how much better is our life? What have we done with this (supposed) evolution? We have created new problems instead of solving other important and avoidable ones. What problems are we neglecting as the myopia caused by our dazzlement worsens?
From the outsourcing of fundamental capabilities to the (more than likely) degradation of our intelligence and ability to detect and distinguish bullshit
A colleague of Weizenbaum's, Lewis Mumford [2], coined the concept of the "megatechnic bribe": the tendency to overlook the downsides of technologies when we are promised a share of their benefits. Individual memory seems to me a good example: who still memorizes telephone numbers or e-mail addresses? Almost all of us carry mini-computers in our pockets and bags that do it for us. These devices also interfere with collective memory, and they change the way we converse. Who still waits for the associations, the "train of ideas and thoughts", when trying to remember which actress starred in that movie you liked and want to tell your friends about? Any doubt is now settled by a quick and easy consultation of the cell phone. We can link this phenomenon to trust, or the lack of it, as when we doubt the answers that others give us by resorting "only" to memory. From legitimate doubt to fundamentalist skepticism - where we believe not in people but only in "data" - is a short step. Interestingly, these dynamics make us more prone to the infiltration of bullshit when we blindly trust data rather than people. In the same way that we have delegated memory to machines, critical sense has also been entrusted to them, and is no longer part of the dynamics of conversation and relationships. We sacrifice memory, associative capacity, and curiosity for speed. We kill questions, doubt, and uncertainty in favor of commands that guarantee us answers. Yet to get better answers we must learn to ask better questions [3].
Like us, ChatGPT is lazy and, contrary to what is advertised, it is not coming to help us deal with laziness. On the contrary, it carries a real risk of increasing it. I am referring to the kind of laziness that leads to the deterioration of our abilities, granting that there is another kind that can, potentially, allow us to evolve (consider, for example, the well-known creative effects of leisure and boredom).
It is easy to find a pattern in the many writings available on the subject, particularly in the education sector, where many people are concerned about the detrimental effects such tools can have on critical thinking and on the related ability to translate it into words in essay form. There are also those who use this phenomenon as a pretext to criticize the sector, arguing that we are at an ideal moment to break with outdated mindsets and methods [4].
"If you can't write better than a machine, why are you even writing?" "We have entered a new world. Goodbye homework!" These two sentences were written by two heavyweights (at least in terms of visibility and perceived importance) of today's world, Marc Andreessen and Elon Musk, respectively. I think such comments are dangerous because they disregard very important dimensions. These supposedly progressive thoughts often seem to forget that progress should not come at the expense of important, or even essential, losses. What should be lost with progress is what is wrong, what is superfluous, and what causes us suffering - not what allows us to evaluate what is right, what allows us to be more and better. Thinking about homework, for example, or essay writing, we cannot forget that language (written and/or oral) and thinking are intimately connected: to improve one dimension is to improve the other, and vice versa. Manuel Monteiro, in his books "Por Amor à Língua" and "O Mundo pelos Olhos da Língua", suggests that the mishaps we inflict on language, on the way we express ourselves in writing or orally, reveal some present-day aspects (a word I learned from the author, who uses it a lot) of our inner selves and our social functioning: the impoverishment of critical thinking skills; a growing inability to substantiate our opinions; the ease with which we adopt opinions formulated by others; the difficulty in distinguishing truth, lies, and bullshit.
The existence of a program able to impress with the quality and "originality" of its productions should not render the learning process unnecessary. It is not just about knowing how to ask good questions in order to get good answers: the process of finding and formulating the questions, and the path of designing the answers, are fundamental and irreplaceable. These are activities we should not outsource or delegate to a machine, or even to another person. It is, and must be, an individual quest. No one can learn or acquire our skills for us.
The ability of these machines to foist bullshit on us is also notorious. The creators of ChatGPT themselves warn of this, extending their warnings to the biases, errors, and misinformation that their creation can serve its user-clients. The internet is littered with examples, some hilarious, of the platform's mistakes. It is not the mistakes that concern me; it is our ever-increasing inability to spot bullshit.
By better understanding how this type of intelligence works [5], we can understand the reasons behind such errors and biases. To put it very simply, ChatGPT is very adept at faking the semantic competence of humans; and since we "know" that it uses information available on the internet - "the data" - with a "learning" capacity much greater than ours, it is easy to believe the answers it offers us without questioning them. This is precisely one of the many problems, as the aforementioned Weizenbaum had already warned us, in the form of questions: who, or what, are the sources of information? What systems and criteria do their creators use to guarantee the ethics, justice, and truth of their answers?
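The point about faked semantic competence can be made concrete with a toy sketch. What follows is emphatically not ChatGPT's actual architecture (which is a large neural network, vastly more sophisticated); it is a minimal bigram model, built here purely as an illustration, that continues a text by appending whichever word most often followed the previous one in its "training" data. It is fluent in miniature, yet it has no notion of truth, only of what tends to follow what.

```python
from collections import Counter, defaultdict

# Toy "training" corpus; real systems ingest a large slice of the internet.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_text(word, steps=3):
    """Greedily append the statistically most common next word."""
    out = [word]
    for _ in range(steps):
        if word not in following:
            break  # dead end: the word never appeared mid-corpus
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_text("the"))  # fluent-looking, but produced with zero understanding
```

The output reads like plausible English because it recycles plausible sequences, which is exactly why statistical fluency is so easy to mistake for understanding, and why "who are the sources?" is the right question to ask.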
Outsourcing our essential abilities, the ones that make us who we are, is always risky. At the limit, without gaining greater awareness of ourselves and of how we use the tools at our disposal, we will be contributing to a form of eugenics of thought and feeling: an artificial "brain" that indirectly commands all the others.
The dystopian fear of human extinction caused by machines
Besides the fears directed at education, there are other obvious fears concerning the eventual (predictable, in some opinions) obsolescence of professions and activities such as journalism, law, consulting, and creative writing. Basically, for some, the idea that machines are going to take our place has just begun to come true. It is the fear of our own obsolescence. We would finally have the terrible answer to the dreaded question "what do we do here?": nothing! Because there is a machine doing it in our place. Some detect in these artificial intelligences the potential to be better than us at the demands we consider unique to our species, the ones that make us special - in our own eyes, of course.
I have no doubt that AI, in general, will be able to free us from many tasks in which people add no value by performing them. In fact, we should already be using the existing capabilities of these systems to free people from tasks known to cause illness (physical and mental), and we should maintain the conditions - or create them, in the many cases where they do not exist - for people to add real value, for themselves and for others.
A great opportunity to improve ourselves
If it was not clear until now: this text is not a criticism of AI or of ChatGPT. It is a self-criticism, of us human beings. We get scared and excited with fewer and fewer reasons to be, which leads us to embrace or reject novelties at a speed that does not match the time we need to actually learn.
But this text is not only critical. It is also a manifesto of hope, an alert to the opportunity to improve ourselves. By detecting these failures, fears, and anxieties of ours, we can devise solutions. Moments like this allow us to ask good and important questions; to find quick answers, if not definitive ones, to problems encountered or anticipated [6]; and to pay attention again to what deserves our attention.
Language is a way to connect minds and people, live and on the fly. We are still far - and perhaps it will never happen - from connecting with machines in relationships that go beyond utility and transaction, however much many of us think we have deep relationships with some machines. As Tim Leberecht tells us [7], these artificial intelligences "are not capable of establishing relationships [other than between the data they collect, I might add]: with themselves, with others, with truth, with the future. We humans define ourselves through relationships." Perhaps the democratized access to, and predictably rapid evolution of, these kinds of technologies can finally free us to devote ourselves to what really matters: improving ourselves as humans.
May there be, and may there come, even more tools like ChatGPT that show us the importance of embarking on a path that allows us to be more intelligent-critical, and skeptical-open-minded, and rigorous-romantic, and serious without taking ourselves too seriously.
Some additional references
- ChatGPT Is a Mirror of Our Times
- Welcome to the Next Level of Bullshit
- Lies, BS and ChatGPT
- AI-Generated Bullshit Is A Challenge To Our "Vigilance"
- ChatGPT: Automatic expensive BS at scale
- The Undergraduate Essay Is About to Die
- The College Essay Is Dead
- ChatGPT is everywhere. Here's where it came from
- The End of Writing
- Economics of ChatGPT: A Labor Market View on the Occupational Impact of Artificial Intelligence
- Could ChatGPT do my job?
Written for Link to Leaders on February 22, 2023, published February 28, 2023.