The new AIs are magical and world-changing
But not in the ways everyone keeps talking about
I just have to write this so that it exists somewhere on the internet.
I keep seeing writers who fall into one of two camps:
Pro: The new AIs are amazing and are going to take all the jobs and may destroy the world. Link: all of the internet.
Anti: The new AIs aren’t amazing at all. Link: Gary Marcus and Freddie DeBoer.
I don’t think either of those positions is right. I think something else, and I’ve never seen anyone write this out clearly, so I thought I’d do it.
The new AIs (by which I mean LLMs like ChatGPT) are in fact amazing, and have done something incredible. The thing they’ve done is: solve human language. I’m a linguist by profession, and have been working with computer imitations of human language for most of my career - or rather, I’ve been refusing to work with them, because they’ve been shit. I think a lot of people don’t know this, but before GPT-2 (maybe before GPT-3), computers could not write grammatical English (or any other language). They simply could not produce full sentences that followed the rules of English grammar consistently.
(Genuinely, lots of people don’t seem to know this. Perhaps because throughout this century they have seen computers communicating with them. But whenever a computer presented you with a grammatically correct question or sentence before about 2021, that question or sentence had been written by a real person, and stored in the computer. No computer prior to that time had the ability to compose its own sentences. I know this because the translation systems they wanted me to work with would take a perfectly normal sentence in Chinese (or any other language) and produce grammatically incorrect nonsense in English.)
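To make that concrete, here’s a toy sketch of how pre-LLM “conversational” software worked, in the tradition running from ELIZA through customer-service bots. This is my own illustration (the names and strings are made up, not any real product’s code): every sentence the user can ever see is a human-written string stored in advance, and the program merely decides which one to display.

```python
# Toy sketch of a pre-LLM "chatbot": every sentence it can ever produce
# was written by a person and stored ahead of time. The program selects
# among canned strings; it never composes a sentence of its own.

CANNED_RESPONSES = {
    "hello": "Hello! How can I help you today?",
    "hours": "We are open from 9am to 5pm, Monday to Friday.",
    "bye":   "Thank you for visiting. Goodbye!",
}

FALLBACK = "Sorry, I didn't understand that. Could you rephrase?"

def reply(user_input: str) -> str:
    """Return a pre-written response whose keyword appears in the input."""
    text = user_input.lower()
    for keyword, response in CANNED_RESPONSES.items():
        if keyword in text:
            return response  # grammatical only because a human wrote it
    return FALLBACK

print(reply("Hello there"))            # -> "Hello! How can I help you today?"
print(reply("What are your hours?"))   # -> the stored opening-hours string
```

The machine here never composes anything; its apparent fluency is entirely borrowed from whoever wrote the strings.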
GPT-2 and particularly GPT-3 solved this problem. It’s miraculous what they produce: grammatically correct and (as of GPT-3) semantically coherent text. This is a massive breakthrough, much bigger than the celebrated victories in chess (Deep Blue) and Go (AlphaGo).
I’m actually really upset by the failure of the linguistics community (so far as I have seen) to respond to this. Link: Language Log, all the linguistics journals. Think about it: linguistics as a science is concerned with the investigation of one thing: language. Up until this point in time, there has only been one thing in the universe capable of producing language: us. We’ve now created a new thing that can do language, the object of study for this science… and so far as I can tell, the science of linguistics has not responded at all. It’s the equivalent of discovering aliens, and linguists just going, huh. Not interested. I can’t imagine what’s going through the heads of people who aren’t crazily investing their time in research into this new species of language-maker.
So that’s the AI-really-is-amazing part.
But I also don’t think that AI is doing, or is going to do, all the amazing or terrible things that people keep talking about.
This is a massive topic, so I’m only going to put the headlines here.
AI is not going to take away human jobs. This is because we will just change the jobs we do, and do the things that AI doesn’t. Just like people don’t carry messages any more, because we have email and phones for that. AI will change the jobs we do, but the reason people work in modern economies is that people love making other people work. If there’s really nothing that needs doing, we will force each other to dig holes and fill them in again. Jobs won’t go away.
AI is not going to be our friend, because its brain is nothing like ours. You are spectacularly ignorant compared to AI. It would die of boredom talking to you.
AI is not going to take over the world, because AI doesn’t want to take over the world. The big difference between us and them is that we are designed to have desires, because we are carriers for genes. It’s not consciousness, it’s desire. (This is where the philosophers have been going wrong for the past couple of decades. Pace Chalmers, consciousness may be a hard problem, but it’s not actually the important problem.)
Alright, I just had to write that down.
I think you are on to something. LLMs are pure language: language without knowledge, logic, or intent. You’d think that would be a Big Deal for linguists, but I haven’t really seen any commentary along those lines (disclaimer: I’m not a linguist).
The interesting linguistic lesson is perhaps how much of the experience of personhood is mediated by language alone. LLMs don’t ‘know’ anything, they don’t ‘think’, much less ‘feel’; they only construct statistically probable sentences from arrays of floating-point numbers, and yet it is very hard not to react as if there were actually someone there. Conversely, how many real people out there don’t really know, think or feel very much but get along by being smooth talkers?
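For what it’s worth, the “statistically probable sentences from arrays of floating-point numbers” point can be shown in a few lines. This is a minimal sketch with a made-up six-word vocabulary and a random stand-in for the network (every name here is invented for illustration); a real LLM differs enormously in scale, but the generation loop really is just score, softmax, sample, repeat.

```python
import numpy as np

# Minimal sketch of autoregressive generation. A real LLM computes the
# scores (logits) with billions of learned weights conditioned on the
# context; this toy "model" emits random scores, which is enough to show
# the mechanism: floats -> probabilities -> one sampled token at a time.

VOCAB = ["the", "cat", "sat", "on", "mat", "."]
rng = np.random.default_rng(0)

def toy_model(context: list[str]) -> np.ndarray:
    """Stand-in for the neural net: one floating-point score per token."""
    return rng.normal(size=len(VOCAB))  # a real model would use `context`

def next_token(context: list[str]) -> str:
    logits = toy_model(context)
    probs = np.exp(logits - logits.max())  # softmax, shifted for stability
    probs /= probs.sum()
    return str(rng.choice(VOCAB, p=probs))  # sample; nothing is "looked up"

tokens = ["the"]
for _ in range(5):
    tokens.append(next_token(tokens))
print(" ".join(tokens))  # with trained weights, this loop yields fluent English
```

There is no belief, goal, or fact anywhere in that loop; only the probabilities make trained output read as if someone were there.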
Perhaps this is old hat for linguists? Anyone who has seen a literal transcription of an everyday conversation will know that the brain is hardwired to infer meaning and intent from even very disorganised language, a kind of linguistic pareidolia (https://en.wikipedia.org/wiki/Pareidolia).