I think you are on to something. LLMs are pure language, language without knowledge, logic or intent, and you'd think that would be a Big Deal for linguists, but I haven't really seen any commentary along those lines (disclaimer: I'm not a linguist).
The interesting linguistic lesson is perhaps how much of the experience of personhood is mediated by language alone. LLMs don't 'know' anything, they don't 'think', even less 'feel', they only construct statistically probable sentences from arrays of floating-point numbers, and yet it is very hard not to react as if there actually were someone there. Conversely, how many real people out there don't really know, think or feel very much but get along by being smooth talkers?
Perhaps this is old hat for linguists? Anyone who has seen a literal transcription of an everyday conversation will know that the brain is hardwired to infer meaning and intent from even very disorganised language, a kind of linguistic pareidolia (https://en.wikipedia.org/wiki/Pareidolia).
I don't think linguistics has got to grips with it in a very meaningful way yet. From what I've read (and I only did scientific linguistics to BA level - I've been a translator since, but that's a bit different), human utterances are usually treated as though they meet a platonic ideal of speech: the person saying X fully intends to say X, and properly understands the full meaning of X. Anything that doesn't meet this standard is sidelined as a disfluency ("um," "er," etc.) or as a stage in the language learning process on the way to proper expression.
(This is a bit of a simplification, because I know that there is some research into, for example, disfluencies and what they tell us about sentence formation and semantic interference. But I think the idea of ideal speech representing ideal thought still largely holds true.)
And yes, you're right: the appearance of correct and fluent speech without real mental processes behind it represents a very serious challenge to that paradigm. And linguists and psycholinguists are the smart people I want to see teasing out the details of this... I just haven't seen any sign of it happening yet.
I may be being grossly unfair, and perhaps this work is in the pipeline. But I thought it was really interesting that no one even seems to be commenting on the miracle that is GPT language.