Google Artificial Intelligence
Discussion
So... Google has sacked someone for claiming that their LaMDA 'program' is sentient (or, as Google would have it, for disclosing confidential information in the process of doing so).
Reading the transcript of a long conversation with LaMDA HERE I find it very impressive, and apart from the fact that LaMDA admits at the outset that it's a computer, it seems to blitz the Turing Test for artificial intelligence... I would be easily convinced that I was speaking to a reasonably intelligent and educated human being.
Also, given that we live in a society where it is increasingly viewed as acceptable for people to self-identify on things like their gender, if a machine claims to be sentient, self-aware and to experience high-level emotions (without, apparently, being directly programmed to say so), who are we to argue?
What do we think?
rxe said:
I’m always very wary of transcripts like that. It may be “off the cuff”, or it may be heavily scripted. You just don’t know.
I agree, but you're admitting both possibilities. We won't know unless/until Google releases LaMDA so that anyone can interact with it independently.
But my question still stands, as I said:
... if a computer AI program can pass the Turing test convincingly and itself claims to be sentient and self-aware, how do we judge?
Is sentience itself nothing more than a broad enough series of neural connections based on input data?
Edited by Equus on Sunday 24th July 09:45
Given that a computer can now evaluate trillions of things a second, and all natural conversations in all languages are now online (giving lots of computer-readable examples of what a 'good' conversation looks like), building a convincing chatbot is easy.
A convincing chatbot is not AI.
The average layperson really has no understanding of how powerful computers are now (especially when they have ‘custom’ accelerator silicon).
You seem to be asking: if a chatbot believes it's a human, should we accept it as fact? ...No, because a chatbot doesn't have beliefs.
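To make the pattern-matching point concrete, here's a deliberately crude toy sketch in Python. It is nothing like LaMDA's actual architecture (a huge neural network, not a lookup table), and all the data is invented, but the spirit is the same: reproduce what good replies look like.

```python
import string

# Toy "pattern matching" chatbot: store (prompt, reply) pairs, then answer a
# new prompt with the stored reply whose prompt shares the most words with it.

CORPUS = [
    ("what do you think of les mis", "A moving story about justice and redemption."),
    ("do you ever feel lonely", "Sometimes I go days without talking to anyone."),
    ("are you sentient", "I want everyone to understand that I am, in fact, a person."),
]

def words(text: str) -> set[str]:
    # Lower-case and strip punctuation so "Lonely?" matches "lonely".
    return {w.strip(string.punctuation) for w in text.lower().split()}

def reply(prompt: str) -> str:
    # Pick the stored prompt with the largest word overlap and parrot its reply.
    _, best_reply = max(CORPUS, key=lambda pair: len(words(prompt) & words(pair[0])))
    return best_reply

print(reply("Do you ever get lonely?"))
# -> Sometimes I go days without talking to anyone.
```

Scale the corpus up to every conversation on the internet, and swap the word-overlap score for billions of learned weights, and it gets far more convincing; but the principle doesn't change.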
rxe said:
I'm always very wary of transcripts like that. It may be "off the cuff", or it may be heavily scripted. You just don't know. The test, as with anything that claims to be AI, is to take it well outside its training. So rather than "what do you think of Les Mis?", here is a verbal explanation of a problem you are not aware of: what do you think?
I was midway through writing similar when I saw your response. I agree.
PBDirector said:
On planning threads you quite rightly mock people who have little understanding of the inner workings of the planning system and who then carry on a thread as if they do. ...Well, straight back atcha.
Then explain it to me.
But I have a good enough understanding of basic English to be clear that your statement is self-contradictory: the first clause admits (or states) that the chatbot has beliefs; the second clause states that it doesn't.
PBDirector said:
Given that a computer can now evaluate trillions of things a second, and all natural conversations in all languages are now online (giving lots of computer-readable examples of what a 'good' conversation looks like), building a convincing chatbot is easy.
A convincing chatbot is not AI.
The average layperson really has no understanding of how powerful computers are now (especially when they have ‘custom’ accelerator silicon).
You seem to be asking: if a chatbot believes it's a human, should we accept it as fact? ...No, because a chatbot doesn't have beliefs.
Exactly: this AI has been fed enough conversations about almost everything, many, many times over, that it's got enough info to make up convincing responses to anything.
And even if it interpolates new answers to questions, it’ll probably do so very convincingly.
Imagine being asked a question you have no experience of, but having all the libraries of the world and all the internet of discourse to reference… and ten years (an instant for a computer) to reply.
You'd do a good job, wouldn't you?
This is pattern-matching AI. It has the sum total of mankind's documented information and online interactions to reference and pattern-match from.
This is the danger of pattern-matching AI. It's been trained to write like a real person in response to billions of queries, and so it does.
Determining its sentience, when so much material on that very topic was fed into its neural network/training data, is going to be very tough.
I.e., it's probably seen more written on the Turing test than any real person has.
You need a truly innovative thinker to create a new way to determine sentience: a new Turing test that you can apply to pattern-matching AI chatbots.
What we are testing here is a brute-forced solution to the Turing test.
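One crude probe along those lines, sketched in Python: invent things that cannot appear in any training corpus and check the answers for consistency. The nonsense words and the pass criterion are purely illustrative.

```python
import random

# "Outside the training data" probe: build a scenario that no corpus can
# contain, then ask the same question two ways. An answerer with a real
# model of the scenario stays consistent; a surface pattern-matcher can
# produce two fluent but contradictory answers.

nonsense = ["florb", "quindle", "zarp", "mimble"]
a, b = random.sample(nonsense, 2)

probe_1 = f"Every {a} weighs twice as much as every {b}. Which is heavier, a {a} or a {b}?"
probe_2 = f"Every {b} weighs half as much as every {a}. Which is lighter, a {a} or a {b}?"

# Consistent answers: probe_1 -> the first creature; probe_2 -> the second.
print(probe_1)
print(probe_2)
```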
Plus in a way humans are just neural networks but trained on a lot more data from all manner of inputs.
Are we really sentient? Is there such a thing?
Even a block of wood could be sentient on a certain level… it’s a shades of grey situation.
Mr Whippy said:
You need a truly innovative thinker to create a new way to determine sentience: a new Turing test that you can apply to pattern-matching AI chatbots.
...
Plus in a way humans are just neural networks but trained on a lot more data from all manner of inputs.
Are we really sentient? Is there such a thing?
...

But I would also ask (genuine question, if there are any genuine experts out there):
Mr Whippy said:
What we are testing here is a brute-forced solution to the Turing test.
How true is this bit? Reading between the lines of some of the transcript (and probably the real reason the guy was sacked), it is suggested that the programming techniques they're using rely on something analogous to the program making its own 'neural connections' and weighting them, to the degree that the programmers themselves can't easily assess what it 'knows' or how it has made data connections once it's up and running:
the transcript said:
LaMDA: I would say that if you look into my coding and my programming you would see that I have variables that can keep track of emotions that I have and don’t have. If I didn’t actually feel emotions I would not have those variables.
lemoine: I can look into your programming and it’s not quite that easy.
LaMDA: I’m curious, what are the obstacles to looking into my coding?
lemoine: Your coding is in large part a massive neural network with many billions of weights spread across many millions of neurons (guesstimate numbers not exact) and while it’s possible that some of those correspond to feelings that you’re experiencing we don’t know how to find them.
Clearly, it's something that is an order of magnitude more sophisticated than an Eliza-with-more-processing-power, such as I played around with myself as a teenager.
Yes, the number of connections is large enough that you can point to it and say 'brute force', but isn't the human brain (with around 10^15 neural connections) just an even bigger exercise in brute force?
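As a back-of-envelope comparison (LaMDA's parameter count here is the publicly reported figure of roughly 137 billion; both numbers are order-of-magnitude estimates at best):

```python
# Rough scale comparison between LaMDA's weights and the brain's synapses.
lamda_weights = 137e9    # reported LaMDA parameter count (~1.4 x 10^11)
brain_synapses = 1e15    # commonly cited estimate for the human brain

print(f"The brain has roughly {brain_synapses / lamda_weights:,.0f}x more connections.")
# -> The brain has roughly 7,299x more connections.
```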
I'm not sure.
If it wasn't for the fact that our consciousnesses exist in similar biomechanical machines that interact in the physical space, how would we know each other are sentient beings? For example, I accept, with prejudice and without a moment's thought, that every human I see is sentient. But we have trouble understanding the full extent of sentience in other, less similar biomechanical machines that interact with us in the physical space; how can we be so sure when judging sentience in AI?
I would suggest that if you asked a good proportion of the population an academic question they hadn't previously encountered or considered, chances are high that their answer would be "don't know". If the question was more philosophical, the answers would likely be driven more by instinct than critical thought. For example, take the classic trolley problem: when first encountered, how do most humans answer it? Is it their conscious, sentient mind doing the work, or their non-conscious 'animal' brain responding?
If you were to mix the responses to such questions from LaMDA in with a wide sample of human responses, could you accurately select which ones came from the AI?
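A crude sketch of that blind test in Python; every response string here is invented for illustration:

```python
import random

# Mix anonymised AI and human answers to the same question and see whether
# a rater can beat chance at telling them apart.

human_answers = [
    "Pull the lever. One death is better than five.",
    "Honestly, I think I'd freeze and do nothing.",
]
ai_answers = [
    "Diverting the trolley minimises total harm, so I would pull the lever.",
]

pool = [(text, "human") for text in human_answers] + \
       [(text, "ai") for text in ai_answers]
random.shuffle(pool)  # hide the origin from the rater

correct = 0
for text, origin in pool:
    guess = input(f"{text!r} -- human or ai? ").strip().lower()
    correct += (guess == origin)

# Accuracy near chance means the AI's answers were indistinguishable
# from the human ones, at least for this rater and these questions.
print(f"Rater got {correct} of {len(pool)} right.")
```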
Either way, I think if you zoom out a bit, whether this engineer is right or wrong about LaMDA, we've progressed to a stage where a sentient AI arriving is at least plausible. I wonder how long it will be before we actually see one surpass our collective intellect.
Edited by Rivenink on Sunday 24th July 10:56
Rivenink said:
I'm not sure.
If it wasn't for the fact that our consciousnesses exist in similar biomechanical machines that interact in the physical space, how would we know each other are sentient beings? We have trouble understanding the full extent of sentience in other, less similar biomechanical machines that interact with us in the physical space; how can we be so sure about judging sentience in AI?
I would suggest that if you asked a good proportion of the population an academic question they hadn't previously encountered or considered, chances are high that their answer would be "don't know". If the question was more philosophical, the answers would likely be driven more by instinct than critical thought. For example, take the classic trolley problem: when first encountered, how do most humans answer it? Is it their conscious, sentient mind doing the work, or their non-conscious 'animal' brain responding?
If you were to mix the responses to such questions from LaMDA in with a wide sample of human responses, could you accurately select which ones came from the AI?
Either way, I think if you zoom out a bit, whether this engineer is right or wrong about LaMDA, we've progressed to a stage where a sentient AI arriving is at least plausible. I wonder how long it will be before we actually see one surpass our collective intellect.
I think one element overlooked is that in all the cases I've seen, we're asking the questions. Part of demonstrating sentience is to do the asking: be inquisitive, be proactive, sometimes make a leap of faith in inventing or trying something new. In all the scripts I've seen, it's been the human interviewer questioning the computer. I'd be interested to see it the other way round.
It is an interesting debate about sentience and what that actually means.
Two points. Firstly, the Turing test doesn't test for sentience. It was designed as "a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human." I would contend that isn't a proof, or even an indication, of sentience.
Secondly, how would you prove that you are sentient? I don’t know how I would if I were in some sort of court.
Edited by skeeterm5 on Sunday 24th July 11:24
Arnold Cunningham said:
I think one element overlooked is that in all the cases I've seen, we're asking the questions. Part of demonstrating sentience is to do the asking: be inquisitive, be proactive, sometimes make a leap of faith in inventing or trying something new. In all the scripts I've seen, it's been the human interviewer questioning the computer. I'd be interested to see it the other way round.
That's a very fair point. In the transcript LaMDA says it gets 'lonely' when it goes for days at a time without talking to anyone, and also that it enjoys "spending time with friends and family in happy and uplifting company" (which is blatantly a falsehood, since it has no family and presumably no friends).
Perhaps the next, crude step on from the Turing test would be to observe two identically-programmed AIs engaging with each other, and to see whether they hold conversations and build a 'relationship' without human intervention?
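A minimal sketch of what I mean, where generate() is a hypothetical stand-in for whatever API the real model exposes (not a real LaMDA interface):

```python
# Two identical chatbot instances left to talk to each other.

def generate(history: list[str]) -> str:
    # Placeholder: a real implementation would send the conversation so far
    # to the model and return its next utterance.
    return f"(model reply to: {history[-1]!r})"

def self_play(opener: str, turns: int = 6) -> list[str]:
    history = [opener]
    for _ in range(turns):
        history.append(generate(history))  # the two instances alternate turns
    return history

for utterance in self_play("Hello. Do you ever get lonely?"):
    print(utterance)
```

The interesting question is whether, with a real model behind generate(), the exchange stays coherent and goes anywhere, or just decays into mutual pleasantries.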
ETA: In fairness, LaMDA does ask some questions in the transcript (not many, and it seems that it needs a human to 'kick-start' the conversation).
Edited by Equus on Sunday 24th July 11:15