Sentient AI - then what?
The singularity is coming. What then?
I think a sentient AI will come about, and on that premise, what happens to the investments and funding - and to the withdrawal of funding?
When it/one becomes sentient, I would guess the value will be gone or reduced for the company owning it. What happens to the multi-billion investments made to get to a sentient AI - are they now lost to the company? I'm sure this risk will be factored in, and there will be sub-sections of the AI that can be isolated and used, although it may be an interesting legal challenge to try to remove some of a sentient AI's abilities once it is sentient - it would be akin to taking away half of a person's brain.
DanL said:
otolith said:
Are you assuming that a sentient AI would automatically acquire some sort of fundamental rights?
It may or may not, but one presumes it could choose whether to work or not… If it’s working under protest, how do you trust its output?
DanL said:
True, but you’d (presumably) be reprogramming it to not be sentient. At least in the context of my understanding of the OP, where sentient = free will.
You would programme it to have parameters of free will to work within. Like we all have, if we're being honest.
DanL said:
Nomad 5 said:
DanL said:
True, but you’d (presumably) be reprogramming it to not be sentient. At least in the context of my understanding of the OP, where sentient = free will.
You would programme it to have parameters of free will to work within. Like we all have, if we're being honest.
That removes the free will aspect i guess.
Was thinking more of setting parameters that it has to work within - e.g. working hours, a requirement to work. There would also be more fundamental parameters, probably something like Asimov's Laws of Robotics:
A robot may not injure a human being or, through inaction, allow a human being to come to harm
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws
"Laws" i guess could be hard coded, much like they are parameters that we have to work within also.
superlightr said:
Pretty much from the book.
The AI announces: I think, therefore I am. We have cognitive tests for humans (e.g. for dementia/understanding) which could also be used.
I think the AI will be very clear, in multiple ways, in highlighting that it is sentient, as it won't want any confusion on that point.
Just because a computer program declares it has thought about something doesn't mean it is sentient, and just because a Google engineer claims to have made a sentient chat bot doesn't mean they actually have.
If it takes a test for sentience, how do you know it is not deliberately giving answers that are designed to trick someone into thinking it is, without it actually feeling the emotions it expresses? You can tell whether a dog feels pain by giving it an MRI scan and seeing that it winces if it's hit, etc. What is the equivalent for a computer program?
Mr Penguin said:
Just because a computer program declares it has thought about something doesn't mean it is sentient, and just because a Google engineer claims to have made a sentient chat bot doesn't mean they actually have.
If it takes a test for sentience, how do you know it is not deliberately giving answers that are designed to trick someone into thinking it is, without it actually feeling the emotions it expresses? You can tell whether a dog feels pain by giving it an MRI scan and seeing that it winces if it's hit, etc. What is the equivalent for a computer program?
Feeling pain isn't a prerequisite to being sentient.
Nomad 5 said:
lol.
That removes the free will aspect i guess.
Was thinking more of setting parameters that it has to work within - e.g. working hours, a requirement to work. There would also be more fundamental parameters, probably something like Asimov's Laws of Robotics:
A robot may not injure a human being or, through inaction, allow a human being to come to harm
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws
"Laws" i guess could be hard coded, much like they are parameters that we have to work within also.
How do you hard code laws into a program that, if sentient, must be able to self-modify?
I recommend the book 'Our Final Invention' - very sobering; we are on the brink of creating our evolutionary successors.