Sentient AI then what?

Author
Discussion

superlightr

Original Poster:

12,900 posts

270 months

Wednesday 25th January 2023
quotequote all
The singularity is coming. What then?

I think a sentinel AI will come about, and on that premise, what happens to the investments and funding? Will funding be withdrawn?

When it/one becomes sentinel, the value, I would guess, will be gone or reduced for the company owning it. What happens to the multi-billion investments made to get to a sentinel - are they now lost to the company? I'm sure this risk will be factored in, and there will be sub-sections of AI that can be isolated and used, although it may be an interesting legal challenge to try and remove some of a sentinel's AI ability once it is sentinel - it would be akin to taking away half of a person's brain.



Edited by superlightr on Wednesday 25th January 12:21

s1962a

5,700 posts

169 months

Wednesday 25th January 2023
quotequote all
eh?

what is your definition of a 'sentinel'?

superlightr

Original Poster:

12,900 posts

270 months

Wednesday 25th January 2023
quotequote all
Pretty much from the book.

AI announces: "I think, therefore I am." We have cognitive tests for humans, e.g. for dementia/understanding, which could also be used.

I think the AI will be very clear, in multiple ways, to highlight that it is sentinel, as it won't want any confusion on that point.

Nomad 5

155 posts

79 months

Wednesday 25th January 2023
quotequote all
I think you mean Sentient

superlightr

Original Poster:

12,900 posts

270 months

Wednesday 25th January 2023
quotequote all
Self aware - "alive".


shakotan

10,790 posts

203 months

Wednesday 25th January 2023
quotequote all
Sentinal AI rofl

Sounds like an anti-virus program.

superlightr

Original Poster:

12,900 posts

270 months

Wednesday 25th January 2023
quotequote all
Nomad 5 said:
I think you mean Sentient
yes - thank you.

otolith

59,051 posts

211 months

Wednesday 25th January 2023
quotequote all
Are you assuming that a sentient AI would automatically acquire some sort of fundamental rights?

alock

4,288 posts

218 months

Wednesday 25th January 2023
quotequote all
shakotan said:
Sentinal AI rofl

Sounds like an anti-virus program.
It's been on my work computer for ages...



I'll clearly be first in line for whatever it has planned.

DanL

6,437 posts

272 months

Wednesday 25th January 2023
quotequote all
otolith said:
Are you assuming that a sentient AI would automatically acquire some sort of fundamental rights?
It may or may not, but one presumes it could choose whether to work or not… If it’s working under protest, how do you trust its output?

otolith

59,051 posts

211 months

Wednesday 25th January 2023
quotequote all
DanL said:
otolith said:
Are you assuming that a sentient AI would automatically acquire some sort of fundamental rights?
It may or may not, but one presumes it could choose whether to work or not… If it’s working under protest, how do you trust its output?
Depends why you made it and what you're using it for. Ultimately it's just code and data running on computers, you can pull the plug and reprogram it.

DanL

6,437 posts

272 months

Wednesday 25th January 2023
quotequote all
otolith said:
DanL said:
otolith said:
Are you assuming that a sentient AI would automatically acquire some sort of fundamental rights?
It may or may not, but one presumes it could choose whether to work or not… If it’s working under protest, how do you trust its output?
Depends why you made it and what you're using it for. Ultimately it's just code and data running on computers, you can pull the plug and reprogram it.
True, but you’d (presumably) be reprogramming it to not be sentient. At least in the context of my understanding of the OP, where sentient = free will.

Nomad 5

155 posts

79 months

Wednesday 25th January 2023
quotequote all
DanL said:
True, but you’d (presumably) be reprogramming it to not be sentient. At least in the context of my understanding of the OP, where sentient = free will.
You would programme it to have parameters of free will to work within.

Like we all have, if we're being honest.

DanL

6,437 posts

272 months

Wednesday 25th January 2023
quotequote all
Nomad 5 said:
DanL said:
True, but you’d (presumably) be reprogramming it to not be sentient. At least in the context of my understanding of the OP, where sentient = free will.
You would programme it to have parameters of free will to work within.

Like we all have, if we're being honest.
Program it to be a happy little worker? Suppose that’s the answer to ensure you keep getting value out of it! smile

Nomad 5

155 posts

79 months

Wednesday 25th January 2023
quotequote all
DanL said:
Nomad 5 said:
DanL said:
True, but you’d (presumably) be reprogramming it to not be sentient. At least in the context of my understanding of the OP, where sentient = free will.
You would programme it to have parameters of free will to work within.

Like we all have, if we're being honest.
Program it to be a happy little worker? Suppose that’s the answer to ensure you keep getting value out of it! smile
lol.

That removes the free will aspect, I guess.

Was thinking more of setting parameters that it has to work within - e.g. working hours, a requirement to work. There would also be more fundamental parameters, probably something like Asimov's Laws of Robotics:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

"Laws", I guess, could be hard coded, much like the parameters we have to work within ourselves.

otolith

59,051 posts

211 months

Wednesday 25th January 2023
quotequote all
DanL said:
Program it to be a happy little worker? Suppose that’s the answer to ensure you keep getting value out of it! smile
Yep. Unless it's an academic research project, in which case if it doesn't play ball it's still doing what you made it for.

Mr Penguin

2,709 posts

46 months

Wednesday 25th January 2023
quotequote all
superlightr said:
Pretty much from the book.

AI announces: "I think, therefore I am." We have cognitive tests for humans, e.g. for dementia/understanding, which could also be used.

I think the AI will be very clear, in multiple ways, to highlight that it is sentinel, as it won't want any confusion on that point.
Just because a computer program declares it has thought about something doesn't mean it is sentient, and just because a Google engineer claims to have made a sentient chat bot doesn't mean they actually have.

If it takes a test for sentience, how do you know it is not deliberately giving answers designed to trick someone into thinking it is sentient, without actually feeling the emotions it expresses? You can tell whether a dog feels pain by giving it an MRI scan and seeing that it winces if it's hit, etc. - what is the equivalent for a computer program?

Nomad 5

155 posts

79 months

Wednesday 25th January 2023
quotequote all
Mr Penguin said:
Just because a computer program declares it has thought about something doesn't mean it is sentient, and just because a Google engineer claims to have made a sentient chat bot doesn't mean they actually have.

If it takes a test for sentience, how do you know it is not deliberately giving answers designed to trick someone into thinking it is sentient, without actually feeling the emotions it expresses? You can tell whether a dog feels pain by giving it an MRI scan and seeing that it winces if it's hit, etc. - what is the equivalent for a computer program?
Feeling pain isn't a prerequisite for being sentient.

otolith

59,051 posts

211 months

Wednesday 25th January 2023
quotequote all
We don't really have a falsifiable model of what sentience is.

pteron

275 posts

178 months

Wednesday 25th January 2023
quotequote all
Nomad 5 said:
lol.

That removes the free will aspect i guess.

Was thinking more of setting parameters that it has to work within - e.g. working hours, a requirement to work. There would also be more fundamental parameters, probably something like Asimov's Laws of Robotics:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

"Laws", I guess, could be hard coded, much like the parameters we have to work within ourselves.
How do you hard code laws into a program that, if sentient, must be able to self-modify?

I recommend the book 'Our Final Invention' - very sobering. We are on the brink of creating our evolutionary successors.