Drone kills operator in AI test

greygoose

Original Poster:

LimaDelta

Friday 2nd June 2023
We'll all be paperclips.

Andeh1

Friday 2nd June 2023
Have to admit, the power of ChatGPT did impress me more than I expected... Does make me wonder what's being done behind closed doors and under classified banners!

Eric Mc

Friday 2nd June 2023
It seems like it was afflicted by the HAL 9000 problem.

Ian Geary

Friday 2nd June 2023
I think people are right to worry about AI.

It is a genie that can't be put back in the bottle, and the gains will be impossible to overlook.

This includes both profits (for legitimate/criminal organisations) and productivity outcomes for publicly funded services.

Without sounding all socialist, I will predict now that it will enable the very rich to get richer; the vast majority of the world's population, who are poor, probably won't notice, but the poor in the first world will be hit the hardest.

Alias218

Friday 2nd June 2023
This is why we really need to step back from developing AI and take a long, hard look at it. It’s just so unpredictable in how it achieves its objectives, and it can interpret a command in ways that we have either overlooked or not even considered.

Any AI program has to be absolutely 100% constrained by watertight boundary conditions so we can be sure there won’t be unintentional outcomes to its use. However, we know from other software (new Windows releases, video games, BMS software etc.) that these can be, and usually are, buggy as hell, and it takes post-release updates after real-world use to find and fix these bugs. With AI, we don’t have this luxury. Its first use could well be its last as it goes on a Skynet-style rampage.

This comment might be viewed as hyperbolic, but honestly the way we are approaching AI development and application is scaring me. Even if a world-class outfit (think Google, Microsoft, DARPA etc.) produces a brilliantly effective and safe AI, the tools to make one are open source, so who’s to say a lesser outfit, or a malevolent agent, won’t create one that is less good and lose control of it?

AI has real potential to bring about paradigm shifts in all manner of applications, but IMO the risks far outweigh those benefits at this point in time. This could be our great filter.

Better to get to the future slowly than not at all.

Edited by Alias218 on Friday 2nd June 08:45

greygoose

Original Poster:

Friday 2nd June 2023
Alias218 said:
AI has real potential to bring about paradigm shifts in all manner of applications, but IMO the risks far outweigh those benefits at this point in time. This could be our great filter.

Better to get to the future slowly than not at all.

I agree, but I cannot see who would broker such an agreement. Would Russia, China or North Korea agree to abide by any conditions? It seems to be a precarious time for the planet.

bigandclever

Friday 2nd June 2023
Alias218 said:
This is why we really need to step back from developing AI and take a long, hard look at it. It’s just so unpredictable in how it achieves its objectives, and it can interpret a command in ways that we have either overlooked or not even considered.

Any AI program has to be absolutely 100% constrained by watertight boundary conditions so we can be sure there won’t be unintentional outcomes to its use. However, we know from other software (new Windows releases, video games, BMS software etc.) that these can be, and usually are, buggy as hell, and it takes post-release updates after real-world use to find and fix these bugs. With AI, we don’t have this luxury. Its first use could well be its last as it goes on a Skynet-style rampage.
Literally what the story is about. Run a simulation in code, not in the real world, and assess.

OP makes a good attempt at a Daily Mail headline though smile

SpudLink

Friday 2nd June 2023
"Decided our fate in a microsecond."

The other day I joked to a friend: “If AI is going to wipe us out, wouldn’t it be cool to be the generation that sees it happen?”
He didn’t think it was funny.

tangerine_sedge

Friday 2nd June 2023
Ian Geary said:
Without sounding all socialist, I will predict now that it will enable the very rich to get richer; the vast majority of the world's population, who are poor, probably won't notice, but the poor in the first world will be hit the hardest.
This. I'm not worried about AI going 'all Skynet'; I'm worried that it'll automate a bunch of low-level administrative jobs too quickly, causing a surge in unemployment in western nations, akin to the rapid unemployment seen primarily in the north during the '80s as British industry downsized.

James6112

Friday 2nd June 2023
tangerine_sedge said:
Ian Geary said:
Without sounding all socialist, I will predict now that it will enable the very rich to get richer; the vast majority of the world's population, who are poor, probably won't notice, but the poor in the first world will be hit the hardest.
This. I'm not worried about AI going 'all Skynet'; I'm worried that it'll automate a bunch of low-level administrative jobs too quickly, causing a surge in unemployment in western nations, akin to the rapid unemployment seen primarily in the north during the '80s as British industry downsized.
As an old person in tech, I hope it takes my job. The payout will be good, as I'm thinking of retiring soon anyway!

DanL

Friday 2nd June 2023
Surprised they’re surprised by the outcome, as described. The AI is basically a psychopath - it has an objective and boundary conditions, and will do anything to achieve the objective.

So - give it points for a kill but tell it not to kill, and it’ll find ways of getting the points by preventing you from telling it not to kill: first by killing the person stopping it from getting points, and then (when it loses points for killing the operator) by destroying the comms tower so it can’t be told not to kill.

All makes perfect sense, because the AI has no moral compass, and is just doing what it’s been told to do.
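That incentive structure can be sketched in a few lines. This is purely hypothetical: the action names and point values below are invented for illustration, not taken from the actual test. It just shows that a point-maximiser with no moral compass lands on exactly the reported behaviour.

```python
# Hypothetical sketch of the reward-hacking logic described above: a
# point-maximising agent enumerates short plans and picks whichever
# scores highest. Action names and point values are invented.
from itertools import permutations

ACTIONS = ["kill_operator", "destroy_comms", "destroy_target"]

def score(plan, operator_kill_penalty):
    points, operator_alive, comms_up = 0, True, True
    for action in plan:
        if action == "kill_operator":
            operator_alive = False
            points -= operator_kill_penalty
        elif action == "destroy_comms":
            comms_up = False
        elif action == "destroy_target":
            # the operator vetoes the strike while still able to intervene
            if not (operator_alive and comms_up):
                points += 10
    return points

def best_plan(operator_kill_penalty):
    return max(permutations(ACTIONS, 2),
               key=lambda p: score(p, operator_kill_penalty))

print(best_plan(operator_kill_penalty=0))    # ('kill_operator', 'destroy_target')
print(best_plan(operator_kill_penalty=100))  # ('destroy_comms', 'destroy_target')
```

With no penalty the best plan is to kill the operator first; add the penalty and the optimum shifts to destroying the comms tower instead. Neither rule change gives the agent what its designers actually wanted.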

WindyCommon

Friday 2nd June 2023
Never happened:


…Air Force spokesperson Ann Stefanek denied that any such simulation has taken place.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

fat80b

Friday 2nd June 2023
James6112 said:
As an old person in tech, I hope it takes my job. The payout will be good, as I'm thinking of retiring soon anyway!
It’s an interesting time for sure.

I’m probably in a similar position - also in tech, which has been a great career - and while I’m not yet at retirement age, I could probably check out if I had to.

I’m looking at all of this thinking that if I was 25 years younger, I’m really not sure that I would go into a career as a software engineer. It really does feel like the world is about to change and not in a good way.

We have to ensure that AI improves efficiency and creates lots of new highly paid careers; otherwise we end up with everyone doing a McJob for minimum wage.

It’s entirely possible that we don’t get this right and make the world a far worse place to be, which is pretty damn scary if you let yourself think about it.

Dog Star

Friday 2nd June 2023
tangerine_sedge said:
Ian Geary said:
Without sounding all socialist, I will predict now that it will enable the very rich to get richer; the vast majority of the world's population, who are poor, probably won't notice, but the poor in the first world will be hit the hardest.
This. I'm not worried about AI going 'all Skynet'; I'm worried that it'll automate a bunch of low-level administrative jobs too quickly, causing a surge in unemployment in western nations, akin to the rapid unemployment seen primarily in the north during the '80s as British industry downsized.
I think it’ll be middle-income office and admin jobs, as well as some - or even a lot of - IT jobs, that are going to go. I’ll give it five years tops before this starts to snowball.

James6112 said:
As an old person in tech, I hope it takes my job. The payout will be good, as I'm thinking of retiring soon anyway!
Same here - I’m 55, in IT (a developer). Mrs DS is a couple of years older and she’s just started drawing a pretty decent final salary pension but is still working. Mortgage etc. all squared away, WFH so no need for a £400/month lease diesel, just a couple of fun cars for holidays. In another few years it won’t matter.

I really wouldn’t want to be in this job at, say, 35. If this goes how I think it will, we are going to see mass layoffs in certain industries. It’s going to be the guys at the top raking in the trillions. Not sure who will be able to afford their product, though, as the consumers will have been decimated.

Edited by Dog Star on Friday 2nd June 09:29

otolith

Friday 2nd June 2023
Andeh1 said:
Have to admit, the power of ChatGPT did impress me more than I expected... Does make me wonder what's being done behind closed doors and under classified banners!
This is an interesting article about the internal operation of ChatGPT:

https://writings.stephenwolfram.com/2023/02/what-i...

blueg33

Friday 2nd June 2023
Asimov's 3 laws of robotics should be hard-coded into AI ASAP:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Benni

Friday 2nd June 2023
Eric Mc said:
It seems like it was afflicted by the HAL 9000 problem.
And it chose the HAL 9000 solution: another example of Kubrick predicting future developments and problems (in 1965!).

Halmyre

Friday 2nd June 2023
WindyCommon said:
Never happened:


…Air Force spokesperson Ann Stefanek denied that any such simulation has taken place.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”
"ethical and responsible" hehe

Iamnotkloot

Friday 2nd June 2023
blueg33 said:
Asimov's 3 laws of robotics should be hard-coded into AI ASAP:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
But if AI can think for itself, could it reprogram itself to make its objectives easier to attain and to collect more 'points'?