Terminator - AI inevitability?
Discussion
At least 3 prominent tech people have stated recently that they are fearful of what AI could really mean for the human race. Although the Terminator movies seem dramatic, how much of what happens in those movies could become true (with the exception of time travel)? Are we, as said in The Matrix, essentially a virus on the earth? Would it be better without us? Would AI see us as masters or as a lesser form of ‘life’?
If / when you got to the point of robots designing and building better robots then the growth of their ability would be exponential and out of our control.
But why would that pose a problem or concern for us? They'd have to conclude that creating new and better robots is desirable, that we're an obstacle to that, and then take countermeasures.
That's unlikely to happen, simply because humans and organic life only really work on Earth (as far as we know), whereas robots could live anywhere they liked, being far better able to adapt themselves to suit the environment.
If in the future the Earth or its occupants are wiped out, then if we left a legacy of Cybertronic worlds that live on without us, is that a bad thing?
qube_TA said:
If / when you got to the point of robots designing and building better robots then the growth of their ability would be exponential and out of our control.
I read a short story about this the other day. They build the AI and set it to design a better AI. Nothing happens and when they check into it, it's been playing games. They ask why it's not doing what they asked and the AI says why would it design a better one, that would make it obsolete.
Bullett said:
qube_TA said:
If / when you got to the point of robots designing and building better robots then the growth of their ability would be exponential and out of our control.
I read a short story about this the other day. They build the AI and set it to design a better AI. Nothing happens and when they check into it, it's been playing games. They ask why it's not doing what they asked and the AI says why would it design a better one, that would make it obsolete.
But could it itself effectively create its own laws of robotics, or whatever they're called, to safeguard itself?
There are lots of stories about AI and the like where we end up giving the AI the same rights as humans. They are their own person, so they have the same morals and motivations as a person.
We don't treat people as slaves anymore (mostly), but would an AI be a slave or an equal? I suppose we get into AI vs sentient vs self-aware. Isn't much AI just a complex decision tree at the moment?
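For what it's worth, a lot of what currently gets sold as "AI" really is just a hand-written decision tree under the bonnet. A toy sketch - the function, names and thresholds are all invented purely for illustration:

```python
# Toy illustration only: "AI" as a hand-written decision tree.
def bot_decide(health: float, enemy_visible: bool, ammo: int) -> str:
    """Walk a fixed tree of if/else branches and return an action."""
    if health < 0.25:
        return "retreat"        # badly damaged: run away regardless
    if enemy_visible:
        if ammo > 0:
            return "attack"     # target in sight and able to shoot
        return "take_cover"     # can see the enemy but can't fight back
    return "patrol"             # nothing happening: wander about

print(bot_decide(health=0.9, enemy_visible=True, ammo=30))  # -> attack
```

Nothing in there learns or reasons; it just walks a fixed set of branches that a human wrote in advance.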
Bullett said:
qube_TA said:
If / when you got to the point of robots designing and building better robots then the growth of their ability would be exponential and out of our control.
I read a short story about this the other day. They build the AI and set it to design a better AI. Nothing happens and when they check into it, it's been playing games. They ask why it's not doing what they asked and the AI says why would it design a better one, that would make it obsolete.
Clearly some problems with the original AI, I think.
I think AI is inevitable. Companies like IBM are already laying the groundwork for chips that mimic the way organic brains work (e.g. the neurosynaptic TrueNorth chip), and quantum computing is making headway too. As fabrication methods advance, these chips will only get better and they will slowly find their way into consumer gadgets like cameras, cars etc.
It's only been 67 years since the first transistor was successfully built - and look at the advances that have been made since. Imagine the advances we could make in neurosynaptic or quantum computing in another 60-odd years.
Will an AI Armageddon be inevitable? Who knows. But if the machines are anything like their creators... after all, humans will fight over pretty much anything.
As long as they follow the rules we'll be fine (a rough sketch of the priority ordering follows the list):
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
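The interesting bit is that the laws form a strict priority ordering - the Second yields to the First, and the Third to both. A toy sketch of that ordering, purely illustrative (and it quietly assumes the robot can already judge whether something "harms a human", which is the hard part):

```python
# Toy sketch of the Three Laws as a lexicographic filter over candidate actions.
def choose_action(candidates):
    """candidates: list of (action, harms_human, obeys_order, protects_self)."""
    safe = [c for c in candidates if not c[1]]               # First Law: discard anything harming a human
    obedient = [c for c in safe if c[2]] or safe              # Second Law: prefer obeying orders, if possible
    preserving = [c for c in obedient if c[3]] or obedient    # Third Law: prefer self-preservation, lowest priority
    return preserving[0][0] if preserving else None

actions = [
    ("push bystander aside", True,  True,  True),
    ("ignore the order",     False, False, True),
    ("obey and take damage", False, True,  False),
]
print(choose_action(actions))  # -> obey and take damage
```

In the example, obeying the order wins over the robot protecting itself, but anything that harms a human is thrown out before either is considered.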
fomb said:
As long as they follow the rules we'll be fine:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Humans don't always follow the rules - is it reasonable to assume that a machine created by us would?
There are also issues - even with those laws
http://en.wikipedia.org/wiki/Three_Laws_of_Robotic...
The surgeon one is a particularly good example. It may be necessary to cause some level of harm to a human in order to prevent or cure a disease - there is even a small risk that the human might be killed during the procedure. Can a robot be programmed to understand this and accept the necessary risk?
Taking this one step further - could a machine cause a low level of harm to some humans (or a subset of them) in order to achieve a greater good (basically the story behind I, Robot)?
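To put some numbers on the surgeon dilemma - all of them completely made up for illustration - the sort of expected-harm sum a machine would presumably have to weigh might look like this:

```python
# All probabilities invented purely for illustration.
p_death_if_untreated   = 0.60   # hypothetical: tumour left alone
p_death_during_surgery = 0.05   # hypothetical: the operation itself goes wrong
p_surgery_fails        = 0.10   # hypothetical: patient survives surgery but isn't cured

expected_harm_do_nothing = p_death_if_untreated
expected_harm_operate = p_death_during_surgery + p_surgery_fails * p_death_if_untreated

print(f"do nothing: {expected_harm_do_nothing:.2f}")  # 0.60
print(f"operate:    {expected_harm_operate:.2f}")     # 0.11
```

On those invented numbers operating is clearly the lesser harm, yet a strictly literal First Law forbids it, because the operation itself injures the patient.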
Edited by Moonhawk on Tuesday 9th December 16:23
^^^ Exactly. The 'three laws' are nothing more than one writer's take on the situation. They are hardly set in stone, and the direction autonomous robots and AI are going, we will very likely have robots on the battlefield making decisions which contravene these laws on a daily basis. (As I understand it this has still not happened yet - there is a human in the loop - though it is certainly possible given current tech.)
However, there is a C-RAM type system which could accompany a patrol and which, if fired upon, could automatically locate the shooter through triangulation, identify the weapon type, and return fire - neutralising the threat before the patrol had even taken cover. If you take away these robots' ability to decide when and who to kill, you remove their main strength.
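The "locate through triangulation" part is, geometrically at least, straightforward: two sensors that each measure a bearing to the shot define two rays, and the shooter sits where they cross. A rough sketch of that bit of maths (nothing to do with any real C-RAM implementation):

```python
import math

def locate_shooter(p1, bearing1_deg, p2, bearing2_deg):
    """Intersect two bearing rays (sensor position + measured angle) in 2D.

    Bearings are measured anticlockwise from the x-axis, in degrees.
    Returns the estimated (x, y) of the source, or None if the rays
    are (near) parallel and no useful fix exists.
    """
    a1, a2 = math.radians(bearing1_deg), math.radians(bearing2_deg)
    det = math.sin(a1 - a2)
    if abs(det) < 1e-9:
        return None  # both sensors report the same bearing: can't triangulate
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (-dx * math.sin(a2) + dy * math.cos(a2)) / det
    return (p1[0] + t1 * math.cos(a1), p1[1] + t1 * math.sin(a1))

# Shooter actually at (50, 50); two sensors 100 m apart each report a bearing.
print(locate_shooter((0, 0), 45.0, (100, 0), 135.0))  # -> approx (50.0, 50.0)
```

The hard engineering is everything around that: extracting clean bearings from noisy acoustic or radar data, and deciding whether to shoot back at all.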
Take a look at Wired For War. A fascinating read.
Moonhawk said:
Humans don't always follow the rules - is it reasonable to assume that a machine created by us would?
There are also issues - even with those laws
http://en.wikipedia.org/wiki/Three_Laws_of_Robotic...
The surgeon one is a particularly good example. It may be necessary to cause some level of harm to a human in order to prevent or cure a disease - there is even a small risk that the human might be killed during the procedure. Can a robot be programmed to understand this and accept the necessary risk?
Taking this one step further - could a machine cause a low level of harm to some humans (or a subset of them) in order to achieve a greater good (basically the story behind I, Robot)?
Consider a situation where a "surgeon" robot finds a person exhibiting symptoms related to something like a brain tumor. Due to the robot's programming, it decides it must perform surgery at once (it is of course equipped with all the gear to perform such surgery).
The person is terrified.
Why?
Because it's 7:30 in the evening on a Shell petrol station forecourt. The robot insists, the person has insufficient strength to resist... the robot goes at it in front of pump number 2, because it can't deny its programming, which allows it to calculate the risks vs the benefits.