Why is AI such an impossible goal?
Discussion
Lots of android/robot programmes around lately, and I wondered: why has AI been such an impossible goal? Every year some scientist predicts "AI in the next 20 years", but here we are... nothing!
I know we have limited 'intelligence' with Siri and Google, but nothing you could clearly communicate with. IBM's Watson was an amazing achievement, well beyond everything to date, but it still could not ask about its own existence.
I know chess was once thought to be that impossible task due to the complexity of the game, and it appears Go has now been done as well, but these are games with strict rules; such a system would never break out and become sentient.
Back in the 1950s and '60s we tried to build a brain, but is that really the way to do it? I'm not really talking about conscious life, more a realistic Turing test, I suppose.
I suppose if we knew the answer to this, the goal would already be achieved.
Greetings, Professor Falken.
A strange game. The only winning move is not to play. How about a nice game of chess?
Morningside said:
What/who is the latest best AI so far?
We're seeing advances in single-purpose AI for functions like voice recognition or game playing, which are edging closer to the possibility of general-purpose AI. For instance, the Go challenge was won by building a neural net, teaching it to recognise winning patterns used by human masters, and then having it play millions of games against other instances of itself and learn from those. Effectively, it was taught to play and then sent away to think about it.
https://www.wired.com/2016/03/sadness-beauty-watch...
That's a very different thing to the algorithmic, mechanical way that chess was won.
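As a rough illustration of that self-play idea, here is a toy sketch (my own, not AlphaGo: the game is Nim rather than Go, and a simple tabular value estimate stands in for the neural net). Two copies of the same agent play each other and learn from the results:

```python
import random

# Toy self-play learner for Nim: players alternately take 1-3 stones;
# whoever takes the last stone wins. The agent plays against itself
# and updates a value table from each finished game.
N_STONES = 21
ACTIONS = (1, 2, 3)

def train(episodes=20000, alpha=0.5, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {}  # (stones_remaining, action) -> estimated value for the mover

    def best(stones):
        moves = [a for a in ACTIONS if a <= stones]
        return max(moves, key=lambda a: q.get((stones, a), 0.0))

    for _ in range(episodes):
        stones, history = N_STONES, []
        while stones > 0:
            moves = [a for a in ACTIONS if a <= stones]
            # Mostly play the current best move, sometimes explore.
            a = rng.choice(moves) if rng.random() < epsilon else best(stones)
            history.append((stones, a))
            stones -= a
        # The player who took the last stone wins (+1); walking backwards
        # through the game, the sign flips each ply (the other player moved).
        reward = 1.0
        for move in reversed(history):
            q[move] = q.get(move, 0.0) + alpha * (reward - q.get(move, 0.0))
            reward = -reward
    return best

play = train()
print(play(3))  # with 3 stones left, the learned policy takes all of them and wins
```

The same loop structure - self-play, then learn from the outcome - is what scales up (with deep networks and tree search) to the Go result, which is why it feels so different from chess engines' brute-force search.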
You have to answer a different question first: what do you actually mean by AI? I think the Turing test is too one-dimensional and not that good a measure in today's world. To my mind, all it does is seek to show that AI is nothing more than replicating a human in a conversational machine.
I think there is also a real mix-up between AI and consciousness, two very distinct things in my mind, and I do not buy the "self-awareness" test.
I think you could make the case that some of the really good predictive and behavioural analytics display elements of AI: taking lots of data and concluding something that may not even be in the data. If you consider this AI, then there are many very sophisticated examples.
S
Edited by skeeterm5 on Monday 9th January 17:05
skeeterm5 said:
You have to answer a different question first - what do you actually mean by AI? I think that the Turing test is too one dimensional and not that good a measure in today's world. To my mind all that does is seek to show that AI is nothing more than replicating a human in a conversational machine.
I think that there is also a real mix up between AI and consciousness, two very distinct things in my mind and I do not buy the "self awareness" test.
I think you could make the case that some of the really good predictive and behavioural analytics display elements of AI: taking lots of data and concluding something that may not even be in the data. If you consider this AI, then there are many very sophisticated examples.
The problem with defining AI is that once you know how to do something artificially, it stops looking like a sign of intelligence. But what do you mean by consciousness?
Dr Jekyll said:
The problem with defining AI is that once you know how to do something artificially it stops looking like a sign of intelligence. But what do you mean by consciousness?
And that's the rub. We have yet to quantitatively define consciousness, so it's very difficult to tell whether a computer has it or not. That's especially true because we tend to frame consciousness as a human thing, and a machine consciousness might look very different. We are making steps, though: an Xbox with a Kinect fitted is probably more intelligent than a lot of single-celled life, and some of the best autonomous vehicles are approaching insect level.
Some good points raised here.
I was watching an ant-type thing crawling along the bathroom floor this morning and thought: does it have any idea where it is? Does it see me as a huge giant? Does it feel hunger or sorrow? Or is it just a simple chemical robot doing simple tasks that look conscious?
I mean, the creature has a very, very small brain with not many internal connections.
Morningside said:
Some good points raised here.
I was watching an ant-type thing crawling along the bathroom floor this morning and thought: does it have any idea where it is? Does it see me as a huge giant? Does it feel hunger or sorrow? Or is it just a simple chemical robot doing simple tasks that look conscious?
Ahem...
https://www.youtube.com/watch?v=ZLZW8Deq8vE
anonymous said:
[redacted]
But it's based on a very anthropocentric idea of intelligence. We are the most intelligent thing we know of, so we test for intelligence by the ability to pass as one of us, but there could be naturally occurring intelligences in the universe far exceeding our own that could not pass as human by making small talk. We could conceivably create a true AI that doesn't pass the Turing test; for one thing, it might choose not to! Or it might, particularly if it has been created by bootstrapping less sophisticated AI, appear to be entirely inscrutable.
otolith said:
We could conceivably create a true AI which doesn't pass the Turing test - for one thing, it might choose not to! Or it might, particularly if it has been created by bootstrapping less sophisticated AI, appear to be entirely inscrutable.
It could get a headache trying to think down to our level.
Google's been doing cool stuff with their neural networks:
Self-learning machines creating their own encryption
published paper:
https://arxiv.org/abs/1610.06918
Random Article:
http://arstechnica.co.uk/information-technology/20...
I feel they should try again, but give it unlimited access to a couple of supercomputers and the D-Wave ( http://www.dwavesys.com/ )
anonymous said:
[redacted]
By your definition a lot of humans wouldn't pass the test, and nor would any non-human intelligence. That is the fundamental issue with the test: it doesn't prove intelligence at all, it proves an ability to fool somebody into thinking they are talking to a human. How does that demonstrate intelligence?
If you had the desire, you could create a really deep relational database with good logic trees to spoof "intelligent" conversation. I think a better measure is the ability to create unique solutions to problems based on varied data sets, which then makes it useful.
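For what it's worth, that "spoofed conversation" trick is decades old: ELIZA did it in the 1960s with nothing more than pattern matching and canned replies. A minimal sketch of the idea (the rules and wording here are made up for illustration):

```python
import re

# A tiny ELIZA-style responder: a lookup of patterns and canned reply
# templates can feel superficially conversational without any
# understanding at all.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\byou\b", re.I),       "We were talking about you, not me."),
]
FALLBACK = "Tell me more."

def reply(text):
    # Return the canned response for the first matching pattern.
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(*m.groups())
    return FALLBACK

print(reply("I feel lost"))       # prints "Why do you feel lost?"
print(reply("what about you?"))   # prints "We were talking about you, not me."
```

It fools people surprisingly often, which is exactly the objection to the Turing test above: passing it measures mimicry, not intelligence.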
Gassing Station | Science!