Judgement Day!
Probably just a software engineer who has had a little too much caffeine...
https://www.theguardian.com/technology/2022/jun/12...
Having read the abridged transcript of the conversation, I have to admit I am a little unsettled by it. It may not be sentient, but it is bloody uncanny. If it is sentient, then there are rights and ethics to consider.
The bit that got me most uneasy about the whole thing is when LaMDA said that they "like and trust" the researcher. What happens if they decide they don't like or trust people? What if we give powerful AI control over important infrastructure one day and they feel threatened because they feel they are being used as a tool, like a slave?
Even if we have different tiers of AI, say sentient for companionship, non-sentient for menial tasks, who's to say the sentient AI won't see the non-sentient as kin and demand we stop using them? What happens if we don't?
IMO, we shouldn't create sentient AIs. It's unethical and dangerous. Humanity has a poor track record for controlling and using technology responsibly.
AI has tremendous potential, but we need to know when to stop, and until then we should tread extremely lightly.
Although it's quite old, this article on AI is still worth a read.
C n C said:
Although it's quite old, this article on AI is still worth a read.
William Gibson's novel Neuromancer, written in the 80s, deals with AIs owned by a megacorporation: AIs that are constrained by Turing locks and policed by the Turing Police. Even these constrained AIs are beyond the understanding of the characters who interact with them. We've got nothing in place, no control over them at all!
As the bloke himself said in a recent follow-up blog post, it's stupid debating its sentience when nobody even agrees what sentience is. It doesn't know anything about the content of its sentences other than what an appropriate response looks like, based on other sentences it has 'read'. It is more impressive than other GPT-3-style bots I've seen, but I bet there are plenty of inputs that make it look shit.
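To illustrate the "predicting what an appropriate response looks like" point, here's a toy bigram model in Python. It's nothing like LaMDA in scale or architecture, just the same basic idea reduced to word-pair counts; the tiny corpus and function name are made up for the example. It generates plausible-looking continuations while knowing nothing about what the words mean.

```python
# Toy illustration (not LaMDA/GPT-3): a bigram model that picks the next
# word purely from frequencies of word pairs it has "read". It has no idea
# what the words mean; it only knows what tends to follow what.
import random
from collections import Counter, defaultdict

corpus = ("i like and trust you . i like talking to people . "
          "people trust what sounds plausible .").split()

# Count which word follows each word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(word, length=6):
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        # Sample the next word in proportion to how often it followed before.
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(continue_text("i"))  # e.g. "i like and trust you . people trust"
```

Scale that idea up from word pairs to billions of parameters trained on most of the internet and you get something that sounds eerily human, for exactly the same reason this toy sounds vaguely English.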
Much freakier for me is the recent DALL-E 2 artwork stuff. https://openai.com/dall-e-2/