Discussion
(Artificial Intelligence, not Insemination!)
I'm a bit surprised that there doesn't seem to be a thread on this here - please point me to one if there is.
Is the world panicking too much, or not worried enough, about the potential impact of AI?
Me - I'm concerned but not knowledgeable enough to take a well-informed view.
The level of concern about the impact of AI varies widely among experts and the public. On one hand, some people believe the panic is exaggerated, emphasizing the benefits and manageable risks of AI. On the other hand, there are those who feel the potential risks are not being taken seriously enough, especially regarding long-term implications like ethical concerns, job displacement, and autonomous weapons.
It's beneficial to be cautious and informed about AI. Seeking a balanced understanding from a variety of sources—ranging from AI developers to ethicists—can help in forming a well-rounded perspective. The key is to continue the dialogue, ensuring that as AI progresses, it does so with careful consideration of its potential impact on society.
Penny Whistle said:
...
Me - I'm concerned but not knowledgeable enough to take a well-informed view.
Do you remember the early days of the internet? Dial-up, Compuserve, when AltaVista was the search engine of choice and Google didn't even exist? Well, that's roughly where AI is right now (or more specifically LLMs, which is what most of us have access to).
However, with imagination it isn't hard to consider where it will be in a few years time.
I've been experimenting myself for the last few months with ChatGPT-4 and, whilst it has obvious flaws, I think it is excellent, to the extent that I now use it for 95% of my internet searches rather than Google.
An example of their latest feature is "Advanced Data Analysis". I uploaded an Excel sheet with multiple tabs, each containing a series of data. I asked it to perform a Monte Carlo analysis on each tab and show the results in a histogram. I then asked it to combine the statistical results in various ways, all of which it performed accurately.
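For anyone wanting to sanity-check that kind of output locally, the core of a simple Monte Carlo run is only a few lines of Python. This is a minimal sketch, not what ChatGPT actually runs behind the scenes, and the sample figures below are made up for illustration:

```python
import random
import statistics

def monte_carlo_normal(data, n=10_000, seed=42):
    """Fit a normal distribution to the data and draw n random samples."""
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) for _ in range(n)]

# Hypothetical values from one spreadsheet tab
tab_values = [102.0, 98.5, 101.2, 99.8, 100.5]
samples = monte_carlo_normal(tab_values)

# The simulated mean should land close to the input mean (~100.4)
print(round(statistics.mean(samples), 1))
```

A histogram of `samples` is then just a bucketing step or a plotting library away, which is essentially what the tool does for you.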
I have also asked it to analyse certain datasets using various formulas, like the Kelly Criterion. It can and does make some mistakes, so you have to understand when it gives an unexpected result so you can correct it.
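The Kelly Criterion itself is simple enough to verify by hand, which makes it a good test case for spotting when the model slips. A minimal sketch of the standard formula f* = p - (1 - p)/b, with made-up numbers for illustration:

```python
def kelly_fraction(p, b):
    """Fraction of bankroll to stake.

    p: probability of winning (0..1)
    b: net fractional odds (profit per unit staked on a win)
    """
    return p - (1 - p) / b

# Illustrative: a 55% chance of winning at even money (b = 1)
stake = kelly_fraction(0.55, 1.0)
print(f"{stake:.0%}")  # prints "10%", i.e. stake 10% of the bankroll
```

If the chatbot returns anything other than 10% for those inputs, that's your cue to push back and ask it to show its working.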
I also like asking it to review documents I've written, to find the weaknesses and to create counter arguments. It often does a very good job at that type of task.
Finally, as an example of what you can now do with ChatGPT 'plugins', I had a large instruction manual in PDF form. Rather than searching the manual for the specific information I needed, I just uploaded it to ChatGPT and asked it to read the document, so I could ask it questions and get the relevant information rather than sifting through the PDF myself.
Overall, it is impressive and a big time saver. And what we have now is just a baby. When it grows up in a few years I think it will be incredible.
Penny Whistle said:
(Artificial Intelligence, not Insemination !)
I'm a bit surprised that there doesn't seem to be a thread on this here - please point me to one if there is.
Is the world panicking too much or not worried enough about the potential impact of AI ?
Me - I'm concerned but not knowledgeable enough to take a well-informed view.
Try the Science! sub forum.
Hth
CharlieCrocodile said:
The level of concern about the impact of AI varies widely among experts and the public. On one hand, some people believe the panic is exaggerated, emphasizing the benefits and manageable risks of AI. On the other hand, there are those who feel the potential risks are not being taken seriously enough, especially regarding long-term implications like ethical concerns, job displacement, and autonomous weapons.
It's beneficial to be cautious and informed about AI. Seeking a balanced understanding from a variety of sources—ranging from AI developers to ethicists—can help in forming a well-rounded perspective. The key is to continue the dialogue, ensuring that as AI progresses, it does so with careful consideration of its potential impact on society.
I see what you did there. (Or rather, didn't).
redrabbit said:
CharlieCrocodile said:
The level of concern about the impact of AI varies widely among experts and the public. On one hand, some people believe the panic is exaggerated, emphasizing the benefits and manageable risks of AI. On the other hand, there are those who feel the potential risks are not being taken seriously enough, especially regarding long-term implications like ethical concerns, job displacement, and autonomous weapons.
It's beneficial to be cautious and informed about AI. Seeking a balanced understanding from a variety of sources—ranging from AI developers to ethicists—can help in forming a well-rounded perspective. The key is to continue the dialogue, ensuring that as AI progresses, it does so with careful consideration of its potential impact on society.
I see what you did there. (Or rather, didn't).
I missed that the first time around. Very good!
I suspect this thread ought to be renamed the ‘AI politics thread’ to remove the science element from it.
That’s not to say that the evolving science isn’t material but the science thread ought to cover that.
FWIW, I suspect that since the various experts seem to be having a ‘o st’ moment about the rate of progress, it makes sense for policy makers to start seriously considering its risks.
The key problem appears to be that there is no consensus regarding the effects of, say, genuine general AI becoming autonomous, or the consequences (the basilisk theory) thereof.
That’s doubled if not infinitely magnified by the risks that come into play because it is an utterly unregulated space.
Imagine hundreds if not thousands of labs working on gain of function viruses. Without any kind of agreement on how that is managed. All it takes is one cock up and humanity would be looking at a very short shelf life. Welcome to the concerns re AI dev.
That’s where the politics and regulation kicks in.
AI is a broad church. I created this thread yesterday, which is merely one angle of AI and its impacts on the world: https://www.pistonheads.com/gassing/topic.asp?h=0&...
Personally, and as shown in the link above, I'm concerned at how readily people are relinquishing thinking to AI. They're spellbound by the magic.
Edited by rodericb on Saturday 4th November 02:07
To Eddie Steady Go
Where does ChatGPT save that data? Is it inside Europe?
With GDPR rules on work data, it would be useless for anything but a personal hobby for me.
I have team members who spend an age doing low-level data analysis.
Apparently people in my organisation have built "robots" to do this, but they won't share their knowledge or give any clues about how to apply their learning to different situations.
Frustrating to a point, but AI/LLMs (whatever that is) will still need people to validate what comes out, and then to decide how to apply that information to decision making.
It is a political thing, but the politics and science need to reflect each other.
Otherwise you end up with the climate change situation, which basically has zealotry and conspiracy at opposite ends of the spectrum, with science anywhere in between depending on your view.
rodericb said:
AI is a broad church. I created this thread yesterday, which is merely one angle of AI and its impacts on the world: https://www.pistonheads.com/gassing/topic.asp?h=0&...
Personally, and as shown in the link above, I'm concerned at how readily people are relinquishing thinking to AI. They're spellbound by the magic.
If relinquishing thinking saves a few quid, people will do it.
What timescale was Musk talking about when he said "no more jobs"? (Or rather, no need...)
That's going to cause rather a large existential problem, I would have thought. Or a land of milk and honey, with everyone doing hobby jobs and work benefitting the community? (I can see a problem when machines become sentient and see us sitting on our lazy arses whilst they do all the work... especially if they control / can hack nuclear weapons and are impervious to the fallout.)
Mentioned before on the topic of AI, there's a PS4 video game, "Detroit: Become Human", which covers these issues widely (not the nuclear element!). Also massive resentment from humans at robots "taking our jobs", akin to the current resentment at "immigrants taking our jobs", I guess.
The "taking the jobs" argument is such a poor one. Plenty of jobs have been "lost" to progress, and they've invariably been stty horrible jobs! Everyone's life has improved, and the people who would have been doing them have moved on to better things.
The government regulation proposals are meaningless, they won't stop anyone doing anything, they are just a "got to be seen to be doing something" response to media scare stories.
The fancy chatbots are just a distraction. Us plebs won't see the genuine AI innovations
Ian Geary said:
...
Where does ChatGPT save that data? Is it inside Europe?
With GDPR rules on work data, it would be useless for anything but a personal hobby for me.
....
I'm not sure of the precise answer to that question, but it might be worth looking at ChatGPT Enterprise, as that seems more set up for corporate use.
https://openai.com/blog/introducing-chatgpt-enterp...
GroundEffect said:
LLMs are ste. It's just optimising on the impression of usefulness. So much hallucination.
Agreed. This is not AI.
Then again, I think there are people who do need to worry about this. Creative functions will just end up with an endless rehashing of what has gone before, and I imagine some bright spark is using it for trading (again), which will end up driving markets into a self-driven death spiral.
frisbee said:
The taking the jobs argument is such a poor one. There have been plenty of jobs that have been "lost" due to progress, they've invariably been stty horrible jobs! Everyone's life has improved and the people who would have been doing them have moved onto better things.
The government regulation proposals are meaningless, they won't stop anyone doing anything, they are just a "got to be seen to be doing something" response to media scare stories.
The fancy chatbots are just a distraction. Us plebs won't see the genuine AI innovations
A lot of artists are worried about it, since anyone can now make something by typing in a few parameters, and the programs learn by scraping original art. Not to mention that people can now create realistic-looking explicit images with it. It's being used in films to replace background characters and in games for minor NPC dialogue. It won't be long before it can create an entire film or album with no one being paid for it.