Hypothetical - "taking advantage" of AI road users
Discussion
This is a bit of a fun one. Let's imagine that we're in a world where you can reliably identify at least some vehicles as being driven by AI. Let's say there are certain models without human control.
I'd say that in these circumstances the traditional adage of driving so that you don't cause another road user to alter speed or course no longer applies. Instead, a more appropriate adage is probably something like "don't cause the passengers of the AI road user to be subject to unusual acceleration or jerk - and if it's a goods vehicle, do anything you like". For example, if an AI vehicle is approaching in lane 1 of an empty motorway which you are joining, you should be fine to cut it up so long as it wouldn't have to swerve or brake hard - you should be able to rely on it reacting appropriately far more than on a human driver (presumably by moving into lane 2, though mild braking to slot in behind you would also be fine). And if it's a lorry, since there are no humans to be inconvenienced, then sod it, pull out right in front of it as long as it can physically avoid the crash. Why not?
Another example might be a large roundabout. Don't give way to the AI car so long as it doesn't have to slow down much. If it's an AI HGV, then it's even fine to make it emergency brake.
What other theoretical techniques do you think a human could use to take advantage of AI cars' almost perfect predictability and reliability?
And would this be advanced driving, or being a dick?
Edited by Somewhatfoolish on Thursday 23 November 23:13
This is definitely being a dick. Causing a change of speed is costing somebody time/money. OK, maybe not much, but it's still not good.
You're then reliant on another road user changing their actions. With the HGV on the roundabout, yes, you would get out. However, humans + roundabouts: there's so much going on that people often don't look ahead. That's why people shunt others at roundabouts, having expected them to move. I don't know about you, but the guy ahead being slow strikes me as more likely than somebody messing with an HGV in a way that forces it to anchor on. I've been forced to e-brake on a roundabout before. I ALWAYS hit my horn in such situations - better that than the person behind not noticing that I've stopped.
The only semi-acceptable case is the slip road, but remember that HGVs stop more slowly than cars. If you anchor on (say somebody in a Porsche messes up undertaking you on the slip), then the HGV, AI or no AI, may not be able to stop.
And all bets are off if the AI is running on Windows.
Somewhatfoolish said:
What other theoretical techniques do you think a human could use to take advantage of AI cars' almost perfect predictability and reliability?
And would this be advanced driving, or being a dick?
You raise one of the most interesting issues with regard to autonomous vehicles. An issue that neither the manufacturers nor politicians seem prepared to acknowledge, let alone discuss.
Isaac Asimov's First Law of Robotics states:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
The introduction of autonomous vehicles will completely change the dynamic between pedestrians and vehicles. Pedestrians will feel safe in walking out into the road in front of autonomous vehicles, provided there is sufficient stopping distance, in the knowledge that the vehicle will automatically slow or come to a halt in order to comply with the First Law.
This will cause traffic to slow to a crawl, or be at a standstill for lengthy periods, in locations where there are many pedestrians wishing to cross the road.
And what about all the naughty little ten-year-old boys who will invent a new version of "chicken" by identifying every approaching autonomous vehicle, walking out into the road in front of it to bring it to a halt, and then running away when the irate passengers realise what's going on?
The capacity for pedestrian interference in the transit of autonomous vehicles would appear to be unbounded. Which may be a good thing, or a bad thing, depending upon whether you are a pedestrian or a passenger.
johnao said:
You raise one of the most interesting issues with regard to autonomous vehicles. An issue that neither the manufacturers nor politicians seem prepared to acknowledge let alone discuss.
Isaac Asimov's First Law of Robotics states:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
You do realise he was just a sci-fi writer, right? These are not actual laws, other than within the context of his books.
Many robotic applications will exist solely to contradict rule 1. They will be by no means universal, though I do concede that in the case of autonomous vehicles something along those lines would be beneficial.
Causing an autonomous HGV to emergency stop while negotiating a roundabout may result in it choosing not to brake to avoid colliding with you, in order to prevent a much larger accident behind it as it jackknifes across three lanes of traffic.
If this was the case I expect you would be charged accordingly, in the same way you would were it to be a human-driven HGV.
LimaDelta said:
johnao said:
You raise one of the most interesting issues with regard to autonomous vehicles. An issue that neither the manufacturers nor politicians seem prepared to acknowledge let alone discuss.
Isaac Asimov's First Law of Robotics states:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
You do realise he was just a sci-fi writer, right? These are not actual laws, other than within the context of his books.
I was not attempting to start an earnest discussion on the general relevance of science fiction to the reality of everyday existence.
A study has already been done on pedestrians and autonomous vehicles, and it doesn't look good:
Because autonomous vehicles will be risk-averse, the model suggests that pedestrians will be able to behave with impunity, and autonomous vehicles may facilitate a shift towards pedestrian-oriented urban neighborhoods. At the same time, autonomous vehicle adoption may be hampered by their strategic disadvantage that slows them down in urban traffic.
https://people.ucsc.edu/~adammb/publications/Milla...
untakenname said:
A study has already been done about pedestrians and autonomous vehicles and it doesn't look good
Because autonomous vehicles will be risk-averse, the model suggests that pedestrians will be able to behave with impunity, and autonomous vehicles may facilitate a shift towards pedestrian-oriented urban neighborhoods. At the same time, autonomous vehicle adoption may be hampered by their strategic disadvantage that slows them down in urban traffic.
https://people.ucsc.edu/~adammb/publications/Milla...
Thanks for posting. This looks like an interesting read. I've printed a copy and will try to digest it over the next few days.
I'm intrigued by the title of chapter 3. "The crosswalk chicken game".
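The "crosswalk chicken" framing can be made concrete. If the vehicle's strategy is pinned to "always yield", the pedestrian's best response is always to cross, which is exactly the impunity the study predicts. A toy best-response calculation, with payoff numbers invented purely for illustration:

```python
# Toy "crosswalk chicken" game. Rows: pedestrian (cross / wait),
# columns: vehicle (drive / yield). Payoffs are illustrative only:
# (pedestrian payoff, vehicle payoff).
PAYOFFS = {
    ("cross", "drive"): (-100, -100),  # collision: disastrous for both
    ("cross", "yield"): (2, -1),       # pedestrian crosses, vehicle delayed
    ("wait",  "drive"): (0, 1),        # pedestrian waits, vehicle proceeds
    ("wait",  "yield"): (-1, -1),      # both hesitate
}

def best_response(vehicle_strategy):
    """Pedestrian's best move given a fixed vehicle strategy."""
    return max(["cross", "wait"],
               key=lambda p: PAYOFFS[(p, vehicle_strategy)][0])

print(best_response("yield"))  # cross: a risk-averse AV invites this
print(best_response("drive"))  # wait: a driver who might not stop deters it
```

The point of the toy model: today's pedestrian caution comes from the small chance a human driver won't stop; remove that uncertainty and crossing dominates.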
Autonomous vehicles are going to be considered adequate if they can cope with normal traffic conditions. If you deliberately set out to make the conditions abnormal by "taking advantage of them", then you're increasing the risk to yourself, the vehicle you're cutting up and any road users nearby. So, no, it's not OK to cut them up, and you should be prosecuted for doing so deliberately just as you should be if you deliberately cut up a vehicle driven by a human.
The Asimov style of thinking about AI, morality and legal accountability is based on AI closely mimicking human thought. That isn't how current AI works at all, so it's a profoundly unhelpful starting point for thinking about current AI. When people are actually confronted with machines that employ AI, they find the moral and legal questions aren't particularly profound. For example, flight control systems in aircraft (a) make decisions about how the aircraft should fly, (b) decide when certain alerts should be sent to the flight crew and (c) can also overrule the crew. When the interaction of the systems and the crew goes badly wrong and a load of passengers get killed, the problem is seen as an engineering failure, not a moral abdication of decision-making by people to machines.
Every time there is an advance in machine intelligence, people tend to do two things. (1) They narrow their definition of intelligence to exclude what machines can now do. (2) They have a moral panic about what technology may be just over the horizon.
ATG said:
Autonomous vehicles are going to be considered adequate if they can cope with traffic conditions that are normal. If you deliberately set out to make the conditions abnormal "by taking advantage of them", then you're increasing the risk to yourself, the vehicle you're cutting up and any road users nearby. So, no, it's not OK to cut them up and you should be prosecuted for doing so deliberately just as you should be if you deliberately cut up a vehicle driven by a human.
I agree that autonomous vehicles should be, and are being designed to be, adequate under normal traffic conditions. The issue being considered here is that, with autonomous vehicles programmed to avoid harming humans outside the vehicle if physically possible, and human nature being what it is, there will be frequent opportunities for humans - be they pedestrians, drivers or ten-year-old tearaways - to "take advantage" of the benign nature of AI. The outcome of which may be to limit the utility of AI vehicles.
I also agree that it's not going to be OK to cut them up when (say) entering a roundabout. But I doubt whether the majority of the British driving public will see it the same way; particularly if they happen to be late or in a hurry for whatever reason. As for prosecuting drivers or pedestrians who "take advantage" of an AI vehicle... I doubt that's going to happen, unless it's a particularly egregious example that results in an accident and personal injury.
You can already do this
To the MLMs with the safety packs & active cruise: just pull back in after having gone L1, L2, L3 to pass, & possibly pull back in a tad too early.
Cue lots of emergency braking & flashy lights; certainly on the Volvos & many Mercs this appears to work rather reliably, so I have heard
johnao said:
I agree that autonomous vehicles should be, and are being designed to be, adequate under normal traffic conditions. The issue that is being considered here is that with autonomous vehicles being programmed to avoid harming humans outside of the vehicle, if physically possible, and human nature being what it is, there will be frequent opportunities for humans, be they pedestrians, drivers or ten-year old tearaways, to "take advantage" of the benign nature of AI. The outcome of which may be to limit the utility of AI vehicles.
I also agree that it's not going to be OK to cut them up when (say) entering a roundabout. But I doubt whether the majority of the British driving public will see it the same way; particularly if they happen to be late or in a hurry for whatever reason. As for prosecuting drivers or pedestrians who "take advantage" of an AI vehicle... I doubt that's going to happen, unless it's a particularly egregious example that results in an accident and personal injury.
I think it is a little naive to think that the AI system is going to be particularly focused on dealing with people jumping out or other drivers cutting it up. There's no reason to think it will be, or should even try to be, better than a human at those things.
If you jump in front of one, don't expect it to be able to avoid you.
If you do something deliberately stupid, why expect a machine to be designed to strenuously try to protect you?
If I stick my foot under a lawnmower and lose some toes, very few would blame the lawnmower or its manufacturer.
Why would we require autonomous cars to protect us from our own irresponsibility? Isn't the greater good served better by not expecting them to do so?
Somewhatfoolish said:
If it's an AI HGV, then it's even fine to make it emergency brake.
What about if there is an AI vehicle behind the HGV and you cause that to have a jerky ride?
Edited by anonymous-user on Thursday 23 November 23:13
Or if you cause another road user to crash because you pulled in front of an AI HGV that had to emergency brake?
Both are definitely driving like a dick.
ETA: if you wouldn't cut in front of a human-driven HGV now, then what makes it OK if it's an AI HGV? Foolish
In the spirit of the OP, and from my (admittedly limited) experience driving my last car, an Audi S3 with AI in the form of Adaptive Cruise Control, my forecast is that rather than improving traffic flow and increasing road capacity, AI will leave huge gaps between vehicles and have exactly the opposite effect.
I don't think you'll get many people performing "traditionally dangerous" manoeuvres against AI vehicles. You might "know" the 30 ton HGV bearing down on you is programmed to stop if you pull out in front, but your brain is still going to be saying "that's a 30 ton HGV approaching at 50mph".
In low speed environments I can certainly see it being more commonplace. For example in stop-start traffic people will force their way out of side roads and just make the AI vehicle wait. Or will force their way into a lane at the last minute. Although as mentioned above, the risk of doing this will be the multiple HD cameras on the AI vehicle which could be used as evidence against you in the event of any accident.
Zetec-S said:
I don't think you'll get many people performing "traditionally dangerous" manoeuvres against AI vehicles. You might "know" the 30 ton HGV bearing down on you is programmed to stop if you pull out in front, but your brain is still going to be saying "that's a 30 ton HGV approaching at 50mph".
In low speed environments I can certainly see it being more commonplace. For example in stop-start traffic people will force their way out of side roads and just make the AI vehicle wait. Or will force their way into a lane at the last minute. Although as mentioned above, the risk of doing this will be the multiple HD cameras on the AI vehicle which could be used as evidence against you in the event of any accident.
Indeed. Google cars had a problem a while back with four-way stops. People edge forward. The AI then sees that they're over the stop line, so waits. As a result, it never went...
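That failure mode is easy to sketch. Assuming, purely for illustration, a policy that only proceeds when every other car is fully behind its stop line (hypothetical names and numbers, not Google's actual logic), constant human creeping starves the AV forever:

```python
# Hypothetical illustration of the four-way-stop deadlock. The AV
# proceeds only when every other car is at or behind its stop line;
# the human drivers edge a little further over the line every step.

def av_will_proceed(other_positions, stop_line=0.0):
    """Conservative policy: go only if all others are at or behind the line."""
    return all(pos <= stop_line for pos in other_positions)

def simulate(steps=100, creep=0.1):
    """Return True if the AV ever gets a turn within 'steps' time steps."""
    others = [0.0, 0.0, 0.0]  # three human cars start at their stop lines
    for _ in range(steps):
        others = [p + creep for p in others]  # humans edge over the line
        if av_will_proceed(others):
            return True  # the AV finally goes
    return False  # the AV never went

print(simulate())  # False: someone is always "over the line", so the AV waits
```

The fix Google reportedly adopted was the same one a human uses: inch forward to signal intent rather than waiting for a condition that never becomes true.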
I honestly would look forward to a large number of autonomous vehicles on the road, because (I think) I would be able to make better progress on the road. They would be more predictable, and certainly not aggressive. Whether they are as stupid as current drivers would be up to the programmers.
The flip side to that is you will never find me in an autonomous vehicle, due to the discussion above about 1st Law stuff. At some point, an autonomous vehicle will have to decide between hitting a bus full of school children or driving you off a cliff. I would prefer to be in charge of that decision.
I think you might find this interesting from MIT: http://moralmachine.mit.edu/
HappyMidget said:
I think you might find this interesting from MIT: http://moralmachine.mit.edu/
Choice for you...