
First Death by Autonomous Vehicle Accident

THE:MILKMAN

Member
Watch the police-released video. There is NO WAY a car or even a human would have stopped/avoided this accident, as it was the cyclist's fault for crossing the dark street. The driver should NOT have taken her eyes off the road, but it's fully the victim's fault for negligence.

I'm not so sure about that (will wait to see what the NTSB have to say of course)

Here is a human in a similar situation:

 

Dubloon7

Banned
I'm not so sure about that (will wait to see what the NTSB have to say of course)

Here is a human in a similar situation:


Bad example. The dead victim was walking slowly across the street with her bike, near the middle of the road. Your example shows a single pedestrian standing still along the side of the road, which would be slightly easier to maneuver around.
 

Dunki

Member
Bad example. The dead victim was walking slowly across the street with her bike, near the middle of the road. Your example shows a single pedestrian standing still along the side of the road, which would be slightly easier to maneuver around.
No, he was standing off to the side; the woman was right in the middle of the car's path. You could not have avoided this.
 

THE:MILKMAN

Member
Bad example. The dead victim was walking slowly across the street with her bike, near the middle of the road. Your example shows a single pedestrian standing still along the side of the road, which would be slightly easier to maneuver around.

I'd say my example actually has things that possibly make it harder for the driver: higher speed (a 70 MPH dual carriageway, I think), a drunk man walking toward the car rather than left to right, and a road where pedestrians are not expected.
 
After watching the video, I am not sure how one could have avoided the person without getting into an accident. The technology is not at fault, but the driver definitely should be charged for playing on the phone while driving.
 

bronk

Banned
I saw two pedestrians get hit by a car last year.
They jaywalked out into traffic and the car couldn't brake in time.

Autonomous cars need to be our future. We have to cut down on traffic fatalities from drunk driving, texting and driving, and reckless driving.

37,000 people died by car in 2017. What would that look like if all cars were self-driving? 10? 5?
Exactly the point I made to my friend. He wants them gone for good. I don't get it.
 

THE:MILKMAN

Member
After watching the video, I am not sure how one could have avoided the person without getting into an accident. The technology is not at fault, but the driver definitely should be charged for playing on the phone while driving.

I'm really surprised so many here think the accident was unavoidable, or that at best its severity could only have been lessened. I expect an autonomous car to be better than what is shown here, when lower-end tech fitted to current cars can perform better than in this tragic video.

AEB is fitted as standard on many cars here now (even cars under £10,000), and even though it is much more basic than what I assume is fitted to these Uber XC90s, tests suggest there surely should have been at least some level of response from the Uber car.

 

Fbh

Member
RIP to the victim.

I don't quite see how this is an argument against autonomous vehicles though, or why this accident has some people talking about banning them. The technology is still in development and no one is arguing that these things are flawless. All these cars have a human driver behind the wheel exactly for situations like this, but instead of doing his job the guy was on his phone.
 

Dubloon7

Banned
I'd say my example actually has things that possibly make it harder for the driver: higher speed (a 70 MPH dual carriageway, I think), a drunk man walking toward the car rather than left to right, and a road where pedestrians are not expected.

How much area is there to avoid when it's a single person, versus a person with a bike perpendicular to the direction of travel?
 

THE:MILKMAN

Member
How much area is there to avoid when it's a single person, versus a person with a bike perpendicular to the direction of travel?

Not sure what you're asking or why it's relevant? My opinion is that the XC90, with the lidar/radar systems it had, should have easily avoided the woman. If you look at other Euro NCAP videos like the one I posted above, you'll find ones where an impact is unavoidable, but even there the "basic" radar AEB system reduces the car's speed from ~28 MPH to ~17 MPH in very short order. They appear to start braking even before the pedestrian is in the firing line.



I hope there weren't overrides in the XC90 that meant it could operate in auto mode with system(s) switched off, for example. I just can't understand why the video we have seen shows basically nil response from the car/systems.
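The speed-reduction figure quoted above can be put in perspective with a quick back-of-envelope calculation. The numbers are just the ones cited in this post, and the physics is the standard kinetic-energy relation, not anything from the investigation:

```python
# Kinetic energy scales with the square of speed, so even partial
# braking sharply reduces impact severity.
def impact_energy_fraction(v_impact_mph: float, v_initial_mph: float) -> float:
    """Fraction of the original kinetic energy remaining at impact."""
    return (v_impact_mph / v_initial_mph) ** 2

# Euro NCAP-style AEB figures quoted above: ~28 MPH cut to ~17 MPH.
remaining = impact_energy_fraction(17, 28)
print(f"{remaining:.0%} of the original impact energy remains")  # 37%
```

Even a last-moment intervention that shaves ~11 MPH leaves only about a third of the impact energy, which is why an apparent total non-response from the Uber car is so surprising.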
 

Dubloon7

Banned
Not sure what you're asking or why it's relevant? My opinion is that the XC90, with the lidar/radar systems it had, should have easily avoided the woman. If you look at other Euro NCAP videos like the one I posted above, you'll find ones where an impact is unavoidable, but even there the "basic" radar AEB system reduces the car's speed from ~28 MPH to ~17 MPH in very short order. They appear to start braking even before the pedestrian is in the firing line.



I hope there weren't overrides in the XC90 that meant it could operate in auto mode with system(s) switched off, for example. I just can't understand why the video we have seen shows basically nil response from the car/systems.

In the following example, which target do you think would be more easily hit:

1. a Fiat T-boning a school bus, or
2. a school bus T-boning a Fiat?
 

THE:MILKMAN

Member
In the following example, which target do you think would be more easily hit:

1. a Fiat T-boning a school bus, or
2. a school bus T-boning a Fiat?

Sorry, Dubloon, but you still have me lost! If your point above is that a person always loses against a 2+ tonne SUV, I agree (though I'm unsure why you are asking this of me), but I will just ask a simple question:

Should the XC90 have avoided this accident if its systems were operating correctly?

Edit: The video was obscuring your question above.....

So I understand you are talking about a narrow person face-on versus a bicycle side-on, and of course you are correct, but you are not taking into account that the woman was moving quite briskly to the right, whereas in my example there was no lateral movement. I believe the car in my example was travelling at ~50 MPH versus 38 MPH in this case. I think very slight braking and/or a swerve to the left, combined with the lady moving right, would have resulted in no contact.
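A rough sketch of the geometry behind that argument (every number here is an illustrative assumption, not a measurement from the case):

```python
# How far a walking pedestrian drifts laterally in the time it takes
# a car to cover a given sighting distance. Illustrative only.
MPH_TO_MS = 0.44704  # miles per hour to metres per second

def lateral_drift_m(car_mph: float, sight_distance_m: float,
                    walk_speed_ms: float = 1.4) -> float:
    """Lateral distance the pedestrian covers before the car arrives."""
    time_to_arrive_s = sight_distance_m / (car_mph * MPH_TO_MS)
    return walk_speed_ms * time_to_arrive_s

# Assumed: 38 MPH, pedestrian first visible ~50 m out, walking ~1.4 m/s.
print(round(lateral_drift_m(38.0, 50.0), 1), "m of lateral movement")
```

Roughly four metres of drift, more than a car's width, is what makes a small swerve in the opposite direction plausible — if the pedestrian really was visible that far out.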
 
Last edited:

Dubloon7

Banned
Sorry, Dubloon, but you still have me lost! If your point above is that a person always loses against a 2+ tonne SUV, I agree (though I'm unsure why you are asking this of me), but I will just ask a simple question:

Should the XC90 have avoided this accident if its systems were operating correctly?

Edit: The video was obscuring your question above.....

So I understand you are talking about a narrow person face-on versus a bicycle side-on, and of course you are correct, but you are not taking into account that the woman was moving quite briskly to the right, whereas in my example there was no lateral movement. I believe the car in my example was travelling at ~50 MPH versus 38 MPH in this case. I think very slight braking and/or a swerve to the left, combined with the lady moving right, would have resulted in no contact.
Have you ever been in a car accident? Do you know the statistics on how the average person reacts to something near-immediate? They jerk the car left or right. In no way, shape, or form was this avoidable, short of the victim RUNNING across the street instead of casually pushing the bike across. I understand what you are saying, but I do not see any guilt on the part of the car or the negligent driver; the victim is to blame here for winning that Darwin award.
 

llien

Member
Watch the police-released video. There is NO WAY a car or even a human would have stopped/avoided this accident, as it was the cyclist's fault for crossing the dark street.
We can see a human avoiding an accident like that in the video in post #51.
 

Dunki

Member
We can see a human avoiding an accident like that in the video in post #51.
But here this person was off to the side. In the new video she was completely in the middle of the car's path. I really would like to see someone react to this one and make it.
 
Last edited:

llien

Member
But here this person was off to the side. In the new video she was completely in the middle of the car's path.
She moved slowly across a two-lane road. We only see camera footage; human eyes are very likely able to spot a moving target from a greater distance than is visible in the video, and to try to avoid a collision by turning left or right.
 

THE:MILKMAN

Member
Facepalm for the guy filming this while driving to show where a fatal collision happened, but it does clearly show that the released video does not represent what could be seen, IMO (leaving aside the autonomous stuff).

 

Dubloon7

Banned
She moved slowly across a two-lane road. We only see camera footage; human eyes are very likely able to spot a moving target from a greater distance than is visible in the video, and to try to avoid a collision by turning left or right.
Did you bother to read my follow-up comments with another commenter, where we discuss the probability of hitting a single person at the edge of the lane versus a larger target moving slowly perpendicular to the traffic direction?
 

TheMikado

Banned
Ok First things first.

"Based on preliminary information, the car was going approximately 40 mph in a 35 mph zone, according to Tempe Police Detective Lily Duran.
Police say the investigation does not at this time show significant signs of the SUV slowing before the crash. "
"The video shows Vasquez with a sudden look of shock on her face before hitting Herzberg, and she tenses up, apparently trying to seize control of the vehicle. Police have said there were no “significant signs of the vehicle slowing down.” Uber would not say if or when the sensor-packed SUV’s multiple laser, camera and computer systems detected Herzberg, or if the car’s brakes were applied."

http://money.cnn.com/2018/03/19/technology/uber-autonomous-car-fatal-crash/index.html
https://www.washingtonpost.com/news...to-protect-pedestrian/?utm_term=.be382df3500d

That said, it appears the car's sensors did not see this person, which indicates a failure in the software and/or hardware systems.
The fault lies with Uber and the safety driver.

Especially in light of this:
https://www.digitaltrends.com/business/google-sues-uber-over-self-driving-car-secrets/

Waymo — a part of Alphabet, which owns Google — filed suit against Uber last year, alleging that the ridesharing service stole some of its proprietary autonomous-car tech. Waymo began as Alphabet/Google’s self-driving car project years ago, and was spun off into its own company.

In Waymo v. Uber, the plaintiff claims a former employee named Anthony Levandowski stole proprietary files — 14,000 of them, to be exact — and used them to start a new company. The company in question is Otto, the autonomous-driving tech startup acquired by Uber in August 2016 for $680 million. Otto demonstrated a self-driving semi truck late last year.

The lawsuit alleges unfair competition, patent infringement, and trade secret misappropriation. It also claims the allegedly stolen technology earned Otto employees more than $500 million. Waymo asked a federal judge to put an end to its rival’s self-driving car program. Part of the request was granted, though how that will affect Uber is unclear because the motion remains sealed.

Uber has repeatedly denied Waymo’s charges, dubbing them nothing more than “a baseless attempt to slow down a competitor.”

The lawsuit was brought before United States District Judge William Alsup, who referred it to the U.S. Attorney’s Office to determine if the government should get involved. He emphasized the case needs to stay in court, and turned down Uber’s request to hire a private arbitrator in order to keep the dirty details of the legal battle out of the public eye.

“The court takes no position on whether a prosecution is or is not warranted, a decision entirely up to the U.S. Attorney,” Alsup wrote.

According to the lawsuit, Waymo became aware of the issue when it was inadvertently copied in an email from a supplier that showed an Uber LIDAR circuit board, which bore a “striking resemblance” to one of Waymo’s designs. The complaint accuses Levandowski of downloading the 14,000 files in question in December 2015. That allegedly included the circuit board, part of a sensor that helps autonomous cars “see” their environment.

Levandowski — who invoked the Fifth Amendment to avoid self-incrimination in connection with the case — left Waymo in January 2016 and formed Otto that May. The lawsuit alleges that, prior to his departure, he created a domain name for his new company, and told other Waymo employees that he planned to “replicate” the company’s technology for a competitor. Creating Otto was a clever way to hide his agreement with Uber from Google executives, according to Waymo’s lawyers; Uber planned on buying the startup before it was even founded, they added.


Levandowski gets his walking papers, Uber continues testing
Levandowski’s dependence on the Fifth Amendment ended up costing him his job, according to The New York Times. Uber confirmed on May 30 it has fired its top self-driving car engineer. The company asked him to cooperate with the ongoing investigation, but he failed to hand over the required documents in time.

This marks the first time that Uber has split publicly with Levandowski; the company has previously made no indication that it would ask the engineer to cooperate with court proceedings. Still, Uber maintains that it’s innocent. “We continue to believe that no Waymo trade secrets have ever been used in the development of our self-driving technology, and we remain confident that we will prove that fact in due course,” the company wrote.

Uber continues to test self-driving cars for use in its ridesharing service in Pittsburgh, Pennsylvania, and Tempe, Arizona. Cars were moved to the Arizona city after an aborted launch in San Francisco. That operation was shut down when the California Department of Motor Vehicles (DMV) revoked the registrations of Uber’s test vehicles, after the company refused to apply for the correct autonomous-car test permits.


The fact is, given the accusations of technology theft (specifically of the lidar system), the firing of its top autonomous-driving engineer, and the revoking of its registrations, there are reasonable grounds to think Uber may have known its technology was underdeveloped or improperly developed, and that continuing to test despite the questions surrounding that technology amounted to negligence. Basically, if this death can be traced back to improper or non-working systems, it will call into question their development, which may well have relied on intellectual property theft.
 
Ok First things first. [...]

Very interesting. Thanks for this.
 

Alx

Member
No, people being illogical in the face of new tech is inevitable.

Guys you know how many people are killed by normal cars, right?

It doesn't mean you should brush away the question of responsibility. Car manufacturers have been working on autonomous cars for decades, and that question was always the most important issue for them, since they didn't want to take responsibility for a decision by the machine that could take lives. Even to this day the laws and rulings are yet to be written on that subject.
 

THE:MILKMAN

Member
My guess is the main issue is the Uber software. If there was a technical issue with the lidar or other systems, then isn't there a fail-safe, like making it impossible to engage self-driving? Or even just a warning light to show an issue (like an EML for a faulty engine).

If it turns out all systems were working fine and it just failed to spot the lady it might prove to be a huge problem.

I have a bigger problem with the Police chief immediately seeming to remove any fault from the car/driver before the investigation is done. Not very wise and doesn't instill confidence.
 

Acerac

Banned
It doesn't mean you should brush away the question of responsibility. Car manufacturers have been working on autonomous cars for decades, and that question was always the most important issue for them, since they didn't want to take responsibility for a decision by the machine that could take lives. Even to this day the laws and rulings are yet to be written on that subject.
You're right, but that didn't seem to be said poster's intent.
 
Nah this could be a clear case of the manufacturer being at fault.

Could be some fault in the AV, but there is certainly fault with the deceased for crossing the road where she shouldn't have. My first guess is Uber walks away mostly unscathed. Who knows though, I'll be following closely.
 

Dr.Guru of Peru

played the long game
I get the idea that pedestrians have the right of way, but I don’t think that absolves them of responsibility for their own safety. Her jaywalking across a major motorway in the dark without wearing any reflective clothing has to be a mitigating factor here for Uber.
 
Last edited:

TheMikado

Banned
The pedestrian issue is separate from the detection systems.

The systems are supposed to be designed to react to pedestrians the way humans do, i.e., braking for children running into the street, etc.

Further, the issue of lighting and reflective gear should have no bearing on the results, as LiDAR is supposed to work just as well in the dark as in the light.

Basically, if the system didn't see her in this scenario, there is a reasonable assumption, based on industry standards, that the system may not have seen her in the day or at any other time. The failure of the lidar system to detect the object shows a failing of the system. If that's what happened.

That is the key issue.
Did the LiDAR system do what it was supposed to do based on industry standards and thus far everything points to no.

https://newatlas.com/ford-autonomous-cars-dark/42742/
 

llien

Member
Uber's robotic vehicle project was not living up to expectations months before a self-driving car operated by the company struck and killed a woman in Tempe, Ariz. The cars were having trouble driving through construction zones and next to tall vehicles, like big rigs. And Uber's human drivers had to intervene far more frequently than the drivers of competing autonomous car projects. Waymo, formerly the self-driving car project of Google, said that in tests on roads in California last year, its cars went an average of nearly 5,600 miles before the driver had to take control from the computer to steer out of trouble. As of March, Uber was struggling to meet its target of 13 miles per "intervention" in Arizona, according to 100 pages of company documents obtained by The New York Times and two people familiar with the company's operations in the Phoenix area but not permitted to speak publicly about it. Yet Uber's test drivers were being asked to do more -- going on solo runs when they had worked in pairs. And there was also pressure to live up to a goal of offering a driverless car service by the end of the year, and to impress top executives.

Slashdot
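The gap between those two intervention rates is easy to understate on a quick read; using only the figures quoted in the excerpt above:

```python
# Miles-per-intervention figures quoted in the excerpt above.
waymo_miles_per_intervention = 5600      # Waymo's California average
uber_target_miles_per_intervention = 13  # Uber's Arizona *target*

ratio = waymo_miles_per_intervention / uber_target_miles_per_intervention
print(f"Waymo's cars went roughly {ratio:.0f}x farther between interventions")
```

And 13 miles was only Uber's target, which the documents say it was struggling to meet, so the real gap was even larger.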
 

joshcryer

it's ok, you're all right now
Facepalm for the guy filming this while driving to show where a fatal collision happened, but it does clearly show that the released video does not represent what could be seen, IMO (leaving aside the autonomous stuff).



Thanks for posting this, and kudos to the guy who made it. I figured this would be how it looked in real life (the Pixel XL is actually decent in low-light conditions). The human eye would still be a lot better, of course. I maintain the Uber safety driver was totally at fault here, even though, yes, the machine should have seen the woman.

This is what the car should've been seeing (skip to 9 minutes):
 

goldenpp72

Member
I don't like that people flip out when we have a vehicle death caused by automation. We should be looking at how frequent the deaths are versus what we currently have with humans in charge. Pick the lesser evil, I say.
 

BANGS

Banned
I don't like that people flip out when we have a vehicle death caused by automation. We should be looking at how frequent the deaths are versus what we currently have with humans in charge. Pick the lesser evil, I say.
That's a really simplified way of looking at it. Unlike these automated cars, each individual human driver is different and has different skill levels of driving. I'd be much more willing to advocate for getting bad drivers off the road than I am for allowing a flawed machine to decide my fate, but I am also a great driver so...
 

goldenpp72

Member
That's a really simplified way of looking at it. Unlike these automated cars, each individual human driver is different and has different skill levels of driving. I'd be much more willing to advocate for getting bad drivers off the road than I am for allowing a flawed machine to decide my fate, but I am also a great driver so...

Hardly. Humans are largely horrific at driving, yet removing their ability to drive would largely render them unable to be productive members of society. Right now we are comparing what will be the worst example of this tech against a century of human experience. I say if machines in their infancy can best humans right from the start, we should continue investing in removing the human element from driving unless the driver WANTS to be engaged. Some people enjoy driving and in turn will pay attention; meanwhile, the majority are reaching for the GPS, the radio, texting, taking selfies, driving home tired or intoxicated on some level, or simply daydreaming. Computers can be flawed until they aren't; human flaws will exist forever.

People who live right in a city may be able to avoid driving, the rest of us need a vehicle to get us to where we need to go. I consider myself a good driver, unfortunately so do most bad drivers. Over time roads will be built with self driving cars in mind, laws and regulations will be rewritten or put in place, and the tech will continue to be perfected. Don't let growth be stunted because it's not perfect right now, it will get way better over time if we allow it.
 
Last edited:

BANGS

Banned
I say if machines in their infancy can best humans right from the start
Again, you can't compare to "humans". You're comparing apples and oranges. I'm sure plenty of people would indeed run over a 49 year old lady walking her bike across the street, but most wouldn't... Whereas the software and sensors that were responsible for the accident are standard. I'm not for halting progress, but I'm certainly against creating murder machines...
 

goldenpp72

Member
Again, you can't compare to "humans". You're comparing apples and oranges. I'm sure plenty of people would indeed run over a 49 year old lady walking her bike across the street, but most wouldn't... Whereas the software and sensors that were responsible for the accident are standard. I'm not for halting progress, but I'm certainly against creating murder machines...

Until there is evidence that automated vehicles cause as many or more vehicle incidents as humans do, I say it's a consequence we should accept. My car was destroyed and my fiancée almost killed in a car accident two years ago because the driver didn't want to stop for the red light and gunned it from far away, going about 60 in a 25 MPH zone off a roundabout. That yellow light was just too much for the individual to accept, and he made the decision to be an idiot. A machine wouldn't do that. Computers don't get bored or reckless, and they don't make bad judgments on purpose, so once refined enough they will greatly mitigate overall incidents. It's fair to compare humans to computers because that is what the computers are trying to replace.

All of us who decide to drive do so understanding the risk involved; anything that can eventually remove human idiocy from the equation is something I'm all for. If they could perfect the entire process without putting anything on the road, go for it; I just doubt that's really plausible. We either let the process be worked out, or do what exactly?

The logic on display here is like people who claim this drug killed their mom because it caused some new disease 20 years after it saved their lives. Sometimes we should accept that progress won't always be perfect, but as long as it's doing more good than harm I say keep on going. I bet if automated cars took over entirely within the next few years we would reach record lows in vehicle incidents. That's enough to overlook some unfortunate instances.

In this world you have those people who want to put the cat down for scratching the baby who tries to break its tail, and then you have people like myself that understand the baby is being a little shit and the cat has defense instincts that kicked in. Either way it's unfortunate, but knee jerk reactions are rarely the right thing to work from. Obviously some compensation or action should happen just as it does when you or I run someone over, I'm sure insurance is still involved.
 
Last edited:

BANGS

Banned
Until there is evidence that automated vehicles cause as many or more vehicle incidents as humans do, I say it's a consequence we should accept.
Well this is where we disagree, and that's cool. Personally, I won't ever accept a robot running over my grandmother. If a human does it I can accept it as the tragedy it is, but if a robot does it I wouldn't feel any closure...
 

goldenpp72

Member
Well this is where we disagree, and that's cool. Personally, I won't ever accept a robot running over my grandmother. If a human does it I can accept it as the tragedy it is, but if a robot does it I wouldn't feel any closure...

I hope you can at least see how irrational that is. The end result is still that someone died; at least with this incident they might be able to correct it and keep people safer, whereas idiot drivers are forever.
 

BANGS

Banned
I hope you can at least see how irrational that is. The end result is still that someone died; at least with this incident they might be able to correct it and keep people safer, whereas idiot drivers are forever.
That's like saying nuking Japan was a good idea because we potentially saved more lives than we destroyed... Again, back to my original point: that is WAY too simplified to be taken seriously IMO. There are too many factors to just call it a day on this one...
 

goldenpp72

Member
That's like saying nuking Japan was a good idea because we potentially saved more lives than we destroyed... Again, back to my original point: that is WAY too simplified to be taken seriously IMO. There are too many factors to just call it a day on this one...

I do believe that, going by the situation at the time, nuking Japan was for the greater good. It's easy to say you shouldn't have done something without living through the horrors of the time, especially since even after one nuke they still wanted to keep going.
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
Well this is where we disagree, and that's cool. Personally, I won't ever accept a robot running over my grandmother. If a human does it I can accept it as the tragedy it is, but if a robot does it I wouldn't feel any closure...

That's an AMAZING revelation that I've literally never thought of. At some point in the near future we humans will have to learn to live "with" robots in our society. But I never thought of the human side of coping with a loss due to robots making mistakes vs. a human making that exact same mistake.

Hmmmm......a society makeover will be a must.
 

TheMikado

Banned
Again, you can't compare to "humans". You're comparing apples and oranges. I'm sure plenty of people would indeed run over a 49 year old lady walking her bike across the street, but most wouldn't... Whereas the software and sensors that were responsible for the accident are standard. I'm not for halting progress, but I'm certainly against creating murder machines...

That's not really true at the present point in time.
To put it another way.

ABS and other safety features were once in their infancy and non-standard as well. Different companies could and did implement different versions of said feature. The problem in this case lies with the specific manufacturer, not the overall idea.
That's an AMAZING revelation that I've literally never thought of. At some point in the near future we humans will have to learn to live "with" robots in our society. But I never thought of the human side of coping with a loss due to robots making mistakes vs. a human making that exact same mistake.

Hmmmm......a society makeover will be a must.

How’s it any different than now?
An elevator accident, brakes giving way: the problem is that people are treating driving as something fundamentally human.

I presume these would be the same people who had problems with human flight and blamed the technology as unnatural.

“Born with wings” crowd.

Can you imagine someone saying every plane crash today is evidence that humans shouldn’t fly? I can’t even imagine what people thought of the Hindenburg..
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
That's not really true at the present point in time.
To put it another way.

ABS and other safety features were once in their infancy and non-standard as well. Different companies could and did implement different versions of said feature. The problem in this case lies with the specific manufacturer, not the overall idea.


How’s it any different than now?
An elevator accident, brakes giving way: the problem is that people are treating driving as something fundamentally human.

I presume these would be the same people who had problems with human flight and blamed the technology as unnatural.

“Born with wings” crowd.

Can you imagine someone saying every plane crash today is evidence that humans shouldn’t fly? I can’t even imagine what people thought of the Hindenburg..

An elevator is an order of magnitude different from a car with self-driving A.I. though.
 

TheMikado

Banned
An elevator is an order of magnitude different from a car with self-driving A.I. though.

It’s still a task that was thought too dangerous for anything but a human operator and certainly not automated at that point in time. That’s the point. The only reason it seems technologically simple now is that our technology has advanced far beyond manual elevators. In 50 years driving will be a simple task for our level of technology and it’s likely your cellphone 20 years from now will have enough processing power to operate 100 cars simultaneously.
 

TheMikado

Banned
Watch the police release video. There is NO WAY a car or even human would have stopped/avoided this accident as it was the cyclist's fault for crossing the black street. The driver should NOT have taken her eyes off of the road but it's fully the victim's fault for negligence


Mobileye chastises Uber by detecting struck pedestrian in footage well before impact
https://techcrunch.com/2018/03/26/m...uck-pedestrian-in-footage-well-before-impact/


A self-driving vehicle fatally striking a pedestrian is a tasteless venue for self-promotion, but it’s also an important time to discuss the problems that created the situation. Mobileye CEO and CTO Amnon Shashua seems to do a little of both in this post at parent company Intel’s blog, running the company’s computer vision software on Uber’s in-car footage and detecting the person a full second before impact.

It first must be said that this shouldn’t be taken to demonstrate the superiority of Mobileye’s systems or anything like that. This type of grainy footage isn’t what self-driving — or even “advanced driver assistance” — systems are meant to operate on. It’s largely an academic demonstration.

But the application of a competent computer vision system to the footage and its immediate success at detecting both Elaine Herzberg and her bike show how completely the Uber system must have failed.


Even if this Mobileye object detection algorithm had been the only thing running in that situation, it detected Herzberg a second before impact (on highly degraded data at that). If the brakes had been immediately applied, the car may have slowed enough that the impact might not have been fatal; even a 5 MPH difference might matter. Remember, the Uber car reportedly didn’t even touch the brakes until afterwards. It’s exactly these types of situations in which we are supposed to be able to rely on the superior sensing and reaction time of an AI.

We’re still waiting to hear what exactly happened that the Uber car, equipped with radar, lidar, multiple optical cameras and a safety driver, any one of which should have detected Herzberg, completely failed to do so. Or if it did, failed to take action.

This little exhibition by Mobileye, while it should be taken with a grain of salt, at least gives a hint at what should have been happening inside that car’s brain.
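The article's point about even a 5 MPH difference mattering is easy to sanity-check. Here is a back-of-the-envelope sketch of how much speed one second of emergency braking sheds; the initial speed (~40 mph) and deceleration (~7 m/s²) are my own illustrative assumptions, not the actual crash data:

```python
# Back-of-the-envelope check: how much does one second of hard
# braking before impact reduce the collision speed?
# Assumed values, not the actual crash data: ~40 mph initial speed,
# ~7 m/s^2 emergency deceleration.
MPH_PER_MS = 2.23694  # conversion factor: 1 m/s expressed in mph

def speed_after_braking(v0_mph, decel_ms2, t_s):
    """Speed in mph after braking at decel_ms2 (m/s^2) for t_s seconds."""
    v0 = v0_mph / MPH_PER_MS            # convert to m/s
    v = max(0.0, v0 - decel_ms2 * t_s)  # can't go below zero
    return v * MPH_PER_MS               # convert back to mph

# One full second of braking before impact:
print(round(speed_after_braking(40.0, 7.0, 1.0), 1))  # -> 24.3
```

Under these assumptions a single second of braking takes roughly 15 mph off the impact speed, which is far more than the 5 MPH margin the article says could matter.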



Again, a PROPERLY DESIGNED AI system relies on far more than just visual confirmation.

If we were to put this into perspective, it has the combined sensing abilities of a bat, a dolphin, a jet plane, and a hawk.
In case people aren't sure how this works:

Radar system: sends electronic radio-wave pulses into the immediate area to detect objects and movement.
Lidar system: sends laser pulses out to detect objects, movement, and distance.
Optical system: 3D optical scoping for 3D environment mapping.

The first two work in the dark or in light; in fact, lidar should actually be MORE effective in the dark.

For reference, this is how the cars see the world.
The car uses a system of detection redundancy to ensure that even when it cannot get optical confirmation, it can still get radar and laser confirmation on objects.
Darkness, or even distance, should have little effect on its ability to see objects.
That's the issue. This Uber release represents a failure across the board on every detection system they have.
Or, if the systems did see the person, they failed to translate the detection into an action.


[Image: CES computer vision example]

[Image: how autonomous cars see the world]
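The redundancy idea described above boils down to a conservative fusion policy: act if ANY independent channel reports an in-path obstacle, rather than waiting for all of them to agree. Here is a minimal sketch of that policy; this is my own illustration, not Uber's or Mobileye's actual code, and the sensor names, distances, and thresholds are made up:

```python
# Illustrative sketch of detection redundancy: each sensor reports
# independently, and the vehicle brakes if ANY channel sees an
# in-path obstacle within stopping distance. (My own example, not
# any vendor's real pipeline.)
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # e.g. "camera", "radar", "lidar"
    distance_m: float  # range to the detected object
    in_path: bool      # is the object in the vehicle's path?

def should_brake(detections, stopping_distance_m):
    """Brake if any single sensor reports an in-path object closer
    than the stopping distance -- no cross-sensor agreement required."""
    return any(d.in_path and d.distance_m <= stopping_distance_m
               for d in detections)

dets = [
    Detection("camera", 40.0, False),  # camera misses in the dark...
    Detection("radar", 38.0, True),    # ...but radar still sees her
    Detection("lidar", 37.5, True),    # ...and so does lidar
]
print(should_brake(dets, stopping_distance_m=45.0))  # -> True
```

The point of the `any()` rule is exactly what the post argues: darkness defeating the camera should not matter, because the radar and lidar channels alone are enough to trigger a response.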
 

TheMikado

Banned
I should also provide the link to the blog from Intel

https://newsroom.intel.com/editorials/experience-counts-particularly-safety-critical-areas/

Experience Counts, Particularly in Safety-Critical Areas
Now Is the Time for Substantive Conversations about Safety for Autonomous Vehicles

By Prof. Amnon Shashua

Society expects autonomous vehicles to be held to a higher standard than human drivers. Following the tragic death of Elaine Herzberg after being hit last week by a self-driving Uber car operating in autonomous mode in Arizona, it feels like the right moment to make a few observations around the meaning of safety with respect to sensing and decision-making.

First, the challenge of interpreting sensor information. The video released by the police seems to demonstrate that even the most basic building block of an autonomous vehicle system, the ability to detect and classify objects, is a challenging task. Yet this capability is at the core of today’s advanced driver assistance systems (ADAS), which include features such as automatic emergency braking (AEB) and lane keeping support. It is the high-accuracy sensing systems inside ADAS that are saving lives today, proven over billions of miles driven. It is this same technology that is required, before tackling even tougher challenges, as a foundational element of fully autonomous vehicles of the future.

To demonstrate the power and sophistication of today’s ADAS technology, we ran our software on a video feed coming from a TV monitor running the police video of the incident. Despite the suboptimal conditions, where much of the high dynamic range data that would be present in the actual scene was likely lost, clear detection was achieved approximately one second before impact. The images below show three snapshots with bounding box detections on the bicycle and Ms. Herzberg. The detections come from two separate sources: pattern recognition, which generates the bounding boxes, and a “free-space” detection module, which generates the horizontal graph where the red color section indicates a “road user” is present above the line. A third module separates objects from the roadway using structure from motion – in technical terms: “plane + parallax.” This validates the 3D presence of the detected object that had a low confidence as depicted by “fcvValid: Low,” which is displayed in the upper left side of the screen. This low confidence occurred because of the missing information normally available in a production vehicle and the low-quality imaging setup from taking a video of a video from a dash-cam that was subjected to some unknown downsampling.

Images from a video feed watching a TV monitor showing the clip released by the police. The overlaid graphics show the Mobileye ADAS system response. The green and white bounding boxes are outputs from the bicycle and pedestrian detection modules. The horizontal graph shows the boundary between the roadway and physical obstacles, which we call “free-space”.
The software being used for this experiment is the same as included in today’s ADAS-equipped vehicles, which have been proven over billions of miles in the hands of consumers.

Recent developments in artificial intelligence, like deep neural networks, have led many to believe that it is now easy to develop a highly accurate object detection system and that the decade-plus experience of incumbent computer vision experts should be discounted. This dynamic has led to many new entrants in the field. While these techniques are helpful, the legacy of identifying and closing hundreds of corner cases, annotating data sets of tens of millions of miles, and going through challenging preproduction validation tests on dozens of production ADAS programs, cannot be skipped. Experience counts, particularly in safety-critical areas.

The second observation is about transparency. Everyone says that “safety is our most important consideration,” but we believe that to gain public trust, we must be more transparent about the meaning of this statement. As I stated in October, when Mobileye released the formal model of Responsibility-Sensitive Safety (RSS), decision-making must comply with the common sense of human judgement. We laid out a mathematical formalism of common sense notions such as “dangerous situation” and “proper response” and built a system to mathematically guarantee compliance to these definitions.

The third observation is about redundancy. True redundancy of the perception system must rely on independent sources of information: camera, radar and LIDAR. Fusing them together is good for comfort of driving but is bad for safety. At Mobileye, to really show that we obtain true redundancy, we build a separate end-to-end camera-only system and a separate LIDAR and radar-only system.

More incidents like the one last week could do further harm to already fragile consumer trust and spur reactive regulation that could stifle this important work. As I stated during the introduction of RSS, I firmly believe the time to have a meaningful discussion on a safety validation framework for fully autonomous vehicles is now. We invite automakers, technology companies in the field, regulators and other interested parties to convene so we can solve these important issues together.

Professor Amnon Shashua is senior vice president at Intel Corporation and the chief executive officer and chief technology officer of Mobileye, an Intel company.
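For context, the “proper response” formalism Shashua mentions reduces to rules such as a minimum safe longitudinal following distance. Below is a sketch of that formula as published in the RSS paper by Shalev-Shwartz, Shammah, and Shashua; the default parameter values are illustrative, not Mobileye's production numbers:

```python
# Sketch of RSS's minimum safe longitudinal distance, per the public
# RSS paper (Shalev-Shwartz, Shammah & Shashua). Defaults are
# illustrative values, not Mobileye's.
def rss_safe_distance(v_rear, v_front, rho=1.0,
                      a_accel=3.0, b_min=4.0, b_max=8.0):
    """Minimum gap (m) so the rear car can always avoid a collision.

    v_rear, v_front: speeds in m/s
    rho:     rear car's response time (s)
    a_accel: rear car's max acceleration during the response time
    b_min:   rear car's guaranteed (minimum) braking deceleration
    b_max:   front car's maximum braking deceleration
    """
    v_resp = v_rear + rho * a_accel      # rear speed after response time
    d = (v_rear * rho                    # distance covered while reacting
         + 0.5 * a_accel * rho ** 2      # ...possibly while accelerating
         + v_resp ** 2 / (2 * b_min)     # rear car's stopping distance
         - v_front ** 2 / (2 * b_max))   # front car's stopping distance
    return max(d, 0.0)

# Rear car at ~38 mph (17 m/s) approaching a stationary obstacle:
print(rss_safe_distance(17.0, 0.0))  # -> 68.5
```

The design choice worth noticing is the pessimism: the rear car is assumed to accelerate through its whole response time and then brake only at its weakest guaranteed rate, while the obstacle ahead brakes as hard as physically possible. A gap that survives that worst case is what RSS calls safe.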
 