In extreme cases, how should fault be divided between technology and people?
Uber's case gives a clear answer, but not necessarily a correct one.
On September 16, the widely publicized "self-driving fatality" case reached a preliminary decision. Prosecutors in Maricopa County, Arizona, charged Rafaela Vasquez, the safety driver who had been behind the wheel, with negligent homicide, while her employer Uber will face no criminal charges in the case.
The scene of the world's first fatal self-driving accident | Internet
This was the first traffic accident in the world in which a self-driving car struck and killed a pedestrian, and it sent shockwaves through the industry. But the questions raised by the accident itself deserve more discussion than its impact on the industry: when humans and machines both make mistakes, how should the blame be shared?
When the "safety driver" becomes the scapegoat
Although a final verdict has not yet been reached (a preliminary hearing is scheduled for October 27), the safety driver faces up to two and a half years in prison. Vasquez has pleaded not guilty.
Two and a half years have passed since the incident, so a brief review is in order:
In March 2018, while Vasquez was serving as the safety driver in an Uber self-driving test vehicle in Arizona, the car struck and killed a woman who was pushing a bicycle across the road.
According to the investigation by the National Transportation Safety Board (NTSB), the vehicle was traveling at 62 km/h, while the road speed limit was 50 km/h.
Throughout the accident, the safety driver failed to fulfill her responsibilities. In current autonomous driving tests, most vehicles carry one or two safety drivers, who must be ready to take over at any moment if something goes wrong. In this case, however, Vasquez was not watching the road; she repeatedly glanced down at a phone near the lower right of the center console. In video recorded inside the car, she looked up only half a second before the impact, far too late to react.
From the safety driver's viewpoint inside the car: she noticed the pedestrian ahead only half a second before impact | YouTube
Police concluded that if the safety driver had not been distracted by her phone, the accident was entirely avoidable; on that basis, the charging decision placed all of the responsibility on her.
"A person killed a person, not a machine." If that is the final judgment, it will inevitably be a hasty one.
The NTSB's investigation report makes clear that Uber's autonomous driving technology was flawed. The automated driving system (ADS) detected the pedestrian 5.6 seconds before impact, but never correctly classified her as a person. Over those few seconds, the system classified her first as an unknown object, then as a vehicle, then as a bicycle, and it never braked automatically.
Even once the system determined that a collision was imminent, it could not brake in time. This was another bug-like aspect of the design: the system did not account for the possibility of a sudden collision and therefore had no emergency braking of its own, relying entirely on the safety driver's intervention to avoid a crash.
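The NTSB report also notes that each time the system reclassified the object, it discarded the object's tracking history, so it could never build up a prediction of the pedestrian's path. A minimal sketch of why that matters (class names, thresholds, and the observation sequence are all invented for illustration; this is not Uber's actual code):

```python
# Illustration: why resetting an object's track history on every
# reclassification delays a braking decision indefinitely.

class TrackedObject:
    def __init__(self):
        self.label = None
        self.positions = []  # lateral distance from the car's path, meters

    def observe(self, label, lateral_offset):
        # The flaw being illustrated: a new label wipes the motion
        # history, so movement toward our lane must be re-learned.
        if label != self.label:
            self.positions = []
            self.label = label
        self.positions.append(lateral_offset)

    def crossing_speed(self):
        # At least two samples since the last reset are needed
        # to estimate motion toward the vehicle's path.
        if len(self.positions) < 2:
            return 0.0
        return self.positions[-2] - self.positions[-1]

def should_brake(obj, threshold=0.5):
    return obj.crossing_speed() > threshold

obj = TrackedObject()
# The pedestrian moves steadily toward the lane (6 m away -> 1 m),
# but the label flip-flops, as in the NTSB's account.
for label, offset in [("unknown", 6.0), ("vehicle", 5.0), ("bicycle", 4.0),
                      ("vehicle", 3.0), ("bicycle", 2.0), ("unknown", 1.0)]:
    obj.observe(label, offset)
    print(label, should_brake(obj))  # prints False every time
```

Because the label changes at every step, the motion history never survives long enough to estimate a crossing speed, and `should_brake` never fires; a classifier that held a stable label would have seen the steady drift toward the lane after just two frames.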
From the car's viewpoint: the system did not recognize the pedestrian as a person until the moment of the accident | YouTube
Uber's autonomous driving technology was also not particularly advanced at the time. According to the annual disengagement reports published by the California Department of Motor Vehicles (DMV), Uber's self-driving cars required frequent takeovers by safety drivers during testing, with manual intervention needed roughly every 37 km on average. By comparison, Waymo, widely regarded as the industry leader, reported only 0.09 manual takeovers per 1,000 miles of testing in 2018.
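To put the two figures on the same scale, a back-of-the-envelope conversion using only the numbers quoted above:

```python
# Convert "one takeover every 37 km" to takeovers per 1,000 miles
# and compare with Waymo's 2018 DMV figure.
KM_PER_MILE = 1.609344

uber_km_per_takeover = 37                  # one takeover every 37 km
uber_per_1000mi = 1000 * KM_PER_MILE / uber_km_per_takeover
waymo_per_1000mi = 0.09                    # 2018 DMV report figure

print(round(uber_per_1000mi, 1))                    # 43.5
print(round(uber_per_1000mi / waymo_per_1000mi))    # 483
```

By this rough measure, Uber's takeover rate was several hundred times Waymo's.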
What is even more incredible is that Uber had also disabled the vehicle's own forward collision warning and automatic emergency braking systems. A smart car fitted with a safety system, an automated driving system, and a wealth of sensors ended up depending most heavily on a human.
In an interview with GeekPark (ID: geekpark), He Shanshan, head of the automotive and artificial intelligence practice at Beijing Anli Law Firm, said that Uber's self-driving vehicles did have product defects, and that from this perspective the company behind the technology should bear responsibility for safety incidents. But the outcome of this case reflects the United States' relatively tolerant attitude toward new technologies. "If the same incident had occurred in a country with stricter supervision, such as Germany, the result would have been very different."
The NTSB's final investigation concluded that the accident was caused by the safety driver being distracted by her phone, combined with Uber's "inadequate safety culture." The safety driver is certainly responsible, but the technology also had obvious flaws and cannot escape its share of the blame. In legal terms, however, the person bears all of the criminal responsibility while the technology is found blameless: Uber, facing no criminal liability, merely needs to fix the flaws in its autonomous driving technology.
Clearly, in this incident, the safety driver became the scapegoat.
The long-term challenge of technology
While studying how the United States assigns blame for aviation accidents in the era of cockpit automation, anthropologist Madeleine Clare Elish found that although control of aircraft has increasingly shifted to automation, responsibility for flight accidents remains concentrated on the pilot.
She sees a huge mismatch between how automated these systems are and how responsibility is actually distributed, and concludes that the concepts of legal responsibility and liability have not kept pace with technological progress.
The Uber case appears to follow the same pattern. In an autonomous driving test, the safety driver is only a small part of a complete system; moreover, the ultimate goal of such testing is to remove the safety driver altogether and achieve truly driverless operation.
He Shanshan noted that in high-level autonomous driving, even with a vehicle "black box" recording the relevant data, dividing responsibility will remain difficult, and improvements and breakthroughs are needed both in the technology and in future legislation.
Ryan Calo, a professor who studies robotics law at the University of Washington School of Law, observed that "a negligent safety driver killed a pedestrian" is a simple story to tell; a lawsuit against the company would require a far more complex one, about how self-driving cars work and what Uber got wrong.
In fact, no existing law specifies which party should bear which share of responsibility in an L4 or L5 autonomous driving accident; ultimate responsibility has to be judged from the data recorded by the car's interior and exterior cameras.
Beyond the legal definition, this is also a social problem from the perspective of technological development. One of the great challenges new technologies pose to human society is that the technology itself is not yet ready, while people overestimate what it can do.
Since Tesla released its driver-assistance system Autopilot, there have been many traffic accidents around the world caused by drivers trusting the system too much and overestimating its capabilities. Even though Tesla stresses in its Autopilot materials that it is only L2 assisted driving and that the driver must watch the road and be ready to take over at all times, some drivers still ignore this rule. In the end, they place too much trust in the new technology.
In June 2020, a Tesla operating in assisted-driving mode crashed into a truck ahead of it | Internet
The fact that the L4 autonomous driving systems being developed by Uber, Waymo, and others still require a safety driver in most cases shows that, at least for now, the technology is far from perfect.
MIT has proposed research on "human-machine co-driving," premised on the idea that humans will not disappear from autonomous vehicles in the short term, and that low-level autonomous driving must remain "human-centered." Under this expectation, the coexistence of humans and technology will be a long-term condition, and the transition from human-centered to machine-centered driving will require a corresponding series of legal and ethical frameworks.
Correspondingly, the field of autonomous driving needs to work out how to define the safety driver's role and adjust the safety driver system accordingly; and more broadly, in human-machine collaboration, how to assign work and responsibility between humans and machines. That is the direction in which the rules should be refined.
The purpose of technology is to serve people. Making people take the blame for technology makes no sense in principle; people should not become "victims" of technological development.