A financial settlement has been reached in the case of a pedestrian killed by a self-driving vehicle. Elaine Herzberg, a 49-year-old Phoenix resident, was struck by a self-driving Uber vehicle, reportedly traveling at 40 miles per hour, while crossing a street (https://www.nytimes.com/interactive/2018/03/20/us/self-driving-uber-pedestrian-killed.html).
According to the police, the car made no attempt to avoid the pedestrian and collided with Herzberg without any reduction in speed. The failure to reduce speed evidenced an abject failure of the on-board technology.
Experts have opined that the Uber “Light Detection and Ranging” (LiDAR) sensor system, acting in conjunction with comprehensive on-board software technology, failed to recognize the presence of Herzberg, who was crossing the thoroughfare with a bike in hand. The LiDAR system is a remarkable technology which has been used in the development of autonomous vehicles. The accident represents a failure of the underlying self-driving vehicle technology in that the LiDAR system was supposed to reliably maintain a three hundred and sixty (360) degree view of the road and surrounding area while interpreting the data obtained by the multiple sensors placed on the vehicle.
The Uber self-driving vehicle, armed with the LiDAR system, should have been able to observe and determine the precise location, size and speed of the approaching person and react to her in sufficient time to avoid such an obvious hazard.
The absence of light on the roadway should not have been a factor in the Herzberg accident. The LiDAR system was designed to send beams of laser light ahead of the vehicle. With the aid of computer technology, the vehicle sensors should have collected and measured how much time it took for the beams of light hitting the object ahead (the pedestrian) to return to the sensors.
The LiDAR system sends millions of pulses of light per second, allowing the computer system to create a three hundred and sixty (360) degree picture of the area with the ability to detect and analyze objects in the observable area from a distance of at least 60 meters or 196.85 feet. Herzberg should have been detected with plenty of time for the technology to react.
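For readers curious about the underlying physics, the short Python sketch below illustrates the time-of-flight calculation described above. The 60-meter range and the pulse timing are illustrative values taken from this article's description, not from Uber's actual sensor specifications.

```python
# Illustrative time-of-flight range calculation. The values here are
# examples drawn from this article's description, not from Uber's
# actual sensor specifications.
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0  # meters per second

def range_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to an object given a laser pulse's round-trip time.

    The pulse travels out and back, so the one-way distance is half
    of what light covers during the measured interval.
    """
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2.0

# A pedestrian 60 meters ahead reflects a pulse back in about 400 nanoseconds.
round_trip_s = 2.0 * 60.0 / SPEED_OF_LIGHT_M_PER_S  # ~4.0e-07 seconds
print(f"{range_from_round_trip(round_trip_s):.1f} m")  # prints 60.0 m
```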
Typically, the LiDAR unit is affixed to the roof of the vehicle. There may be additional LiDAR units affixed to the front of the vehicle. An additional radar sensor would have been aboard the vehicle sending radar-generated data to the on-board computer. The vehicle was likely equipped with GPS technology and a position sensor. It had cameras which transmitted additional data to the on-board computer system, such as whether a traffic control device was green or red, along with other data related to any object in the path of the vehicle. The interaction between data collection through these devices and the on-board computer system should have been instantaneous. So too the vehicle's reaction to such data.
That data was obtained at a distance which should have allowed the vehicle more than sufficient time to stop. Given the reported speed of the car (40 MPH) and the approximate weight of the vehicle (a Volvo XC90 SUV weighing a little over 4,000 pounds, or 4,394 lbs.), hitting the brake would have allowed the car to stop in approximately 168 feet even if it were driven by a person with an average reaction time. The autonomous vehicle should have reacted instantly to the data incoming through the sophisticated technology aboard the vehicle. That would have permitted the vehicle to stop in approximately 88 feet, certainly with enough time to avoid the crash that took Herzberg's life.
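A back-of-the-envelope Python sketch of that stopping-distance arithmetic follows. The reaction time (about 1.35 seconds) and braking deceleration (about 0.6 g) are assumed values chosen because they approximately reproduce the figures above; they are not taken from the accident investigation.

```python
# Back-of-the-envelope stopping-distance arithmetic. The ~1.35 s human
# reaction time and ~0.6 g braking deceleration are assumptions chosen
# to roughly reproduce the figures cited in this article.
G_FT_PER_S2 = 32.2             # gravitational acceleration, ft/s^2
MPH_TO_FT_PER_S = 5280 / 3600  # 1 mph in feet per second

def stopping_distance_ft(speed_mph: float,
                         reaction_time_s: float,
                         deceleration_g: float) -> float:
    """Reaction distance plus braking distance, in feet."""
    v = speed_mph * MPH_TO_FT_PER_S    # speed in ft/s
    reaction_ft = v * reaction_time_s  # distance covered before braking begins
    braking_ft = v ** 2 / (2 * deceleration_g * G_FT_PER_S2)
    return reaction_ft + braking_ft

# A human driver with an average reaction time.
print(f"human:    {stopping_distance_ft(40, 1.35, 0.6):.0f} ft")  # ~168 ft
# An idealized computer with an effectively instant reaction.
print(f"computer: {stopping_distance_ft(40, 0.0, 0.6):.0f} ft")   # ~89 ft
# Both are well within the 196.85 ft (60 m) detection range cited above.
```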
In sum, the accident represents an apparently inexplicable failure of the underlying technology.
The human backup driver in the vehicle also failed to stop the crash, as the “driver” was not looking at the road when the accident occurred. He too could have reacted in sufficient time, assuming he had the capacity to observe the person at a distance of 60 meters. Even allowing for the darkness of the road, the limitations of human observation, and a momentary delay due to human reaction time, the “driver” likely could have avoided the accident by turning the vehicle away from the pedestrian and slowing the car.
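As a rough check on that claim, the sketch below computes the time available between detection and impact, assuming the reported 40 mph speed and the 60-meter detection range discussed earlier; both figures are this article's working assumptions.

```python
# Rough time-to-impact check, assuming the reported 40 mph speed and the
# 60-meter detection range discussed above (illustrative assumptions only).
MPH_TO_M_PER_S = 1609.344 / 3600  # 1 mph in meters per second

speed_m_per_s = 40 * MPH_TO_M_PER_S  # ~17.9 m/s
detection_range_m = 60.0

time_to_impact_s = detection_range_m / speed_m_per_s
print(f"{time_to_impact_s:.1f} s to brake or steer away")  # ~3.4 s
```

Under those assumptions, roughly three and a half seconds would have separated detection from impact, which is ample time for an attentive driver to begin braking or steering.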
The incident has once again raised important questions about the reliability of autonomous driving technology as well as the adequacy of present regulations affecting the industry. The feasibility and prudence of using public thoroughfares as a real-life testing track for a new and clearly unproven technology have thus been called into serious question. Moreover, the devastating consequences of accidents involving vehicles utilizing this technology suggest the potential need for legislation which will protect the public from becoming human guinea pigs while innocently walking the public streets of America.
On May 7, 2016, in another self-driving car accident, this one involving a Tesla, a simple intersection became a treacherous death trap for the driver of the vehicle. The vehicle, operating in Autopilot mode, collided with a tractor trailer which it failed to detect. This accident was reported in an article examining the issues related to self-driving cars in The New York Times on June 30, 2016 (https://www.nytimes.com/2016/07/01/business/self-driving-tesla-fatal-crash-investigation.html).
Since that time, little appears to have changed in resolving the clear and present dangers to the public from the use of self-driving test vehicles on our roadways.
The roads of the United States have effectively become testing stages for the autonomous vehicle industry, led by Tesla and various other car makers. One must wonder why regulations have not been put into place safeguarding the public from this untested technology.
Other cases of self-driving cars being involved in accidents have been reported (https://www.wired.com/story/tesla-autopilot-self-driving-crash-california; http://www.bbc.com/news/technology-42801772).
Indeed, Tesla's technology may be even more dangerous and potentially problematic. Tesla apparently relies on radar and camera technology only. The CEO of Tesla, Elon Musk, has argued that since people rely on their vision and brains to operate a vehicle, a self-driving car can likewise be driven by a machine relying on analogous technology such as cameras and radar. He rejects the use of LiDAR.
That arguably cavalier attitude toward public safety apparently did not work out very well for the victims of Tesla vehicle accidents. On the other hand, Uber fared no better with its LiDAR-equipped vehicle. Perhaps this technology is not ready for primetime after all. Certainly, the question must be asked as to why we should expose the public to serious hazards when the rich and well-financed auto and car-sharing industries can well afford their own test-driving tracks and self-designed test areas.
REGULATION OF SELF-DRIVING VEHICLES
The development of self-driving cars was accelerated by what appears to be a history of a relatively permissive regulatory landscape. By 2013, Nevada, Florida, California and Arizona had all passed legislation allowing self-driving cars on public roads. So far, state and federal regulators have used a “light touch” with the industry, in an effort not to stifle innovation. As of March 26, 2018, the National Conference of State Legislatures reports that 22 states have enacted legislation related to self-driving vehicles (http://www.ncsl.org/research/transportation/autonomous-vehicles-self-driving-vehicles-enacted-legislation.aspx).
Some critics, however, believe that more forceful oversight is needed, and the Herzberg case could generate momentum in that direction. Still, given that past accidents have had only a temporary influence on the regulatory landscape, one must be circumspect about any optimism. In fact, New York State Governor Cuomo has endorsed General Motors' plan to transform Manhattan into a test track for self-driving vehicles this year (https://www.theverge.com/2017/10/17/16488330/gm-cruise-nyc-self-driving-car-test-cuomo).
At the federal level, the United States Senate is considering legislation which would preempt states from enacting safety rules affecting self-driving vehicles that are stricter than the federal standard. This is potentially significant and perhaps ignores the divergence of local conditions among the various states. Legislation which may be reasonable in Iowa may engender extraordinary dangers for pedestrians in New York City (http://www.ncsl.org//Portals/1/Documents/standcomm/scnri/senate_commerce_ads_1_25672.pdf).
In the wake of the Herzberg accident, Uber has voluntarily and temporarily halted self-driving tests nationwide, and Arizona's governor has ordered the company to cease such tests within that state's borders. Other companies testing driverless technology have also opted to put their road tests temporarily on hold, though many of these firms have claimed their technology would have been able to avoid the collision. One may assume that this temporary freeze will not last very long.