U.S. urges states not to allow general use of self-driving cars (detroitnews.com)
39 points by greenyoda on May 30, 2013 | hide | past | favorite | 75 comments


I'd like to see the data that the NHTSA used to conclude that the risk of a malfunction is higher than the risk of human error. Millions of crashes each year result in tens of thousands of deaths [1]. Humans drive while tired, drunk, or simply distracted, and of course, tend to have slower reflexes than computers.

Even if the technology isn't ready for mainstream adoption today, I bet we'll look back in horror at the period in our history during which humans careened unaided down highways at breakneck speeds, certified in their driving ability by nothing more than a test easily passed by your average teenager.

[1] http://en.wikipedia.org/wiki/List_of_motor_vehicle_deaths_in...


It's rare for all of the human drivers on a road to make the same mistake at the same time under the same conditions, but that's a likely failure condition for automated driving software if the software and sensors are standardized. This could lead to much worse accidents than you'd get with human drivers, because all of the cars will do the same thing.

An all-human example: driving too fast in foggy/whiteout conditions and not having time to stop when the road is blocked. When this happens today, you get 100 car pileups because everyone is doing the same thing: driving along until they suddenly come upon the pileup, and smashing into it because they don't have time to stop.

What if a software error causes this same kind of behavior in more common conditions? Google's testing wouldn't expose this kind of failure mode, because (afaik) they haven't done extensive tests driving fleets of automated cars together. All of their cars have been surrounded by human drivers, and for all we know the varied reactions of the human drivers have prevented them from hitting the Google car when it does something odd.


I don't believe the report[1] actually said that the risk of a malfunction is higher than the risk of human error. In fact, it's quite optimistic about the ability of automated systems to save lives, for precisely the reasons you described. They do, however, correctly point out that such systems can malfunction, and will malfunction, and that safety impacts of such malfunctions should be minimized. Because malfunctions in automated transportation can lead to, say, June 2009 on the DC Metrorail.[2] Such incidents are rare, and through hard work and foresight will be rare in the future, but that doesn't happen by itself. You have to make a concerted effort to ensure such systems fail gracefully.

[1] http://www.nhtsa.gov/staticfiles/rulemaking/pdf/Automated_Ve...

[2] https://en.wikipedia.org/wiki/June_2009_Washington_Metro_tra...


Certification? Hah.

My friend got a G1 "learner's license" in Ontario (which requires only a multiple-choice test). When he moved to New York, he "traded in" the G1 for a full license. I guess that's the exchange rate.


Given how bad humans are at driving cars safely, it is unfortunate that we are holding self-driving cars to a higher standard than we hold humans. It will cause needless loss of life.


Yes. Given that people also generally believe they drive safely, what is happening is that they are basically trying to hold them to the standard people believe (incorrectly) they are currently at.


Regardless of how bad I or someone driving around me is, I would rather have complete control. If I'm going to relinquish that control, then the alternative had better be superb. This is one of those paradigms where we need software engineering to be flawless, which, history tells us, is never the case.

Not disagreeing with you just offering an opinion which is more centered and cautious.


I'm not sure preferring control over safety really qualifies as "more centered and cautious".

I understand that a lot of people are more concerned about control than safety (e.g. tons of people who are more afraid to fly than to drive, even though it's vastly safer), but it's a clearly irrational preference.


If you have ever been a passenger in a car, you have given up that control to an entity which is likely far less safe than the software option. (still need to see the data this was based on)


Yes, you've given up control. But you've traded it for trust. While Google's cars may have a flawless record, trust is an emotional thing. It's a long slow road for self-driving cars to earn our trust, even if they should already deserve it.


Thanks for understanding what I was trying to say.


I'd personally like to have complete control... but it's balanced by the fact that I'd rather most of the other drivers I encounter not have that control.


Mostly, people look at self-driving cars in the light of "Would I be better driving my vehicle?"

I like to look at it from the other direction: "Would I be better if everyone had a self-driven car?"


Everyone around you would rather you didn't. Where should your liberty end when it puts others at risk?


That's because people have liability and machines do not.


Reading the article, this doesn't strike me as inappropriate. "Self-driving vehicle technology is not yet at the stage of sophistication or demonstrated safety capability that it should be authorized for use by members of the public for general driving purposes" is probably true. The cars are good at what they do, but that doesn't mean they can handle all sorts of bizarre detours, road construction, rural roads, faded/incorrect markings, conflicting markings, control from officers/construction workers on the roadway, etc. It's a huge, promising field, but still under significant development.

They also say, "as self-driving cars improve, they will reconsider."

Seems fair.


Which seems to be the same conclusion as that of the people working on autonomous vehicles right now, since no one has announced when they plan to make consumer models available.

The only real takeaway I got from this article is that the NHTSA is actively monitoring autonomous vehicles, as they should be.


They also say, "as self-driving cars improve, they will reconsider." Seems fair.

It would seem fair if they pointed to any actual data regarding the safety of such vehicles versus the safety of cars driven by humans but I see no such information provided. Or, it might seem fair if there was any company trying to actively sell self-driving cars to the general public right now.

I think this is a problem that would likely work itself out without legislation surrounding it. Companies, for the most part, would probably like to provide products that don't endanger the lives of their customers. Along with that, insurance companies I'm sure would love to reduce the risk of driving while maintaining the current cost of insurance. Therefore, I'm guessing they will be heavily involved in testing self-driving cars. If the findings aren't good I'm sure insurance costs would be extreme or not available at all.


Just to address your first point, data is not actually too beneficial when it comes to general public policy making. Many people make decisions based on emotion rather than data.

For example, some people have a tremendous fear of flying. Yet flying commercially is one of the safest modes of travel: a person is far more likely to be injured on the way to the airport than to be injured in a commercial aviation accident in this country.

Likewise, the first time one of these cars misreads a signal or marking and plows into a farmer's market, the panic will be significantly greater than when an impaired driver causes the same damage; even if we have data to show that for each self-driving car injury there are 100 human-driven car injuries, it will be tough to override the "I don't have control of it, I don't understand it, I'm scared of it" emotional gut response of people.


The onus is on the people who want to change the status quo to show that self-driving cars are safe, not the other way around. "The devil you know is better than the devil you don't."


I can imagine that there will be a very bloody political fight over self-driving vehicles as they become more mainstream, particularly when it comes to taxis and trucks, because of the vested interests of professional drivers.


Why do people on HN always jump to "vested interests?" The NHTSA is conducting a study to determine whether a new kind of vehicle should be allowed on public roads. This is the prudent course of action, whether or not anyone stands to benefit from it.

I worked at a company that does what you'd call today whitespace wireless technologies. I remember when I was young and naive, my reaction was: "why doesn't the FCC just get out of our way and let us put this stuff on the public airwaves?" And I distinctly remember when Microsoft did the test with their whitespace technology, where it couldn't handle, of all things, unlicensed wireless microphones (commonly used by churches and sporting events). And my thought was: "who the fuck cares about churches? They're standing in the way of progress!"

Then I worked for a summer at the FCC and realized: "shit, the world is really complicated." There are tons of stakeholders who use the public airwaves and it's imprudent to do anything without first being sure what the impact will be (measure twice, cut once, as they say).

Self-driving cars will happen, and have to happen, the same way. The governments of the country spend $160 billion a year on the public roads, and the people of the country are absolutely dependent on them, and before they let a bunch of yahoos run computerized cars everywhere they are going to have to be convinced of what impact they will have on the system.


> Why do people on HN always jump to "vested interests?" The NHTSA is conducting a study to determine whether a new kind of vehicle should be allowed on public roads. This is the prudent course of action, whether or not anyone stands to benefit from it.

Why do people on HN always make assumptions about other people's comments? I wasn't criticizing this particular NHTSA study, I was just saying that there will likely be a bloody political fight over allowing a technology that will take away a lot of blue collar jobs that can't be outsourced.


Sorry, I didn't mean to put words in your mouth. I was more using your post as a jumping-off point for my rant.


> Why do people on HN always jump to "vested interests?"

Because, while I fully agree with the rest of your post, it'd be equally as naive to think that those processes are NOT influenced by people with "vested interests".


There will be two big, competing points regarding automated vehicles in the public eye:

* Automated vehicles can save a lot of people a lot of money, save lives, and completely transform the way transportation operates (and perhaps even how cities and suburbs look. See, for example, the vast swaths of parking lots in much of the United States, and think how utterly unnecessary they'll be in a world of fully autonomous vehicles.). The NHTSA policy document says this in the first paragraph, in fact.

* Automated vehicles are untested and scary; would you trust your children's safety to a computer?

The entrenched interests like taxi drivers, truckers, etc. will push the second quite a bit, I imagine. As with a lot of new technology being introduced to the public, that argument will initially have a lot of traction (especially if there are big, public incidents when it starts being rolled out) regardless of what the statistics say regarding safety, but it will gradually be marginalized as people get used to it, based on the strength of the first argument. But it'll be an arduous journey there. People don't like change.

NHTSA press release, with link to the actual policy document (at the bottom): http://www.nhtsa.gov/About+NHTSA/Press+Releases/U.S.+Departm...


It's possible that the destruction of all driving-based livelihoods will take on symbolic importance in the public eye, too. It could come to represent the loss of blue collar and white collar jobs to everyone who doesn't have a career anymore, and that would really complicate the rollout.


> See, for example, the vast swaths of parking lots in much of the United States, and think how utterly unnecessary they'll be in a world of fully autonomous vehicles.

I don't know that it would make that much difference. Most people want to drive at the same time to get to work, and few people seem likely to want their car to do an extra commute so that it can park up at home. Couple that with the fact that a car tends to be a personal space - and you'd be giving that up if you started having to share (even if you were the only person re-using a hired vehicle).

I just picture people using self-driving cars in more or less the way they use their normal car.


This is basically the way it works in cities anyways. No one really cares in NY that taxis aren't their 'personal' space. In my eyes, autonomous cars would just allow a city-style car culture in suburbia, since it will finally be practical to not own (several) cars. You should really think of self-driving cars as providing scale to a lot of existing car-share economies.

For example, consider how many families today have 2 cars. Right now, if the husband drives to work at 7AM and the wife at 9AM, this necessarily requires two cars. In an autonomous-car world, the car would just drive back home to pick up the wife, instead of sitting in a parking lot for 9 hours. Now, extrapolate this to an apartment complex: what if part of the renter's agreement is access to one of their 20 autonomous cars? It's very rare that at any given moment every single occupant of an apartment complex is using the car, so this would be totally feasible. In fact, it would be feasible today with normal cars, except that when one person takes the car, they block access to that car for more time than they are actually using it. A 15-minute drive to the grocery store blocks usage of the car for up to 2 hours, since it can't go back and service other members.
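The shared-fleet idea above can be checked with a quick back-of-envelope model. All the numbers here are invented for illustration (100 residents, a 15% chance each wants a car during a given peak hour, a pool of 20 cars); it's a sketch under a simple independence assumption, not a claim about real demand:

```python
from math import comb

def p_demand_exceeds(fleet, residents=100, p=0.15):
    """P(more than `fleet` residents want a car at once), binomial model."""
    return sum(comb(residents, k) * p**k * (1 - p)**(residents - k)
               for k in range(fleet + 1, residents + 1))

# With 20 cars for 100 residents under these made-up numbers, demand
# only outstrips the fleet in a small minority of peak hours.
shortfall = p_demand_exceeds(fleet=20)
assert shortfall < 0.10
```

Of course, the independence assumption is exactly what breaks down at rush hour, which is the objection raised in the replies below: correlated demand is what actually sizes the fleet.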


What if part of the renter's agreement was for access to one of the cars? You have 20 journeys that can be going on at any one time. If more than 20 people go to work, or wherever, in the same period of time, then the system stops working. Granted, the maths isn't always going to be quite that simple, since you're really looking at something being out of service for twice the outgoing journey time, but the limiting factor is going to be peak availability, and that is going to stack up around rush hour.

I'm not sure NYC is a good example for the personal space one - do the people there really have the practical option of doing otherwise?


Yes, that is how all things work. If everyone wants to use the swimming pool or gym in an apartment complex at once, then there isn't enough room. Yet you rarely hear about this, because the planners make pretty good calculations about peak use of the pool. Similarly, I just snatched the number 20 out of thin air. If you were putting together an apartment complex, you would calculate how many you would need such that there are plenty of cars for almost all occasions. You would also probably have a more orderly system of filling out everyone's work time so they can be sure to have enough for that (especially because these time slots would be ideal for self-driving carpooling). May there be some time when someone wants a car but doesn't have one? Of course! But this is not unique to the apartment complex; this happens in families as well. There are plenty of times when a 2-car family needs to go three places at once. And they end up figuring it out, because it's not like there's no recourse in that situation. The reality is that this is not that hard of a problem.

Regarding NYC - I'm not sure what the point is. Most people don't have personal space on an airplane because there is no practical alternative there either (most people can't afford a private jet). So what? Would it be nice if everyone could? I don't know? Maybe. I honestly don't think about it. I value not paying millions of dollars over personal space on an airplane. Similarly, if you really care about personal space, then by all means buy your own car (autonomous or otherwise). I bet a lot of people would value not paying thousands of dollars (often going into debt and paying for gas etc etc) over personal space as well. In suburbia today, that option very much does not exist. For example, in Orange County where I used to live, you need a car to get around. And paying for a taxi to go to work every day is absurd there.


NYC doesn't count as evidence for people wanting a car sharing culture/disregarding the car as a personal space if they lack the ability to do otherwise. The world would look the same whatever people wanted in that instance so it's no evidence either way.

#

If 90% of the car owners need cars in overlapping time slots, then carpooling isn't going to make a lot of sense - it's only going to benefit 10% of the people. It'd be like installing an apartment pool that everyone has to pay for for the woman in 3B. This is a case where the numbers matter. If people all tend to commute at about the same time, then things aren't going to look much different - you might have people playing on their tablets before they get in to work, or a more efficient taxi service, but most people will still own cars.

#

As for getting your boss to stagger your time so that you can carpool more effectively: well, maybe. But at the moment that seems like grasping at straws to me. These are the same people who can't even manage telecommuting successfully and haven't caught up with management books from 40 years ago. Trying to get people to do things together is difficult, and companies are by and large incompetent.

And that's before you start introducing all the cultural things like wanting lots of people in the office at the same time. I don't think most people's bosses are likely to put themselves out for this. You need to assume that this is already in wide circulation before any real advantage for them in terms of not having to have a carpark comes into play.


Unless the husband commutes 61 minutes away.


There's essentially no need to own a fully autonomous car for solely personal use. Big fleets of dispatched vehicles that you can order to show up at your door (with, this is key, low response times for on-demand service plus reliable scheduled service for your daily commute) would be a much better value proposition for most people, with superior economies of scale that you don't get with a dismal ~5% utilization that most private vehicles operate at. If such a dispatch service were done right and had a big enough variety of vehicle types and prompt enough service, it could easily supplant private car ownership for all but a relatively small group of individuals, because owning a car just wouldn't provide enough advantages to be worth the added cost and hassle.


Presumably self-driving cars would lead to dirt-cheap taxi and car-sharing services, reducing the need for everyone to own a car. You'd even have things like letting your car be a taxi during the day while you work.


Yeah, vested interests like feeding their family.

I mean yeah yeah luddites retraining rising tide etc... but "I'd like to not starve" isn't exactly a sinister motivation.


It's not sinister; it's entirely predictable. But that doesn't make it any less selfish and callous towards the lives it could save.


"NHTSA urges that cars have the capability of detecting that their automated vehicle technologies have malfunctioned 'and informing the driver in a way that enables the driver to regain proper control of the vehicle'"

Yes, unless the system to detect that has malfunctioned.

Past the car screaming "shit's broke, take the wheel", I'm not sure what other ways they have of informing the driver.


> Yes, unless the system to detect that has malfunctioned.

Are the systems that detect malfunction right now single-overseer systems that do sanity checks, or are they redundant driving systems that check to see if they agree?

You'd probably want both, but it seems some arrangement of the latter (maybe 3-5 systems) would be likely to provide a better failover scenario: the driving systems constantly check if they agree, if they don't, there's a majority vote for short-term action and an alert to the driver that they should take over and get the car serviced.

I'm sure there are still scenarios where that wouldn't work -- car-wide problems or outside interference impacting all systems at once, or even just improbable simultaneous failures that would happen with enough time -- but maybe it's what they have in mind.
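The majority-vote arrangement described here can be sketched in a few lines. This is purely hypothetical (the thread names no real system); commands, tolerance, and the alert behavior are all invented for illustration:

```python
# Hypothetical N-way majority vote among redundant driving systems.
# Each system proposes a steering command; if a strict majority agrees
# within a tolerance, act on their average; otherwise alert the driver.

def majority_vote(commands, tolerance=0.05):
    """Return (command, ok); ok is False when no majority agrees."""
    for c in commands:
        # Systems that agree with candidate c within the tolerance.
        agree = [d for d in commands if abs(d - c) <= tolerance]
        if len(agree) * 2 > len(commands):        # strict majority
            return sum(agree) / len(agree), True  # average of agreeing systems
    return None, False                            # disagreement: driver takes over

# One of three systems has drifted badly; the other two outvote it.
cmd, ok = majority_vote([0.10, 0.11, 0.92])
assert ok

# No two systems agree: short-term fallback plus a driver alert.
cmd, ok = majority_vote([0.1, 0.5, 0.9])
assert not ok
```

Note this only covers the "systems disagree" case; as the comment says, a common-mode fault that affects all voters at once defeats any voting scheme.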


It sounds like the regulators think that manufacturers are eager to sell unsafe self-driving cars to consumers. I highly doubt that is the case. Bad press and reactionary legislation early in the life of self-driving cars could significantly slow their adoption, and therefore manufacturers' profits.


It was good to read all the comments here before I decided to write this one. There is some good nuance in the back-and-forth among the comments on this interesting article. I especially like BorgHunter's second-level comment

https://news.ycombinator.com/item?id=5794159

with its link to the official document related to this story.

Here in Minnesota, human driving is sufficiently lousy that I can't wait for self-driving cars to get on the roads here. I see people neglecting being aware of traffic to engage in cell phone conversations or texting conversations on every drive I take here, and even though we get snow every winter, the first few snowfalls of each winter always result in lots of crashes as people drive like they drive in summer on slippery roads until they adapt their driving habits. Bring on the self-driving cars here, I say. I'm glad that the government regulators are attempting to do empirical safety studies, but I urge them to be bold in rolling out self-driving technology as rapidly as it can come to market.


I'm wondering: is anyone researching self-driving larger vehicles, and would future legislation require someone to be present in the vehicle?

According to http://www.bls.gov/emp/ep_table_201.htm, as of 2010 over 4 million people work in transport and warehousing, a lot of whom will be drivers. Self-driving vehicles could mean the end of a lot of jobs, especially if someone developed a retrofit kit for current vehicles.


Well, it does not talk about the main problem that self-driving cars have today:

One car works OK, as it uses an active, super-expensive light sensor (lidar) and radar sensors.

Put two cars together using the same lidar wavelength or the same radar frequencies and you have a terrible problem.

Also, the lasers could be dangerous when one of these is operating near you and you look at it, as the power of the light could do bad things to your eyes without your ever seeing it.


Different vendors' autonomous cars have encountered each other in the wild, and nothing tragic has happened. They wave to each other.


What will insurance companies do if people aren't crashing cars?


Sponsored by the US taxi association. Little do they know that it will be more profitable to have self-driving taxis.


Also, driving safely in snow and ice is about three orders of magnitude more difficult than driving on dry pavement.

There also needs to be an indicator: "It's snowing, please wait until the road is warm and dry, or have the carbon unit take over".

But that's no good, we have to get to where we are going, so tens of thousands of people, who haven't been practicing driving AT ALL for about 8 months, now dust off their glasses and try to remember what these symbols on the dashboard mean and which pedal is the "make go faster" one.

Either humans must be required to drive 20% of the time, or else the humans will be stranded when ice is encountered.


The robot will be far, far better than humans at driving in such conditions. Modern cars already help out as best they can with monitoring slippage, temperature sensing, running in higher gears, monitoring road gradient and rate of change thereof, running wheels at different settings, and all the rest of it; if the meatbag would just sit in the back and let the robot drive everything would be much safer.

The indicator will say :"It's snowing meatbag, please wait until the road is warm and dry, or give me the wheel".


Then you'll need the override when the passenger (or driver) is dying and needs to get to the hospital. I guess you would have to read the 85-page EULA and sign: "I understand that if I crash here there will be hell to pay".


Why would there be such an override system? That's crazy. If there is a need to drive less safely for such a purpose, again, the robot will do better. You push the button marked "Emergency - get me to a hospital" and the robot works out the closest hospital and negotiates with other robots on the road to allow it through at greater speed. The last thing I want in such a situation is a panicky human losing blood and going into shock trying to drive a car as fast as they can.


What makes you think computers will be worse than human drivers at driving in poor conditions? I'd think it'd be the other way around.


Exactly: I have some experience driving in snow and I've certainly made a few big mistakes that I'm pretty sure a computer (that would have a much better sense of available traction) could have avoided. I can also easily imagine sensors that can see through headlight-lit falling snow better than a human.


"Will be"? Probably. Right now? Probably not.

Ice and snow are real issues for, say, the Google car. Driving at night is pretty good because its sensors can still see things better than the human eye.


I think some of that difficulty is because we mostly build up driving intuition on clear roads. An auto-driver will have relatively objective heuristics to evaluate road conditions, which is where humans make a lot of their mistakes (that is, humans frequently just fail to realize that the road is slippery).


Now that I think about it, the auto-driver would decide it is slippery and go excruciatingly slow. And the next problem is that you can't get up a hill because it doesn't know how to gather up excessive momentum to make it over the hill. The human would say: "I can do it", but the auto driver says: "it's too risky".


So the human wants to run deliberately fast so they can effectively slide up a hill with massively reduced control, and the robot kiboshes it on the grounds it's dangerous? Sounds pretty good to me.


This is my biggest fear with automated cars. Snow, ice, and animals are problematic, and having the car drive all summer means poor winter driving[1]. I still worry about how it figures out what is safe to push through versus what will cause an accident. The constant flurries / drifting is also going to be a huge sensor problem.

Car companies are pretty bad about winter driving as is[2][3], so I expect we will have a lot of problems.

1) it is bad enough on the first snow day in Minneapolis even though everyone was driving the day before

2) if the traction light actually informs you of a loss of traction, you shouldn't be driving anyway

3) heated seats, but a total lack of work on defrosting in all these years


Animals: Google driverless car has already been shown to avoid collisions with unexpected animals (like, say, a deer).

Snow and ice: the wheel-rotation and speed sensors used in fancy automatic suspensions are more than able to control for slippage. I'm willing to bet that once prepared, a driverless car could deal with these hazards with much more finesse than a human pushing a pedal with no idea what kind of friction coefficient their vehicle's tires are getting.


The problem with snow and ice isn't what you're already driving on, it is dealing with what is coming. I would really like to see how it deals with the lower visibility and constant motion around it. How does it deal with the ice coming up (and black ice)?

If Google (or anyone else) starts testing in the northland, I will be convinced, but I don't think it is a very easy problem to solve.


"How does it deal with the ice coming up (and black ice)."

It's been told about it by other robots who have already encountered it, which already gives it a massive head start over the meatbag.


Snow and ice have different reflective properties than regular old asphalt.

The car uses LADAR.

Pretty sure it'll be able to tell the difference.


Automated vehicles also potentially means automated plowing/sanding vehicles. Roombas for the roads basically.


I'm not sure if that means more plow trucks taking out my mailbox or fewer...


Not unless sensors get a lot better, as staring into white and finding the road is a rather large problem[1]. Never mind the actual skill of those drivers; it is not an easy job.

1) GPS + maps is not the answer, as it is wrong sometimes.


[1] will surely improve greatly with a fleet of millions of data gathering machines. No, it still won't be absolutely 100%, but that's what Kalman filters are for.
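To make the Kalman-filter remark concrete, here is a toy one-dimensional update fusing an uncertain map prior with noisy position readings. All numbers are invented; a real localization filter tracks many state variables at once, but the shrinking-uncertainty behavior is the same:

```python
def kalman_update(est, est_var, meas, meas_var):
    """One scalar Kalman update: fuse a prior estimate with a noisy measurement."""
    k = est_var / (est_var + meas_var)    # Kalman gain: trust the less-uncertain source
    new_est = est + k * (meas - est)      # blend the estimate toward the measurement
    new_var = (1 - k) * est_var           # fused uncertainty always shrinks
    return new_est, new_var

# Map prior says lateral offset 0.0 m, but with high uncertainty (var 4.0);
# noisy sensors repeatedly read about 0.5 m. The estimate converges toward
# the readings and the variance shrinks with every fused measurement.
est, var = 0.0, 4.0
for meas in [0.6, 0.4, 0.5, 0.5]:
    est, var = kalman_update(est, var, meas, meas_var=1.0)

assert 0.4 < est < 0.6
assert var < 1.0
```

This is how a slightly-wrong map stops being fatal: the filter weights the map by its known error and lets fresh sensor data pull the estimate back.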


Since roads where it snows are always getting torn up and repaved, if automated cars do become widespread, it seems entirely feasible and realistic to line major roads on each side with radio reflectors or some other cheap system that can keep automated plows from driving off the road.

The real problem is the cost of the automated plow itself. A 10-wheel plow with salt feeder isn't exactly cheap to run with or without a driver.


Since they haven't been really successful at automating a combine (a somewhat simpler problem), I figure the plow and the expertise needed to drive one is a ways off.


As a motorcyclist, I look forward to the advent/proliferation of self-driving cars more than almost anything else.

You don't quite appreciate how dangerously most people drive until you're a split second from being killed by any one of them.


Industry standard is to drive 10 mph over the speed limit, so unless self-driving cars are allowed to break the law, they may make traffic even worse. Maybe they would be good for drunk drivers and emergencies, but I can barely stand letting my wife drive, so self-driving cars may be too agitating for me.


I'm pretty sure self-driving cars will make traffic better, because it has been shown that most traffic jams are caused by and exacerbated by poor driving habits.


Traffic jams are caused by volume. If fewer cars pass per minute due to driverless cars obeying the speed limit, traffic will be worse. What are you citing?


Some phantom traffic jams are actually a moving wave sustained by the inefficiencies of starting and stopping. If you have a whole set of linked self-driving cars, theoretically, you could prevent traffic waves from being sustained by moving at a slower constant rate.

The slower rate means you need less time to brake, reducing the distance you need between cars and increasing the capacity of the road, and the constant speed reduces the brake/acceleration cycle that is needed to sustain the wave.

Then of course other jams are caused by inefficiencies in merging which ripple out to all the lanes as people try to get out of the lane that is going to merge. Self-driving cars of course would have the advantage here since they would no doubt be able to merge at higher speeds.

Traffic jams caused by accidents would hopefully be reduced if self-driving cars get into fewer accidents.

And finally, it is conceivable that if you are in a lane full of self-driving cars - especially ones that communicate between themselves - and if it is shown conclusively that they could cope faster in case of a catastrophic event, you could allow them to drive faster the speed limit.
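The wave argument above can be sketched with a toy follow-the-leader model. Gains, speeds, and car counts below are all invented; the point is only that a chain of overreacting followers amplifies a braking event down the line, while a damped chain absorbs it:

```python
# Toy string-stability sketch: each follower sets its speed to the nominal
# speed plus a reaction gain applied to the car ahead's deviation from
# nominal. gain > 1 amplifies a braking event down the chain (the classic
# phantom-jam wave); gain < 1 damps the disturbance out.

def propagate_brake(gain, n_cars=10, nominal=30.0, brake_to=25.0):
    """Return each car's speed after the lead car slows from nominal to brake_to."""
    speeds = [brake_to]                      # lead car taps the brakes
    for _ in range(n_cars - 1):
        deviation = speeds[-1] - nominal     # how far the car ahead is off nominal
        speeds.append(nominal + gain * deviation)
    return speeds

human = propagate_brake(gain=1.2)   # overreacting drivers: the wave grows
robot = propagate_brake(gain=0.5)   # damped controller: the wave dies out

assert human[-1] < 10.0             # by car 10, the chain has nearly stopped
assert abs(robot[-1] - 30.0) < 0.1  # by car 10, the disturbance is almost gone
```

This is the same reason adaptive cruise control with a conservative gain can dissolve stop-and-go waves even among mostly human traffic.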


The number of vehicles and the max speed at which they're willing to travel aren't the only factors that affect throughput. As the GP mentions, bad driving habits can lead to things like the Accordion effect.

https://en.wikipedia.org/wiki/Accordion_effect


That may be marginally helpful, but only if all the cars are self-driving - and if there is no traffic, have fun going the speed limit the whole way. A good way to foster adoption of self-driving cars may be to let them go faster when on auto.


This is an agency trying desperately to seem relevant.



