
I’ve taken downvotes from HN optimists for years about self driving cars. The slightest bit of criticism would have comments calling me an idiot 20 different ways.

Then a single dead pedestrian changes all the news articles and the opinions change.

We are a very, very long way away from self driving taxi cabs. What a total PR scam that was - but it did help valuations.



In the current legal environment, self driving taxi cabs everywhere at all times will never exist. Waymo seems to think they can work in Arizona desert though.

How many deaths per year would be acceptable? Like you mention, in the US, it seems that zero deaths per year, similar to commercial aviation, would be required. It is a bit of a shame that a tech that could save a million lives a year globally won't be deployed because it can't be made perfectly safe. On the other hand, I'm sure people in India or China will be happy to build and deploy them with much higher failure rates. Maybe we can import those to the US someday.


>> On the other hand, I'm sure people in India or China will be happy to build and deploy them with much higher failure rates.

You might have seen this internet classic:

https://www.youtube.com/watch?v=RjrEQaG5jPM [India Driving]

Driving in India is notoriously chaotic, and if self-driving cars are ever deployed there in our lifetime, they will come with a safety guarantee about equal to that of a car driven by a blind dog with a missing paw. It would be impossible to safely navigate the streets there with anything less situationally aware than a fully developed adult human brain.

The article above makes it sound like weather is the big problem for self-driving cars and once that's solved - woohoo, we're on our way! It's far from that. Self-driving car AI is still incapable of reasoning about its environment, nor does it have any "understanding" of it in any way, shape or form. Consequently it only works in very limited environments, in very limited conditions - of traffic, visibility, road quality, etc.



Don't forget Uber in the US! A fantastic display of man and machine working as one and how they deal with unexpected obstacles.


> It is a bit of a shame that a tech that could save a million lives a year globally won't be deployed because it can't be made perfectly safe

This is a bit of a generous assumption. It very well could turn out that self-driving cars are worse, or no better, than human drivers.


Right. So many arguments just start from the premise that self-driving cars must be safer and then ask why we aren't racing to replace human drivers with them. In reality the proposition that they are safer is far from proven.


Not to mention that the risk profile is going to be different. When you are driving your car, there is at least the illusion that you are in control of your risks. A careful driver at least believes that their risks are significantly lower than average. Getting into a self-driving car you have no control and are just playing Russian roulette.

Not to mention that the failure modes of self-driving cars are likely to be very different from failure modes of humans, making it more difficult for other road users to predict what they would do in any given situation.


We already know that the assumption is correct for a lot of cases. Self driving cars have logged millions of miles on the road.

The only question is can we make them work in situations that are hard.


Meanwhile human-driven cars have driven billions. The only question is can we make them (humans) work in situations that are hard?

In other words, "it works until it doesn't" is tautologically true, but useless precisely for that reason.


The answer to your question is basically "no". Humans aren't getting better at driving in the aggregate - new ones are born, learn to drive, get better, then get worse, then die off. On average humans are as good as they're going to get (with small variations due to culture and different training programs).

On the other hand, the self-driving cars are just getting started. It's entirely reasonable to suppose that they may surpass the average human in the next X years. And they won't get old and confused and eventually be replaced with new cars that have to learn from scratch -- they should get more or less monotonically better. I think the optimism is warranted even if the timeline and technology are uncertain.


> It very well could turn out that self-driving cars are worse, or no better, than human drivers.

This is trivially true. I mean, self driving cars that are worse than human drivers already exist. Isn’t the point asking when they will be made widely available? I assume that is only possible when they are at least as good as human drivers by some key metrics.


I agree, the sane assumption should be that self-driving cars would be made widely available if and when they are better than humans at driving.

However, I've noticed that many work from the premise that self-driving cars are already safer than human drivers. That, or they'll work from the premise that self-driving cars will definitely be safer than human drivers within a reasonable timespan.

Worse, some assume that self-driving cars will dramatically reduce, or nearly eliminate, automobile accidents/deaths, when even a brief skim over workplace casualties involving autonomous machinery would put that fantasy to rest.

For a group of tech workers that often overlap with self-described skeptics, it's an interesting blind spot to have.


How many years per death would be acceptable is highly culture dependent. Uber should ask the NRA for branding advice, though they might not be patient enough to put in the work to reach such an untouchable brand.


My gun isn't controlled by a computer and in fact that is fairly strictly banned by the ATF.


Ah, so you agree that avoidable deaths from computer controlled systems are viewed as much worse than avoidable deaths from human controlled systems?


Not sure if you have read the article, but it's actually about a company called WaveSense who claims to have developed ground-penetrating radar that can greatly improve all-weather autonomous driving. So yes, it's a daunting task, but a lot of people are working on it and progress is being made. I'd be interested to hear your counterargument.


I’m not an expert in self driving car engineering. I have some experience with ML but not in a significant professional way. What I have is a decade of professional experience building all kinds of software and a skeptical eye.

You have a problem that is unconstrained, with infinite variables, where even a simple mistake can have catastrophic outcomes. Society itself may object to self driving cars for a ton of reasons, from safety to simply driving like a grandma and slowing everything and everyone down. The cost to develop this technology, plus the added cost of hardware to each car, will be enormous and is not obviously a cost savings over a $15 per hour human. If the self driving car is doing anything other than getting from A to B, you still need a human (or a human-like robot) to handle the unloading / delivery / whatever at the end.

Now, I’ve worked at companies with extremely talented and intelligent engineers, and something as constrained and seemingly simple as making a login form can take a long time to perfect - and no lives are at risk! Just imagine the challenges and requirements for building self driving cars. New hardware, software, real-time processing and analysis of tons of data, all to drive split-second decisions that can kill people if done incorrectly.

Huge challenge - huge risks - huge money - uncertain payoff. This is not something that will appear suddenly. If there aren’t convoys of self driving trucks operating in desert highways overnight, where it’s dry and straight and flat and no one else is there, then we aren’t going to see city taxis for a very long time.


> You have a problem that is unconstrained, with infinite variables

I'm not sure this is correct. I come from the optimal control world. The problem is not unconstrained and definitely does not have infinite variables (if you think in state-space, consider the state-space equation x' = f(x, u, theta). The x-space (state) is large but finite, and the u-space is fairly small -- steering, gear, brake, etc.). The x-space is also stochastic, and there are many observability issues.

That said, we've been designing control systems against the real world for a number of years now, and the key is not to model all the unknowns (because there will always be something you can never anticipate), but to model the known and safe path that the system can fall back to.

The problem is a complicated one, and it will take more than a few attempts to make it work, but it is by no means an impossible one if you break it down into the fundamentals.
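To make the state-space framing concrete, here is a minimal sketch of x' = f(x, u) using a kinematic bicycle model as a stand-in for f. All names and parameter values here are illustrative assumptions for the sake of the example, not anyone's actual stack; the point is just that the u-space really is tiny (steering and acceleration) while the x-space is larger but still finite.

```python
import math

# State x = (px, py, heading, speed); control u = (steer, accel).
WHEELBASE = 2.7  # metres; a typical passenger-car value (assumed)

def f(x, u, dt=0.05):
    """One Euler step of x' = f(x, u) for a kinematic bicycle model."""
    px, py, heading, speed = x
    steer, accel = u
    px += speed * math.cos(heading) * dt
    py += speed * math.sin(heading) * dt
    heading += speed * math.tan(steer) / WHEELBASE * dt
    speed = max(0.0, speed + accel * dt)
    return (px, py, heading, speed)

# Drive straight from rest for 2 seconds at a constant 2 m/s^2.
x = (0.0, 0.0, 0.0, 0.0)
for _ in range(40):  # 40 steps * 0.05 s = 2 s
    x = f(x, (0.0, 2.0))
print(x)  # ends near px = 3.9 m, speed = 4.0 m/s
```

The stochasticity and observability issues mentioned above live in how you estimate x from sensors, not in this deterministic core; the "known safe fallback" idea amounts to always keeping a pre-verified control sequence (e.g. brake along the current lane) that this model predicts is collision-free.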


I could not disagree more. The input to an AV's sensors (not just the raw data but what it represents and what must be inferred from it) is effectively infinite. If your solution is to stop the car, you must also determine whether that's safe to do. It's extraordinarily complex. This isn't a factory.


I strongly disagree. It is a large set but not infinite, otherwise humans would not be able to drive. It is constrained by physical laws.

Also, control systems have been used to control much more complex entities than just factories.

I agree it is an extraordinarily complex problem. But I also believe progress can be made to a point where it can be feasibly solved.


But it doesn't have to be perfect, it just has to be better than human drivers. That doesn't seem too difficult.


I think you’re underestimating the ability of even bad drivers to accurately process and react to new information while driving.


I think you're also overestimating the abilities and self-restraint of drivers to not use their phones while driving, to look both ways, to not go too fast, etc.


That may be true logically, but I don't think it is workable politically.

People are conditioned to accept that we will kill each other with cars every once in a while. I'm skeptical we will get to the point any time soon where the general public hears about a family killed by a self-driving car and just shrugs it off. There will be intense pressure to get them off the road.


My thesis is that “better than human drivers” is an extremely difficult problem that even software professionals have massively underestimated the difficulty of.

It may not even be possible to solve without something radical like banning human drivers or inserting electronic nodes directly into our roads and infrastructure to aid autonomous vehicles. The existence of human drivers might make the problem simply impossible to solve in a way acceptable to society.


> even software professionals have massively underestimated the difficulty of.

Software professionals are somewhat notorious for underestimating the difficulty of their projects. There's a massive amount of literature about that, proposing various techniques of mitigating this on a personal, team, or organization level.

That being said, I think in this particular case the problem was more of an overestimation of what the ability to work with highly dimensional data (as in ML algorithms) can give you (and an underestimation of the practical problems of deploying ML-based systems). Basically, people tried running 30+ year old algorithms on GPUs while feeding them ungodly amounts of data and realized that, with sufficient horsepower, they could make them (finally) work (as in: do something genuinely useful). This built up a lot of hype, of which self-driving cars are just one offshoot, I think.

Anyway (if it's not evident from the above ;)), I find your arguments convincing and share your doubts. Self-driving cars are probably not impossible to achieve, but to get there we either need decades of research and progress or a couple of very high-profile breakthroughs in tech and theory. I won't hold my breath for either :)


>> I'd be interested to hear your counterargument.

Not the OP, but like you say, the article is about a company who claims to have developed a technology that will solve self-driving cars' problems with rain.

People in industry make claims all the time. People in the sciences do, too. Just because someone makes a claim, doesn't mean it's true. It's only a claim.


I think it would be more succinct to describe WaveSense as ground-penetrating radar for positioning purposes. It's pretty orthogonal to autonomous driving, which obviously would benefit from any extra position sensor, but it does not actually contribute to autonomous driving.
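A toy sketch of what "radar for positioning purposes" means: match the current subsurface scan against a previously mapped strip of road to recover where you are along it. The signal values and the brute-force matcher below are made up for illustration; WaveSense's actual processing is not public.

```python
def locate(map_strip, scan):
    """Return the offset in map_strip where scan best matches
    (smallest sum of squared differences)."""
    best_offset, best_err = 0, float("inf")
    for off in range(len(map_strip) - len(scan) + 1):
        err = sum((m - s) ** 2
                  for m, s in zip(map_strip[off:off + len(scan)], scan))
        if err < best_err:
            best_offset, best_err = off, err
    return best_offset

# A fake subsurface profile along 12 positions, and a 3-sample
# scan taken somewhere along it.
road = [0.1, 0.4, 0.9, 0.2, 0.7, 0.3, 0.8, 0.5, 0.1, 0.6, 0.2, 0.9]
scan = [0.3, 0.8, 0.5]  # matches positions 5..7
print(locate(road, scan))  # → 5
```

Note that this yields a position estimate and nothing else, which is why it's orthogonal to the perception and decision-making parts of the problem.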


The article supports his point. This is a press release for technology that may be many years from appearing in a production vehicle. It uses a sensor that isn't included in the package of sensors used in other self-driving vehicles, a package that's already too expensive for consumer vehicles. Yet the technology seems to be necessary for handling a fairly common road condition.


That helps with localization only, right? Not with any other perceptual tasks?


>Then a single dead pedestrian changes all the news articles and the opinions change.

Frankly, it didn't change my opinion at all. As far as I can tell, Uber is a train wreck of a company. When I first read the story I was kind of surprised, until I found out it was an Uber car.



