
> Still, Cruise says those drivers have had to take over manual control of vehicles engaged in Cruise Anywhere service only on a few occasions

Not trying to pour cold water on this, but... it's all about those "few occasions". The difference between level 2 and level 4/5 autonomy is vast. Those "few occasions" would presumably be accidents without the intervention of the safety driver. A small corporate shuttle service that "only had a few accidents" would not be considered a very safe service.

So for a service that really only makes sense at level 4, the reporting really ought to focus on those "few occasions", the stringent safety standard that needs to be met to remove drivers, and the actual progress towards it. That's the real meat of the story.



At the end of 2016 all companies testing autonomous vehicles in California filed their disengagement reports. Waymo reported 1 unplanned disengagement every 5000 miles, and GM reported 1 every 180. Nobody else was even in the running. These metrics are crude, but it's currently the only benchmarking system we've got, and we'll see in December where Cruise is really at when they report their 2017 numbers.

Cruise does distinguish itself by driving mostly on busy downtown SF streets, a challenging environment, and they've released impressive videos showing off their capabilities.

A few weeks ago Kyle Vogt claimed Cruise would surpass Waymo 'in a few months'. Whether they do or not remains to be seen, but they are making sustained progress and they have the full force of General Motors behind them.

Unlike Waymo, Tesla's Autopilot program, and Uber's autonomous driving program, Cruise's operation has been drama-free, without any lawsuits, leadership blowouts, or departures of key people. This implies they've got a solid team under good leadership, and I think that matters a lot for complex software projects like this.


Waymo (Google) is not that far from level IV. The average U.S. driver has one accident roughly every 165,000 miles, and disengagements are not 1:1 with crashes. I suspect drunk drivers would probably be safer handing over control, which could make a huge difference.

Most interesting is the breakdown of disengagements by location (page 9): https://www.dmv.ca.gov/portal/wcm/connect/946b3502-c959-4e3b...

  Interstate 0
  Freeway 0
  Highway 12
  Street 112
Which might be good or bad depending on the relative amount of driving in each situation.

PS: Disengagements per 1,000 miles also dropped from 0.8 (2015) to 0.2 (2016), which suggests very good things.
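For a rough sense of scale, here's a back-of-the-envelope sketch using only the figures cited in this thread (keeping in mind a disengagement is not the same thing as a crash):

  # Back-of-the-envelope comparison using the figures cited in this thread.
  waymo_disengagements_per_1k_miles = 0.2       # Waymo, 2016 report
  human_miles_per_accident = 165_000            # rough US average cited above

  waymo_miles_per_disengagement = 1_000 / waymo_disengagements_per_1k_miles
  print(waymo_miles_per_disengagement)          # 5,000 miles per disengagement

  # Disengagements are not crashes, so this ratio only bounds how far Waymo
  # could still be from human-level accident rates:
  print(human_miles_per_accident / waymo_miles_per_disengagement)   # ~33x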

PPS: Been getting a lot of up and down votes; I wonder what people agree / disagree with?


Sounds like you're assuming level 4 merely means parity with or better than human error. Lots of problems with that. For instance, if it's Google's negligent bug that kills the family of 4 instead of a driver, Google gets sued. Google has much deeper pockets than a driver, who would typically declare bankruptcy, effectively capping the damages. No real cap for Google, every time. So that explodes the cost of autonomous vehicles. And then there's the matter of potential criminal liability in the case of gross negligence. So there's going to be a much higher bar for transitioning to level 4.

Also, while there may be fewer of some types of accidents, there will be lawsuits for a potentially very large set of other types that never used to get litigated. For example, today when a driver causes their own accident, nobody gets sued.


Google, however, does have the ability to pass the cost of insuring such a system on to the consumer. Even at current accident rates, the cost is around 100 dollars a month per car using such a system (driving 12,000 miles in a year), which works out to roughly ten cents per mile. So even if they want incredibly high-quality insurance, charging an extra 5 cents per mile won't make a difference in the scheme of things, provided they can get the overall cost of commuting 15 miles (the average American commute in one direction) to something under 5 dollars.
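A quick sanity check on that arithmetic (a rough sketch; the premium, mileage, and surcharge figures are just the ones assumed above):

  # Per-mile insurance cost implied by the figures above.
  monthly_premium = 100.0                  # dollars per month
  miles_per_year = 12_000

  base_cost_per_mile = monthly_premium * 12 / miles_per_year    # = $0.10/mile
  surcharge_per_mile = 0.05                # hypothetical extra-generous coverage
  commute_miles = 15                       # average one-way US commute

  insurance_per_commute = (base_cost_per_mile + surcharge_per_mile) * commute_miles
  print(base_cost_per_mile, insurance_per_commute)   # 0.10, 2.25 dollars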


This aspect of insurance is widely misunderstood. First of all, an extra 4 cents/mile just for insurance is really expensive. Beyond that, wrongful death and personal injury lawsuits can result in very large judgments, especially once we get to punitive damages for a large corporation. Individual offenders regularly go bankrupt. It may be wishful thinking that this can be easily solved with "insurance". There is also the criminal negligence aspect to consider.


4 cents/mile is roughly in line with rates for traditional auto insurance today. And since these rapid improvements are likely to continue, these costs will quickly come down.

Sure, you can imagine some class-action / criminal legal disaster, but the auto industry has survived repeated scandals. Corporate criminal negligence is extremely rare.

In five or ten years this technology could be saving 100,000 lives every year. I hope we are not so risk averse that these benefits are needlessly delayed.

http://www.insurance.com/auto-insurance/auto-insurance-basic...


Medical errors kill on the order of 700 people per day in the US (high error bars on that number, though). And yes, insurance is a major cost in the medical field, but even large-scale negligent deaths are regularly dealt with by the US legal system.

Yes, anyone putting out such a system is going to get sued, but so does every car company that exists today. It's just the cost of doing business and ultimately just gets passed on to consumers.


You're correct. The question is what cost gets passed on to consumers. If it is something on the order of the cost of medical malpractice insurance and average settlements, well, wow. That would make AVs prohibitively expensive.

More likely the bar for level 4 will just be significantly higher than parity with human error, if only for liability reasons.


"In 2010, there were 1.1 fatal crashes per 100 million truck miles" https://www.truckdrivingjobs.com/faq/truck-driving-accidents... Truckers make ~27c/mile.

So, at a similar rate to human drivers, that's ~$24 million* per fatal crash in saved wages. Plus presumably higher truck utilization, lower management costs, etc. Don't forget the driver is often not at fault, and automated trucks should have plenty of footage of accidents to demonstrate who was at fault. So it's a fraction of both fatal and non-fatal accidents, with non-fatal accidents generally having lower associated costs. Also, most importantly, in many accidents associated with bad weather only the truck is involved, and more generally the driver's medical bills etc. do not need to be covered if there is no driver.

Further, the technology is unlikely to degrade over time, so they just need a profitable starting point. Also, if the self-drive button cost ~25c/mile there are plenty of times I would hit it.

*Granted, there will be labor costs linked with automated trucks, but a 95% savings seems likely.
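Rough arithmetic behind the ~$24 million figure (my own sketch, using only the numbers cited above):

  # Driver wages saved per fatal crash, from the figures cited above.
  fatal_crashes_per_100m_miles = 1.1
  wage_per_mile = 0.27                                  # dollars

  miles_per_fatal_crash = 100_000_000 / fatal_crashes_per_100m_miles   # ~91M miles
  wages_per_fatal_crash = miles_per_fatal_crash * wage_per_mile        # ~$24.5M

  # Assuming only ~95% of driver labor cost is actually saved (see footnote):
  print(wages_per_fatal_crash * 0.95)                   # ~$23 million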


> if the self-drive button cost ~25c/mile there are plenty of times I would hit it

25c/mile just for the insurance surcharge, you mean? Sounds like we're in agreement that mere parity with human error would be expensive from a liability perspective.


I don't think the average fatal accident is going to cost even 5 million; I am just saying that even if it cost 20 million, that's still not a deal killer. Put in other terms, valet parking is $5+, but 'go park' is ~5 cents. At ~15 mph in city / stop-and-go traffic, that's $3.75/hour to not pay attention.

Hell, going to sleep and waking up in another city 500-1000 miles away without going to the airport or needing a rental is a major benefit even if you're not saving money.
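Back-of-the-envelope for those figures (assuming the hypothetical ~25c/mile surcharge from upthread):

  # What a hypothetical 25 cents/mile "self-drive" surcharge buys.
  cost_per_mile = 0.25

  city_speed_mph = 15                           # rough stop-and-go average speed
  print(cost_per_mile * city_speed_mph)         # ~$3.75/hour of not paying attention

  overnight_trip_miles = 750                    # somewhere in the 500-1000 mile range
  print(cost_per_mile * overnight_trip_miles)   # ~$188, vs. a flight plus rental car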


Based on their own reporting, hasn't Waymo been safer than the average driver for a few years now?


The real question is, what routes are being followed and in what conditions? Presumably Waymo et al stick to exquisitely-mapped routes with excellent road markings between well-known autonomous-friendly endpoints during good daytime weather for the vast majority of their testing. Performance in less-than-ideal conditions is a more important question. I'm curious if there's any publicly available data on the routes followed, times, conditions, etc.


Anecdotally, I see them driving like snails on Valencia Street and elsewhere in the Mission, day and night. It's very well mapped, but one of the more challenging areas to navigate in America.

I've never seen them pulled over in the bike lane on Valencia like every other Uber/Lyft, which is a good thing, but picking up and dropping off is far from autonomous-friendly in that neighborhood.

Of course, sometimes it's obvious a person is in control, and presumably they're generating training data.


I don't know why you're getting downvotes.

Does anyone know off-hand how much testing autonomous vehicles have had at night (as in, full dark / needing headlights)?


Cruise has been testing cars 24 hours a day, 7 days a week for quite some time. Here's a few hours of fully autonomous night driving: https://www.youtube.com/watch?v=6tA_VvHP0-s


There was actually a dispute about a potential Cruise cofounder getting compensation for equity at the acquisition. They recently settled the lawsuit out of court [1]. I agree they have good leadership, but they're not without any lawsuits.

[1] http://www.businessinsider.com/car-startup-cruise-settles-le...


>> Cruise does distinguish itself by driving mostly on busy downtown SF streets, a challenging environment, and they've released impressive videos showing off their capabilities.

I don't doubt that it's a challenging environment, but there are many, many different kinds of challenges that are not even present in that city.


True. But city driving is all they need to offer service in the most potentially profitable, geo-fenced areas.


Driving in SF is actually quite challenging. It is quite congested with cars as well as pedestrians and cyclists. Due to lax enforcement, many traffic laws are routinely ignored, particularly by cyclists, pedestrians, and transit vehicles. On top of this there is lots of construction, poorly maintained roads, and frequent traffic flow changes.

Not sure what you are imagining, but SF is free of winter driving conditions. Other than that, it is as challenging as any city in the US, Europe, or Japan.


Is disengagement per X miles really a good metric? It doesn't reflect the difference between driving in different environments, e.g. dense urban areas, suburban streets, and open highways.


Yes, it's a crude metric. The other thing is that disengagements are self-reported, and an unplanned disengagement doesn't necessarily mean a crash was narrowly avoided; it may be an overly cautious test driver, or the result of some non-safety-critical intervention.

After the core software problems have been solved, the key challenge shifts to discovering edge cases, which become progressively more marginal the more advanced the software becomes, and the only way to find these edge cases is through thousands of miles of driving, waiting for unexpected situations to show up.


I'm not sure what those "few occasions" were, but I think there's another category of problem these could fall into other than would-be accidents: if the car over-cautiously decided it didn't know what to do, came to a safe stop, and threw on the 4-ways while it waited for human input, that's not as terrible.


I was driving home a few months ago in the rain on Division in SF. Cars were stopped, and I couldn't immediately figure out why. When I passed the stopped vehicles I got my answer: there was seemingly a wall of water coming down from the freeway overhead, and an autonomous car was stopped as it apparently didn't know how to handle the situation.


Couldn't they have something like a real human remote driver that can intervene, but can also oversee multiple cars?


Would need incredibly low latency for that to be safe, I think. FPS games use a pile of tricks to send minimal data over the network that wouldn't be available to an autonomous car: we don't have access to the game engine, after all...
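To put rough numbers on why latency matters at speed (my own sketch; the latency and reaction-time values are assumptions, not measurements):

  # Distance a car covers while waiting on a remote operator to react.
  def blind_distance_m(speed_mph, delay_s):
      meters_per_second = speed_mph * 0.447    # mph to m/s
      return meters_per_second * delay_s

  # Assume ~0.3 s network round trip plus ~1 s human reaction time:
  print(blind_distance_m(25, 1.3))   # city speed: ~15 m traveled "blind"
  print(blind_distance_m(65, 1.3))   # highway speed: ~38 m traveled "blind"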


You wouldn't automatically need low latency. In a situation like the one above, the car could pull over and wait for the control center to tell it whether to drive on through the water or not.

Not quite the same problem, but the Starship delivery vehicles work a bit like that - remote link to a control center. http://uk.businessinsider.com/doordash-delivery-robots-stars...


That would require the car to have the ability to override its tendency not to drive through physical obstacles. Remote control to say "go into that lane" or "turn left" is one thing, but remote control to drive through what it perceives as a physical object is different.


It would need to be very low latency only if the cars cannot take sufficient action fast enough on their own to be safe. But if the cars get to a point where they are safe enough, yet that safety comes at the expense of being overly cautious in some instances (refusing to drive through a sheet of water and deciding to stop), that may be acceptable. E.g. in the given example, the car only needs input saying, pretty much, "safe to proceed at low speed until past the obstacle, then reassess".


OnLive allowed gaming over a video stream with minimal delay. ffmpeg[1] works too.

[1] http://fomori.org/blog/?p=1213


At least one company is doing exactly that with trucks, where the last mile is remotely controlled. This was on HN a number of months ago:

http://fortune.com/2017/02/28/starsky-self-driving-truck-sta...


Does anyone here know if California driving law allows for remotely-operated vehicles like that?


Hopefully not, at least not yet. That doesn't sound nearly as safe, or as accountable: you just wouldn't have the same situational awareness looking at sensor feeds vs. looking out the actual windows.


Except when that isn't a safe thing to do.


That's a fair point, although those would still constitute breakdowns for a level 4 system. Further reason for the reporting to focus on these incidents.


I've seen ~10 Cruise cars around Mission Dolores and SoMa in SF. 100% of the time the driver has his hands on the wheel and appears to be making the turns himself. I'm skeptical of how automated this service actually is.

Maybe those are training runs or the drivers are letting the wheel guide them.


Our drivers always keep their hands on the steering wheel, even while in autonomous mode. There isn't really any way you could tell externally whether the car is engaged or not, other than some of the nuances of our driving behavior.

Source: I work at Cruise


I believe current regulations require the test driver to have her or his hands on the wheel at all times, so that's not a good indication of whether or not the car is driving autonomously.


I have made the same observation in the SoMa area. Not even once have I seen the car being driven autonomously.


[Deleted]


> Those "few occasions" would presumably be accidents without the intervention of the safety driver.

You don't know that. Doesn't the car simply slowly stop if it doesn't know what to do?

> A small corporate shuttle service that "only had a few accidents" would not be considered a very safe service.

As soon as it's safer than a human-operated one, it's already good enough - there's literally no safer option.


It's not about not knowing what to do; it's about not recognizing things that threaten safe travel.


> Those "few occasions" would presumably be accidents without the intervention of the safety driver.

I'm not sure that's a safe presumption. The car could have done the right thing but not been confident enough in the planned course of action. Still not great, but not a presumed accident either.


San Francisco would be one of the harder cities to make self-driving work: bad light, heavy traffic, bicycles, poor lane markings, spotty GPS downtown, etc.

I haven't been to Phoenix, so I can't comment on the difficulty of running a self-driving car there, but I would guess it's easier than SF.


All the problems you listed are going to be issues in most major cities. I would hope that self-driving companies are testing their tech outside the SV bubble, because the rest of us have to deal with rain, snow, sleet, high winds, and other weather that causes low visibility and low traction.


I'm not involved in autonomous vehicles at all, but as an engineer, I'm pretty confident we are a long way from level 4/5.

Without significantly reworking current infrastructure the challenges presented by city driving are too fuzzy to guarantee safety. For example, traffic lights can be obscured, out of order, or spoofed; unless they are modified to support AVs, I don't think this is solvable.

Although autonomous vehicles will require a safety driver for a while, interim benefits still exist, and services like this will potentially provide the momentum needed to make those infrastructure changes.


> Without significantly reworking current infrastructure the challenges presented by city driving are too fuzzy to guarantee safety.

But there's no guarantee of safety now. Human drivers are fallible and fail all the time.

A guarantee of safety is a crazy metric to require before we can allow autonomous vehicles. A better metric is "the vehicle is safer than the median human driver". Once it hits that level of safety, it should be on the streets.


> For example, traffic lights can be obscured, out of order, or spoofed

Wouldn't that also potentially impact human drivers?



