It's a mistake that healthy humans make all the time, which is why those crash barriers exist in the first place, and why that particular crash barrier was damaged by a previous crash.
You’re missing the point. The crucial fact according to OP is that the car did something that a fully aware driver would not ever do. It’s at least worth acknowledging that.
It sounds like peering into the black box of autopilot and anthropomorphising some parts of it but not others. Why not say "autopilot sometimes drives into barriers and also humans sometimes drive into barriers"? In both cases the driver/autopilot failed for whatever reason.
It's only anthropomorphizing in the sense that we're discussing two systems capable of propelling automobiles around. Presumably, they're both in the business of avoiding concrete barriers.
If one system is steering the vehicle into things that the other system would, in most conditions, reliably avoid, it bears some discussion.
A swerve is not the only possibility, as the area between lanes looks like a lane. My father has done the same thing while driving at night. He insisted he was in a lane while I yelled repeatedly that he wasn't. Luckily he moved into a real lane just before we would have crashed.
Yes, bad visibility could explain it, but Tesla themselves say it was 9:30 am and the autopilot had "five seconds and 150 meters of unobstructed view of the concrete divider".
Nobody is trying to absolve Tesla. They're just refuting a dumb and easily rebutted claim that no human driver would make the same mistake. There is evidence that crashes happened there before, and it would be easy to imagine someone crashing there because they were focused on trying to find a spot to merge while driving in what appeared to be a standard lane.
Again, not absolving Tesla, just being realistic about the capabilities of human drivers.
"Fully aware" drivers (true Scotsmen) as proposed here statistically do not exist, or at least do not make up the vast majority of drivers. So it is kind of a meaningless comparison to draw.
It's not a no-true-Scotsman in this context. The point is that humans tend not to crash into things they're fully aware of, barring deliberate self-harm.
Either:
A) The autopilot could not see a concrete barrier in its path.
B) More likely, and as the story reports, it WAS aware of the danger but didn't do anything.
In which bucket would you put the possibility that the computer observed evidence of the barrier, but the evidence that there was no barrier -- the prior that barriers don't exist in lanes of traffic -- was too strong?
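To make that third bucket concrete, here is a minimal Bayes-update sketch. All the numbers are made up for illustration; the point is only that if the assumed base rate of an in-lane barrier is tiny and the sensor has a nontrivial false-positive rate, the posterior can still favor "road is clear" even after a genuine detection.

```python
# Hypothetical, purely illustrative numbers: a strong prior that barriers
# don't appear in traffic lanes can swamp noisy sensor evidence.
prior_barrier = 0.0001          # assumed base rate of a barrier in-lane
p_detect_given_barrier = 0.9    # assumed hit rate when a barrier is present
p_detect_given_clear = 0.01     # assumed false-positive rate on clear road

# Bayes' rule: P(barrier | detection)
evidence = (p_detect_given_barrier * prior_barrier
            + p_detect_given_clear * (1 - prior_barrier))
posterior = p_detect_given_barrier * prior_barrier / evidence
print(round(posterior, 3))  # under these assumptions, still well below 1%
```

Under these (invented) rates, a positive detection only moves the system from a 0.01% belief in a barrier to roughly 0.9%, which a planner tuned to avoid phantom braking might well ignore.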
"Fully aware drivers" are all the many drivers who manage to navigate that area without driving into the barrier, minus the ones who didn't mean to get off there or meant to get off and didn't. The latter groups are aware enough to drive safely through the area but still spacing out a little.
I've been a fully aware driver unsure of what to do when seeing a surprise in my lane. I only hit the brakes because a passenger started screaming. I was so confused to see a pedestrian on the freeway.