I'm getting tired of us deploying technology before it's finished and the bugs are worked out. Driverless cars are still not ready for prime time. The same thing is happening right now with AI: companies are using it without having any idea what it can do.
That's every single program you've ever used.
Software will be built, sold, used, maintained, and finally obsoleted, and it will still not be 'complete'. It will have bugs, sometimes lots, sometimes huge, and those will not be fixed. Our biggest accomplishment as a society may be the cases where we patched software on Mars or on the Voyager probe still speeding away from Earth.
Self-driving cars, though, don't need perfectly 'complete' software; they just need to work better than humans do. That's already been accomplished, long ago.
And with each fix applied to every one of them, that's a mistake none of them should ever repeat. Can we say the same about humans? I can't even get my beautiful, stubborn wife to slow down, leave more space, and quit turning the steering wheel in that hand-over-hand, rope-climbing way farmers use on tractors (because the airbag will take her hand off).
No software is perfect, but anybody who uses a computer knows that some software is much less complete than the rest. That currently seems to be the case when it comes to autonomous driving tech.
First, there are many companies developing autonomous driving tech, and if there's one thing tech companies like to do, it's re-invent the wheel (ffs, Tesla did this literally). Second, have you ever used modern software? A bug fix guarantees nothing. Third, you completely ignore the opposite possibility: what if they push a serious bug in an update, one that drives you off a cliff and kills you? It doesn't matter if they push a fix 2 hours later (and let's be honest, many of these cars will likely stop getting updates pretty fast anyway once this tech gets really popular; just look at the state of software updates in other industries).
I understand your issue with these cars: they're dangerous and could kill people with incomplete or buggy software. But I believe the person you're responding to was pointing out that, even with the bugs, they're already safer than human drivers. That's the kind of claim best judged by looking at data rather than headlines and going off of how things seem.
Personally, I would prefer to be in control of the vehicle at all times. I don't like the idea of driverless tech either.
Well, has anyone done good statistics showing that self-driving cars are more dangerous than ordinary, distracted humans as a whole?
We can always point to numerous self-driving car errors and accidents, but I'm under the impression that, compared to the number of accidents involving people on a daily basis, self-driving cars might be safer even now?
I'm thinking of how many crashes took place in the time it took me to type this out. I'm also curious how the fatality rate compares between self- or assisted driving and fully manual driving.
I think we tend to be super critical of new things, especially tech things, which is understandable and appropriate, but it would be nice to see some holistic context. I wish government regulators would publish that data for us, to help us form informed opinions instead of having to rely on manufacturers (conflict of interest) or journalists who need a good story to tell, and some clicks.
Currently there are many edge cases which haven't even been considered yet, so maybe it is statistically safer, but that doesn't change anything if your car makes a dumb mistake you wouldn't have made and gets you into an accident (or someone else's car does, and they don't stop it because they weren't watching the road).
I'm against driverless cars, but I don't think this type of error can be detected in a lab environment. It's just impossible to test against every single car model or every real-world situation the software will encounter in actual usage.
An optimal solution would be to have a backup driver in every car who keeps an eye on the road in case of software failure. But, of course, that isn't profitable, so they'd rather put lives at risk.
How will they encounter these edge cases without real world testing?
Fair point
I agree, but testing with a supervisory driver should be required to handle emergency situations. It's both safer and creates job opportunities.
You're right that there should be a minimum safety threshold before tech is deployed. Waymo has had pretty extensive testing (unlike, say, Tesla). As I understand it, their safety record is pretty good.
How many accidents have you had in your life? I've been responsible for a couple of rear-endings, and I once collided with a guard rail (no one was ever injured). Ideally, we want incidents per mile driven to be lower for these driverless cars than for human drivers. Waymos have driven a lot of real miles (and millions more in a virtual environment), and supposedly their number is better than human driving, but the question is whether they've driven enough miles, in enough varied situations, for that to really be an accurate stat.
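To make that per-mile comparison concrete, here's a minimal sketch in Python. Every number in it is a made-up placeholder, not real crash data; the point is only the normalization by miles driven, plus a standard statistical rule of thumb (the "rule of three") for why rare events like fatal crashes need enormous mileage before the stat means much:

```python
# Minimal sketch of comparing crash rates by exposure (miles driven).
# All figures below are hypothetical placeholders, NOT real statistics.

def incidents_per_million_miles(incidents: int, miles: float) -> float:
    """Normalize a raw incident count by exposure (miles driven)."""
    return incidents / (miles / 1_000_000)

def rule_of_three_upper_bound(miles: float) -> float:
    """If zero events of a rare type (e.g. fatal crashes) were observed
    over `miles`, the 'rule of three' gives an approximate 95% upper
    bound on the true rate: 3 / miles (scaled here to per million)."""
    return 3 / miles * 1_000_000

# Hypothetical placeholder figures:
human_rate = incidents_per_million_miles(incidents=5_000_000,
                                         miles=3_000_000_000_000)
robot_rate = incidents_per_million_miles(incidents=60,
                                         miles=50_000_000)

print(f"human-driven: {human_rate:.2f} incidents per million miles")
print(f"driverless:   {robot_rate:.2f} incidents per million miles")

# Sample-size caveat: 50M miles with zero fatal crashes still cannot
# rule out a fatal-crash rate of up to ~0.06 per million miles.
print(f"95% upper bound given zero observed events in 50M miles: "
      f"{rule_of_three_upper_bound(50_000_000):.3f} per million miles")
```

That rule of three is why "millions of miles without a fatality" says less than it sounds like: human drivers in the US have a fatal-crash rate on the order of one per hundred million miles, so a fleet needs mileage on roughly that scale before a zero becomes informative.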
I slightly tapped a car my first day driving, that's it. No damage. Not exactly a good question.
Look at how data is collected with self driving vehicles and tell me it’s truly safer.
My point in asking about personal car incidents is that each of those, like your car tap, shows we can make mistakes, and they didn't merit a news story. There is a level of error we accept right now, and it comes from humans instead of computers.
It's appropriate that there are stories about Waymo, because it's new and needs to be scrutinized and proven. Still, it would benefit us to read these stories with a critical mind, not reflexively think "one accident, that means they're totally unsafe!", while at the same time not accepting at face value information from companies who have a vested interest in portraying the technology as safe.
I obviously do, since I said to look at how the data is collected, what is counted and what is not. Take your own advice and look into that. It's not this one accident that makes me think it's unsafe, and certainly not ready to be out there driving.
Here's an article saying that, based on the data so far, Waymo is safer than human drivers. If you have other information on the subject, I'd be interested to read it.
https://www.youtube.com/watch?v=pmGOjHi-7MM&t=129s This is a good and entertaining video on it, but if you prefer to read, here are the sources: https://docs.google.com/document/d/1dWvHJLjikgWikFBf4wllk8etc-SIdC8maB0-7eZA7LM/edit
Also, from your own article: "But it’s going to be another couple of years—if not longer—before we can be confident about whether Waymo vehicles are helping to reduce the risk of fatal crashes."
That's how you get technological advancement.
Bureaucracy just leads to monopolies and little to no progress.