Looking at the network activity of a Pixel device vs an iPhone at rest broke my soul.
Sometimes mercy is the heaviest of burdens. It's the right thing to do, but fuck if it ain't hard. My condolences for your little bud. May they live on in your memories.
This is why we say fuck you to norms and demand our rights chiseled in stone. Your whole world can do a 180 on you in the blink of an eye over something harmless that you've always been.
Also, I'd forgotten to mention: what you see in the on-screen representation is entirely divorced from the actual stack doing your driving. They're basically running a small video game, using the virtual world map the car builds and rendering in assets from there. It's meant to give you a reasonable look into what the car sees and might do, but they've confirmed that it is in no way tied to the underlying neural decision network.
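To make that separation concrete, here's a minimal sketch of the architecture as I understand it; the names and queue mechanics are my own illustration, not anything Tesla has published:

```python
import queue
import threading
import time
from dataclasses import dataclass

# Made-up illustration of the decoupling described above -- NOT Tesla's
# code. The control stack acts on its own internal state and merely
# publishes read-only snapshots; the renderer draws those snapshots and
# never feeds anything back into the driving loop.

@dataclass(frozen=True)
class WorldSnapshot:
    timestamp: float
    objects: tuple  # e.g. (("car", 12.0, -1.5),)

snapshots: "queue.Queue[WorldSnapshot]" = queue.Queue()

def control_stack():
    """Plans and actuates from internal state; publishing is fire-and-forget."""
    for step in range(3):
        internal_state = (("car", 12.0 - step, -1.5),)
        # ... planning/actuation happens here, on internal_state ...
        snapshots.put(WorldSnapshot(time.time(), internal_state))
        time.sleep(0.05)

def renderer():
    """The 'small video game': draws whatever snapshot arrives, nothing more."""
    for _ in range(3):
        snap = snapshots.get()
        print(f"render @ {snap.timestamp:.2f}: {snap.objects}")

t = threading.Thread(target=control_stack)
t.start()
renderer()
t.join()
```

The point is the direction of the arrow: the renderer consumes snapshots, and nothing it does can reach back into the decision network.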
The blocker for Tesla is that it's processing 2D input in order to navigate 3D space. They use some AI trickery to work around this, building virtual anchor points from image stills across points in time to get back to a 3D space. But the auto industry at large (not just me) has collectively agreed this cannot overcome numerous serious challenges in realistic applications. The one people may be most familiar with is Mark Rober's test, where the Tesla drives right into a wall painted to look like the road, Wile E. Coyote style, but it has real-world analogs such as complex weather.

Lidar and ultrasonics integrated into the chain of trust can mitigate a significant portion of the risk this issue causes, and already do for most ADAS systems. Volvo has shown that even low-resolution "cheap" lidar sensors without 360-degree coverage can offer most of these benefits.

To be honest, I'm not certain the addition would fix everything; perhaps the engineering obstacles really were insurmountable. But from what I hear from the industry at large, from my friends in the space, and from my own common sense, I don't see how a wholly 2D implementation relying on camera input alone can be anything but an insurmountable engineering challenge on the way to a minimum viable product. From my understanding, it'd be like being told you have to use water, and only water, as your hydraulic fluid, or that you can only use a heat lamp to cook for your restaurant. It's legitimately unsuitable for the purpose despite giving off the guise of doing the same work.
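To make the "virtual anchor points" idea concrete: here's a toy two-view triangulation, where one camera sees the same point from two positions over time and recovers depth from parallax. The pinhole model, numbers, and function names are my own illustrative assumptions, not Tesla's pipeline:

```python
# Toy two-view triangulation: a single forward-facing camera observes the
# same world point from two positions as the car moves. The disparity
# between the two image observations recovers depth, which is the basic
# idea behind building 3D anchor points from image stills over time.

def triangulate_depth(u1: float, u2: float, focal_px: float, baseline_m: float) -> float:
    """Depth Z from two pixel observations of one point (pinhole model).

    u1, u2     : horizontal pixel coordinates in frame 1 and frame 2
    focal_px   : focal length in pixels
    baseline_m : lateral distance the camera moved between frames (meters)
    """
    disparity = u1 - u2
    if abs(disparity) < 1e-6:
        raise ValueError("No parallax: depth is unobservable from these views")
    return focal_px * baseline_m / disparity

# Ground truth: a point 20 m ahead and 1 m to the side; camera slides 0.5 m.
f, b, X, Z = 1000.0, 0.5, 1.0, 20.0
u1 = f * X / Z            # projection from the first position
u2 = f * (X - b) / Z      # projection from the second position
print(triangulate_depth(u1, u2, f, b))  # -> 20.0

# The failure mode: a painted wall puts "road texture" at the wall's depth,
# and textureless or repeating surfaces give disparities that are tiny,
# ambiguous, or flat-out wrong -- exactly what lidar would measure directly.
```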
I think of it as a lab because it's my sandbox for doing crazy server stuff at home that I'd never do on my production network at work. I think that's why the name stuck: back when systems were expensive as heck, it was pretty much just us sysadmin guys hauling home old gear to mess with.
The even crazier part is that with all of their advancements in software, they'd probably legitimately have FSD launched and running well by now, given how much driving data they can ingest at will, if they'd just included a few hundred dollars' worth of low-resolution lidar and ultrasonics. IIRC they stated they were having issues with chain of trust among the sensors, but I'm not sure I believe that.
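For what "chain of trust among the sensors" could mean in practice, here's a hedged sketch of the concept; the disagreement threshold, confidence blending, and every name in it are my own assumptions, not anything from a shipping ADAS stack:

```python
# Sketch of a sensor "chain of trust": if camera-inferred depth and a direct
# range measurement (lidar/ultrasonic) disagree, defer to the physical
# measurement and degrade gracefully.

from dataclasses import dataclass

@dataclass
class RangeEstimate:
    source: str        # "camera", "lidar", "ultrasonic"
    distance_m: float
    confidence: float  # 0..1, self-reported

def fuse_forward_range(camera: RangeEstimate, lidar: RangeEstimate,
                       max_disagreement_m: float = 2.0) -> RangeEstimate:
    """Prefer agreement; on conflict, trust the sensor that measures directly."""
    if abs(camera.distance_m - lidar.distance_m) <= max_disagreement_m:
        # Sensors agree: blend the estimates by confidence.
        w = camera.confidence + lidar.confidence
        d = (camera.distance_m * camera.confidence
             + lidar.distance_m * lidar.confidence) / w
        return RangeEstimate("fused", d, min(1.0, w / 2))
    # Sensors disagree: lidar measured the range, the camera only inferred it.
    return RangeEstimate("lidar (camera overruled)", lidar.distance_m,
                         lidar.confidence * 0.8)  # flag reduced overall trust

# The painted-wall case: camera "sees" open road, lidar sees a wall at 15 m.
cam = RangeEstimate("camera", 120.0, 0.9)
lid = RangeEstimate("lidar", 15.0, 0.95)
print(fuse_forward_range(cam, lid))  # -> lidar wins; the car should brake
```

The hard part they'd be alluding to is the disagreement case: deciding which sensor to believe, and when, is a genuine engineering problem, just not an unsolved one.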
I think that's a European/Australian consumer-protection thing; I can't think of a manufacturer that allows it in the US.
It's subtle, but it's absolutely designed to induce negative emotions to evoke a click. If you'd like to look into this field of study, search up "dark patterns" (sometimes "shadow patterns"), as that's the modern design language for working with data at mass scale in order to drive engagement. (To the downvoter: the fact that you can't see it is both sad and the point of the design. Unfortunate, because it's true. I've sat in these design meetings with software teams and marketing.)
I'm sad to see trade relations between our countries dissolve; I grew up with NAFTA being a shining and proud achievement of cooperation. I really hope that we in America codify and cement future protections for our trading partners. Y'all deserve better than working with our diaper Don.
You're missing the point, though, maybe? You can't take data, run it through what is essentially lossy compression, and then get the same data back out. The best you can do is a facsimile that suffers in some regard.
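A toy demonstration, using crude quantization as a stand-in for any lossy codec:

```python
# Quantization is the simplest lossy "compression". Once the fine detail
# is thrown away, no decoder can get it back -- reconstruction is a
# facsimile, and the error never goes to zero.

data = [0.12, 0.57, 0.33, 0.91, 0.48]

def compress(xs, levels=4):
    """Map each value to one of `levels` buckets (information is destroyed here)."""
    return [min(int(x * levels), levels - 1) for x in xs]

def decompress(codes, levels=4):
    """Best effort: reconstruct each value as its bucket's midpoint."""
    return [(c + 0.5) / levels for c in codes]

restored = decompress(compress(data))
print(restored)          # [0.125, 0.625, 0.375, 0.875, 0.375]
print(restored == data)  # False: distinct inputs collapsed together
# 0.33 and 0.48 both landed in the same bucket; no algorithm can tell them
# apart after compression, which is exactly why "the same data back out"
# is impossible once the transform is lossy.
```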