this post was submitted on 24 Feb 2024
237 points (91.6% liked)
Technology
Yeah, try talking to ChatGPT about things you really know about in detail. It will fail to show you the hidden, niche things (unless you mention them yourself), it will make up lots of stuff that you would not otherwise pick up on (and once you point it out, the bloody thing will "I knew that" you, sometimes even when you are wrong), and it is very shallow in its details. Sometimes it just repeats your question back to you as a well-written essay. And that's fine... it is still a miracle that it manages to be as reliable and entertaining as some random bullshitter you talk to in a bar, and it's good for brainstorming too.
It's like watching mainstream media news talk about something you know about.
Oh good comparison
Haha, definitely, it's infuriating and scary. But it also depends on why you're watching. If you're watching TV, you do it for convenience or entertainment. LLMs have the potential to be much more than that, but unless a very open and accessible ecosystem is created for them, they are going to be whatever our tech overlords decide in their boardrooms they should be to milk us.
Well, if you read the article, you’ll see that’s exactly what is happening. Every company you can imagine is investing the GDP of smaller nations into AI. Google, Facebook, Microsoft. AI isn’t the future of humanity. It’s the future of capitalist interests. It’s the future of profit chasing. It’s the future of human misery. Tech companies have trampled all over human happiness and sanity to make a buck. And with the way surveillance capitalism is moving—facial recognition being integrated into insane places, like the M&M vending machine, the huge market for our most personal, revealing data—these could literally be two horsemen of the apocalypse.
Advancements in tech haven't helped us as humans in a while. But they sure did streamline profit centers. We have to wrest control of our future back from corporate America, because the plutocracy driven by these people is very, very fucking dangerous.
AI is not the future for us. It’s the future for them. Our jobs getting “streamlined” will not mean the end of work and the rise of UBI. It will mean stronger, more invasive corporations wielding more power than ever while more and more people suffer, are cast out and told they’re just not working hard enough.
Sony wants photographs of my ears for "360 reality audio". No. Just no.
Dude! I bought some Bose headphones that were amazing. But I read over the privacy policy and they wanted to “map my head movements” and they wanted permission to passively listen to audio sent through the speakers and any audio around the microphone.
I ran those fuckers back to the store as quickly as possible.
But not before having to duck and dodge agreeing to the privacy policy in their app, so I quickly deleted it. And when I started interacting with their customer service, they tried to get me to sign a different privacy policy that seemed formulated just for the information shared in the chat, but buried in two separate addenda I had to dig through, I saw they were trying to get me to sign the original, super-invasive privacy policy.
Fuck Bose. Fuck all these fake fronts for surveillance capitalism. Fuck capitalism.
Wrong chat dude. What does that have to do with AI anyways?
it's bizarre without context, but i recognise what they mean - Sony's headphones app suggests you send them photos of your ears so they can analyse the shape to improve the noise cancellation.
Which I don't think has anything to do with GenAI. Though, I admit I'm not well educated in ear scanning and 3D audio reconstruction, so good sources are appreciated.
I think the claim is that they can use AI to improve the sound of their headphones if you supply them with images of your ears. I just don't like them having a database of personally identifying information like that.
How personally identifiable is your ear, though? It's not connected to your thoughts, and you can't use it to determine your age, height, or weight, so which ad company would need that data? IMO, it's no different from sending a mold of your ear canal to a CIEM company to get custom-molded earphones.
Ears turn out to be a good way to recognize individuals. Ear biometrics is an evolving area.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7594944/
I see. I still don't think it's cause for concern yet, but good to know. Thanks!
Northrop Grumman probably already has a prototype to identify you by your ears from a mile up and kill you with a single bullet the moment you step outside.
What worries me is how much of the AI criticism on Lemmy wants to make everything worse; not share the gains more equally. If that's what passes for left today, well...
I don't have a problem with machine learning. I have a problem with one company getting x trillion dollars investment. Who pays when the investors want their returns? Eventually it's going to be all of us.
I don't think they have that much potential. They are just uncontrollable, it's a neat trick but totally unreliable if there isn't a human in the loop. This approach is missing all the control systems we have in our brains.
I really only use it for the "oh damn, I know there's a great one-liner to do that in Python" sort of thing. It's usually right, and if it isn't, it'll be immediately obvious and you can move on with your day. For anything more complex, the gaslighting and subtle errors make it unusable.
Oh yes, it's great for that. My google-fu was never good enough to "find the name of this thing that does this, but only when in this circumstance"
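For anyone wondering what kind of one-liner this comes up for, here are a couple of illustrative examples (my own, not from the thread) of the "I know Python has a clean way to do this" pattern, where the answer is easy to verify at a glance once you see it:

```python
# Flatten one level of a nested list with a list comprehension.
nested = [[1, 2], [3], [4, 5]]
flat = [x for sub in nested for x in sub]
print(flat)  # [1, 2, 3, 4, 5]

# Invert a dict (swap keys and values) with a dict comprehension.
prices = {"apple": 3, "pear": 5}
by_price = {v: k for k, v in prices.items()}
print(by_price)  # {3: 'apple', 5: 'pear'}
```

These are the kinds of answers where a wrong response from an LLM is immediately obvious when you run it, which is why the lookup use case works.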
ChatGPT is great for helping with specific problems. Google search for example gives fairly general answers, or may have information that doesn't apply to your specific situation. But if you give ChatGPT a very specific description of the issue you're running into it will generally give some very useful recommendations. And it's an iterative process, you just need to treat it like a conversation.
It's also a decent writer's room brainstorm kind of tool, although it can't really get beyond the initial pitch as it's pretty terrible at staying consistent when trying to clean up ideas.
I find it incredibly helpful for breaking into new things.
Say I want to learn Terraform today: no guide, video, or docs site can do it as well as having a teacher available at any time for Q&A.
Aside from that, it's pretty good for general Q&A on documented topics, and great when provided context (i.e. a full 200 MB export of documentation from a tool or system).
But the moment I try to dig deeper into something I'm an expert in, it just breaks down.
That's why I've found it somewhat dangerous to use for jumping into new things. It doesn't care about best practices and will just help you enough to let you shoot yourself in the foot.
Just wait for MeanGirlsGPT