image description (contains clarifications on background elements)
Lots of different seemingly random images in the background, including some fries, mr. krabs, a girl in overalls hugging a stuffed tiger, a mark zuckerberg "big brother is watching" poster, two images of fluttershy (a pony from my little pony), one of them reading "u only kno my swag, not my lore", a picture of parkzer from the streamer "dougdoug", and a slider gameplay element from the rhythm game "osu". The background is made light so that the text can be easily read. The text reads:
i wanna know if we are on the same page about ai.
if u disagree with any of this or want to add something,
please leave a comment!
smol info:
- LM = Language Model (ChatGPT, Llama, Gemini, Mistral, ...)
- VLM = Vision Language Model (Qwen VL, GPT4o mini, Claude 3.5, ...)
- larger model = more expensive to train and run
smol info end
- the training processes of current AI systems are often
clearly unethical and very bad for the environment :(
- companies are really bad at selling AI to us and
at giving it a good purpose for average-joe usage
- medical ai (e.g. protein folding) is almost only positive
- ai for disabled people is also almost only positive
- the idea of some AI machine taking our jobs is scary
- "AI agents" are scary. large companies are training
them specifically to replace human workers
- LMs > image generation and music generation
- using small LMs for repetitive, boring tasks like
classification feels okay
- using the largest, most environmentally taxing models
for everything is bad. Using a mixture of smaller models
can often be enough
- people with bad intentions using AI systems leads to
bad outcomes
- ai companies train their models however they see fit.
if an LM "disagrees" with you, that's the training's fault
- running LMs locally feels more okay, since they need
less energy and you can control their behaviour
I personally think more positively about LMs, but almost
only negatively about image and audio models.
Are we on the same page? Or am I an evil AI tech sis?
IMAGE DESCRIPTION END
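smol example of what the "running LMs locally" point can look like in practice. this is just a sketch with llama-cpp-python; the model file name is a placeholder for whatever small gguf u have downloaded:

```python
# rough sketch: a small LM running fully locally via llama-cpp-python
# (pip install llama-cpp-python; the .gguf file name is a placeholder)
from llama_cpp import Llama

llm = Llama(model_path="models/some-small-model-q4_k_m.gguf", n_ctx=2048)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You label support messages as 'bug', 'billing' or 'other'."},
        {"role": "user", "content": "my order never arrived and i was still charged"},
    ],
    max_tokens=16,
)
print(out["choices"][0]["message"]["content"])
```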
i hope this doesn't cause too much hate. i just wanna know what u people and creatures think <3
![](https://lemmy.blahaj.zone/pictrs/image/c6f46ea5-31b4-446d-94de-61ac0eb62aef.webp)
true, we kinda move the goalposts on what "AI" means all the time. back then, TTS and STT surprised everyone by how well they worked. now we don't even consider them AI, even tho STT is almost always driven by a neural network, and new models like OpenAI's Whisper are still being released.
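for example, local STT is basically a pip install away these days. a minimal sketch with the openai-whisper package (needs ffmpeg installed; the audio file name is a placeholder):

```python
# minimal sketch: local speech-to-text with an open Whisper model
# (pip install openai-whisper; also requires ffmpeg on the system)
import whisper

model = whisper.load_model("base")           # smallish model, runs on CPU if needed
result = model.transcribe("some_audio.mp3")  # placeholder file name
print(result["text"])
```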
there are also some VLMs which let you get pretty good descriptions of some images, in case none were provided by a human.
i have heard of some people actually benefiting from that.
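something along these lines, for example. a minimal sketch using a small image-captioning model via the transformers pipeline; the model and file name are just examples, and bigger VLMs like Qwen VL give much richer descriptions:

```python
# sketch: auto-generating an image description when no alt text was provided
# (pip install transformers pillow; model and file name are placeholders)
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
print(captioner("some_image.webp")[0]["generated_text"])
```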
Yeah, the way 'AI' companies have played with the term 'AI' is annoying as heck. The fact that AGI has been allowed to catch on at all is frankly a failure of the tech press. I remember reading a good article about how stuff stops being 'AI' once it gains real-world use, but I can't find it because Google sucks now.
I don't know enough about running AI locally to know if this applies, but I just can't stomach any of it, because I can't help but think of what those companies put people in places like Kenya through in order to get the training data that makes these models useful. It's probably unfair to taint the whole field like that; I'm sure there are some models that haven't been trained this way, but I just can't shake the association.