this post was submitted on 21 May 2025
11 points (100.0% liked)

LocalLLaMA

3198 readers
3 users here now

Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.

Rules:

Rule 1 - No harassment or personal character attacks on community members, i.e., no name-calling, no generalizing about entire groups of people that make up our community, and no baseless personal insults.

Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency, i.e., no comparing the usefulness of models to that of NFTs, no claiming that the resources required to train a model are anything close to those needed to maintain a blockchain or mine crypto, and no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.

Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms, i.e., statements such as "LLMs are basically just simple text predictors like your phone keyboard's autocorrect, and they're still using the same algorithms as <over 10 years ago>."

Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.

founded 2 years ago
MODERATORS
 

Gemma 3n includes the following key features:

Audio input: Process sound data for speech recognition, translation, and audio data analysis.

Visual and text input: Multimodal capabilities let you handle vision, sound, and text to help you understand and analyze the world around you.

PLE caching: Per-Layer Embedding (PLE) parameters contained in these models can be cached to fast local storage to reduce memory costs at run time.

MatFormer architecture: The Matryoshka Transformer architecture allows for selective activation of the model's parameters per request to reduce compute cost and response times.

Conditional parameter loading: Bypass loading of vision and audio parameters in the model to reduce the total number of loaded parameters and save memory resources.

Wide language support: Wide linguistic capabilities, trained in over 140 languages.

32K token context: Substantial input context for analyzing data and handling processing tasks.
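The conditional parameter loading idea above can be sketched in plain Python: if a request needs only text, the loader simply skips the modality-specific weight groups. This is a conceptual illustration only; the group names and parameter counts here are hypothetical, not Gemma 3n's actual layout.

```python
# Hypothetical parameter groups with sizes in millions of parameters.
# These names and numbers are illustrative, not Gemma 3n's real structure.
PARAM_GROUPS = {
    "text": 3000,    # always needed
    "vision": 400,   # only needed for image inputs
    "audio": 600,    # only needed for audio inputs
}

def load_parameters(modalities):
    """Load only the parameter groups the request actually uses.

    Text parameters are always loaded; vision and audio groups are
    bypassed unless requested, reducing total loaded parameters.
    Returns the loaded groups and their combined size.
    """
    needed = {"text"} | set(modalities)
    loaded = {name: size for name, size in PARAM_GROUPS.items() if name in needed}
    return loaded, sum(loaded.values())

# A text-only request skips the vision and audio groups entirely.
text_only, text_total = load_parameters([])
# A multimodal request loads everything.
full, full_total = load_parameters(["vision", "audio"])
```

Under this sketch, the text-only path loads 3000M parameters instead of the full 4000M, which is the memory saving the feature description is pointing at.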

top 2 comments
[–] [email protected] 1 points 3 weeks ago (last edited 3 weeks ago)

Hmm, what's the big news here? Isn't that just a new example app for the mediapipe framework? I believe that was already available a year ago in May 2024 and could do LLM inference and image tasks back then?!

And in addition to that, it seems to be quite slow. It doesn't use any acceleration on my Pixel phone, just uses the CPU to output like 6-8T/s for the 3b model...

[–] [email protected] 3 points 3 weeks ago

oops wrong link