circle

joined 1 year ago
[–] [email protected] 5 points 10 months ago

I too used to believe there was good demand for it, but sadly it's a very small minority.

[–] [email protected] 11 points 10 months ago

Oh yes, and to top it off I have small hands - I can barely reach the opposite edge without using two hands. Sigh.

[–] [email protected] 1 point 1 year ago

Thanks, I'll check that out.

[–] [email protected] 1 point 1 year ago (3 children)

Agreed. YouTube ReVanced works well too. But are there alternatives for iOS?

[–] [email protected] 6 points 1 year ago

This is such a good idea!

[–] [email protected] 4 points 1 year ago (1 children)

Love the clock!

 

intuition: two texts are similar if concatenating one onto the other barely increases the gzip size

no training, no tuning, no params — this is the entire algorithm

https://aclanthology.org/2023.findings-acl.426/
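
Roughly, a minimal sketch of the idea: gzip-based normalized compression distance plus a k-nearest-neighbor vote. The helper names and the toy training set below are illustrative, not taken from the paper.

```python
import gzip

def clen(s: str) -> int:
    # length of the gzip-compressed UTF-8 bytes of s
    return len(gzip.compress(s.encode("utf-8")))

def ncd(a: str, b: str) -> float:
    # normalized compression distance: how much does concatenating b onto a
    # add beyond what the shorter of the two compresses to alone?
    ca, cb, cab = clen(a), clen(b), clen(a + " " + b)
    return (cab - min(ca, cb)) / max(ca, cb)

def classify(query: str, train: list[tuple[str, str]], k: int = 3) -> str:
    # majority vote among the k training texts closest in compression distance
    neighbors = sorted(train, key=lambda item: ncd(query, item[0]))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)

# toy data, purely illustrative
train = [
    ("the goalkeeper saved a penalty in the final minute", "sports"),
    ("the striker scored a hat trick against their rivals", "sports"),
    ("the central bank raised interest rates again", "finance"),
    ("stock markets fell after the earnings report", "finance"),
]
print(classify("the midfielder scored a late goal in the match", train, k=3))
```

No model, no embeddings: the compressor's dictionary reuse is doing all the similarity work.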

[–] [email protected] 2 points 1 year ago

sure, thank you!

[–] [email protected] 2 points 1 year ago (2 children)

Thanks. Does this also run compute benchmarks? It looks like it's more focused on model accuracy (if I'm not wrong).

 

As the title suggests, I have a few LLM models and wanted to see how they perform on different hardware (CPU-only instances, and GPUs: T4, V100, A100). Ideally it's to get an idea of the performance and the overall price (VM hourly rate / efficiency).

Currently I've written a script to calculate ms per token, RAM usage (via a memory profiler), and total time taken.

Wanted to check if there are better methods or tools. Thanks!
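
For reference, a minimal sketch of that kind of measurement harness (not the exact script described above; `generate_fn`, the psutil-based RSS reading, and the prompt list are assumptions):

```python
import time
import statistics

import psutil  # assumed available; used for process-wide RSS

def benchmark(generate_fn, prompts, runs_per_prompt=3):
    # generate_fn(prompt) is assumed to run one generation and return the
    # number of tokens produced; adapt to whatever your model wrapper returns
    proc = psutil.Process()
    ms_per_token = []
    peak_rss = 0
    t_start = time.perf_counter()

    for prompt in prompts:
        for _ in range(runs_per_prompt):
            t0 = time.perf_counter()
            n_tokens = generate_fn(prompt)
            elapsed_ms = (time.perf_counter() - t0) * 1000.0
            if n_tokens:
                ms_per_token.append(elapsed_ms / n_tokens)
            # RSS captures native allocations (e.g. model weights) that a
            # Python-only memory profiler can miss; GPU memory needs its own
            # query (nvidia-smi or the framework's API)
            peak_rss = max(peak_rss, proc.memory_info().rss)

    return {
        "ms_per_token_median": statistics.median(ms_per_token),
        "peak_rss_gib": peak_rss / 2**30,
        "total_time_s": time.perf_counter() - t_start,
    }
```

Cost per token then falls out of the VM rate: (hourly_rate / 3600) * (ms_per_token / 1000).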

[–] [email protected] 0 points 1 year ago (1 children)

Haha. That's true!

I've been having some random issues with the apps, so now it's mostly wefwef in the browser.

Can't wait to see Sync for Lemmy.

[–] [email protected] 4 points 1 year ago (3 children)

I already miss my muscle-memory operations from Sync :/
