kevin

joined 1 year ago
[–] [email protected] 2 points 1 year ago (1 children)

I 100% agree with you, we've implemented this as an option and it'll be in the next release.

[–] [email protected] 14 points 1 year ago (1 children)

Can you make an issue on GitHub? This is something we should definitely implement.

[–] [email protected] 5 points 1 year ago

So far it's been good! Lemmy has made me hopeful for better social media. I'm not hugely into Twitter-style social media, so I was never really able to appreciate Mastodon.

I'm actually quite surprised by how much content is here already. There are regular posts and conversations, and a good mix of content. It's not at Reddit's level in terms of volume, but I don't feel starved or anything. I look forward to the future here!

[–] [email protected] 5 points 1 year ago (1 children)

Infinite scrolling is implemented in Jerboa; it could definitely be brought to the web client.

[–] [email protected] 12 points 1 year ago* (last edited 1 year ago) (2 children)

There is a fix available actually! I'm using it right now and it's working great, I'm hoping it can be merged ASAP. https://github.com/dessalines/jerboa/pull/432

[–] [email protected] 5 points 1 year ago (1 children)

There's an open PR that'll fix the font size issue. I'm using it now and it's great. I'm also working on adding my personal must-have UI options from Boost.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (3 children)

I imagine it'll soon be possible to improve the accuracy of technical AI content fairly easily. It'd go something like this: have an LLM generate a candidate response, then have a second LLM validate that response. The validator would have access to real references it can use to check correctness; e.g., a Python response could be run through a Python interpreter to make sure it does, to some extent, what it purports to do. The validator then either decides the output is most likely correct or generates feedback asking the first LLM to revise, repeating until the response passes validation. This wouldn't catch 100% of errors, but a process like this could significantly reduce the frequency of hallucinations, for example.
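
To make that concrete, here's a minimal sketch of the loop I have in mind, assuming the answer being checked is a runnable Python snippet. `generate_response` and `generate_revision` are hypothetical placeholders for the two LLM calls (they're not any real API); only the control flow and the interpreter-based check are the point.

```python
# Rough sketch of the generate -> validate -> revise loop described above.
# generate_response() and generate_revision() are hypothetical stand-ins for
# whatever LLM API you'd actually call.

import subprocess
import sys
import tempfile


def run_python(code: str) -> tuple[bool, str]:
    """Validation reference for Python answers: actually execute the code
    and report whether it ran cleanly, plus any error output."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        [sys.executable, path], capture_output=True, text=True, timeout=30
    )
    return result.returncode == 0, result.stderr


def generate_response(prompt: str) -> str:
    raise NotImplementedError("call your LLM of choice here")


def generate_revision(prompt: str, previous: str, feedback: str) -> str:
    raise NotImplementedError("ask the LLM to fix its previous answer here")


def answer_with_validation(prompt: str, max_rounds: int = 3) -> str:
    candidate = generate_response(prompt)
    for _ in range(max_rounds):
        ok, feedback = run_python(candidate)
        if ok:
            return candidate  # validator accepts the candidate
        # Otherwise feed the interpreter's error output back for a revision.
        candidate = generate_revision(prompt, candidate, feedback)
    return candidate  # best effort after max_rounds attempts
```

For non-code answers you'd swap `run_python` for whatever reference the validator can consult; the revise-until-it-passes loop stays the same.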

[–] [email protected] 1 points 1 year ago

Consider charging at home, if you can. If your typical driving keeps you within about 100 miles of home and it's possible to plug in at home (a standard 120 V outlet is typically sufficient), then you don't need public charging stations. Just plug your car in at night and it'll be full every morning.
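
As a rough back-of-envelope check (all figures below are assumed typical values, not anything measured): a standard 120 V outlet drawing around 12 A delivers roughly 1.4 kW, so an overnight charge restores on the order of 60 miles of range.

```python
# Back-of-envelope Level 1 (120 V) charging estimate.
# All numbers are assumed typical values.

volts = 120                   # standard household outlet
amps = 12                     # typical continuous draw on a 15 A circuit
hours_plugged_in = 12         # e.g. 7 pm to 7 am
efficiency_mi_per_kwh = 3.5   # rough average EV efficiency

energy_kwh = volts * amps / 1000 * hours_plugged_in   # ~17 kWh
range_recovered = energy_kwh * efficiency_mi_per_kwh  # ~60 miles

print(f"Range recovered overnight: about {range_recovered:.0f} miles")
```

That's comfortably more than the roughly 30-40 miles most drivers cover in an average day, which is why a regular outlet tends to be enough for daily use.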
