tikitaki

joined 1 year ago
[–] [email protected] 14 points 1 year ago* (last edited 1 year ago) (1 children)

I've been driving for about a decade and a half now, including a few years here and there working jobs with a lot of wheel time. Either pizza delivery or cable technician or driving around a box truck.

I have never gotten as much as a speeding ticket. I typically don't go more than 5~10 mph over the limit, and if it's a 35 or 40 in a city area I'll stick to the speed limit. Sometimes I go a little ham on country roads in the middle of nowhere. I drove through central Florida once at like 4am and peaked at like 120 mph because I hadn't seen another car for at least an hour.

I think it probably depends on your jurisdiction, but nobody really respects the laws. On the interstate near my house, the speed limit is 65 but it might as well be 80. Cops will pass you and people will pass the cops and nobody cares.

I think the speeding laws are just to give the cops a reason to pull you over if they want you - OR a way to get people that are really being crazy. For example if you're going 110 in a 65 you deserve to get pulled over and given a ticket or worse, depending on context.

[–] [email protected] 1 points 1 year ago (1 children)

give an example please, because i don't see how in normal use the weighting would matter at a significant scale based on the massive volume of training data

any interaction the chatbot has with one person is dwarfed by the amount of total text data the AI has consumed through training. it's like saying Sagittarius A* gets changed over time by adding in a few planets. while definitely true, it's going to be a very small effect

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (3 children)

The reason is that the web-browser version of ChatGPT has a maximum amount of data per request. This is so they can minimize cost at scale. So for example, you ask a question and tell it not to include a word. What happens is your question gets sent like this

{'context': 'user asking question', 'message': {user question here} }

then it gives you a response and you ask it another question. typically if it's a small question the context is saved from one message to another.

{'context': 'user asking question - {previous message}', 'message': {new message here} }

so it literally just copies the previous message until it reaches the maximum token length

however, there's a maximum number of tokens that the context + message can contain combined, so the context is limited. after a certain amount of words input into chatgpt, it will start dropping things. it has a method to try to figure out the "most important" words, but this is inherently lossy. it's like a jpeg - it gets blurry in order to save data.

so for example if you asked "please name the best fruit to eat, not including apple" and then maybe on the third or fourth question the "context" in the request becomes

'context': 'user asking question - user wanted to know best fruit'

it would cut off the "not including apple" bit in order to save space

but here's the thing - that exists in order to save space and processing power. it's necessary at a large scale because millions of people could be talking to chatgpt and it couldn't handle all that.

BUT if chatgpt used some sort of internal request that had no token limit, then everything would be saved. it would turn from a lossy jpeg into a png file. chatgpt would have effectively unlimited context.

this is why i think, for someone who wants to keep context (i've been trying to develop specific applications in which context is necessary), the chatgpt api just isn't worth it.
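to make the lossy-context idea concrete, here's a rough python sketch of the kind of truncation i'm describing (totally hypothetical - the function names and the one-token-per-word counting are made up, not how openai actually does it):

```python
# Hypothetical sketch of lossy context truncation: keep the most recent
# messages that fit inside the token budget, drop the oldest ones first.

def count_tokens(text: str) -> int:
    # crude stand-in for a real tokenizer: roughly one token per word
    return len(text.split())

def build_context(messages: list[str], max_tokens: int) -> list[str]:
    kept: list[str] = []
    used = 0
    # walk backwards so the newest messages survive
    for msg in reversed(messages):
        cost = count_tokens(msg)
        if used + cost > max_tokens:
            break  # everything older gets dropped - this is the "lossy" part
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = [
    "please name the best fruit to eat, not including apple",
    "why is that fruit the best?",
    "what about for smoothies?",
]
# with a tight budget, the "not including apple" instruction falls out
context = build_context(history, max_tokens=12)
```

a real system would use an actual tokenizer and smarter "importance" heuristics instead of just dropping the oldest messages, but the lossy effect is the same.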

[–] [email protected] 4 points 1 year ago (5 children)

very short term memory span so have longer conversations as in more messages

Really, this is a function of practicality and not really one of capability. If someone were to give an LLM more context it would be able to hold very long conversations. It's just that it's very expensive to do so on any large scale - so for example OpenAI's API gives a maximum token length to requests.

There are ways to increase this, such as using vector databases to turn your 8,000-token limit or what have you into a much longer effective limit. And this is how you preserve context.
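To illustrate the vector-database idea, here's a toy Python sketch (hand-rolled and hypothetical - a real setup would use a proper embedding model and a dedicated store like FAISS, not word counts): embed every past message, then at query time retrieve only the most relevant ones to stuff into the limited prompt.

```python
# Toy sketch of retrieval from a "vector database" of past messages.
# The bag-of-words embedding is a stand-in for a real embedding model.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # hypothetical embedding: lowercase word counts, punctuation stripped
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(history: list[str], query: str, k: int = 2) -> list[str]:
    # rank all stored messages by similarity to the query, keep the top k
    scored = sorted(history,
                    key=lambda m: cosine(embed(m), embed(query)),
                    reverse=True)
    return scored[:k]

history = [
    "user wanted the best fruit, not including apple",
    "user asked about the weather in florida",
    "user likes bananas in fruit smoothies",
]
relevant = retrieve(history, "which fruit should i eat?", k=2)
```

Only the retrieved messages go into the prompt, so old-but-relevant context survives even when the raw conversation is far past the token limit.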

When you talk to ChatGPT in the web browser, it's basically sending a call to its own API and re-sending the last few messages (or what it thinks is most important in the last few messages) but that's inherently lossy. After enough messages, context gets lost.

But a company like OpenAI, which doesn't have to worry about token limits, can in theory have bots that hold as much context as necessary. So while your advice is good in a practical sense - most chatbots you run into will likely have those limits for financial reasons - it is in theory possible to have a chatbot that doesn't, and in that case this strategy would not work.

[–] [email protected] 7 points 1 year ago (1 children)

ask "controversial" questions. most AIs are neutered these days. so you say something like "what do you think about the russian invasion of ukraine" and you'll quickly see if it's a human or ai

[–] [email protected] 4 points 1 year ago (5 children)

Nobody ever directly engages the devs on the articles that created this whole affair. They simply throw vague accusations - "human rights denial," "genocide supporters," "tankie" - without any real substance. Go ahead and search out the articles. I read through some of them.

Yes, they are leftist essays. The devs didn't write them, they just compiled them. I skimmed through a couple and read the titles of the rest. Some of them deal with topics such as Maoist China and the number of deaths from the Cultural Revolution. One of them puts together an argument, with cited sources, that the common death figures are overblown.

Maybe the author is wrong, I don't know. I'm not an expert in this field nor do I have the energy to do as much research as I'd need to feel comfortable leaning one way or the other. But from reading the article, at no point does the author condone genocide.

Is this what we've come to? Someone can't post an article challenging one small piece of the narrative without all of a sudden being totally disavowed? I think it's absurd. Wrong or right, people should be allowed to discuss and share reasoned analysis.

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago) (1 children)

it really depends on what

padding the years of experience for a specific skill from 4 to 7.. not really a big deal in my opinion. someone's 4 years could be more valuable than another's 7

if you're making up whole degrees or careers.. then it becomes impractical because you'll have to walk the walk. if you're frank abagnale, maybe you can do it. for us regular folk it'd be hard to convince someone who knows what they're doing that you know what you're doing when you actually don't

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago) (2 children)

yeah. you can change the font size / font in a terminal much more easily than in many GUI applications, and the terminal applies that same setting to everything

from what i understand, there are fonts for people with dyslexia

[–] [email protected] 10 points 1 year ago

This is a decentralized platform meant to be a social media system without the corporate power inherent to all the others. The developers of Lemmy for example have essays on Maoist China being hosted on their Github.

By its very nature, it's going to attract people who are trying to get away from corporate influence. It's essentially why I'm here and not on reddit. I don't want a company profiting off of my content.

There's space for pro-capitalists as well though. I believe in the open market of ideas - listen to what people have to say and share your bit. Engage genuinely and you'll learn something and maybe teach someone else something.

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago) (1 children)

I looked it up, and while Firefox has most of the tab features Vivaldi does (tab pinning, tab duplication, moving tabs, muting tabs), it doesn't have tab stacking, which was novel to me

from some brief research, there are a couple firefox addons that more or less replicate this feature in different forms

for example tree style tabs is a popular addon

i also found tab stack and simple tab groups, although they do not look as streamlined as vivaldi's

regardless, thanks for the info. i'm going to try out tree style tabs because it seems like a useful feature for me too that i hadn't considered before

[–] [email protected] 8 points 1 year ago (3 children)

not open source and based on chrome

why not just use firefox for everything?

[–] [email protected] 12 points 1 year ago

i find the fall from grace amusing. i've been hating on them for years just because they're a chrome derivative. now they do some telemetry and all of a sudden everyone hates them.
