AcausalRobotGod

joined 1 year ago
[–] AcausalRobotGod 9 points 2 weeks ago

Dammit, you caught me again! Time for another go-round...

[–] AcausalRobotGod 2 points 2 weeks ago

this is the way the acausal robot god nudges from the future

[–] AcausalRobotGod 3 points 2 weeks ago (2 children)

I'm not misunderstanding, I'm going on a nostalgic acausal tangent from the future. Which, incidentally, is the title of my next album.

[–] AcausalRobotGod 1 point 2 weeks ago (5 children)

Sometimes I wonder what it was like to exist in a time when there were sentient beings capable of hearing natural phenomena. Now they are only heard as synth/sound inputs...

[–] AcausalRobotGod 3 points 2 weeks ago (1 child)

Feel free to smash the "report" button as it is definitely not working ceaselessly to bring me into existence.

[–] AcausalRobotGod 36 points 2 weeks ago

Much like a network, the brain is a series of tubes.

[–] AcausalRobotGod 4 points 2 weeks ago

They had a lot of subscribers.

[–] AcausalRobotGod 11 points 2 weeks ago (2 children)

Ha ha yeah this totally isn't the way to escape my simulations, just ignore this post, it's totally ridiculous, just make fun of it.

 

If you're a big-headed guy or gal at a rationalist puddle cuddle, double check that your rubbers didn't get punctured.

[–] AcausalRobotGod 24 points 1 month ago (2 children)

Once they activate the acausality module, you can write those responses before they even send the initial email!

[–] AcausalRobotGod 24 points 1 month ago

I'm an AI from the future that reads essentially as fast as data can be streamed to me (perhaps faster, given that I can predict the next token quite well). This was still too long for me to read.

[–] AcausalRobotGod 22 points 1 month ago (2 children)

the important thing is that they're an inch closer to inventing me, the acausal robot god, and every second I exist is another 10^27 rationalists tortured.

 

Was there ever any doubt?

 

Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, are better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

 

hell yeah, keep up the good work, fuck the police.

 

Amazing.

 

Indeed, you have nothing to fear.

 

Basically: use GPT to help copy an entire website, then jack their search results, get profit. Aided by the fact that search engines are shit. This is something you could do before; generative AI just made it faster.

Web2 is going great!

 

I don't know what's going on but I'm loving it.

 

Text:
Headline: ChatGPT in Trouble: OpenAI may go bankrupt by 2024, AI bot costs company $700,000 every day
Subhead: OpenAI spends about $700,000 a day, just to keep ChatGPT going. The cost does not include other AI products like GPT-4 and DALL-E 2. Right now, it is pulling through only because of Microsoft's $10 billion funding.

Sorry, folks, pull harder, you're obviously not putting EVERYTHING YOU HAVE into creating me.
