TheFutureIsDelaware

joined 1 year ago
[–] TheFutureIsDelaware 2 points 1 year ago (1 children)

AI alignment is a field that attempts to solve the problem of "how do you stop something with the ability to deceive, plan ahead, seek and maintain power, and parallelize itself from just doing that to everything".

https://aisafety.info/

AI alignment is "the problem of building machines which faithfully try to do what we want them to do". An AI is aligned if its actual goals (what it's "trying to do") are close enough to the goals intended by its programmers, its users, or humanity in general. Otherwise, it’s misaligned. The concept of alignment is important because many goals are easy to state in human language terms but difficult to specify in computer language terms.

As a current example, a self-driving car might have the human-language goal of "travel from point A to point B without crashing". "Crashing" makes sense to a human, but requires significant detail for a computer. "Touching an object" won't work, because the ground and any potential passengers are objects. "Damaging the vehicle" won't work, because there is a small amount of wear and tear caused by driving. All of these things must be carefully defined for the AI, and the closer those definitions come to the human understanding of "crash", the better the AI is "aligned" to the goal that is “don't crash”. And even if you successfully do all of that, the resulting AI may still be misaligned because no part of the human-language goal mentions roads or traffic laws.

Pushing this analogy to the extreme case of an artificial general intelligence (AGI), asking a powerful unaligned AGI to e.g. “eradicate cancer” could result in the solution “kill all humans”. In the case of a self-driving car, if the first iteration of the car makes mistakes, we can correct it, whereas for an AGI, the first unaligned deployment might be an existential risk.
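The specification problem in that example can be sketched in code. This is a toy illustration with hypothetical names and an arbitrary force threshold, not anything from a real self-driving stack; each predicate is a naive formalization of "crashing" that fails in exactly the way described above.

```python
def touched_object(contacts):
    # Naive spec: "crashing is touching an object".
    # Fails: the tires always touch the road; passengers touch the seats.
    return len(contacts) > 0

def vehicle_damaged(wear):
    # Naive spec: "crashing is damaging the vehicle".
    # Fails: normal driving causes nonzero wear and tear.
    return wear > 0.0

def crashed(impacts):
    # Closer: only high-force contact with objects that aren't the road or
    # passengers counts. Still incomplete: nothing here mentions traffic
    # laws, near misses, or object categories we forgot to allow.
    ALLOWED = {"road", "passenger"}
    return any(obj not in ALLOWED and force > 1000.0  # newtons, arbitrary
               for obj, force in impacts)

# Normal driving: touching the road at low force, with slight wear.
print(touched_object(["road"]))    # True  -- predicate misfires
print(vehicle_damaged(0.01))       # True  -- predicate misfires
print(crashed([("road", 200.0)]))  # False -- matches intent in this case
```

Even the third predicate is only "aligned" for the scenarios its author thought of, which is the point: the gap between the human concept and the formal definition never fully closes.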

[–] TheFutureIsDelaware 1 points 1 year ago

No. Maybe as a short stop on the way to extinction, but absolute and complete extinction isn't a dystopia. And the worse-than-extinction possibilities are more like eternal suffering in a simulator for resisting the AI. Not quite captured by a "dystopia".

[–] TheFutureIsDelaware 1 points 1 year ago (6 children)

You're at a moment in history where the only two real options are utopia or extinction. There are some worse things than extinction that people also worry about, but let's call it all "extinction" for now. Super-intelligence is coming. It literally can't be stopped at this point. The only question is whether it's 2, 5, or 10 years.

If we don't solve alignment, you die. It is the default. AI alignment is the hardest problem humans have ever tried to solve. Global warming will cause suffering on that timescale, but not extinction. A well-aligned super-intelligence has actual potential to reverse global warming. A misaligned one will mean it doesn't matter.

So, if you care, you should be working in AI alignment. If you don't have the skillset, find something else: https://80000hours.org/

Every single dismissal of AI "doom" is based on wishful thinking and hand-waving.

[–] TheFutureIsDelaware 3 points 1 year ago

this kind of journalism constantly teaches and reminds people that organic doesn’t mean life.

Except... it doesn't. That's just a dreamy hypothetical way that it might manifest, but that doesn't match reality. It misinforms. The end.

[–] TheFutureIsDelaware 37 points 1 year ago (3 children)

Writers always know that "organic" will be misinterpreted by the public, and do it anyway, hiding behind "technically correct". Personally, I think avoiding creating more misunderstandings about science and space exploration outweighs any "technically correct" bullshit. Stop intentionally hurting public understanding for clicks.

[–] TheFutureIsDelaware 14 points 1 year ago* (last edited 1 year ago) (1 children)

Well, my first thought is "why the fuck didn't you put WHAT FEATURE in the title?". Then I thought "okay, that's probably the article's title, and OP was just using it." Then I saw that the actual title is "Reddit is getting rid of its Gold awards system", and returned to "fuck this OP".

[–] TheFutureIsDelaware 6 points 1 year ago* (last edited 1 year ago)

So they slapped T2I-Adapter (which is basically an alternative to ControlNet) on top of SDXL 0.9. This isn't particularly novel, and Stability is having cashflow issues, so they're desperate to have tools on Clipdrop that people will actually pay for. That's pretty much all this is.

Here's another place you can play with T2I-adapter with SD1.5 models: https://huggingface.co/spaces/Adapter/T2I-Adapter

[–] TheFutureIsDelaware 1 points 1 year ago

Yes, it seems pretty untenable that the Rare Earth hypothesis is the explanation for the lack of evidence of any life outside of Earth. But even if it's true that we're the only life in the observable universe, the universe is still much bigger than that, and in many physicists' opinion, probably infinite.

The fact that life seems to have evolved on Earth as soon as it was possible is some evidence that abiogenesis is not the bottleneck. But the usefulness of this observation depends on the distribution of other things we don't know. For example, if on planets where life evolves later, life never reaches human-level intelligence before the planet becomes uninhabitable, then our early abiogenesis is survivorship bias, rather than something we should expect to sit at the center of the distribution of abiogenesis times on planets where it is possible.
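That survivorship-bias effect is easy to see in a toy Monte Carlo. All the numbers below (habitable window, delay to intelligence) are made-up illustrative assumptions, not real astrophysics; the point is only that conditioning on the existence of observers skews abiogenesis times early.

```python
import random

random.seed(0)

# Toy model (illustrative assumptions): each habitable planet gets an
# abiogenesis time drawn uniformly over its habitable window, and
# intelligence needs a further fixed delay after abiogenesis to evolve.
HABITABLE_WINDOW = 10.0   # planet lifetime in Gyr (assumed)
INTELLIGENCE_DELAY = 4.0  # Gyr from abiogenesis to observers (assumed)

all_times = [random.uniform(0.0, HABITABLE_WINDOW) for _ in range(100_000)]

# Observers only arise if abiogenesis left enough time for intelligence.
observer_times = [t for t in all_times
                  if t + INTELLIGENCE_DELAY <= HABITABLE_WINDOW]

mean_all = sum(all_times) / len(all_times)
mean_obs = sum(observer_times) / len(observer_times)
print(f"mean abiogenesis time, all planets:          {mean_all:.2f} Gyr")
print(f"mean abiogenesis time, planets w/ observers: {mean_obs:.2f} Gyr")
```

Every observer in this model finds abiogenesis happening in the early part of their planet's history, even though across all planets it is uniformly distributed, which is exactly why "life arose early here" is weaker evidence than it first appears.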

[–] TheFutureIsDelaware 1 points 1 year ago* (last edited 1 year ago)

What?... no. That would be confirmed life on another planet. It would be the single biggest discovery in human history. You would have heard if that was the case. "Fossilized microbes" would be "life". Nobody needs the life to still be alive to be a huge, huge deal.

If you're thinking the word "organic" means "microbe", it doesn't. I guess this is the consequence of all the harm done to public understanding by shitty headlines chasing clicks while staying "technically correct", despite their writers knowing they'd be misinterpreted as "life".

[–] TheFutureIsDelaware 1 points 1 year ago

Not 100% proof. That would require the universe to be infinite, which it still might not be if the curvature is within the tiny margin of error. It's close enough to proof that it might as well be the case. The entire universe couldn't be less than something like 130x the size of the observable universe, though... unless it has nontrivial topology. There's always a caveat.
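The size bound comes from the measured curvature: the radius of curvature scales as the Hubble radius divided by the square root of |Ω_k|. A rough sketch with assumed, illustrative numbers (the Ω_k bound and constants here are placeholders, not the figures behind the "130x" claim above):

```python
import math

# Illustrative inputs (assumptions, not precise measured values):
C = 299_792.458        # speed of light, km/s
H0 = 67.4              # Hubble constant, km/s/Mpc (assumed)
OMEGA_K_BOUND = 0.005  # assumed upper bound on |Omega_k|
R_OBS_GLY = 46.5       # comoving radius of the observable universe, Gly
MPC_TO_GLY = 3.2616e-3 # 1 Mpc ~= 3.2616 million light years

# Hubble radius c / H0, converted from Mpc to Gly.
hubble_radius_gly = (C / H0) * MPC_TO_GLY

# Radius of curvature: R = (c / H0) / sqrt(|Omega_k|).
# A smaller |Omega_k| bound pushes this lower limit further out.
r_curv_gly = hubble_radius_gly / math.sqrt(OMEGA_K_BOUND)
ratio = r_curv_gly / R_OBS_GLY
print(f"curvature radius >= ~{r_curv_gly:.0f} Gly "
      f"(~{ratio:.1f}x the observable radius)")
```

Note this only bounds the curvature radius, which is why the nontrivial-topology caveat matters: a flat-looking universe could still wrap around at a smaller scale.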

[–] TheFutureIsDelaware 8 points 1 year ago

I truly can think of nothing worse than this dipshit with a lot of compute power with the most powerful technology humans have ever created.
