That's a great line of thought. Take the algorithm "simulate a human brain". Obviously that would break the paper's argument, so to take the paper's claims at face value you'd have to explain why it doesn't apply here.
There are a number of major flaws with it:
- Assume the paper is completely correct. It has only proved the worst-case algorithmic complexity of the problem, but so what? What if the general case is NP-hard, but not the instances we actually care about? That's been true for other problems, why not this one?
- It proves something within a formal model. So what? You'd still have to show that the result applies to the real world.
- Replace "human-like" with something trivial like "tree-like". The paper would then prove that we'll never achieve tree-like intelligence?
IMO there are also flaws in the argument itself, but the points above are the more relevant ones.
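On the first point, "NP-hard in general, easy for the instances you care about" is a well-worn pattern in complexity theory. Subset-sum is a classic illustration (my example, not the paper's): the general decision problem is NP-hard, but a simple dynamic program answers instances with a small target instantly:

```python
def subset_sum(nums, target):
    """Return True if some subset of nums sums to target.

    Subset-sum is NP-hard in general, but this dynamic program runs in
    O(n * target) time, so instances with a small target are easy.
    """
    reachable = {0}  # sums achievable with the items seen so far
    for n in nums:
        reachable |= {s + n for s in reachable if s + n <= target}
    return target in reachable

# NP-hard as a class of problems, yet this instance is trivial:
print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # True (4 + 5)
```

The same shape shows up with SAT solvers and integer programming: worst-case hardness over all possible inputs says little about the inputs that actually occur.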
Not in general, sorry. Your best bet is to make sure you're running the most recent kernel, which Ubuntu tends to lag on. You can also check the Arch wiki entry for it. It's a different distro, but the wiki is good and commonly has tips relevant for any distro.
What kernel are you running? From what I understand, that should be the major differentiator if you're not using S3.
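If it helps, on recent kernels the supported suspend variants are exposed in sysfs, with the active one in brackets (e.g. "[s2idle] deep" means deep/S3-style sleep exists but s2idle is selected):

```shell
# Which suspend-to-RAM variants does this kernel support?
# The bracketed entry is the one used on suspend.
cat /sys/power/mem_sleep 2>/dev/null || echo "mem_sleep not exposed"

# And the running kernel version, since s2idle quality tracks it:
uname -r
```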
Couldn't tell you unfortunately. It looks like AMD is also on board with deprecating S3 sleep, so I would guess that it's not significantly better. The kernel controls the newer standby modes, so it's really going to depend on how well it's supported there.
Sleep kind of sucks on the original 11th gen hardware. They pushed out a BIOS update that broke S3 sleep, so now all you've got is s2idle, which the kernel is only OK at. Your laptop bag might heat up. S3 breaking isn't really their fault; Intel deprecated it. Still annoying though. I've heard the Chromebook version and other newer gens have better sleep support.
Other than that, it's great. NixOS runs just fine; even the fingerprint reader works, which has been rare on Linux.
Meshuggah:
https://www.youtube.com/watch?v=m9LpMZuBEMk
Listened to them before I got into metal, came back to them later and now love them. That's probably from one of their more accessible records; they also have more experimental stuff like this:
Do you have any links to read up on him? I know this is a very contentious topic, but I haven't heard much about him and I'm curious. What would you hold as his worst actions?
It is a bold claim, but based on their success with ruff, I'm optimistic that it might pan out.
I would just ban them and not feel bad, if I were you. They're not here to contribute to the community; they're happy to have communities fall apart if it gives them a chance to "dunk on the libs". Just more political weirdos.
This is a silly argument:
[..] But even if we give the AGI-engineer every advantage, every benefit of the doubt, there is no conceivable method of achieving what big tech companies promise.’
That’s because cognition, or the ability to observe, learn and gain new insight, is incredibly hard to replicate through AI on the scale that it occurs in the human brain. ‘If you have a conversation with someone, you might recall something you said fifteen minutes before. Or a year before. Or that someone else explained to you half your life ago. Any such knowledge might be crucial to advancing the conversation you’re having. People do that seamlessly’, explains van Rooij.
‘There will never be enough computing power to create AGI using machine learning that can do the same, because we’d run out of natural resources long before we'd even get close,’ Olivia Guest adds.
That's as shortsighted as the "I think there is a world market for maybe five computers" quote, or the worry that NYC would be buried under mountains of horse poop before cars were invented. Maybe transformers aren't the path to AGI, but there's no reason to think we can't achieve it in general unless you're religious.
EDIT: From the paper:
The remainder of this paper will be an argument in ‘two acts’. In ACT 1: Releasing the Grip, we present a formalisation of the currently dominant approach to AI-as-engineering that claims that AGI is both inevitable and around the corner. We do this by introducing a thought experiment in which a fictive AI engineer, Dr. Ingenia, tries to construct an AGI under ideal conditions. For instance, Dr. Ingenia has perfect data, sampled from the true distribution, and they also have access to any conceivable ML method—including presently popular ‘deep learning’ based on artificial neural networks (ANNs) and any possible future methods—to train an algorithm (“an AI”). We then present a formal proof that the problem that Dr. Ingenia sets out to solve is intractable (formally, NP-hard; i.e. possible in principle but provably infeasible; see Section “Ingenia Theorem”). We also unpack how and why our proof is reconcilable with the apparent success of AI-as-engineering and show that the approach is a theoretical dead-end for cognitive science. In “ACT 2: Reclaiming the AI Vertex”, we explain how the original enthusiasm for using computers to understand the mind reflected many genuine benefits of AI for cognitive science, but also a fatal mistake. We conclude with ways in which ‘AI’ can be reclaimed for theory-building in cognitive science without falling into historical and present-day traps.
That's a silly argument. It sets up a strawman and knocks it down. Just because you create a model and prove something in it doesn't mean the result has any relationship to the real world.
Hell yeah 🤘 It all kicks ass, but that's an impressive vocal range. Wonder if he's going to get any haters for the little bit of pig squeal at the end. Also not really the shirt I expected to see on the vocalist.