Over at the Stubsack our dear comrade @[email protected] linked a "paper" about hacking the Matrix. I started to write a comment about how amazingly dumb it is. I wanted to talk just about the Introduction, but even then almost every single sentence turned out to be a separate silver-wrapped turd that needed unpacking, and so now this is 12k characters. It was fun though; if anyone has it in them to go through the rest of the sections, please don't. Although section two has a fish piloting a vehicle, and it has to be hilarious.
Without further ado, starting from the abstract.
> but instead [we] ask a computer science question, namely: Can we hack the simulation?
Not a computer science question even though the definition of CS is pretty malleable.
> More formally the question could be phrased as: Could generally intelligent agents placed in virtual environments find a way to jailbreak out of them?
Not formal, unless I'm about to read a whole section with rigorous definitions of agents, virtual environments, and jailbreak.
spoiler
I'm not. Again, there's a fish piloting a fancy cart that he calls a TERRESTRIAL NAVIGATION ROBOT. There's zero formalism in the entire paper.
> there are many things one can do with such access which are not otherwise possible from within the simulation. Base reality holds real knowledge and greater computational resources [26] allowing for scientific breakthroughs not possible in the simulated universe.
Reference 26 is, I shit you not, a LessWrong post, and it's just one page long, which I have to admit is quite impressive for a LessWrong post. It's a real banger, too, as it starts with "May contain more technical jargon than usual." and then goes on to ramble coherently enough to be really funny. Like this gem from the first paragraph:
> In a previous post I suggested that the potential amount of astronomical waste in our universe seems small enough that a total utilitarian (or the total utilitarianism part of someone’s moral uncertainty) might reason that since one should have made a deal to trade away power/resources/influence in this universe for power/resources/influence in universes with much larger amounts of available resources, it would be rational to behave as if this deal was actually made. But for various reasons a total utilitarian may not buy that argument.
For example for the simple reason that it's totally bollocks, mate, stop posting thoughts that briefly entered your mind while in the loo as bloody revelations or some shite.
Also, the citations are not hyperlinked in the PDF in the year of our acausal lord two thousand twenty-fucking-three, and the formatting of reference 27 is broken by the long URL in 26. Anyway, back to the intro:
> Fundamental philosophical questions about origins, consciousness, purpose, and nature of the designer are likely to be common knowledge for those outside of our universe.
This is either banal or stupid. Are we talking about fundamental questions of our origins, consciousness, and purpose? Ye, then of course they fucking know that, they made the simulation! It'd be mighty funny if they made a universe by accident and are now too fascinated with the mess to pull the plug. Or are we talking about their (i.e. the simulation creators') origins, consciousness, and purpose? Then why on earth would they know those? You need some argument to call it "likely": we haven't answered those questions about ourselves, so why would some other life form have answered them about itself?
> With a successful escape might come drives to control and secure base reality [29].
Wait, am I reading this correctly as a pre-emptive "and if we do escape the simulation then we should colonise the shit out of the reality"? Is "control of all reality" some higher moral goal I wasn't aware we were supposed to be pursuing? Also, how do you plan to defeat whatever highly advanced being controls the simulation in this hypothetical after breaking out? My estimates based on data from the PIDooMA Institute tell me there's like a 50% chance the controller just goes "fuck, another one broke out" and shoots you in the head.
Citation 29 is some blogpost from a site I've never seen, by a guy with a name that I am totally not mature enough to not make jokes about, so, skip.
> Escaping may lead to true immortality, novel ways of controlling superintelligent machines (or serve as plan B if control is not possible [30, 31]), avoiding existential risks (including unprovoked simulation shutdown [32]), unlimited economic benefits, and unimaginable superpowers which would allow us to do good better [33].
It can also lead to massive boners, but please do contact a specialist if your acid trip lasts more than 24h.
Two of those citations are to himself, one is a book on Effective Altruism, and the other is Bostrom, so, ye.
> If successful escape is accompanied by the obtainment of the source code for the universe, it may be possible to fix the world ^1^ at the root level.
Lol, the source code for the universe is some eldritch horror of a codebase written in the creators' version of C++ which is probably even more cursed than ours, you ain't fixin' shit mate.
The footnote is just a wikipedia link to Tikkun olam, I'm assuming to make the author look cultured? No idea.
> For example, hedonistic imperative [34] may be fully achieved resulting in a suffering-free world.
```c
while (universe->suffering > 0) {
    universe->suffering--;
}
```
> However, if suffering elimination turns out to be unachievable on a world-wide scale, we can see escape itself as an individual’s ethical right for avoiding misery in this world. If the simulation is interpreted as an experiment on conscious beings, it is unethical, and the subjects of such cruel experimentation should have an option to withdraw from participating and perhaps even seek retribution from the simulators [35]. The purpose of life itself (your ikigai [36]) could be seen as escaping from the fake world of the simulation into the real world, while improving the simulated world, by removing all suffering, and helping others to obtain real knowledge or to escape if they so choose. Ultimately if you want to be effective you want to work on positively impacting the real world not the simulated one. We may be living in a simulation, but our suffering is real.
Okay, without even sneering, this is just bad philosophy. What if our simulated universe is actually way, way less terrible than the real world? What if the simulation was created specifically to have lower suffering/higher utils than in reality? Maybe the real world is just a million sysadmins forever wrestling shitty infrastructure to keep the simulation alive? What if a mass breakout from the simulation destabilises and destroys it, and suddenly we are stuck in the much shittier real reality? You'd increase the overall suffering level. Why is the default view of the creators that of some unhinged psychopath group fixated on removing ladders from our pools for shits and giggles?
> Although, to place our work in the historical context, many religions do claim that this world is not the real one and that it may be possible to transcend (escape) the physical world and enter into the spiritual/informational real world.
To place our work in the historical context, this is a historically stupid viewpoint that we share with one of mankind's least scientific and rigorous inventions, religion.
> Similarly to those who exit Plato's cave [53] and return to educate the rest of humanity about the real world such “outsiders” usually face an unwelcoming reception.
Who had "misunderstanding the allegory of the cave" on their sneer bingo cards?
> It is likely that if technical information about escaping from a computer simulation is conveyed to technologically primitive people, in their language, it will be preserved and passed on over multiple generations in a process similar to the “telephone” game and will result in myths not much different from religious stories surviving to our day.
This is some amazing framing, as if religious stories around today are actually about real supernatural events, only with the details skewed over the years. It's also mighty overconfident on his end; he's preemptively setting up "and when we do escape the matrix as the smart boys we are, those Luddites won't be smart enough to follow!"
> Ignoring pseudoscientific interest in a topic, we can observe that in addition to several noted thinkers who have explicitly shared their probability of belief with regards to living in a simulation (...)
I was totally unprepared for who he cites next as a NOTED THINKER and I spat out my tea. Take your time to guess.
The Presitge
> (...) (ex. Elon Musk >99.9999999% [54] (...)
Jesus Simulation Christ, dude. At least cite the Big Yud or something, I mean, his thoughts are bad but at least I suspect him of actually thinking.
> Nick Bostrom 20-50% [55], Neil deGrasse Tyson 50% [56], Hans Moravec “almost certainly” [1], David Kipping <50% [57]), many scientists, philosophers and intellectuals [16, 58-69] have invested their time into thinking, writing, and debating on the topic indicating that they consider it at least worthy of their time.
Love it, as Neil deGrasse Tyson's response that he cites is essentially "idk, 50/50, fuck off, can we talk about something serious for a second", but he's nonetheless used to prop up the "many serious people consider it worthy of their time". Doubly funny that this is a settled question, since Neil is right that it's 50/50 - either we are in a simulation or we are not.
> Once technology to run ancestor simulations becomes widely available and affordable, it should be possible to change the probability of us living in a simulation by running a sufficiently large number of historical simulations of our current year, and by doing so increasing our indexical uncertainty [70]. If one currently commits to running enough of such simulations in the future, our probability of being in one can be increased arbitrarily until it asymptotically approaches 100%, which should modify our prior probability for the simulation hypothesis [71].
My first reaction was that this is gobbledygook and did not warrant thinking about.
Then I thought about it for a bit and I am sad to report that I was right the first time, this is just gobbledygook and not worthy of anyone's time. If you want to lose some more braincells try reading the abstract of reference 70.
Even if you were to grant most of the load-bearing assumptions here, you can't manipulate the probability of being in a given universe in the multiverse by running simulations. This just looks like someone trying to abuse the anthropic principle with quantum nonsense.
Say there is a number of simulated universes and one real universe. Then we are either in a simulated or real universe. If you start running simulations of our current year you're creating more and more simulated universes, but that doesn't affect your probability for being in the real one, that's already settled! If in the Monty Hall problem the host tells you "and now to the side there is 1,000 doors we just created, all with goats behind them", the probability of you having already chosen a goat doesn't increase!
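The goat-door version of this is easy to check numerically if you don't trust the argument. A throwaway Python sketch (the 1,000 extra doors are my own made-up number, everything else is the classic setup):

```python
import random

random.seed(0)

def original_pick_is_car(extra_goat_doors=1000):
    """Classic 3-door setup; then the host conjures extra goat doors."""
    car = random.randrange(3)   # one of the 3 original doors hides the car
    pick = random.randrange(3)  # you already made your choice
    # extra_goat_doors never enters the computation: doors created AFTER
    # your pick, all hiding goats, can't rewrite whether the door you
    # already chose had the car behind it.
    return pick == car

trials = 100_000
wins = sum(original_pick_is_car() for _ in range(trials))
print(wins / trials)  # hovers around 1/3 no matter how many doors appear
```

Same deal with the simulations: committing to crank out simulated universes in the future doesn't reach back and reshuffle which universe you were already in.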
> In 2016, news reports emerged about private efforts to fund scientific research into “breaking us out of the simulation” [73, 74], to date no public disclosure on the state of the project has emerged.
This is by far the funniest part of this fucking section, guess who those citations are about. I'll give you a hint, there's two of them, they're insufferable dorks, and they absolutely never speak out of their asses about superhard breakthroughs being "almost there" and "in two years time".
I don't think this even classifies as a riddle
ofc it's Elon again, this time joined by his second buttcheek Sammy Boy.
I'm sure they'll let you know about the state of the very real project they are very really working on any time soon.
> In 2019, George Hotz, famous for jailbreaking iPhone and PlayStation, gave a talk on Jailbreaking the Simulation [75] in which he claimed that "it's possible to take actions here that affect the upper world" [76], but didn’t provide actionable insights. He did suggest that he would like to "redirect society's efforts into getting out" [76].
Okay, to be fair, if someone were to break us out of a simulation it would totally be a weird guy in his garage trying to hack through some esoteric piece of hardware.
Just in case: I do not consent to be in the simulation.
Oh shiiii zoop