YouKnowWhoTheFuckIAM

joined 1 year ago
[–] [email protected] 2 points 7 months ago (1 children)

Yeah but Ahmed is a prick

[–] [email protected] 6 points 8 months ago (2 children)

I would guess that their personal reach over the name is pretty limited by a number of factors, including that the town itself has quite a significant claim to it. “Oxford Brookes” university, for example, is not part of Oxford the Ancient University, but it certainly helps their brand to be next door (and as far as I know it’s a perfectly fine institution as far as these things go).

The issue with the Future of Humanity Institute would be almost the other way around: as long as it’s in-house, the university can hardly dissociate itself from it.

[–] [email protected] 14 points 8 months ago (5 children)

I wonder how much they disliked it and how much they felt it was just using the Oxford brand and cheapening it. Only a slight difference, but a qualitative one. You can pump out all the awful shit you want at Oxford, but cheapen the brand with the increasingly zany antics of your dorky club and they might at least look twice.

[–] [email protected] 5 points 8 months ago

Never meditate folks, it’s bad for the brain

[–] [email protected] 7 points 8 months ago

Speaking as apparently one of the few people online who still has to look it up in order to find out that a reference is to Avatar and not some Vietnamese proverb from the ‘60s…but I repeat myself

[–] [email protected] 6 points 8 months ago (3 children)

My dear boy…what the fuck are you talking about

[–] [email protected] 10 points 8 months ago* (last edited 8 months ago) (1 children)

I’d say “what the fuck was a 30-year-old man doing on a sugar daddy site” but you answered it pretty aptly

[–] [email protected] 6 points 8 months ago

Yeah man but it’s sold for thousands of years, and the last hundred? Oh you’d better believe we know it sells

[–] [email protected] 4 points 8 months ago

Honestly? Improvement

[–] [email protected] 6 points 8 months ago* (last edited 8 months ago) (1 children)

I want to add William H. Tucker’s posthumous “The Bell Curve in Perspective”, which came out I think right at the end of last year. It’s a short, thorough assessment both of the history of The Bell Curve itself and of what has happened since.

Even the first chapter is just mind-blowingly terse in brutally unpacking how (a) it was written by racists, (b) for racist ends, and (c) Murray lied and lied afterwards in pretending that ‘only a tiny part of the book was about race’ or whatever

[–] [email protected] 12 points 8 months ago (4 children)

Well this is where I was going with Lakatos. Among the large-scale conceptual issues with rationalist thinking is that there isn’t any understanding of what would count as a degenerating research programme. In this sense rationalism is a perfect product of the internet era: there are far too many conjectures being thrown out and adopted at scale on grounds of intuition for any effective reality-testing to take place. Moreover, since many of these conjectures are social, or about habits of mind, and the rationalists shape their own social world and their habits of mind according to those conjectures, the research programme(s) they develop is (/are) constantly tested, but only according to rationalist rules. And, as when the millenarian cult has to figure out what its leader got wrong about the date of the apocalypse, when the world really gets in the way it only serves as an impetus to refine the existing body of ideas still further, according to the same set of rules.

Indeed the success of LLMs illustrates another problem with making your own world, for which I’m going to cheerfully borrow the term “hyperstition” from the sort of cultural theorists of which I’m usually wary. “Hyperstition” is, roughly speaking, where something which otherwise belongs to imagination is manifested in the real world by culture. LLMs (like Elon Musk’s projects) are a good example of hyperstition gone awry: rationalist AI science fiction manifested an AI programme in the real world, and hence immediately supplied the rationalists with all the proof they needed that their predictions were correct in general if not in exact detail.

But absent the hyperstitional aspect, LLMs would have been much easier to spot as by and large a fraudulent cover for mass data-theft and the suppression of labour. Certainly they don’t work as artificial intelligence, and the stuff that does work (I’m thinking radiology, although who knows when the big news is going to come out that that isn’t all it’s been cracked up to be), i.e. transformers and unbelievable energy-spend on data-processing, doesn’t even superficially resemble “intelligence”. With a sensitive critical eye, and an open environment for thought, this should have been, from early on, easily sufficient evidence, alongside the brute mechanicality of the linguistic output of ChatGPT, to realise that the prognostic tools the rationalists were using lacked either predictive or explanatory power.

But rationalist thought had shaped the reality against which these prognoses were supposed to be tested, and we are still dealing with people committed to the thesis that Skynet is, for better or worse, getting closer every day.

Lakatos’s thesis about degenerating research programmes asks us to predict novel evidence and look for corroborative evidence. The rationalist programme does exactly the opposite. It predicts corroborative evidence, and looks for novel evidence which it can feed back into its pseudo-Bayesian calculator. The novel evidence is used to refine the theory, and the predictions are used to corroborate a (foregone) interpretation of what the facts are going to tell us.

Now, I would say, more or less with Lakatos, that this isn’t an amazingly hard and fast rule, and it’s subject to different interpretations. But it’s a useful tool for analysing what’s happening when you’re trying to build a way of thinking about the world. The pseudo-Bayesian tools, insofar as they have any impact at all, almost inevitably drag the project into degeneration, because they have no tool for assessing whether the “hard core” of their programme can be borne out by facts.

[–] [email protected] 5 points 8 months ago

Everything is. I don’t need anyone to tell me that the Red Scare ruined philosophy of science; going to America was the original problem.
