Skiluros

joined 3 months ago
[–] Skiluros 179 points 2 months ago (6 children)

"Fascism, that's not a word that regular people, you know, use, you know?" Fetterman said

A fascinating perspective.

[–] Skiluros 2 points 2 months ago* (last edited 2 months ago)

It basically relies on the false stereotype that, since Israel is a Jewish state and Jewish people are supposedly inherently manipulative or deceitful, Israel must not be telling the truth.

This is almost supremacist rhetoric, the implication being that the Israeli government, due to its superiority over others, is inherently incapable of being manipulative or deceitful, and that any claims to the contrary are antisemitism.

One could interpret this as a racist take.

[–] Skiluros 3 points 2 months ago (1 children)

I originally stated that I did not find your arguments convincing. I wasn't talking about AI safety as a general concept, but about the overall discussion related to the article titled "Anthropic, Apollo astounded to find a chatbot will lie to you if you tell it to lie to you".

I didn't find your initial post (or any of your posts in that thread) to be explicit in recognizing the potential for bad faith actions from the likes of Anthropic or Apollo. On the contrary, you largely deny the concept of "criti-hype". One can, in good faith, interpret this as de facto corporate PR promotion (whether that was the intention or not).

You didn't mention the hypothetical profit maximization example in the thread, and your phrasing implied a current tool/service/framework, not a hypothetical.

I honestly don't see how the YT video or the article summary (I did not read the paper) is relevant to what was being discussed.

I am honestly trying not to take sides (though perhaps I am failing at this?); I am suggesting, rather, that interpretations of "groupthink" can take many forms, and that "counter-contrarian" arguments in and of themselves are not some magical silver bullet.

[–] Skiluros 20 points 2 months ago

Source: FT article (archive)

I find this difficult to believe. I am assuming he was either lying, or this was an off-the-cuff statement that wasn't serious.

[–] Skiluros 5 points 2 months ago (3 children)

That's not what we are discussing, though. We are discussing whether awful.systems was right or wrong in banning you. Below is the title of your post:

Instance banned from awful.systems for debating the groupthink

I will note that I don't think they should be this casual with giving out bans. A warning to start with would have been fine.

An argument can be made that you went into awful.systems with your own brand of groupthink; specifically, a complete rejection of even the possibility that we are dealing with bad faith actors. Whether you like it or not, this is relevant to any discussion of "AI safety" more broadly, and to that thread specifically (as the focus of the linked article was on Apollo Research, Anthropic, and AI Doomerism as a grifting strategy).

You then go on to cite a YT video by "Robert Miles AI Safety"; this is a red flag. You also claim that you can't (or don't want to) provide a brief explanation of your argument, and instead defer to the YT video. This is another red flag. It is reasonable to expect a 2-3 sentence overview from someone who actually has some knowledge of the issue. This is not some sort of bad faith request.

Further on, you start talking about the "Dunning-Kruger effect" and the "deeper understanding [that YT fellow has]". If you know the YT fellow has a deeper understanding of the issue, why can't you explain in layman's terms why this is the case?

I did watch the video, and it has nothing to do with the grifting approaches used by AI companies. The video is focused on explaining a relatively technical concept to non-specialists (not AI safety more broadly in the context of real-world use).

Further on, you talk about non-LLM ML/AI safety issues without any sort of explanation. Can you please let us know what you are referring to (I am genuinely curious)?

You cite a paper; can you provide a brief summary of its findings and why they are relevant to a skeptical interpretation of "AI safety" messaging from organizations like Apollo Research and Anthropic?

[–] Skiluros 16 points 2 months ago (1 children)

The house was shaking due to the ballistic missile takedowns and it wasn't even particularly close.

[–] Skiluros 1 points 2 months ago (2 children)

To be fair, I was replying to a thread that said LW and ML are equal, and something about fascism on LW.

I just don't think they are as problematic as you imply. Are there issues? Sure (I have my own complaints), but generally those communities seem somewhat usable.

[–] Skiluros 2 points 2 months ago

It's been a while since I've been to or lived in the US (I do have close friends who lived there, though), but I disagree. It seemed like a general social issue, one that crosses all demographic segments.

[–] Skiluros 4 points 2 months ago

You're either trolling or you are ignorant.

[–] Skiluros 0 points 2 months ago (4 children)

I don't really see it. A lot of the posts in that community don't even explicitly state what community is being discussed.

Some of the stuff is legitimate; some of it feels more like bitching.

I don't really understand how you can claim LW is the worst, considering that on ML you get instance-banned for opposing the Russian invasion of Ukraine or having a critical attitude towards China.

[–] Skiluros 5 points 2 months ago

Major tech communities honestly need to move off ML.
