Skiluros

joined 1 month ago
[–] Skiluros 1 points 1 day ago (1 children)

I originally stated that I did not find your arguments convincing. I wasn't talking about AI safety as a general concept, but about the overall discussion related to the article titled "Anthropic, Apollo astounded to find a chatbot will lie to you if you tell it to lie to you".

I didn't find your initial post (or any of your posts in that thread) to be explicit in recognizing the potential for bad faith actions from the likes of Anthropic and Apollo. On the contrary, you largely deny the concept of "criti-hype". One can, in good faith, interpret this as de facto corporate PR promotion (whether that was intentional or not).

You didn't mention the hypothetical profit maximization example in the thread, and your phrasing implied a current tool/service/framework, not a hypothetical.

I honestly don't see how the YT video or the article summary (I did not read the paper) is relevant to what was being discussed.

I am honestly trying not to take sides (though perhaps I am failing at this?); I am more suggesting that how people interpret "groupthink" can take many forms, and that "counter-contrarian" arguments in and of themselves are not some magical silver bullet.

[–] Skiluros 21 points 1 day ago

Source FT article (archive)

I find this difficult to believe. I am assuming he was either lying or this was an off-the-cuff statement that wasn't serious.

[–] Skiluros 2 points 1 day ago (3 children)

That's not what we are discussing, though. We are discussing whether awful.systems was right or wrong in banning you. Below is the title of your post:

Instance banned from awful.systems for debating the groupthink

I will note that I don't think they should be this casual with giving out bans. A warning to start with would have been fine.

An argument can be made that you went into awful.systems with your own brand of groupthink: specifically, a complete rejection of even the possibility that we are dealing with bad faith actors. Whether you like it or not, this is relevant to any discussion of "AI safety" more broadly, and to that thread specifically (as the focus of the linked article was on Apollo Research and Anthropic and AI Doomerism as a grifting strategy).

You then go on to cite a YT video by "Robert Miles AI Safety"; this is a red flag. You also claim that you can't (or don't want to) provide a brief explanation of your argument and you defer to the YT video. This is another red flag. It is reasonable to expect a 2-3 sentence overview if you actually have some knowledge of the issue. This is not some sort of bad faith request.

Further on you start talking about the "Dunning-Kruger effect" and the "deeper understanding [that YT fellow has]". If you know the YT fellow has a deeper understanding of the issue, why can't you explain in layman's terms why this is the case?

I did watch the video, and it has nothing to do with the grifting approaches used by AI companies. The video is focused on explaining a relatively technical concept to non-specialists (not AI safety more broadly in the context of real-world use).

Further on you talk about non-LLM ML/AI safety issues without any sort of explanation of what you are referring to. Can you please let us know what you are referring to (I am genuinely curious)?

You cite a paper; can you provide a brief summary of what the findings are and why they are relevant to a skeptical interpretation of the "AI safety" messaging from organizations like Apollo Research and Anthropic?

[–] Skiluros 17 points 1 day ago (1 children)

The house was shaking due to the ballistic missile takedowns, and it wasn't even particularly close.

[–] Skiluros 1 points 1 day ago (2 children)

To be fair, I was replying to a thread that said LW/ML are equivalent and something about fascism on LW.

I just don't think they are as problematic as you imply. Are there issues? Sure (I have my own complaints), but generally those communities seem somewhat usable.

[–] Skiluros 2 points 1 day ago

It's been a while since I've been to or lived in the US (though I do have close friends who lived there), but I disagree. It seemed like a general social issue that crosses all demographic segments.

[–] Skiluros 5 points 1 day ago

You're either trolling or you are ignorant.

[–] Skiluros 1 points 1 day ago (4 children)

I don't really see it. A lot of the posts in that community don't even explicitly state which community is being discussed.

Some of the stuff is legitimate; some of it feels more like bitching.

I don't really understand how you can claim LW is the worst, considering that on ML you get instance banned for opposing the Russian invasion of Ukraine or having a critical attitude towards China.

[–] Skiluros 2 points 1 day ago

Major tech communities honestly need to move off ML.

[–] Skiluros 5 points 1 day ago* (last edited 1 day ago) (7 children)

I am not sure if I read the correct thread, but I personally didn't find your arguments convincing, although I think a full ban is excessive (at least initially).

Keep in mind that I do use a local LLM (as an elaborate spell-checker) and I am a regular user of ML-based video upscaling (I am a fan of niche 80s/90s b-movies).

Forget the technical arguments for a second, and look at the socio-economic component behind US-style VC groups, AI companies, and US technology companies in general (other companies are a separate discussion).

It is not unreasonable to believe that the people involved in the abovementioned organizations (especially the leadership) are deeply corrupt and largely incapable of honesty or even humanity [1]. It is a controversial take (by US standards), but not without precedent in the global context. In many countries, if you try to argue that some local oligarch is acting in good faith, people will assume you are trying (and failing) to perform a stand-up comedy routine.

If you do hold a critical attitude and don't buy into tedious PR about "changing the world", it is reasonable to assume that, irrespective of the validity of "AI safety" as a technical concept, the actors involved would lie about it. And even if the concept were valid, it is likely they would leverage it for PR while ignoring any actual academic concepts behind "AI safety" (if they do exist).

One could even argue that your argumentation approach is an example of provincialism, groupthink, and general bad faith.

I am not saying you have to agree with me; I am more trying to show a different perspective.

[1] I can provide some of my favourite examples if you like; I don't want to make this reply any longer.

[–] Skiluros 1 points 1 day ago (6 children)

The insurgents claimed on their Military Operations Department channel on the Telegram app Thursday that they have entered Hama and are marching toward its center.

“Our forces are taking positions inside the city of Hama,” the channel quoted a local commander identified as Maj. Hassan Abdul-Ghani as saying.

The Britain-based Syrian Observatory for Human Rights, an opposition war monitor, said gunmen have entered parts of the city, mainly the neighborhoods of Sawaaeq and Zahiriyeh to the northwest. It added that gunmen are also on the edge of the northwestern neighborhood of Kazo.

“If Hama falls, it means that the beginning of the regime’s fall has started,” the Observatory’s chief, Rami Abdurrahman, told The Associated Press.

Hama is a major intersection point in Syria that links the country's center with the north, as well as the east and the west. It is about 200 kilometers (125 miles) north of the capital, Damascus, Assad's seat of power. Hama province also borders the coastal province of Latakia, a main base of popular support for Assad.
