this post was submitted on 28 Jun 2023

Credible Defense

436 readers
11 users here now

An unofficial counterpart to the subreddit r/CredibleDefense, intended to be a supplementary resource and potential fallback point. If you are an active moderator over there, please don't hesitate to contact me to be given a moderation position.

Wiki Glossary of Common Terms and Abbreviations. (Request an addition)

General Rules

Strive to be informative, professional, gracious, and encouraging in your communications with other members here. Imagine writing to a superior in the Armed Forces, or a colleague in a think tank or major investigative journal.

This is not at all intended to be US-centric; posts relating to other countries are highly encouraged.

No blind partisanship. We aim to study defense, not wage wars behind keyboards. Defense views from or about all countries are welcome so long as they are credible.

If you have experience in relevant fields, understand your limitations. Just because you work in the defense arena does not mean you are always correct.

Please refrain from linking the sub outside of here and a small number of other subs (LCD, NCD, War College, IR_Studies, NCDiplomacy, AskHistorians). This helps control the community's growth (especially sudden surges) and filters newcomers toward those with a genuine interest.

No denial of war crimes or genocide.

Comments

Should be substantive and contribute to discussion.

No image macros, GIFs, emojis or memes.

No AI-generated content.

Don’t be abrasive/insulting.

No one-liners, jokes, insults, shorthand, etc. Avoid excessive sarcasm or snark.

Sources are highly encouraged, but please do not link to low-quality sources such as RT, New York Post, The National Interest, CGTN, etc. unless they serve a useful purpose.

Be polite and informative to others here, and remember that we should be able to disagree without being disagreeable.

Do not accuse or personally challenge others; instead, ask them for sources and for the reasoning behind their opinions.

Do not ask others about their background; it is rude and discourages open discussion.

Please do not make irrelevant jokes or off-topic pun threads, use sarcasm, respond to the title of a piece without reading it, or in general make comments that add nothing to the discussion. Please refrain from top-level jokes. Humor is appreciated, but it should be infrequent and suitable for a professional environment.

Please do not blindly advocate for a side in a conflict or a country in general. Surely there are many patriots here, but this is not the arena to fight those battles.

Asking questions in the comment section of a submission, or in a megathread, is a great way to start a conversation and learn.

Submissions

Posts should include a substantial text component. This does not mean links are banned; instead, they should be submitted as part of the text post. Posts should not be quick updates or short-lived news items. They should hold up and remain readable over time, so that you will be glad you read them months or years from now.

Links should go to credible, high-quality sources (academia, government, think tanks), and the body should be a brief summary plus some comments on what makes it good or insightful.

Essays/Effortposts are encouraged. These are text posts that advance an underlying thesis or attempt to synthesize information. They should cite sources, be well written, and be relatively long. An example of an excellent effortpost is this.

Please use the original title of the work (or a descriptive title; de-editorializing/de-clickbaiting is acceptable), and possibly a sub-headline.

Refrain from submissions that are quick updates in title form, troop movements, ship deployments, terrorist attacks, announcements, or the crisis du jour.

Discussions of opinion pieces by distinguished authors, historical research, and research on warfare relating to national security issues are encouraged.

We are primarily a reading forum, so please no image macros, gifs, emojis, or memes.

~~Moderators will manually approve all posts.~~ Posting is unrestricted for the moment, but posts without a submission statement or that do not meet the standards above will be removed.

No Leaked Material

Please do not submit or otherwise link to classified material. And please take discussions of classified material to a more secure location.

In general, avoid any information that will endanger anyone.

Please report items that violate these rules. We don't know about them unless you point them out.

We maintain a list of sources so that anyone can help find interesting open-source material to share. As outlets wax and wane in quality, please help us keep the list updated:

https://reddit.com/r/CredibleDefense/wiki/credibleoutlets

founded 1 year ago

Submission Statement

Though this paper focuses on arms control through the lens of AI-enabled measures, I found it a useful primer on the dynamics of arms control more generally. While I don't believe AI as a whole meets the six criteria that make a weapon amenable to regulation, I can see a path for certain AI applications to be regulated via treaty. For example, mandates requiring a man-in-the-loop or man-on-the-loop seem to disrupt weapon effectiveness only minimally while greatly limiting the disruptive nature, or "horribleness," of autonomous weapons.

Paul Scharre is the Executive Vice President and Director of Studies at CNAS. He is the award-winning author of Four Battlegrounds: Power in the Age of Artificial Intelligence. Megan Lamberth is a former Associate Fellow for the Technology and National Security Program at CNAS. Her research focuses on U.S. strategy for emerging technologies and the key components of technology competitiveness, such as human capital, R&D investments, and norms building.

Watts identifies six criteria that he argues affect a weapon’s tolerance or resistance to regulation: effectiveness, novelty, deployment, medical compatibility, disruptiveness, and notoriety.[11] An effective weapon that provides “unprecedented access” to enemy targets and has the capacity to ensure dominance is historically resistant to regulation. There is a mixed record for regulating novel weapons or military systems throughout history. Countries have pursued regulation of certain new weapons or weapons delivery systems (e.g., aerial bombardment) while also resisting regulation for other novel military systems (e.g., submarines). Weapons that are widely deployed—“integrated into States’ military operations”—tend to be resistant to arms control. Weapons that cause “wounds compatible with existing medical protocols” in military and field hospitals are historically difficult to ban or regulate. Powerful nations have historically tried to regulate or ban weapons that are “socially and militarily disruptive” out of fear that such weapons could upend existing global or domestic power dynamics. Campaigns by civil society groups or widespread disapproval from the public can increase notoriety, making a weapon potentially more susceptible to arms control.[12]

Whether arms control succeeds or fails depends on both its desirability and its feasibility. The desirability of arms control encompasses states’ calculation of a weapon’s perceived military value versus its perceived horribleness (because it is inhumane, indiscriminate, or disruptive to the social or political order). Thus, the desirability of arms control is a function of states’ desire to retain a weapon for their own purposes balanced against their desire to restrain its use by their adversaries.

AI technology poses challenges for arms control for a variety of reasons. AI technology is diffuse, and many of its applications are dual use. As an emerging technology, its full potential has yet to be realized—which may hinder efforts to control it. Verification of any AI arms control agreement would also be challenging; states would likely need to develop methods of ensuring that other states are in compliance to be comfortable with restraining their own capabilities. These hurdles, though significant, are not insurmountable in all instances. Under certain conditions, arms control may be feasible for some military AI applications. Even while states compete in military AI, they should seek opportunities to reduce its risks, including through arms control measures where feasible.

no comments (yet)