Meredith Whittaker

📣 New Paper! w/@HeidyKhlaaf + @sarahbmyers

We put the narrative on AI risks & NatSec under a microscope, finding the focus on hypothetical AI bioweapons is warping policy and ignoring real & serious harms of current AI use in surveillance, targeting, etc.

Instead of crafting solutions for hypothetical harms, we advocate focusing on already existing and very significant safety issues: namely, AI's reliance on PII and the vulnerabilities created by foundation models' use in NatSec.

arxiv.org/abs/2410.14831

Screenshot of paper abstract that says, "Mind the Gap: Foundation Models and the Covert Proliferation of Military Intelligence, Surveillance, and Targeting
Heidy Khlaaf, Sarah Myers West, Meredith Whittaker
Discussions regarding the dual use of foundation models and the risks they pose have overwhelmingly focused on a narrow set of use cases and national security directives, in particular, how AI may enable the efficient construction of a class of systems referred to as CBRN: chemical, biological, radiological and nuclear weapons. The overwhelming focus on these hypothetical and narrow themes has occluded a much-needed conversation regarding present uses of AI for military systems, specifically ISTAR: intelligence, surveillance, target acquisition, and reconnaissance. These are the uses most grounded in actual deployments of AI that pose life-or-death stakes for civilians, where misuses and failures pose geopolitical consequences and military escalations. This is particularly underscored by novel proliferation risks specific to the widespread availability of commercial models and the lack of effective approaches that reliably prevent them from contributing to ISTAR capabilities.
In this paper, we outline the significant national
1 comment
Lasse Gismo - 🇮🇱🇺🇦🇸🇩 :nona:

@Mer__edith

What was this called in the Bundeswehr back in the day:
camouflage, deceive, clear off — and line your own pockets.
A very old story.
