Langley's New Analyst Doesn't Sleep

  • Writer: Bonca | Lab
  • 3 days ago
  • 2 min read

On April 9, CIA Deputy Director Michael Ellis told a Washington audience that the agency had, for the first time in its 79-year history, used AI to generate a finished intelligence report. Not a memo. A report. The kind of product that eventually lands on a policymaker's desk.


Ellis was speaking at a Special Competitive Studies Project event, and the headline commitment went further: within two years, "AI coworkers" will be embedded across every CIA analytic platform (GovExec) - a classified flavor of generative AI sitting next to the humans who parse cables from assets abroad.


What the bots will actually do

Not the glamorous stuff. Ellis was explicit: "It won't do the thinking for our analysts, but it will help draft key judgments, edit for clarity and compare drafts against tradecraft standards" (Nextgov/FCW). Triage. Trend-spotting. First drafts. The grind work that eats an analyst's morning before they get to the actual call.


The scale is already real. The agency ran more than 300 AI projects last year (GovExec), and Ellis floated a decade-out vision where officers manage teams of agents as "autonomous mission partners." Human-in-the-loop now. Human-on-the-loop later.


The China subtext

Ellis didn't bury the motive. "Five to ten years ago, China was nowhere near America, in terms of technological innovation," he said (Yahoo News) - past tense doing a lot of work in that sentence. The newly elevated Center for Cyber Intelligence is the hard edge of this, and Ellis framed the coming cyber fight as fundamentally a contest between AI models.


The Anthropic elephant

Here's where it gets spicy. The White House has ordered federal agencies to phase out Anthropic tools after the company refused to loosen restrictions on domestic surveillance and autonomous weapons use. Anthropic is suing. Ellis didn't name them, but he didn't need to: the CIA "cannot allow the whims of a single company" to constrain its AI use, and the agency is diversifying vendors to preserve operational freedom (Nextgov/FCW).


Translation: the spy agency wants model optionality, and it wants vendors who won't say no.


The unresolved part

HUMINT is the CIA's oldest muscle - recruiting people who betray their countries for reasons machines can't model. What happens when the first draft of how Washington understands those sources is written by a system trained to smooth, summarize, and pattern-match? Does tradecraft survive its own efficiency gains, or does the signal get quietly averaged away?



Sources: Maggie Miller, Politico, "CIA is trusting AI to help analyze intel from human spies" (April 9-10, 2026); David DiMolfetta, Nextgov/FCW, "CIA plans for 'AI coworkers', deputy director says" (April 10, 2026); David DiMolfetta, Defense One, "CIA employees will get AI 'coworkers'" (April 10, 2026); Government Executive syndication (April 10, 2026); Cointelegraph, "CIA to Bring in AI Co-workers to Help Catch Spies" (April 10, 2026); crypto.news coverage (April 10, 2026); CIA Directorate of Digital Innovation, "Creating the Future of Intelligence with DDI" (cia.gov); Thomas Mulligan, Studies in Intelligence (CIA journal), on HUMINT resurgence in the AI era, via Nextgov/FCW and Government Executive (April 2026); Special Competitive Studies Project event materials (April 9, 2026).