There’s a quiet revolution unfolding in newsrooms around the world. Not in what stories get told, but in how journalists prepare to tell them. Generative AI has entered the workflow — and like every powerful tool before it, from the printing press to the internet, it arrives carrying both extraordinary promise and genuine risk.
The promise: speed, scale, and depth of secondary research that would have been unthinkable five years ago. The risk: a slow, imperceptible erosion of the very skills that make journalism matter — critical thinking, narrative instinct, and the stubborn human refusal to accept the first answer.
I’ve spent years working at the intersection of AI, policy, and institutional intelligence through the World Development Council. In that work, I’ve watched brilliant analysts become more productive with AI — and I’ve watched a few become lazier. The difference was never the technology. It was how they chose to use it.
This piece is for every journalist navigating that same choice.
Why GenAI Deserves a Seat in the Newsroom
Let’s start with what’s undeniable. Journalism has always been a race against time, and modern newsrooms are stretched thinner than ever. GenAI doesn’t replace reporters — but it does something almost as valuable: it compresses the preparatory grunt work so journalists can spend more time on what actually matters.
1. Secondary Research at Scale
Consider the investigative journalist preparing a deep dive on, say, pharmaceutical pricing in Southeast Asia. Before GenAI, this meant weeks of combing through WHO databases, academic papers, trade publications, and regional news archives. Today, a well-prompted AI model can synthesise a preliminary landscape in hours — surfacing patterns, contradictions, and gaps that would have taken far longer to identify manually.
The Associated Press has been using AI-driven tools for earnings reports since 2014, freeing reporters to pursue the stories behind the numbers. More recently, newsrooms like Reuters and the BBC have experimented with AI-assisted research for complex, data-heavy beats.
2. Contextual Briefing for Breaking Stories
When a story breaks — a policy announcement, a corporate scandal, an election upset — the journalist’s first challenge is context. What happened before? Who are the players? What’s the regulatory landscape? GenAI can generate a rapid contextual brief that puts a reporter on solid footing within minutes, not hours.
3. Multilingual Source Discovery
For journalists covering cross-border stories, language barriers have always been a bottleneck. AI-powered translation and summarisation tools now make it possible to scan source material in Mandarin, Arabic, Portuguese, or Hindi and extract the essential intelligence — a capability that fundamentally expands the reach of investigative work.
4. Data Pattern Recognition
Financial disclosures, court filings, procurement records — these are the raw materials of accountability journalism. GenAI models, when pointed at structured datasets, can flag anomalies and outliers that human eyes would miss in a spreadsheet of ten thousand rows. ProPublica’s data journalism unit has demonstrated repeatedly how computational tools can surface stories hiding in plain sight within public records.
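To make the anomaly-flagging idea concrete, here is a minimal sketch of the kind of computation involved, using a robust z-score based on the median absolute deviation. The dataset and field names are purely illustrative, not drawn from any real records, and a real investigation would use far richer tooling.

```python
# Hypothetical sketch: flag anomalous contract values in procurement records
# using a robust z-score (median absolute deviation). All data is invented.

import statistics

def flag_anomalies(records, field, threshold=3.5):
    """Return records whose `field` deviates sharply from the median."""
    values = [r[field] for r in records]
    median = statistics.median(values)
    # Median absolute deviation: robust to the very outliers we seek
    mad = statistics.median(abs(v - median) for v in values)
    if mad == 0:
        return []
    flagged = []
    for r in records:
        robust_z = 0.6745 * (r[field] - median) / mad
        if abs(robust_z) > threshold:
            flagged.append(r)
    return flagged

contracts = [
    {"vendor": "A", "value": 10_000},
    {"vendor": "B", "value": 11_500},
    {"vendor": "C", "value": 9_800},
    {"vendor": "D", "value": 250_000},  # worth a second look
    {"vendor": "E", "value": 10_700},
]
print(flag_anomalies(contracts, "value"))  # flags vendor D only
```

The point of the sketch is not the statistics but the division of labour: the script surfaces vendor D; only a reporter can establish whether that contract is a scandal, a typo, or a perfectly ordinary bulk order.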
The Red Line: Where GenAI Must Stop and the Journalist Must Begin
Here is where I want to be blunt.
GenAI is a research assistant. It is not a journalist. The moment a newsroom begins treating AI output as draft copy rather than raw material, it has crossed a line that will eventually cost it credibility — and possibly much more.
Critical Thinking Cannot Be Outsourced
The core act of journalism is judgement: What matters? What’s being hidden? Who benefits from this narrative? These are not pattern-recognition problems. They are deeply human acts of scepticism, empathy, and moral reasoning. No language model, however sophisticated, possesses the lived experience or ethical grounding to make these calls.
When The New York Times uncovered the NSA’s warrantless surveillance programme, or when the Indian Express led the Indian reporting on the Panama Papers, the breakthroughs didn’t come from better data retrieval. They came from reporters who sensed something was wrong, followed that instinct against institutional resistance, and made editorial judgements that machines are structurally incapable of making.
Creative Writing Is Not “Content Generation”
There’s a difference between a report and a story. Between information and narrative. The best journalism doesn’t just convey facts — it makes you feel the weight of them. Consider the opening lines of Katherine Boo’s Behind the Beautiful Forevers, or the narrative architecture of a long-form piece in The Caravan or Granta. That kind of writing is born from observation, from sitting in uncomfortable places, from choosing the precise word that carries both meaning and feeling.
When journalists begin using AI to “polish” or “enhance” their prose, they risk sanding away the rough, human texture that gives great writing its power. The reader can tell. They always can.
Verification Is Non-Negotiable
GenAI models hallucinate. This is not a bug that will be fixed in the next update — it is a structural feature of how probabilistic language models work. They generate plausible text, not verified truth. Any fact, statistic, quote, or attribution surfaced by an AI model must be independently verified before it enters a story. Full stop.
The cautionary tales are already piling up. In 2023, a US lawyer submitted AI-generated legal briefs containing fabricated case citations. Several news outlets have published AI-assisted articles with factual errors that damaged their reputations. These are not edge cases — they are the predictable consequence of treating AI output as reliable without human verification.
The Context Gap: AI Sees the Words, Not the World
This is perhaps the most underappreciated limitation, and the one I hear most often from seasoned journalists: “It doesn’t get the full picture.”
They’re right. And the reason is structural, not just technical.
A journalist covering, say, the farmer protests in India doesn’t just carry a set of facts into the story. They carry years of contextual awareness — the memory of previous agrarian crises, the political subtext of a minister’s carefully worded denial, the body language of a district collector at a press briefing, the off-the-record conversation with a retired bureaucrat over chai that reframed the entire narrative. That accumulated, embodied understanding is what makes a journalist’s judgement irreplaceable.
GenAI, by contrast, operates on a flat plane. It processes text. It has no memory of the conversation you had with your source last Tuesday. It doesn’t know that the statistics in that government report are widely distrusted in the region, or that the think tank it’s citing has known affiliations with the industry it’s ostensibly evaluating. It cannot read the room, sense tension in a legislative session, or notice that a key official was conspicuously absent from a press conference.
When a reporter at The Wire or Al Jazeera connects three seemingly unrelated developments into a single coherent narrative, they’re drawing on a web of contextual intelligence that no prompt — however sophisticated — can replicate. AI can give you the threads. Only a journalist can see the pattern.
This is precisely why GenAI should be used for the inputs to journalism, never as a substitute for the synthesis that journalism demands. A model can tell you what was said in a parliamentary debate. It cannot tell you what was deliberately left unsaid.
“It Doesn’t Write Like Me” — And That’s the Point
The second concern I hear constantly is about voice. A journalist spends years — sometimes decades — developing a distinctive style. The rhythm of their sentences, the way they structure a lede, the instinct for when to be spare and when to be expansive. It’s as personal as a fingerprint.
GenAI doesn’t have a voice. It has a statistical average of millions of voices. And that average tends toward a particular kind of fluency — competent, smooth, and utterly forgettable. It’s the literary equivalent of elevator music: technically correct, emotionally vacant.
When P. Sainath writes about rural India, every sentence carries the weight of years spent in villages most urban readers will never visit. When Barkha Dutt files a dispatch from a conflict zone, the urgency isn’t manufactured — it’s earned. When Samanth Subramanian constructs a long-form narrative, the architecture of the piece itself becomes part of the argument. These are not “writing styles” that can be captured in a prompt like “write in a conversational yet authoritative tone.” They are the product of lived experience, editorial discipline, and thousands of hours of deliberate craft.
Here’s the real danger: if journalists begin running their drafts through AI for “improvement,” the model will sand down every rough edge, every idiosyncrasy, every surprising word choice — the very things that make their writing theirs. Over time, newsrooms risk converging on a single, homogenised AI-assisted voice where every story reads like it was written by the same eerily competent but soulless machine.
The practical rule is simple. Use AI to research what you’ll write about. Never use it to become your ghostwriter. If you find yourself prompting an AI to “rewrite this in my style,” pause and ask a harder question: have you spent enough time writing today that your style doesn’t need a proxy?
Your voice is not a template to be fed into a model. It is the one thing in journalism that is entirely, irreplaceably yours.
A Practical Prompting Framework for Journalists
If GenAI is a tool, then like any tool, its value depends on how skilfully it’s wielded. Here is a prompting framework I recommend for journalistic use:
The SCOPE Method
S — Specify the Role and Context
Don’t just ask a question. Frame the AI as a research assistant with a defined brief.
“You are a secondary research analyst supporting an investigative journalist. The story concerns water privatisation contracts in sub-Saharan Africa between 2018 and 2024. Your task is to summarise the key policy shifts, major corporate actors, and documented public health outcomes from credible sources.”
This kind of role-framing dramatically improves the relevance and structure of AI output.
C — Constrain the Source Universe
Tell the model what kind of sources you want it to draw from — and what to exclude.
“Focus on peer-reviewed studies, reports from WHO and World Bank, and investigative journalism from established outlets. Do not reference social media posts, opinion blogs, or unverified claims.”
This doesn’t guarantee the model will comply perfectly, but it significantly shapes the output toward higher-quality material.
O — Outline the Output Format
Journalists work in specific formats. Ask for what you need.
“Provide a structured briefing with: (a) a timeline of key events, (b) a stakeholder map identifying major players and their interests, (c) a list of unresolved questions or data gaps that warrant further reporting.”
Structured prompts yield structured output — and structured output is far easier to verify and build upon.
P — Probe for Contradictions
This is the critical thinking layer. After receiving initial output, push back.
“Now identify the three weakest claims in this briefing. Where is the evidence thinnest? What counter-narratives exist? What would a critic of this framing argue?”
This adversarial prompting technique forces the model to surface doubt — which is exactly what a good editor would do.
E — Extract, Don’t Adopt
The final discipline: treat every AI output as raw material to be extracted from, not copy to be adopted. Pull the leads, the references, the structural insights — then do the journalism yourself.
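For newsrooms that want to make the SCOPE steps repeatable rather than reinvented per story, the framework can be collapsed into a reusable prompt template. The sketch below is one possible shape, not a prescribed implementation; every field name is my own invention, and the final E step deliberately stays out of the prompt, because extraction is the journalist’s discipline, not the model’s.

```python
# A minimal sketch of the SCOPE method as a reusable prompt builder.
# The structure mirrors the five steps; all field names are illustrative.

def build_scope_prompt(role, story_context, task,
                       include_sources, exclude_sources, output_sections):
    """Assemble a SCOPE-structured research prompt as a single string."""
    lines = [
        # S - Specify the role and context
        f"You are {role}. {story_context} Your task: {task}",
        # C - Constrain the source universe
        "Draw only on: " + "; ".join(include_sources) + ".",
        "Do not reference: " + "; ".join(exclude_sources) + ".",
        # O - Outline the output format
        "Structure your answer as: " + "; ".join(output_sections) + ".",
        # P - Probe for contradictions, in the same pass
        "Then identify the three weakest claims in your own briefing, "
        "where the evidence is thinnest, and what a critic would argue.",
        # E - Extract, don't adopt: a rule for the journalist, not the model
    ]
    return "\n\n".join(lines)

prompt = build_scope_prompt(
    role="a secondary research analyst supporting an investigative journalist",
    story_context="The story concerns water privatisation contracts "
                  "in sub-Saharan Africa between 2018 and 2024.",
    task="summarise key policy shifts, major corporate actors, "
         "and documented public health outcomes.",
    include_sources=["peer-reviewed studies", "WHO and World Bank reports"],
    exclude_sources=["social media posts", "opinion blogs"],
    output_sections=["timeline of key events", "stakeholder map",
                     "unresolved questions and data gaps"],
)
print(prompt)
```

A template like this is less about saving keystrokes than about institutionalising the discipline: every research prompt leaves the newsroom with its sources constrained and its self-critique step built in.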
Practical Use Cases: When to Reach for GenAI
| Use Case | Appropriate GenAI Role | Human Journalist Role |
|---|---|---|
| Background research on a new beat | Generate landscape overview, key actors, regulatory history | Verify all claims, identify human sources, apply editorial judgement |
| Pre-interview preparation | Generate potential questions based on subject’s public record | Select questions based on editorial strategy, adapt in real-time |
| Data-heavy investigation | Flag anomalies in large datasets, summarise financial filings | Interpret significance, corroborate with sources, contextualise |
| Multilingual source review | Translate and summarise foreign-language documents | Verify translations with native speakers, assess cultural context |
| Deadline-driven fact compilation | Rapidly compile known facts for a breaking story brief | Confirm accuracy, add original reporting, write the narrative |
| Story ideation | Identify underreported angles in a broad topic area | Apply news judgement, assess public interest, commit to the story |
The Institutional Responsibility
This isn’t just about individual journalists making good choices. News organisations have a responsibility to establish clear editorial policies around AI use. Some principles worth codifying:
Transparency. If AI tools were used in the research or production of a story, disclose it. Readers have a right to know. The Guardian, the BBC, and several Nordic outlets have already adopted disclosure frameworks.
Training. Don’t assume journalists know how to prompt effectively. Invest in structured training that covers both the capabilities and the limitations of GenAI.
Verification Protocols. Every AI-surfaced fact should pass through the same verification standards as any other source. If a human source told you something unverified, you wouldn’t print it. Apply the same standard to AI.
Creative Protection. Actively protect the space for original writing, reporting, and narrative craft. If AI is saving time on research, reinvest that time in deeper storytelling — not in producing more volume at lower quality.
A Final Thought
The journalists I admire most share a common trait: they are deeply, sometimes uncomfortably, curious. They ask the question no one wants asked. They sit with ambiguity when everyone else is reaching for easy answers. They write sentences that change how you see the world.
No AI model will ever do that. Not because the technology isn’t powerful — it is, immensely so — but because those acts require something that cannot be computed: the courage to care about the truth more than the convenience of a plausible answer.
Use GenAI. Use it aggressively for research, preparation, and pattern recognition. But when it’s time to think, to write, to hold power accountable — close the laptop, pick up your notebook, and do the work that only you can do.
The world doesn’t need more content. It needs more journalism.