“Just the tip of the iceberg”: The New York Times’ Zach Seward on embracing AI
The paper’s first editorial director of artificial intelligence initiatives takes us behind the scenes.
Zach Seward’s first byline appeared in his fifth grade class newspaper. “I was one of those people who knew what I wanted to do for a living from about that age and never, ever detoured,” Seward tells Depth Perception. His path led him to Quartz, where he became a founding editor in 2012. After a stint as CEO, he served as editor in chief from 2022 to 2023.
It was at Quartz that Seward’s eyes were opened to the uses of AI in a newsroom. In 2019, a whistleblower leaked documents from a law firm in Mauritius, a small island country in the Indian Ocean, that showed how multinational companies used the nation’s laws to avoid paying taxes. Quartz, as part of the consortium of investigative outlets working on what became known as the Mauritius Leaks, built a machine-learning model that organized around 200,000 documents, some of them hundreds of pages long. “That turned out to be a pretty powerful and key part of the investigation,” Seward says.
In 2023, Seward was named the New York Times’ first editorial director of artificial intelligence initiatives. In announcing his new job, his bosses raised some of the issues Seward would be exploring: “How should The Times’s journalism benefit from generative A.I. technologies? Can these new tools help us work faster? Where should we draw the red lines around where we won’t use it?”
Nearly a year and a half into the job, Seward has some answers. We recently spoke with him about how the Times is using AI (responsibly), how other outlets are using it (generally speaking, not so responsibly), and a whole lot more. The following has been edited for length and clarity. —Mark Yarm
In May 2024, you and Deputy Managing Editor Sam Dolnick released “Principles for Using Generative A.I. in The Times’s Newsroom.” What have you learned in the year since you put out that memo?
The one thing we learned is that they’ve been durable, at least for one year. The principles all matter, but to my mind, the first — that we’re only ever interested in using AI as a tool in service of our mission — has probably been the most important. When you start an R&D exploration of a new technology, there’s always the risk of being an AI hammer in search of nails. And we were determined to do it the exact opposite way. We wanted to look for real challenges in the newsroom that were aligned with our existing mission and goals. One way of saying that same principle is “Start with ‘Why?,’ not AI.”
So how is the New York Times using AI currently?
One area of focus for my team is helping reporters who have a good problem to have — but nevertheless a problem — which is too much data: reams and reams of documents or huge sets of images of handwritten notes or massive transcripts of an enormous number of videos. We’re trying to help find the needle in the haystack, to dig through those data sets in intelligent ways, using the reporters’ background knowledge assisted by an LLM or other machine learning model.
What’s an example of a story where you’ve used that technique?
There was an investigation of a group called the Election Integrity Network. Suffice it to say they were not so much interested in the integrity of elections as in making them go their way. Two reporters, Alexandra Berzon and Nick Corasaniti, had covered the group for years. About a month before election day, a source provided them with recordings of all of the group’s Zoom meetings — 500 hours of video, and there weren’t even 500 hours left until election day.
We collaborated with Nick and Alexandra in a few ways. First was transcribing the videos, which amounted to around five million words of spoken text. We then applied two related forms of AI to the transcripts themselves. One was semantic search. Instead of doing your traditional control-F keyword search for “immigrant communities” or “misinformation” — this group was not so stupid as to be saying, “We want to target immigrant communities” — these search tools let you find passages that are semantically similar, mathematically close, to what you’re looking for.
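Semantic search of this kind typically works by embedding the query and each transcript passage as numerical vectors, then ranking passages by vector similarity rather than keyword overlap. The Times hasn’t said which tools it used on these transcripts, so the snippet below is only a minimal sketch of the general technique, assuming the open-source sentence-transformers library and a few invented passages.

```python
# Minimal sketch of embedding-based semantic search over transcript passages.
# The library, model, and passages are assumptions for illustration only.
from sentence_transformers import SentenceTransformer, util

# Invented stand-ins for transcript chunks (in practice, millions of words
# would be split into passages of a few sentences each).
passages = [
    "We should focus our volunteers on neighborhoods with lots of new arrivals.",
    "The next meeting will cover poll-watcher training and carpool logistics.",
    "Flag any registrations coming from the apartments near the resettlement office.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
passage_embeddings = model.encode(passages, convert_to_tensor=True)

# The query is phrased the way a reporter thinks about the story, not the way
# the speakers phrased it; vector similarity bridges that vocabulary gap.
query = "targeting immigrant communities"
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank passages by cosine similarity to the query and print the closest matches.
hits = util.semantic_search(query_embedding, passage_embeddings, top_k=3)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {passages[hit['corpus_id']]}")
```

A query like “targeting immigrant communities” can surface passages that never use those exact words, which is the gap between keyword search and semantic search that Seward describes.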
We need your vote. Really.
Hey folks, JPP here taking a different approach on our regular sidebar by doing my best Ira Glass-during-a-pledge-drive impression.
So, here’s the thing: We produce Depth Perception every week not only to promote incredible in-depth, longform reporting by non-Long Lead journalists (and it really does deserve promotion!), but also to get the word out about our own work. All of it — our newsletters, our podcasts, our features — is free to enjoy, which is great. But now we need something from you. The quo to our quid pro, if you will.
Home of the Brave, our epic, 7-part, multimedia explainer on Los Angeles’s homeless veterans crisis, is up for a Webby Award, and we need your help to win it. It’s being considered as the year’s “Best Editorial Feature,” but we’re up against some strong competition: CNN, Cosmopolitan, NBC News, and Reuters were also nominated — and they’re beating us.
We want to win this thing, not just because our work is worthy, but more so because the cause of veteran homelessness needs more eyes on it. Winning this Webby would bring this issue to an international stage. So please, click here to vote for Home of the Brave. Also: For your vote to count, you have to confirm it via an email the Webby Awards sends you after you cast your ballot. So please dive back into your inbox to confirm!
Thanks for your support. Now back to Depth Perception.
In your hiring announcement, one of the questions asked was, “Where should [the Times] draw the red lines around where we won’t use [AI]?” Where does the red line stand right now?
We have absolutely no interest in writing the articles themselves with AI. Our bet is that as other media companies do that and the rest of the internet fills up with AI-generated slop, the fact that we can stand by our news report as human-written, -verified, and -vetted will only increase in value and become more important.
But we do allow for, and in some cases are making use of, AI drafts that are based on our published articles. There’s a team in London that has to send the version of The Morning that goes to the U.S. Their challenge is that they have to look at everything that’s published overnight, which is a huge amount. The team went through a monthslong process of refining a prompt and picking a model that could generate decent-enough drafts of one-line summaries of everything we publish. Starting with an AI-drafted summary, editing it from there, and putting it into the email is a more efficient process for them than starting from scratch.
All of it is still edited — we’d never put unedited, auto-generated copy in front of readers. The quality of the output is not there yet to even consider sending it straight into the email.
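The Times hasn’t disclosed the model or prompt the London team landed on, so what follows is only a minimal sketch of the general pattern Seward describes: a tightly constrained prompt that drafts a one-sentence summary of each overnight article, with every draft going to a human editor before it reaches the newsletter. The OpenAI Python client and the model name are assumptions for illustration.

```python
# Minimal sketch of drafting one-line article summaries for human editing.
# The client, model, and prompt are assumptions; they are not the Times' setup.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def draft_one_line_summary(article_text: str) -> str:
    """Return an AI-drafted one-sentence summary for an editor to rewrite."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize the article in one neutral sentence of at most "
                    "25 words. Do not add anything that is not in the text."
                ),
            },
            {"role": "user", "content": article_text},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content.strip()

# Every draft is a starting point for an editor, never copy that ships as-is.
```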
I imagine you’ve faced resistance from some people in the Times newsroom. How do you assuage people’s fears about the use of AI in journalism?
At the risk of sounding naive or Pollyannaish about it, simply describing — as I have been to you — what we have absolutely no interest in using AI for and how we actually are using it goes a long way. Most people see that as very much aligned with their interests as reporters who want to unlock a story they wouldn’t otherwise be able to tell.
The vibe I get at the Times is that this is a new technology, and we want to have a strong say as to how it gets used and doesn’t get used in our industry. We don’t want to repeat the platform wars of the prior decade and end up with regrets. We get to shape this. I’m obviously not suggesting everyone shares that view, but I’ve been pleasantly surprised by the portion of journalists who do.
There’s an opportunity for the Times and other outlets that take a similar approach to say, “This is the stuff you can rely on, because you can’t hold a machine to account, but you can hold these people — those faces on the byline — to account for it.” Since we’re all aligned on that principle, my experience has been that it takes a lot of the other tensions out of the room pretty quickly.
“This is a new technology, and we want to have a strong say as to how it gets used and doesn’t get used in our industry. We don’t want to repeat the platform wars of the prior decade and end up with regrets. We get to shape this.” —Zach Seward
That’s now, but could you see a time, 10 or 20 years from now, when an AI could get a New York Times byline?
No. Which is not to say that there won’t be, in 10 to 20 years — or even in one to two years — huge advancements that provide new and even more powerful tools, and probably use cases we’ve yet to discover. But the reason I say “no” to your specific question about AI getting a byline at the Times is that it feels to me like an abdication of our responsibility as journalists.
If you’re putting AI-generated copy that’s not human-reviewed and -edited in front of readers, it’s got to come with one of those disclaimers that says “This is written by AI and might be all wrong. You should go check it yourself.” To us, putting the burden of verification back on the reader — abdicating our responsibility as journalists — would undermine the whole reason you would come to the Times for our report in the first place.
The Los Angeles Times recently launched an AI-powered Insights feature, which became the news itself when it downplayed the KKK’s racist history. What’s the New York Times doing to avoid gaffes like that?
We’ve really drawn a line thus far in not putting any auto-generated copy in front of readers without a lot of editing and review. The LA Times example sort of inherently assumes that the AI tool is more trustworthy or less biased than the writers of the piece in the first place. So forget the AI part; it’s in tension with some more fundamental journalistic values. That particular product suffers from starting with AI instead of starting with “Why?”
The Times Co. and others are suing OpenAI and its largest investor, Microsoft, for copyright infringement. As someone who’s intimately familiar with AI, what’s your take on the suit?
I started the job in late December 2023, right before the lawsuit was filed. I found out about the lawsuit the way I assume you did, as a push notification from the Times app. I think the Times has been very clear that we are both defending our intellectual property rights in court and making use of this technology, as we have every other technology that came before it when it’s useful for our mission and goals. My job is entirely the latter, the “How can we actually use this?” We’re not going to tie our hands behind our back just because there’s also a lawsuit.
What gives you hope for the future of journalism, and how does AI play a role there?
What gets me most excited these days is where we and other news organizations are seeing totally new uses of AI to assist with reporting, like the Election Integrity Network story. In a lot of those cases, we’re talking about stories that just could not be told without the assistance of AI. My gut instinct, and a bet we’re making at the Times, is that this is just the tip of the iceberg, that there’s a quite large universe of stories hidden in huge, unstructured data sets, and that opens up a totally new area for investigations — and ultimately getting the truth to people and holding power to account.
Further reading from Zach Seward
“AI news that’s fit to print” (ZachSeward.com, March 11, 2024)
“Creating structure with generative AI” (ZachSeward.com, April 12, 2024)
“AI is not like you and me” (ZachSeward.com, May 2, 2024)
“Should ChatGPT make me feel better or worse about failing out of Harvard?” (ZachSeward.com, July 18, 2023)
“Keeping score” (ZachSeward.com, Aug. 7, 2023)