News organizations are grappling with how to use AI responsibly while preserving public trust.
We are still at the beginning of the artificial intelligence revolution, and the public is looking for clear guidance on how newsrooms are using AI to report the news. But the reality is that most news organizations are still developing their policies, and few have fully resolved these complex questions. That may not provide the certainty many news consumers are looking for, but it generally reflects the current state of the industry.
Part of the challenge is that AI technology is advancing far faster than newsrooms are accustomed to moving. Journalistic standards and policies are typically shaped over decades, and the need to produce quick, improvised answers to AI-related questions is unsettling for many journalists. Yet newsrooms are being forced to adapt in real time, balancing the pressure to innovate with the responsibility to maintain public trust.
As a result, many organizations are adopting interim guidelines rather than final rules, testing how AI can be used while trying to minimize risks to accuracy, fairness, and credibility. At a time when public trust in the media is already strained, AI represents yet another disruptive force complicating an already fragile relationship with audiences.
There is no doubt that AI is a powerful tool that can support the reporting process, particularly when it comes to analyzing large volumes of documents, identifying patterns in data, and speeding up routine tasks such as transcription and translation. These capabilities can free journalists to focus on the core elements of their work: investigation, verification, and storytelling.
At the same time, most newsrooms draw a firm line: AI should not be used to write stories independently. Editors and reporters remain responsible for every word that is published, and human oversight is required at every stage. This reflects a broader concern that AI systems can produce convincing but inaccurate or misleading information, making careful review essential.
It’s a point I emphasize often to my journalism students at Fresno State. For example, I once asked ChatGPT to write a feature story about me. The piece struck a positive tone, but it also contained several clear factual errors. Most notably, it claimed I had died in 2018, which was news to me. I usually tell my students that, apparently, they’re being taught by my zombie form.
That story gave my students a good chuckle, but it also drove home a critical lesson: AI can sound authoritative, but it cannot be trusted without verification.
At Fresno State, we are still developing AI policies because their impact varies across academic disciplines. I can imagine how difficult it would be for an English professor assigning a paper on Jane Austen’s “Pride and Prejudice,” unsure whether students might use AI to write it.
In journalism, however, the issue is much more straightforward. News stories written for class must be based on direct interviews with sources in the community.
I allow my students to use AI to generate story ideas and identify potential sources, but not to write their stories. All reporting must be conducted through direct interviews with news sources, reflecting the ethical standards and integrity essential to our profession.
Students are also required to provide contact information for every source they interview, and I may occasionally reach out to verify their reporting. I’ve also found that I can often tell when a story has been written by ChatGPT, due to its overly formal writing style and noticeable overuse of em dashes and semicolons.
I hope my approach ensures that journalism students not only learn to use AI responsibly but also uphold the trust and accountability that are central to quality journalism. It would be irresponsible for me, as their journalism instructor, to ban the use of AI, because students need to understand not only how these systems work but also how to use them ethically.
The growing presence of AI in today’s media landscape has intensified media literacy challenges, making the work of the many groups that teach media literacy in our schools and communities more important than ever. Please support their vital efforts with your generous contributions.
We are just at the front end of this technological revolution, and AI will likely become an everyday part of society. Preparing students now ensures they are not only competent users of AI tools but also responsible journalists who can navigate the challenges and opportunities of this evolving field.