By Lucca Lorenzi, guest author

It has become common vernacular to call an image “photoshopped” if it has been altered or edited, even when the edits were not made with Adobe’s software; Photoshop has long been synonymous with image editing. Now, however, artificial intelligence (A.I.) software is beginning to change the way photos are edited, both in Photoshop and in other programs.

This technology also raises substantial ethical and legal questions that will have to be addressed as A.I.-edited images become more widespread. The sooner we begin the debate on these issues, the better.

In May, Adobe launched its generative A.I. software, Firefly, inside its Photoshop (Beta) program. Firefly powers a Generative Fill tool that lets users type a prompt for a selected area of an image; the A.I. then presents three edited versions that fulfill the prompt.
I tested the Firefly software using an image of a kangaroo downloaded from Unsplash, a website that provides free stock images.

Using the lasso tool, I selected the background of the photo and typed my prompt into the Generative Fill box: “A street in San Francisco during sunset in the summertime.”

In a matter of seconds, the software provided three options to choose from. While not perfect, the results it generated in response to my specific prompt are impressive.

The photo on the left is the original photo of the kangaroo. The photo on the right is one of the three options that Photoshop’s software generated for my prompt.

In addition to Adobe Firefly, other A.I. image generators have launched within the last year, including DALL-E (a name inspired by the combination of Salvador Dalí and Pixar’s WALL-E), Canva’s Text to Image, Midjourney, Let’s Enhance, Deep Dream Generator, Artbreeder and Waifu Labs.

Each of these programs has demonstrated the ability to generate convincingly real photos, and with that ability come several ethical questions. Critics have expressed concern that A.I. image generators violate both privacy and copyright by drawing on others’ likenesses and intellectual property without approval or attribution. For example, one can easily erase the Getty Images watermark using Adobe Photoshop’s Generative Fill tool.

In addition to plagiarism, researchers worry that these systems might produce images that reinforce racial and gender stereotypes. The University of Texas at Austin’s Center for Media Engagement published a case study titled “The Ethics of AI Art,” which explores the consequences of the coding and algorithms behind A.I.-generated art.

The case study states, “One journalist said, ‘Ask Dall-E for a nurse, and it will produce women. Ask it for a lawyer, it will produce men’ (Hern, 2022). This is partly due to the web servers that provide the program with learning material.” It also notes Congress’ lagging status in the race to regulate A.I., and cites Dr. Eduardo Navas, an associate research professor at Pennsylvania State University who studies DALL-E 2.

It states that, according to Dr. Navas, “the source material for the machine learning model isn’t owned by OpenAI either,” and that with “1.5 million users generating more than two million images every day,” intellectual property laws will need to be reconsidered for A.I. art.

These A.I. generators pose further threats to the public’s trust in the media. Shortly after former President Donald Trump’s indictment, a Midjourney user created fake images of Trump’s arrest. The images flooded social media platforms and convinced many viewers that the fabricated scenes were real. Even as A.I. creates new ethical conundrums, big tech companies such as Microsoft, Meta, Google and Twitter are laying off staff who previously worked to oversee ethical A.I. production.

In a statement about its newly debuted Firefly software, Adobe responded to some of these ethical quandaries: “Generative Fill supports Content Credentials, serving an essential role in ensuring people know whether a piece of content was created by a human, A.I.-generated or A.I.-edited. Content Credentials are like ‘nutrition labels’ for digital content and remain associated with content wherever it is used, published or stored, enabling proper attribution and helping consumers make informed decisions about digital content.” While this is an effective step toward preventing the spread of disinformation via A.I.-generated photos, consumers will need to learn how to identify and interpret Adobe’s Content Credentials.

In a Washington Post article titled “AI can now create any image in seconds, bringing wonder and danger,” Nitasha Tiku quotes Wael Abd-Almageed, a professor at the University of Southern California’s school of engineering. Historically, people trust what they see, Abd-Almageed said. “Once the line between truth and fake is eroded, everything will become fake,” he said. “We will not be able to believe anything.”

The line separating truth from fabrication is undoubtedly threatened by these image-generating programs. Unfortunately, A.I.’s dubious ethical boundaries, questions of authorship, reliance on stereotypes, and lack of federal regulation only add to the dilemmas facing the effort to re-establish the public’s trust in the media.

This commentary was written by Lucca Lorenzi, a 2023 Fresno State graduate and the Dean’s Medalist for the College of Arts and Humanities at Fresno State. Lucca is working this summer as an assistant for the Institute for Media and Public Trust.