There’s currently a frenzy over the artificial intelligence application ChatGPT and whether it will create a new generation of plagiarists and increase the amount of misinformation going around the internet. I decided to test the application over the past several weeks, and I’ve been impressed with its ability to quickly create papers on just about any subject possible.
I also found that while it offered me well-written reports, they came with factual errors. That shouldn’t be surprising. After all, the language model created by OpenAI was trained on information already published on the internet, and some of that information is wrong. It also confuses different people who share a name, which can attribute awards and other accomplishments to the wrong person. It can even report that living people are dead.
Here’s one example. In a prompt, I asked it to write about me, and gave it instructions to focus on “Jim Boren, who runs the Institute for Media and Public Trust at Fresno State.” In a few seconds, ChatGPT created an impressive profile of me. It also said I had passed away in 2018, credited me with awards I hadn’t won, and got the names of awards I had earned wrong.
In my next prompt, I said I was still alive and corrected its answer on my awards. ChatGPT apologized for the errors. It thankfully dropped the reference to my dying, although I’m sure someone with the same name died in 2018. But a few days later, it again said I had died. I have asked it about other subjects that I have personal knowledge of, and found that errors persist. All that said, the text is fairly well written and could fool readers who don’t do their own fact-checking.
OpenAI acknowledges that ChatGPT can be fact-challenged: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers,” the company says on its site.
“Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.”
OpenAI also admits that ChatGPT can be “excessively verbose” and says it overuses some phrases. Those aren’t its biggest problems.
Using ChatGPT can get addictive, and I have stayed up late several nights giving it prompts about all sorts of issues. I’ve learned a lot about the application, and I’m especially concerned about its role in spreading misinformation and disinformation on the internet. This piece in The Guardian lays out the concern succinctly: “A platform that can mimic humans’ writing with no commitment to the truth is a gift for those who benefit from disinformation.”
On its site, ChatGPT admits that it “may occasionally produce harmful instructions or biased content.” Those words should be chilling to those concerned about truthfulness and how the internet can cause harm to people.
And because I also teach at Fresno State, I worry about how it will change education. That prompted me to ask ChatGPT whether college or high school students could use the application to plagiarize assignments. Here’s its response:
“No, it would not be appropriate or ethical for college students to use ChatGPT or any other AI language model to plagiarize assignments. Plagiarism, or the act of presenting someone else’s work or ideas as one’s own, is a serious academic offense that can result in disciplinary action or even expulsion from school.
“While ChatGPT can generate text based on prompts and provide information on various topics, it is not intended to be used for unethical or illegal purposes such as plagiarism. It is important for students to conduct their own research, cite their sources appropriately, and develop their own ideas and arguments in order to produce original work and avoid any ethical or legal issues related to plagiarism.”
Now that’s a politically correct answer. Whether students should use ChatGPT to write their papers and whether they will are two entirely different questions. I suspect the folks at OpenAI spent a lot of time crafting a response to this obvious question from their ChatGPT users.
Schools and colleges definitely have to take into account how homework assignments and projects are being created in this ever-changing technology era. Cheating is a problem in education, but instead of banning the use of artificial intelligence, educators must learn how to use it to improve their teaching, and to spot when students misuse it. This example from Kevin Roose’s column on the issue in the New York Times shows how one teacher used it to help prepare class material.
“Jon Gold, an eighth grade history teacher at Moses Brown School, a pre-K through 12th grade Quaker school in Providence, R.I., said that he had experimented with using ChatGPT to generate quizzes. He fed the bot an article about Ukraine, for example, and asked it to generate 10 multiple-choice questions that could be used to test students’ understanding of the article. (Of those 10 questions, he said, six were usable.)
“Ultimately, Mr. Gold said, ChatGPT wasn’t a threat to student learning as long as teachers paired it with substantive, in-class discussions.”
Roose writes that students graduating today will enter a world where artificial intelligence programs are the norm.
“They’ll need to know their way around these tools — their strengths and weaknesses, their hallmarks and blind spots — in order to work alongside them. To be good citizens, they’ll need hands-on experience to understand how this type of A.I. works, what types of bias it contains, and how it can be misused and weaponized.”
All this still raises the crucial question of how educators assess student learning at a time when students may be looking to technology for shortcuts. It won’t be easy, but instead of trying to ban artificial intelligence applications, educators must become experts in the technology and move beyond their fears that it will be the end of education as we know it. I have no doubt that smart teachers will find the answer.
Because we are just at the beginning of this AI revolution, we can’t yet know all the impacts and unintended consequences of the growing number of applications. There will be many, of course, and they will severely disrupt the way we have done business in the past.