Opinions

EDITORIAL: OpenAI's ChatGPT raises concerns over academic integrity, authenticity

As widespread use of ChatGPT begins, it is important to consider the consequences for both students and teachers

ChatGPT offers impressive yet imperfect human-like responses that could contribute to a widespread lack of motivation to create original work.  – Photo by @__rabbithole__ / Twitter

A popular new pastime has arisen among college students. Many Rutgers students have been testing out the seemingly endless capabilities of ChatGPT, a revolutionary artificial intelligence (AI) chatbot created by OpenAI, which is headed by CEO Sam Altman. For those unfamiliar with ChatGPT's abilities, it essentially takes human language that users plug into its system and produces human-like responses via written text. 

Its impressive responses are thanks to the enormous bank of text it has access to from the internet. But because it has a fixed amount of knowledge to work with, its awareness of current events extends only through 2021. Overall, its capabilities include generating code, stories, advice, articles and more.

To illustrate its capabilities, a Science Focus article uses a specific example. If you were to tell ChatGPT to write an article on quantum mechanics, it could generate a "well-written" article in "seconds." Meanwhile, a human being "could spend hours researching, understanding and writing."

Even though ChatGPT has its limitations, especially when it comes to very specific, complex requests and recent events, its capabilities raise many concerns about AI. 

One concern has to do with academic integrity. It is well known that other online platforms, such as Chegg, Quizlet and Homeworkify, have been used to cheat in school.

But ChatGPT has them beat. Not only does it have no paywall, but it also provides an answer in mere seconds. Other platforms are used mainly for multiple-choice questions and for subjects like mathematics, where there is one clear-cut numerical answer to a question. And because those platforms rely on users to upload answers, there is no guarantee they will have the answer you are looking for.

ChatGPT has a huge bank of information to work with and can write essays as well. While it has always been possible to pay someone to write an essay for you, ChatGPT works at a speed that a human being could never compete with.

An article from The Atlantic discusses a tweet by Kevin Bryan, an associate professor at the University of Toronto, who said, "You can no longer give take-home exams/homework ... Even on specific questions that involve combining knowledge across domains, the OpenAI chat is frankly better than the average MBA at this point. It is frankly amazing."

This raises significant questions about classroom dynamics. Will all essay writing have to occur in the classroom? Will this new software be able to identify when an essay has been written with AI? What does this mean for plagiarism?

ChatGPT exacerbates an existing issue of trust between teachers and students. Even before this AI technology, it was common for lockdown browsers and cameras to be used during online at-home tests to closely monitor students. 

ChatGPT increases the burden on both professors and students. Professors will now question if essays have been written by students or by AI technology. Students will wonder how they are supposed to prove that their essay was written from their own thoughts and not some chat function on the internet. 

And this introduces another piece to this complicated puzzle. What will this mean for authenticity? If people have the ability to write stories and articles via AI, what will motivate them to take the time to create their own art?

It is hard to resist an easy solution that can provide a satisfying answer in mere seconds. If people begin to rely on this technology to think for them, it is possible that, on a widespread scale, people will not develop the skills necessary to properly express themselves. 

Even though some people may see this technology as a tool for humans to use, it is hard not to see it as a replacement for human capabilities. If humans feel replaced, this could lead to widespread discouragement. If people are not motivated to create, there could be a significant decrease in the production of human-made art and literature, an invaluable aspect of life as we know it.

We can already see excessive use of ChatGPT. In just five days after launch, the site accumulated more than 1 million users, and many experienced technical difficulties due to high demand. ChatGPT is extremely accessible, as there is no paywall to use it. 

Even though ChatGPT is impressive and accessible, it is important to emphasize that it is not perfect. On its website, OpenAI acknowledges that ChatGPT can write "plausible-sounding but incorrect or nonsensical answers," "overuse certain phrases," guess the user's intent when faced with "an ambiguous query" instead of asking for clarification and even engage with violent or inappropriate requests, though OpenAI's Moderation API tries to block this kind of content.

Some may see these flaws as comforting, in a way. This technology is not perfect, so AI cannot take over our lives completely — yet. So what happens when the flaws get corrected? As AI developers chase perfection, there is a chance it could go too far, and the consequences are too complex to even imagine. Is it unreasonable to feel that ChatGPT is straight out of science fiction? 

Regardless, it is important for AI developers to consider how this technology may become more of a threat than a tool to students in the classroom and to people on a global scale.


The Daily Targum's editorials represent the views of the majority of the 154th editorial board. Columns, cartoons and letters do not necessarily reflect the views of the Targum Publishing Company or its staff.

