OpenAI works on a solution to detect text generated by ChatGPT


ChatGPT is an artificial intelligence chatbot prototype

News broke that OpenAI is developing software that can detect whether a text was generated by its ChatGPT model, shortly after New York City education officials announced they would block student access to the tool in public schools.

Various reports of students using AI to do their homework have made teachers rethink how the technology affects education.

Some have expressed concern that language models may allow students to cheat.

In response, OpenAI announced that it is working on "mitigations" to help people detect text automatically generated by ChatGPT.

“We have made ChatGPT available as a research preview to learn from real-world use, which we believe is an essential part of developing and deploying capable and secure AI systems. We are constantly incorporating feedback and lessons learned," a company spokesperson said.

According to testimonials from university professors, students rely on ChatGPT to do their homework, especially essays.

"Academia didn't see it coming. It caught us by surprise," says Darren Hudson Hick, an assistant professor of philosophy at Furman University.

"I reported it on Facebook, and my [teacher] friends were like, 'yeah! I caught one too,'" he added. Earlier this month, Hick reportedly asked his class to write a 500-word take-home essay on the 18th-century Scottish philosopher David Hume and the paradox of horror, which examines how people can derive pleasure from something they fear. According to the philosophy professor, one of the essays he received had features that flagged the use of AI in the student's "rudimentary" response. Hick explains that a trained eye can detect this.

Being able to distinguish whether a piece of writing was produced by a human or a machine will change how these tools can be used in academia. Schools could enforce bans on AI-generated essays more effectively.

Yes, generative language models can be good, but they don't know what they're talking about.

As impressive as AI-generated writing is, with headlines about universities and schools banning machine-written papers, it's worth remembering that these models lack the understanding behind actual human writing.

OpenAI has been impressing the Internet with its efforts to replicate human intelligence and artistic abilities since 2015. But last November, the company finally went mega viral with the launch of the AI text generator ChatGPT. Users of the beta tool posted examples of AI-generated responses to prompts that seemed so legitimate that they struck fear into the hearts of teachers and even made Google fear the tool could kill off its search business.

If OpenAI engineers can create a bot that can type as well or better than the average human, it stands to reason that they can also create a bot that is better than the average human at detecting whether text was generated by AI.

While OpenAI works on its own solution, at least three detection tools are already available:

GPT-2 Output Detector

The online demo of the GPT-2 output detector model lets you paste text into a box and immediately see the probability that it was written by AI. According to OpenAI's research, the tool has a relatively high detection rate, but "needs to be combined with approaches based on metadata, human judgment, and public education to be most effective."


GLTR

When OpenAI released GPT-2 in 2019, the folks at the MIT-IBM Watson AI Lab and the Harvard Natural Language Processing Group teamed up to create GLTR, an algorithm that tries to detect whether text was written by a bot.

Computer-generated text may appear to be written by a human, but a human writer is more likely to choose unpredictable words. Using the "takes one to know one" method, if the GLTR algorithm can predict the next word in a sentence, it assumes the sentence was written by a bot.
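The idea can be illustrated with a toy sketch. GLTR itself ranks each word against a real language model's predictions; the bigram model and function names below are illustrative assumptions, not GLTR's actual implementation, but they capture the intuition that machine text leans on highly predictable next words.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count next-word frequencies for each word in a reference corpus."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predictability(model, text):
    """Fraction of words that rank among the model's top predictions
    for their preceding word. Higher = more 'machine-like' under the
    GLTR intuition that generators pick high-probability words."""
    words = text.lower().split()
    hits, total = 0, 0
    for prev, nxt in zip(words, words[1:]):
        if prev in model:
            total += 1
            top = [w for w, _ in model[prev].most_common(3)]
            if nxt in top:
                hits += 1
    return hits / total if total else 0.0
```

A sentence that closely follows the reference corpus's word patterns scores near 1.0, while text full of surprising word choices scores near 0.0.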


GPTZero

Over the Christmas season, Edward Tian was busy creating GPTZero, an app that can help determine whether text was written by a human or a bot. As a Princeton student, Tian understands why university professors have a vested interest in detecting AI-assisted plagiarism.

Tian says his tool measures the randomness of sentences ("perplexity") plus the variation in that randomness across sentences ("burstiness") to estimate the probability that a text was written by ChatGPT. Since he tweeted about GPTZero on January 2, Tian says he has already been approached by venture capitalists wanting to invest, and he will release updated versions soon.
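The burstiness idea can be sketched in a few lines. This is a conceptual illustration, not GPTZero's actual code: real perplexity comes from a language model, so the unique-word ratio below is a stand-in scoring function. The key point is that human writing mixes simple and complex sentences, so per-sentence scores vary more.

```python
import statistics

def sentence_scores(text, score_fn):
    """Split text into sentences and apply a per-sentence score."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return [score_fn(s) for s in sentences]

def burstiness(scores):
    """Variance of per-sentence scores; higher suggests human writing."""
    return statistics.pvariance(scores) if len(scores) > 1 else 0.0

def lexical_variety(sentence):
    """Toy stand-in for model perplexity: ratio of unique words."""
    words = sentence.lower().split()
    return len(set(words)) / len(words) if words else 0.0
```

Text whose sentences all score alike (low burstiness) would look more machine-generated under this heuristic than text whose sentence scores swing widely.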

