The Wall Street Journal, citing an internal source at OpenAI, reports that the company has a project to build a tool that detects whether an article or research paper was written with ChatGPT. The project has been in development and under internal discussion for about two years, and the tool has been ready for use for about a year, but OpenAI plans to release it only when it sees fit.
Some may wonder why OpenAI doesn't release such a seemingly beneficial tool. The report suggests that opinions within OpenAI are divided: some argue that, for the sake of transparency in AI, the tool should be made public, while others point out that if it is released, up to one in three users who rely on ChatGPT for homework or reports might stop using the service immediately, shrinking the user base.
The technology OpenAI uses is called text watermarking, an approach better known for detecting AI-generated images but applied here to text. Internal documents claim an accuracy of 99.9% when the text written with ChatGPT contains a sufficient number of words. Google has built a similar tool, SynthID, for images.
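OpenAI has not published how its watermark works. A common approach in the research literature (the "green list" scheme of Kirchenbauer et al.) biases the model's sampler toward a pseudo-random subset of tokens at each step; a detector that knows the secret hashing scheme then checks whether a text contains statistically too many of those tokens. The sketch below is a toy illustration of that idea, not OpenAI's actual method; every name, parameter, and threshold here is hypothetical.

```python
import hashlib
import math
import random

GREEN_FRACTION = 0.5  # fraction of the vocabulary favored at each step (assumed)

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign `token` to the 'green' list, seeded by the
    previous token, so the split looks random to readers but is
    reproducible by anyone holding the hashing scheme."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 255.0 < GREEN_FRACTION

def detect_watermark(tokens: list[str], threshold: float = 4.0) -> tuple[float, bool]:
    """Count green tokens and compute a z-score against the ~50% rate
    expected in unwatermarked text. A large positive z-score means the
    sampler was likely biased toward green tokens, i.e. watermarked."""
    n = len(tokens) - 1
    if n <= 0:
        return 0.0, False
    greens = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    expected = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    z = (greens - expected) / std
    return z, z > threshold

# Toy demo: a "model" that prefers green tokens 80% of the time.
random.seed(0)
vocab = [f"w{i}" for i in range(1000)]

def sample_watermarked(length: int, bias: float = 0.8) -> list[str]:
    out = [random.choice(vocab)]
    for _ in range(length - 1):
        candidates = [random.choice(vocab) for _ in range(20)]
        greens = [t for t in candidates if is_green(out[-1], t)]
        if greens and random.random() < bias:
            out.append(random.choice(greens))
        else:
            out.append(random.choice(candidates))
    return out

z, flagged = detect_watermark(sample_watermarked(500))
print(f"z = {z:.1f}, watermarked: {flagged}")  # high z-score, flagged as watermarked
```

This also shows why the reported 99.9% figure comes with a length caveat: the detector accumulates statistical evidence token by token, so short passages simply don't contain enough tokens to push the z-score past the threshold.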
An OpenAI representative further explains that the company's concern is that the tool could disproportionately affect certain user groups, such as non-native English speakers. The company is therefore considering other watermark detection methods that achieve the desired results with minimal impact on users.
TLDR: OpenAI has built a tool to detect whether articles were written using ChatGPT, but is hesitant to release it due to potential effects on user retention and divided opinions within the company.