Following a report by The Wall Street Journal claiming that OpenAI has been developing a tool that can detect essays written by ChatGPT with a high degree of accuracy, the company has shared a bit of information about its research into text watermarking and why it hasn’t published its detection method. According to the Journal’s report, debate over whether the tool should be made public has prevented it from seeing the light of day, despite it being “ready.” In an update published Sunday to a May blog post spotted by TechCrunch, OpenAI said: “Our teams have developed a method of text watermarking that we continue to consider while investigating alternatives.”
The company said watermarking is one of multiple solutions, including classifiers and metadata, that it has studied as part of “extensive research in the area of text provenance.” According to OpenAI, watermarking “has been highly accurate” in some situations, but doesn’t perform as well when faced with certain forms of tampering, “such as using translation systems, rewording with another generative model, or asking the model to insert a special character in between every word and then deleting that character.” Text watermarking could also “disproportionately impact some groups,” OpenAI wrote. “For example, it could stigmatize use of AI as a useful writing tool for non-native English speakers.”
According to the blog post, OpenAI has been weighing these risks. The company also wrote that it has prioritized the release of authentication tools for audiovisual content. In a statement to TechCrunch, an OpenAI spokesperson said the company is taking a “deliberate approach” to text provenance because of “the complexities involved and its likely impact on the broader ecosystem beyond OpenAI.”