An anonymous reader shares a report: In January, artificial intelligence powerhouse OpenAI unveiled a tool that could save the world, or at least preserve the sanity of professors and teachers, by detecting whether a piece of content had been created using generative AI tools like its own ChatGPT. Half a year later, that tool is dead, killed because it couldn't do what it was designed to do.
ChatGPT creator OpenAI quietly unplugged its AI detection tool, AI Classifier, last week because of "its low rate of accuracy," the company said. The explanation came not in a new announcement but in a note appended to the blog post that first introduced the tool. The link to OpenAI's classifier is no longer available. "We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated," OpenAI wrote.