An anonymous reader shared this report from the Washington Post:
Brian Hood is a whistleblower who was praised for "showing tremendous courage" when he helped expose a worldwide bribery scandal linked to Australia's National Reserve Bank. But if you ask ChatGPT about his role in the scandal, you get the opposite version of events. Rather than heralding Hood's whistleblowing role, ChatGPT falsely states that Hood himself was convicted of paying bribes to foreign officials, had pleaded guilty to bribery and corruption, and been sentenced to prison.
When Hood found out, he was shocked. Hood, who is now mayor of Hepburn Shire near Melbourne in Australia, said he plans to sue the company behind ChatGPT for telling lies about him, in what could be the first defamation suit of its kind against the artificial intelligence chatbot.... "There's never, ever been a suggestion anywhere that I was ever complicit in anything, so this machine has completely created this thing from scratch," Hood said — confirming his intention to file a defamation suit against ChatGPT. "There needs to be proper control and regulation over so-called artificial intelligence, because people are relying on them...."
If it proceeds, Hood's lawsuit would be the first time someone has filed a defamation suit over ChatGPT's content, according to Reuters. If it reaches the courts, the case would test uncharted legal waters, forcing judges to consider whether the operators of an artificial intelligence bot can be held liable for its allegedly defamatory statements.
The article notes that ChatGPT prominently warns users that it "may occasionally generate incorrect information." And another Post article notes that all the major chatbots now include disclaimers, "such as Bard's fine-print message below each query: 'Bard may display inaccurate or offensive information that doesn't represent Google's views.'"
But the Post also notes that ChatGPT still "invented a fake sexual harassment story involving a real law professor, Jonathan Turley — citing a Washington Post article that didn't exist as its evidence." Long-time Slashdot reader schwit1 tipped us off to that story. But here's what happened when the Washington Post sought accountability for the error:
In a statement, OpenAI spokesperson Niko Felix said, "When users sign up for ChatGPT, we strive to be as transparent as possible that it may not always generate accurate answers. Improving factual accuracy is a significant focus for us, and we are making progress...." Katy Asher, senior communications director at Microsoft, said the company is taking steps to ensure search results are safe and accurate. "We have developed a safety system including content filtering, operational monitoring, and abuse detection to provide a safe search experience for our users," Asher said in a statement, adding that "users are also provided with explicit notice that they are interacting with an AI system."
But it remains unclear who is responsible when artificial intelligence generates or spreads inaccurate information. From a legal perspective, "we just don't know" how judges might rule when someone tries to sue the makers of an AI chatbot over something it says, said Jeff Kosseff, a professor at the Naval Academy and expert on online speech. "We haven't had anything like this before."