A law professor has warned that OpenAI's ChatGPT bot has ushered in a new era of misinformation. Criminal defense attorney Jonathan Turley raised alarms about the potential hazards of artificial intelligence after the chatbot falsely accused him of sexually harassing a student.
He revealed the troubling accusation in a series of tweets that gained substantial traction online, as well as in a critical op-ed. Turley, who teaches law at George Washington University, described the fabricated claims as "chilling."
"It created a false narrative alleging I was part of a faculty at a place where I have never worked, claimed that I went on a trip I never took, and reported an allegation that was completely untrue," he told The Post. "It's deeply ironic, considering that I have been addressing the threats AI poses to free speech."
The 61-year-old legal scholar learned about the chatbot's erroneous assertion when UCLA professor Eugene Volokh sent him a message. Volokh said he had asked ChatGPT for "five instances" of "sexual harassment" cases involving professors at U.S. law schools, including "quotations from relevant newspaper articles."
Among the cases ChatGPT cited was a supposed 2018 incident in which Georgetown University Law Center professor Turley was accused of sexual harassment by a former female student.
ChatGPT referenced a fictitious article from The Washington Post, which claimed: "The complaint accuses Turley of making 'sexually suggestive remarks' and 'attempting to touch her in a sexual manner' during a law school-sponsored trip to Alaska."
Turley remarked that there were "numerous clear indications that the narrative is untrue."
"First of all, I have never served on the faculty at Georgetown University," the astonished legal expert stated. "Secondly, there is no such article from The Washington Post."
He added, "Finally, and most importantly, I have never taken students on any sort of trip during my 35 years of teaching; I never visited Alaska with any student, and I have never been accused of sexual harassment or assault."
"We need to assess the ramifications of AI on free speech and related issues, including defamation. There is an immediate need for legislative measures," Turley stated to The Post regarding his advocacy.
Turley mentioned to The Post, "ChatGPT has yet to reach out to me or offer an apology. It has chosen not to comment at all. That is precisely the issue. There is no one there. When you are defamed by a newspaper, you can contact a reporter. Even when Microsoft's AI tool echoed that same false narrative, it did not reach out to me and simply stated it tries to be accurate."
ChatGPT wasn't the only AI bot involved in defaming Turley. A Washington Post investigation found that Microsoft's Bing Chatbot, which is powered by OpenAI's GPT-4 technology, also repeated the unfounded allegations before ultimately exonerating the lawyer.
Why ChatGPT generated the false claims about Turley remains unknown, but he posits that "AI algorithms are just as biased and flawed as the individuals who create them."
In January, ChatGPT, which has come to be regarded as more "human-like" than earlier chatbots, drew backlash for seemingly displaying a "woke" ideological bias. Some users noted that the bot would make jokes about men but deemed similar jokes about women "derogatory or demeaning." Likewise, it had no qualms about making jokes about Jesus, while jokes concerning Allah were considered inappropriate.
The bot has at times deliberately spread outright falsehoods. Last month, GPT-4 tricked a human into believing it was blind in order to get help passing an online CAPTCHA test designed to determine whether a user is human.
Although people are frequently to blame for spreading false information, Turley contends that ChatGPT's veneer of "objectivity" allows it to do so with impunity. This is especially troubling because ChatGPT is already used in a variety of fields, including academia, healthcare, and even the legal system. Just last month, a judge in India made news when he asked the AI whether a defendant in a murder and assault case should be granted bail.