Ex-Trump attorney admits using AI-generated legal cases in court filing

Michael Cohen says he mistakenly believed Google Bard was a "super-charged search engine" unlike ChatGPT

By Web Desk
Michael Cohen, the former personal attorney of US president Donald Trump, is sworn in to testify before a House Committee on Oversight and Reform hearing on Capitol Hill in Washington, US, February 27, 2019.

Michael Cohen, former United States president Donald Trump's personal attorney, has admitted that he accidentally cited fake legal cases generated by Google's AI chatbot, Bard, in a court filing.

The made-up citations appeared in a motion seeking an early end to the court-ordered supervision imposed after Cohen's release from prison in 2021, according to Axios.

Cohen claimed in a November motion that he had served his time in prison and complied with his release terms. However, a federal judge questioned the three case citations used in the motion, stating that "as far as the Court can tell, none of these cases exist."

The judge ordered Cohen's lawyer to provide copies of the three decisions or explain how they came to be cited and what role Cohen played in the motion.

Cohen's lawyer, Danya Perry, revealed that Cohen had used Google Bard to conduct "open-source research" for the motion, but that another of his lawyers, David Schwartz, included the case citations in the filing without verifying them, according to an unsealed court filing.

Schwartz acknowledged in a court letter that he had not thoroughly checked the citations, adding that he thought the citations originated with Perry and that he would have reviewed them if he had known they originated with Cohen, ABC News reported.

Cohen, a non-practising lawyer, admitted to not keeping up with legal technology trends and risks, saying he mistakenly believed Google Bard to be a "super-charged search engine" and did not realise that, like ChatGPT, it could fabricate citations.

"It did not occur to me then — and remains surprising to me now — that Mr Schwartz would drop the cases into his submission wholesale without even confirming that they existed," Cohen added.

This isn't the first case to highlight the risks of using AI for legal research.

Two New York lawyers were sanctioned earlier this year for submitting a legal brief citing six fake cases generated by ChatGPT in a lawsuit against Avianca airline.

In a sworn declaration, one of the lawyers admitted using ChatGPT for research but expressed that he "greatly regrets" his actions.