New York Court Grapples With How the Rules of Evidence Should Handle Evidence Generated by Artificial Intelligence
How should courts deal with the admission of evidence generated by artificial intelligence (AI)? It’s a question that courts increasingly will have to consider in the coming years. An early example of how one court has answered it can be found in the recent opinion of the Surrogate’s Court, New York, Saratoga County, in Matter of Weber as Trustee of Michael S. Weber Trust, 2024 WL 4471664 (2024).
In Weber, the trustee of a trust for the benefit of the decedent’s son petitioned for judicial settlement of the interim trust account. One of the witnesses in the case “relied on Microsoft Copilot, a large language model generative artificial intelligence chatbot, in cross-checking his [damages] calculations.” In determining how such evidence should be treated under the rules of evidence, the court ruled as follows:
The use of artificial intelligence is a rapidly growing reality across many industries. The mere fact that artificial intelligence has played a role, which continues to expand in our everyday lives, does not make the results generated by artificial intelligence admissible in Court. Recent decisions show that Courts have recognized that due process issues can arise when decisions are made by a software program, rather than by, or at the direction of, the analyst, especially in the use of cutting-edge technology (People v. Wakefield, 175 A.D.3d 158, 107 N.Y.S.3d 487 [3d Dept. 2019]). The Court of Appeals has found that certain industry specific artificial intelligence technology is generally accepted (People v. Wakefield, 38 N.Y.3d 367, 174 N.Y.S.3d 312, 195 N.E.3d 19 [2022] [allowing artificial intelligence assisted software analysis of DNA in a criminal case]). However, Wakefield involved a full Frye hearing that included expert testimony that explained the mathematical formulas, the processes involved, and the peer-reviewed published articles in scientific journals. In the instant case, the record is devoid of any evidence as to the reliability of Microsoft Copilot in general, let alone as it relates to how it was applied here. Without more, the Court cannot blindly accept as accurate, calculations which are performed by artificial intelligence. As such, the Court makes the following findings with regard to the use of artificial intelligence in evidence sought to be admitted.
In reviewing cases and court practice rules from across the country, the Court finds that “Artificial Intelligence” (“A.I.”) is properly defined as being any technology that uses machine learning, natural language processing, or any other computational mechanism to simulate human intelligence, including document generation, evidence creation or analysis, and legal research, and/or the capability of computer systems or algorithms to imitate intelligent human behavior. The Court further finds that A.I. can be either generative or assistive in nature. The Court defines “Generative Artificial Intelligence” or “Generative A.I.” as artificial intelligence that is capable of generating new content (such as images or text) in response to a submitted prompt (such as a query) by learning from a large reference database of examples. A.I. assistive materials are any document or evidence prepared with the assistance of AI technologies, but not solely generated thereby.
In what may be an issue of first impression, at least in Surrogate’s Court practice, this Court holds that due to the nature of the rapid evolution of artificial intelligence and its inherent reliability issues that prior to evidence being introduced which has been generated by an artificial intelligence product or system, counsel has an affirmative duty to disclose the use of artificial intelligence and the evidence sought to be admitted should properly be subject to a Frye hearing prior to its admission, the scope of which should be determined by the Court, either in a pre-trial hearing or at the time the evidence is offered.
-CM