
Damages experts rely upon many sources of information in preparing their opinions on damages and royalties. A recent U.S. district court decision, however, cautions that a damages expert must understand those sources of information; by extrapolation, the same caution should apply to any expert witness.
In a Memorandum Opinion issued on March 14, 2025, Judge Richard Andrews of the U.S. District Court for the District of Delaware ruled that the opinion of the defendant’s damages expert should be excluded as unreliable under Federal Rule of Evidence 702. Jackson v. NuVasive, Inc., No. 1-21-cv-00053, slip op. at 15 (D. Del. Mar. 14, 2025). The ruling was based upon the NuVasive damages expert’s use of third-party analysis tools. Id. at 15-18.
In his report, NuVasive’s damages expert, Mr. Pampinella, used two third-party analysis tools, one from Derwent and one from IPLytics. With respect to the latter, the opinion discussed problems with how Mr. Pampinella combined different analytics to reach a conclusion. Id. Issues of that kind, however, are not uncommon. It is the report’s reliance on the Derwent results, and the court’s treatment of that reliance, that is more instructive.
The report relied on Derwent’s “Combined Patent Impact” score. Jackson at 16. The opinion quotes the expert’s deposition testimony, in which he admitted that Derwent uses a proprietary algorithm to determine the “Combined Patent Impact” score. Id. (quoting from Pampinella deposition). The expert admitted that he had no understanding of how Derwent’s “Combined Patent Impact” score is determined, which, the court concluded, makes it “impossible to test Derwent’s conclusion as to the value of the asserted patents.” Id. The opinion further notes that Derwent considered Medtronic to be the “Optimized Assignee” of one of Jackson’s patents and stated, “Though disqualifying on its own, [the expert’s] ignorance of Derwent’s algorithm is especially concerning given that Derwent’s ‘machine learning processes’ have seemingly generated an incorrect assignee for the asserted patents via the ‘Optimized Assignee’ output.” Id.
There are two significant takeaways from this opinion. The first is that a damages expert, or, by extrapolation, any expert, should be careful about relying upon third-party information in preparing a report. As Jackson emphasizes, the expert must understand the information being relied upon and, if that information is machine-generated, how it was generated. This goes to the reliability of the expert’s testimony: relying upon third-party information requires, in general, investigating how it was obtained and, in particular, when it is machine-generated, understanding how the algorithms that generated it work.
This latter point leads to the second takeaway. As the use of artificial intelligence (AI) to analyze information and produce results has grown dramatically in recent years, it is important to remember that AI is, in general, opaque. An AI model is trained on known data to produce known “correct” outputs. Once trained and applied to “real” data, however, it is generally not possible to know the basis upon which an output was obtained, or to know with certainty that the output is even correct. Indeed, there have been multiple instances in which AI has produced “hallucinations” (nonexistent, completely fabricated results, such as nonexistent case law in AI-generated legal briefs). Based on the Jackson opinion, it is apparent that a damages expert, at the least, should not rely upon AI-generated outputs in forming an expert report.
While it may be tempting to use machine-generated analytics and other machine-generated results in performing expert analysis, the Jackson opinion makes clear that the bases of those results must be investigated and understood before they can support a reliable expert report and associated testimony.