GPT-4 AI Matches Experts at Identification of Cell Types
GPT-4, a large language model created by OpenAI, can accurately annotate cell types in single-cell RNA sequencing data, a step that is fundamental to interpreting these experiments. It does so with consistency comparable to the performance of human experts, who rely on marker gene information to carry out time-consuming manual annotation. Results of the study, by researchers at Columbia University Mailman School of Public Health and Duke University School of Medicine, are published in the journal Nature Methods.
When assessed across numerous tissue and cell types, GPT-4 produced cell type annotations that closely align with the manual annotations of human experts and surpass existing automatic algorithms. This capability has the potential to significantly reduce the effort and expertise needed to annotate cell types, a process that can take months. The researchers have also developed GPTCelltype, an R software package, to facilitate automated cell type annotation with GPT-4.
“The process of annotating cell types for single cells is often time-consuming, requiring human experts to compare genes across cell clusters,” said Wenpin Hou, PhD, assistant professor of Biostatistics at Columbia Mailman School. “Although automated cell type annotation methods have been developed, manual methods to interpret scientific data remain widely used, and such a process can take weeks to months. We hypothesized that GPT-4 can accurately annotate cell types, transitioning the process from a manual to a semi- or even fully automated procedure while being cost-efficient and seamless.”
The researchers assessed GPT-4’s performance across ten datasets covering five species, hundreds of tissue and cell types, and both normal and cancer samples. GPT-4 was queried using GPTCelltype, the software tool developed by the researchers. For comparison, they also evaluated other GPT versions and used manual annotation methods as the reference.
As a first step, the researchers explored the factors that may affect GPT-4’s annotation accuracy. They found that GPT-4 performs best when given the top 10 differentially expressed genes and that it exhibits similar accuracy across various prompt strategies, including a basic prompt strategy, a chain-of-thought-inspired prompt strategy that includes reasoning steps, and a repeated prompt strategy. GPT-4 matched manual analyses in over 75 percent of cell types in most studies and tissues, demonstrating its competency in generating expert-comparable cell type annotations. Notably, low agreement between GPT-4 and manual annotations in some cell types does not necessarily imply that GPT-4’s annotation is incorrect: in one example involving stromal, or connective tissue, cells, GPT-4 provided the more accurate annotations. GPT-4 was also notably faster.
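To make the basic prompt strategy concrete, here is a minimal Python sketch of the general idea: for each cell cluster, the top differentially expressed genes are placed into a short prompt, and GPT-4 is asked to name the cell type. This is an illustration only, not the authors’ GPTCelltype R package; the marker gene lists, tissue label, prompt wording, and use of the OpenAI Python client are assumptions made for the example.

```python
# Illustrative sketch (not the authors' GPTCelltype R package): annotate cell
# clusters by sending each cluster's top marker genes to GPT-4 in a basic prompt.
from openai import OpenAI  # assumes the `openai` package and an API key are configured

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical input: top differentially expressed genes per cluster, as produced
# upstream by a single-cell analysis pipeline (gene lists here are only examples).
top_markers = {
    "cluster_0": ["CD3D", "CD3E", "IL7R", "TRAC", "CD2", "LTB", "CD27", "CCR7", "LEF1", "TCF7"],
    "cluster_1": ["CD19", "MS4A1", "CD79A", "CD79B", "IGHM", "IGKC", "HLA-DRA", "CD74", "BANK1", "TCL1A"],
}

def annotate_cluster(genes, tissue="human PBMC"):
    """Basic prompt strategy: list the top marker genes and ask for a single cell type name."""
    prompt = (
        f"Identify the cell type of a {tissue} cell cluster whose top marker genes are: "
        + ", ".join(genes)
        + ". Reply with the cell type name only."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

for cluster, genes in top_markers.items():
    print(cluster, "->", annotate_cluster(genes))
```

The chain-of-thought-inspired and repeated prompt strategies described in the study differ only in how the prompt is worded or how often it is issued; the study reports similar accuracy across these variants.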
Hou and her colleague also assessed GPT-4’s robustness in complex real-data scenarios and found that it distinguished pure from mixed cell types with 93 percent accuracy and differentiated known from unknown cell types with 99 percent accuracy. They also evaluated the reproducibility of GPT-4’s annotations using prior simulation studies: GPT-4 generated identical annotations for the same marker genes in 85 percent of cases. “All of these results demonstrate GPT-4’s robustness in various scenarios,” observed Hou.
While GPT-4 surpasses existing methods, Hou said there are limitations to consider, including the difficulty of verifying GPT-4’s quality and reliability because little is disclosed about its training process.
“Since our study focuses on the standard version of GPT-4, fine-tuning GPT-4 could further improve cell type annotation performance,” she said.
Zhicheng Ji of Duke University School of Medicine is a co-author.
The study was supported by the National Institutes of Health (grants AG075936, GM150887). The authors declare no competing interests.
Media Contact
Stephanie Berger, sb2247@cumc.columbia.edu