[Submitted on 27 Apr 2023]

Abstract: Large Language Models (LLMs) have showcased remarkable natural language understanding capabilities across various domains. These models usually perform well in daily dialogue or question-answering scenarios; however, in areas that demand precision, such as medical applications, they often exhibit unsatisfactory performance due to a lack of domain-specific knowledge. In this report, we introduce PMC-LLaMA, an open-source language model obtained by fine-tuning LLaMA on a total of 4.8 million biomedical academic papers to inject medical knowledge and enhance its capability in the medical domain. Our preliminary evaluations are conducted on three biomedical QA datasets, namely PubMedQA, MedMCQA, and USMLE, showing that the fine-tuned model, PMC-LLaMA, demonstrates a better understanding of biomedical domain-specific concepts and thus achieves high performance on QA benchmarks. The model and code, along with an online demo, are publicly available.
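
To make the described approach concrete, below is a minimal sketch of the kind of domain-adaptive fine-tuning the abstract outlines: continued causal language modeling of a LLaMA-style checkpoint on biomedical text, using Hugging Face Transformers. The base checkpoint name, the corpus file `pmc_papers.txt`, and all hyperparameters are illustrative assumptions, not the authors' actual training setup.

```python
# Hedged sketch: continued pretraining of a LLaMA-style model on
# biomedical text. Checkpoint, data path, and hyperparameters are
# assumptions for illustration only.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "huggyllama/llama-7b"  # assumed base checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Assume the biomedical papers are plain text, one document per line.
corpus = load_dataset("text", data_files={"train": "pmc_papers.txt"})

def tokenize(batch):
    # Truncate each paper to the model's context window.
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="pmc-llama-ft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        learning_rate=2e-5,
        bf16=True,
    ),
    train_dataset=tokenized["train"],
    # Causal LM objective: next-token prediction, no masking.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The key design point this illustrates is that knowledge injection here is plain next-token prediction on in-domain text (mlm=False), rather than instruction tuning; the paper's evaluation then measures whether that exposure alone improves biomedical QA.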

Submission history

From: Chaoyi Wu



[v1] Thu, 27 Apr 2023 18:29:05 UTC (5,163 KB)