Table 6 Micro-average precision, recall, and F1-score for entities identified by the fine-tuned encoder-based transformer models

From: Comparative analysis of generative LLMs for labeling entities in clinical notes

| Transformer LM | P | R | F1 |
| --- | --- | --- | --- |
| BERT | 0.511 | 0.583 | 0.545 |
| RoBERTa | 0.532 | 0.616 | 0.570 |
| XLM-RoBERTa | 0.512 | 0.585 | 0.546 |
| BIO-BERT | 0.541 | 0.622 | **0.579** |
| BIO-Clinical-BERT | 0.519 | 0.600 | 0.557 |
| BIOMED-RoBERTa | 0.527 | 0.617 | 0.569 |
| Clinical-XLM-RoBERTa | 0.547 | 0.597 | 0.571 |

  1. The highest F1-score is highlighted in bold
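For reference, micro-averaging pools the true positives (TP), false positives (FP), and false negatives (FN) across all entity types before computing the scores, so frequent entity types weigh more heavily than rare ones. The standard definitions are:

$$
P_{\text{micro}} = \frac{\sum_i \mathrm{TP}_i}{\sum_i (\mathrm{TP}_i + \mathrm{FP}_i)}, \qquad
R_{\text{micro}} = \frac{\sum_i \mathrm{TP}_i}{\sum_i (\mathrm{TP}_i + \mathrm{FN}_i)}, \qquad
F1_{\text{micro}} = \frac{2\, P_{\text{micro}}\, R_{\text{micro}}}{P_{\text{micro}} + R_{\text{micro}}}
$$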