Evaluating the Impact of Domain-Specific Fine-Tuning on BERT and ChatGPT for Medical Text Analysis

Authors

  • Muhammad Khalil, Department of Computer Engineering, Alexandria University, Egypt

Abstract

This paper examines the impact of domain-specific fine-tuning on two prominent language models, BERT and ChatGPT, for medical text analysis. We conduct a comparative study evaluating the performance of these models on tasks such as named entity recognition (NER), relation extraction, and medical document classification. By fine-tuning BERT and ChatGPT on a medical corpus, we aim to highlight their respective strengths and limitations, providing insight into their applicability in the medical domain.
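The abstract does not include implementation details, but as a rough illustration of the kind of domain-specific fine-tuning it describes, the sketch below adapts a pretrained BERT checkpoint to a token-classification (NER) objective using the Hugging Face Transformers library. The BIO label scheme, toy training sentences, and hyperparameters are illustrative placeholders, not the authors' actual corpus or configuration.

```python
# A minimal sketch of fine-tuning BERT for medical NER with Hugging Face
# Transformers. Labels, sentences, and hyperparameters are illustrative
# assumptions, not the paper's actual setup.
import torch
from torch.utils.data import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForTokenClassification,
    Trainer,
    TrainingArguments,
)

# Hypothetical BIO label scheme for medical entities (assumption).
labels = ["O", "B-DISEASE", "I-DISEASE", "B-DRUG", "I-DRUG"]
label2id = {l: i for i, l in enumerate(labels)}

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(labels),
    id2label={i: l for l, i in label2id.items()},
    label2id=label2id,
)

class ToyNERDataset(Dataset):
    """Wraps (words, tags) pairs and aligns word-level tags to subwords."""

    def __init__(self, examples):
        self.examples = examples

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        words, tags = self.examples[idx]
        enc = tokenizer(
            words,
            is_split_into_words=True,
            truncation=True,
            padding="max_length",
            max_length=64,
        )
        # Each subword inherits its word's tag; special tokens get -100
        # so the loss function ignores them.
        enc["labels"] = [
            -100 if wid is None else label2id[tags[wid]]
            for wid in enc.word_ids()
        ]
        return {k: torch.tensor(v) for k, v in enc.items()}

# Two toy sentences standing in for a real annotated medical corpus.
train_data = ToyNERDataset([
    (["Patient", "diagnosed", "with", "type", "2", "diabetes"],
     ["O", "O", "O", "B-DISEASE", "I-DISEASE", "I-DISEASE"]),
    (["Prescribed", "metformin", "twice", "daily"],
     ["O", "B-DRUG", "O", "O"]),
])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="ner-out",
        num_train_epochs=1,
        per_device_train_batch_size=2,
    ),
    train_dataset=train_data,
)
trainer.train()
```

Masking special tokens with -100 is the standard way to exclude them from the cross-entropy loss; a real experiment in the paper's setting would substitute an annotated medical corpus and add held-out evaluation metrics.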

Published

2024-07-16

Section

Articles