AUTHOR(S):

Amjad Jumaah Frhan, Mohammed A. S. Al-Hitawi, Hiba A. Abu-Alsaad

 

TITLE

Supervised Fine-Tuning Approach for Medical Question-Answering using Qwen Instruct

ABSTRACT

Large Language Models (LLMs) have become promising tools for supporting medical applications based on natural language processing. However, models trained for general purposes suffer from limited accuracy and reliability when dealing with sensitive medical question answering (MQA). The goal of this study is to develop an LLM for MQA tasks by applying Supervised Fine-Tuning (SFT) with Parameter-Efficient Fine-Tuning (PEFT), specifically LoRA. The model is evaluated using Exact Match and F1 score. The results showed significant qualitative improvement and notable accuracy in the answers, as well as better-organized medical content, with fewer cognitive mistakes compared to the base model. The experiments indicate that guided personalization of LLMs represents an essential step towards building reliable medical systems.
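The key idea behind LoRA, as referenced in the abstract, is that instead of updating a full pretrained weight matrix W, training only adjusts a low-rank decomposition B·A added to it, which drastically cuts the number of trainable parameters. A minimal NumPy sketch of this idea follows; the hidden size, rank, and scaling factor here are illustrative assumptions, not values from the paper:

```python
import numpy as np

d, r = 1024, 8                     # hypothetical hidden size and LoRA rank
alpha = 16                         # hypothetical LoRA scaling factor

W = np.random.randn(d, d)          # frozen pretrained weight (not updated)
A = np.random.randn(r, d) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))               # zero-initialized, so the initial update is zero

# Effective weight used during fine-tuning: W + (alpha/r) * B @ A
W_eff = W + (alpha / r) * (B @ A)

full_params = W.size               # parameters updated by full fine-tuning
lora_params = A.size + B.size      # parameters updated by LoRA
print(full_params, lora_params)    # → 1048576 16384
```

With these illustrative sizes, LoRA trains roughly 1.5% of the parameters that full fine-tuning would, which is why it makes SFT of large instruct models practical on modest hardware.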

KEYWORDS

Parameter Efficient Fine Tuning (PEFT), Supervised Fine Tuning (SFT), Medical QA, Qwen2.5, LoRA, Generative Models, Natural Language Processing

 

Cite this paper

Amjad Jumaah Frhan, Mohammed A. S. Al-Hitawi, Hiba A. Abu-Alsaad. (2026) Supervised Fine-Tuning Approach for Medical Question-Answering using Qwen Instruct. International Journal of Computers, 11, 57-62

 

Copyright © 2026 Author(s) retain the copyright of this article.
This article is published under the terms of the Creative Commons Attribution License 4.0