Construction and practice of an application model of a localized large language model in preoperative medication reconciliation for gastric cancer


TITLE: Construction and practice of an application model of a localized large language model in preoperative medication reconciliation for gastric cancer
ABSTRACT: OBJECTIVE To construct a preoperative medication reconciliation model for gastric cancer assisted by a localized large language model (LLM) and to evaluate its effectiveness. METHODS A total of 249 gastric cancer patients with a history of continuous pre-admission medication, admitted to the Gastric Surgery Department of Jiangsu Cancer Hospital between January 2024 and January 2026, were retrospectively enrolled. Patients were divided chronologically into a training set (154 cases) and a validation set (95 cases). Based on guidelines, drug package inserts, and other evidence, a standardized medication reconciliation process and a structured knowledge base were constructed. The DeepSeek-V3 LLM was deployed privately within the hospital and combined with retrieval-augmented generation (RAG) to automate the integration of medication information, risk screening, and generation of individualized recommendations. The quality of LLM-generated recommendations was evaluated with automatic metrics (BERTScore and ROUGE-1, ROUGE-2, ROUGE-L) and manual scoring [seven-dimensional index (7DI)]. Spearman correlation analysis was used to examine the correlation between automatic and manual scores, and Cronbach's α coefficient was used to test the internal consistency of the manual scores. The time required for manual versus LLM-assisted medication reconciliation was compared across tasks of three difficulty levels (simple, moderate, and high). RESULTS A structured knowledge base covering 8 major drug categories and the common and high-risk preoperative medication scenarios was established, providing structured retrieval support for the LLM. For automatic evaluation, the precision, recall, and F1-score of BERTScore were 0.783±0.033, 0.811±0.038, and 0.796±0.028, respectively; the F1-scores of ROUGE-1, ROUGE-2, and ROUGE-L were 0.566±0.067, 0.338±0.076, and 0.468±0.082, respectively. The 7DI scores of the three manual raters ranged from 32.06 to 33.45. The automatic F1-scores were significantly positively correlated with the manual 7DI scores (maximum coefficient of determination = 0.611, P < 0.001), and the internal consistency of the manual scores was good (Cronbach's α = 0.876). In terms of efficiency, LLM-assisted medication reconciliation reduced the time required by more than 90% compared with manual reconciliation in the simple, moderate, and high-difficulty groups (P < 0.001). CONCLUSIONS The medication reconciliation model built on a localized LLM and a structured knowledge base shows high accuracy, consistency, and clinical applicability in complex preoperative medication scenarios for gastric cancer, improving reconciliation efficiency while reducing potential medication risks.
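The evaluation pipeline described in the abstract combines automatic overlap metrics with inter-rater statistics. The following is a minimal, illustrative Python sketch of three of the statistics named there (ROUGE-1 F1, Spearman's rank correlation, and Cronbach's α), assuming whitespace tokenization and tie-free ranks; it is not the authors' actual toolkit, and BERTScore is omitted because it requires a pretrained model. All sample inputs are hypothetical.

```python
from collections import Counter
from statistics import pvariance


def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: unigram overlap between candidate and reference texts."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())  # clipped matching unigram count
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)


def spearman_rho(x: list[float], y: list[float]) -> float:
    """Spearman rank correlation (simplified: assumes no tied values)."""
    def ranks(v: list[float]) -> list[float]:
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))


def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha; `items` holds one score list per rater, same cases."""
    k = len(items)                                  # number of raters
    totals = [sum(case) for case in zip(*items)]    # per-case total score
    item_var = sum(pvariance(it) for it in items)   # sum of per-rater variances
    return k / (k - 1) * (1 - item_var / pvariance(totals))


if __name__ == "__main__":
    # Hypothetical example: two-thirds of candidate unigrams match.
    print(rouge1_f1("hold aspirin before surgery", "hold aspirin after surgery"))
    # Hypothetical 7DI scores from three raters over four cases.
    print(cronbach_alpha([[33, 31, 34, 32], [32, 30, 34, 33], [33, 32, 35, 32]]))
```

The real study would additionally need Chinese word segmentation before computing ROUGE (whitespace splitting does not tokenize Chinese text), which is why this sketch should be read as a statement of the formulas rather than a drop-in replacement for the published pipeline.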
JOURNAL: 2026, Vol. 37, No. 08
AUTHORS: ZHU Yuxuan, ZHANG Jizhong, SUN Yuhao, WEN Jiayu, LIU Xin, WEI Jifu, HUANG Lingli
KEYWORDS: medication reconciliation; artificial intelligence; large language model; gastric cancer; preoperative medication; medication safety
