Artificial intelligence-assisted endoscopy for early screening, diagnosis, and therapy of gastrointestinal tumors
-
Abstract: In response to the severe challenge of high incidence but low early detection rates of gastrointestinal cancers in China, and the limitations of traditional endoscopic screening, artificial intelligence (AI)-assisted endoscopy is emerging as a key solution for early screening, diagnosis, and treatment. Deep learning-based computer-aided detection and diagnosis systems have demonstrated significant potential to enhance clinical sensitivity, specificity, and operational efficiency in the early identification, pathological characterization, and guidance of precision endoscopic therapy for esophageal, gastric, and colorectal cancers. Although challenges remain in data standardization, algorithm interpretability, and regulatory and ethical governance, the deep integration of AI and digestive endoscopy is transforming the diagnostic and therapeutic paradigm from "experience-dependent" to "data-driven", and offers a core driving force for achieving comprehensive and precise prevention and control of gastrointestinal tumors across the entire "early screening - early diagnosis - early treatment" chain.
-
Keywords:
- gastrointestinal neoplasms /
- artificial intelligence /
- digestive endoscopy /
- screening /
- diagnosis /
- treatment
-
Gastrointestinal (GI) tumors are a major focus of China's cancer prevention and control system. Esophageal, gastric, and colorectal cancers together account for roughly one-third of all newly diagnosed cancer cases nationwide, and their epidemiology combines high incidence, high mortality, and low early diagnosis rates [1]. Global cancer statistics for 2022 recorded 511,000 new cases of esophageal cancer, 969,000 of gastric cancer, and 1,926,000 of colorectal cancer, exceeding 3.5 million in total, with more than 2 million deaths; these cancers rank 7th, 5th, and 2nd, respectively, among causes of cancer-related death worldwide [2-4]. In China, with its large population, the three cancers caused 687,000 deaths in 2022, 26.7% of all cancer deaths in the country, underscoring the severity of the prevention and control challenge [2-4].
Although the 5-year survival rate of early GI cancers exceeds 90% after endoscopic or surgical treatment, early diagnosis rates in China remain low: only 18.6% for esophageal cancer [5], and under 10% for gastric and colorectal cancers [6-7]. The limitations of conventional endoscopy are a key bottleneck. White-light endoscopy depends on the examiner's experience, and observer subjectivity affects detection rates; magnifying endoscopy and related techniques are costly and complex, making them unsuitable for mass screening; and China's large population places a heavy screening burden on endoscopists, making accuracy and efficiency difficult to balance.
Artificial intelligence (AI) offers a new way out of this dilemma [8]. Breakthroughs in deep learning have driven the broad adoption of AI across medicine, making it an important tool for assisting clinical diagnosis and treatment [9]. Digestive endoscopy, the gold standard for diagnosing GI diseases, depends heavily on operator experience and involves a long, costly learning curve. Deep learning-based computer vision algorithms can automatically identify and precisely annotate suspicious lesion features in endoscopic images, showing potential advantages over human readers in both accuracy and efficiency [10]. AI-assisted endoscopy therefore helps shift endoscopic detection from "experience-dependent" to "data-driven", supporting the "three earlies" goal of early screening, early diagnosis, and early treatment of GI tumors [11].
1 Principles and main application areas of AI-assisted digestive endoscopy
Machine learning, the core approach to realizing AI, encompasses architectures such as deep neural networks, convolutional neural networks (CNNs), and recurrent neural networks; CNNs have become a research focus because of their outstanding performance in image processing [12]. A CNN is a deep learning model specialized for grid-structured data such as images. Through stacked "convolution-activation-pooling" layers, it extracts local features (edges, corners, texture gradients) layer by layer and performs final classification through fully connected layers. This design allows it to capture subtle changes in mucosal microstructure and vascular morphology in endoscopic images and to recognize early lesions that are difficult to identify with the naked eye, improving the sensitivity and specificity of endoscopy. The most mature machine learning applications in digestive endoscopy currently fall into two modules, lesion detection and diagnosis: computer-aided detection (CADe) and computer-aided diagnosis (CADx) [13].
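As a concrete illustration of the "convolution-activation-pooling" mechanism described above, the following minimal NumPy sketch (illustrative only, not any cited system) shows a single hand-set edge filter responding to a sharp vertical intensity boundary of the kind a lesion margin produces; real CADe/CADx models stack many learned filters and finish with fully connected classification layers.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation) of a grayscale image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Activation: keep positive filter responses only."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Pooling: downsample, keeping the strongest local response."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Sobel-like kernel: responds to dark-to-bright vertical edges
edge_kernel = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

img = np.zeros((8, 8))
img[:, 4:] = 1.0  # toy image: sharp vertical "lesion border" at column 4
fmap = max_pool(relu(conv2d(img, edge_kernel)))
print(fmap)  # strongest activations line up with the boundary column
```

In a trained CNN the kernels are learned from data rather than hand-set, and dozens of such feature maps feed the classifier head.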
CADe acts as a "real-time perception enhancer" for the endoscopist, using technical means to compensate for the limits of human vision and improving detection of small, flat lesions or those in anatomically complex regions (behind colonic folds, at the esophagogastric junction) [14]. These systems are typically trained on white-light endoscopic images; having learned the morphological differences between normal mucosa and lesions from large image sets, they automatically identify suspicious regions and output visual markers in real time (bounding boxes, highlighted overlays) to help the operator locate targets. The most mature CADe application is colonoscopic polyp detection and early cancer screening, where multiple multicenter studies confirm a significant increase in adenoma detection rate and a narrowing of the diagnostic gap between endoscopists of different experience levels [15]. The technology is gradually extending to detection of early esophageal cancer, early gastric cancer, and small-bowel lesions, evolving from a simple assistive tool into a useful complement to quality control in endoscopic screening.
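The bounding-box markers a CADe system displays are usually produced by a generic post-processing step: confidence filtering followed by non-maximum suppression, so the endoscopist sees one clean marker per suspected lesion. The sketch below uses made-up thresholds and detections and is not the method of any cited system.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(detections, score_thresh=0.5, iou_thresh=0.4):
    """Keep high-confidence boxes and suppress overlapping duplicates."""
    dets = sorted((d for d in detections if d["score"] >= score_thresh),
                  key=lambda d: d["score"], reverse=True)
    kept = []
    for d in dets:
        if all(iou(d["box"], k["box"]) < iou_thresh for k in kept):
            kept.append(d)
    return kept

raw = [
    {"box": (100, 80, 180, 150), "score": 0.92},   # polyp candidate
    {"box": (105, 85, 185, 155), "score": 0.78},   # duplicate of the same region
    {"box": (300, 200, 340, 240), "score": 0.30},  # low-confidence noise
]
print([d["score"] for d in nms(raw)])  # [0.92]: duplicates and noise are dropped
```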
CADx takes over "intelligent interpretation" once CADe has localized a lesion: through deep analysis of morphological features (mucosal microvascular patterns, pit patterns), it aims to predict histology in real time without ex vivo biopsy, enabling "optical biopsy". Whereas CADe relies mainly on white-light images, CADx typically integrates or depends on image-enhancement techniques (narrow-band imaging, blue-laser imaging, magnifying endoscopy), which heighten contrast and resolution to display mucosal microstructure, vascular texture, and other key diagnostic information more clearly [16]. Its output is not limited to binary classification (lesion or not) but extends to multi-level classification (adenomatous versus hyperplastic polyps, grading of tumor invasion depth) together with diagnostic recommendations. CADx is advancing rapidly in histology prediction and holds potential value for optimizing "diagnose-and-leave" and "resect-and-discard" clinical decisions.
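The multi-level output described above can be sketched as a softmax over class logits plus a simple decision rule. The class labels, confidence floor, and the mapping to "diagnose-and-leave" / "resect-and-discard" recommendations below are all illustrative assumptions, not any cited system's logic.

```python
import math

CLASSES = ["normal", "hyperplastic", "adenoma", "invasive"]  # hypothetical labels

def softmax(logits):
    """Map raw logits to class probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cadx_decision(logits, confidence_floor=0.9):
    """Turn multi-class logits into a hedged recommendation: defer to the
    endoscopist whenever no class reaches the confidence floor."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < confidence_floor:
        return "indeterminate: defer to endoscopist / biopsy"
    label = CLASSES[best]
    if label == "hyperplastic":
        return "hyperplastic: candidate for diagnose-and-leave"
    if label == "adenoma":
        return "adenoma: candidate for resect-and-discard"
    return label

print(cadx_decision([0.1, 4.2, 0.3, -1.0]))
```

Deferring below a confidence floor mirrors how high sensitivity is prioritized clinically: an uncertain optical diagnosis falls back to conventional biopsy rather than a leave-in-place decision.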
2 AI in early screening and diagnosis of gastrointestinal tumors
2.1 Early screening and diagnosis of esophageal cancer
Diagnosis of early esophageal cancer depends heavily on the combined use of direct endoscopic observation and targeted biopsy. Despite continual iteration of narrow-band imaging, magnifying endoscopy, and other advanced techniques, early screening remains difficult because mucosal changes in early lesions are extremely subtle and patients lack specific symptoms [17-19]. By automatically recognizing small early lesions, abnormal vascular patterns, and other mucosal changes, AI preserves lesion detection rates while enabling targeted biopsy, bringing new progress to esophageal cancer screening.
Hashimoto et al. [20] trained an AI model on multimodal endoscopic images (white light, narrow-band imaging, confocal imaging) that performed well in detecting early esophageal cancer, with sensitivity, specificity, and accuracy of 96.4%, 94.2%, and 95.4%, respectively. Cai et al. [21] built a deep neural network from 1,332 white-light endoscopic images that identified esophageal squamous cell carcinoma with sensitivity 97.8%, specificity 85.4%, and accuracy 91.4%, outperforming junior endoscopists (accuracy 77.2%) and mid-level endoscopists (accuracy 81.6%). Wang Shixu et al. [22] developed a diagnostic model based on the YOLOv5l framework with high accuracy on white-light, narrow-band, and Lugol chromoendoscopy images (96.9%, 98.6%, and 93.0%, respectively) while effectively reducing the missed-diagnosis rate. This fusion of AI with advanced imaging not only improves screening efficiency but, through a human-machine collaboration model, accelerates the growth of novice endoscopists' diagnostic skills. In Barrett's esophagus-related neoplasia, the deep learning system developed by De Groof et al. [23] detected neoplastic lesions in static images with sensitivity 90%, specificity 88%, and accuracy 89%, outperforming non-expert endoscopists; in real-time endoscopy in 20 patients, performance remained stable, with sensitivity, specificity, and accuracy of 91%, 89%, and 90% [24].
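The sensitivity, specificity, and accuracy figures quoted throughout this section all derive from a binary confusion matrix. A short sketch with hypothetical counts:

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)              # lesion frames correctly flagged
    specificity = tn / (tn + fp)              # normal frames correctly cleared
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical evaluation set: 100 cancer frames and 100 normal frames
sens, spec, acc = binary_metrics(tp=96, fp=6, tn=94, fn=4)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, accuracy {acc:.1%}")
# prints: sensitivity 96.0%, specificity 94.0%, accuracy 95.0%
```

Note the trade-off this exposes: a model can buy sensitivity at the cost of specificity by lowering its decision threshold, which is why both figures are always reported together.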
AI also shows potential in assessing tumor invasion depth. The CNN-based deep learning system of Everson et al. [25] judged esophageal cancer invasion depth within seconds at 93.7% accuracy (95% CI 86.2%-98.3%); the AI system of Nakagawa et al. [26] for esophageal squamous cell carcinoma matched senior endoscopists in distinguishing microinvasive submucosal cancer from deeply invasive cancer. Notably, the interpretable-AI invasion depth prediction system for esophageal squamous cell carcinoma proposed by Yu Honggang's team at Renmin Hospital of Wuhan University outperformed conventional deep learning models in both predictive accuracy and interpretability, raising endoscopists' diagnostic performance [27]. At the level of differentiation grade and histological subtype, the real-time diagnostic system developed by Hu Bing's team at West China Hospital of Sichuan University accurately identifies precancerous lesions and early esophageal squamous cell carcinoma [28]; the CNN model built by the Tada team at the University of Tokyo distinguished early from advanced esophageal cancer with 98% accuracy, supporting clinical staging and treatment selection [29].
These studies show that AI in early esophageal cancer diagnosis has progressed from simple lesion detection (CADe) toward precise diagnosis (CADx). Its value lies not only in headline sensitivity figures but in "standardizing" the interpretive expertise required by complex imaging modalities such as narrow-band imaging and magnifying endoscopy, helping narrow diagnostic gaps between endoscopists of different seniority and supporting wider adoption of early esophageal cancer endoscopic diagnosis across more medical institutions. Most existing models, however, were trained on static images; handling real-time video and lesions with indistinct borders remains a priority for future validation and optimization.
2.2 Early screening and diagnosis of gastric cancer
The miss rate of endoscopic gastric cancer screening is influenced both by tumor characteristics (size, morphology, location) and by examiner experience; reported miss rates for conventional white-light endoscopy range from 4.6% to 25.8% [30]. AI compensates for these limitations through intelligent perception and active recognition: it can precisely capture early lesion features such as subtle changes in mucosal color and surface irregularity, automatically annotate suspicious lesions, and prompt the examiner in real time, reducing missed diagnoses and improving detection efficiency [31].
The CNN model of Hirasawa et al. [32], built on the Single Shot MultiBox Detector architecture, achieved an overall sensitivity of 92.2% for early gastric cancer, demonstrating excellent lesion detection performance. Wu et al. [33] showed that the deep learning-based gastric cancer diagnosis system ENDOANGEL excelled in a multicenter "man-versus-machine" comparison, identifying early gastric cancer significantly more accurately than human endoscopists and demonstrating AI's diagnostic advantage in complex scenarios. The meta-analysis of Jiang et al. [34] confirmed an AUC of 0.96 (95% CI 0.94-0.97), sensitivity of 86% (95% CI 77%-92%), and specificity of 93% (95% CI 89%-96%) for AI-assisted early gastric cancer detection, indicating high accuracy and clinical value in early gastric cancer screening.
For early gastric cancer confined to the mucosa or submucosa, endoscopic submucosal dissection (ESD) can achieve curative treatment, so accurate assessment of invasion depth is key to treatment planning [35]. AI performs strongly here as well. The deep learning system of Wu et al. [33] predicted the differentiation grade of early gastric cancer in endoscopic video with accuracy comparable to senior endoscopists; the CNN model of Zhu et al. [36], trained on 790 images, predicted invasion depth in 203 test images with accuracy 89.16% and specificity 95.56%, outperforming experienced endoscopists; the model of Nagao et al. [37], trained on multimodal white-light, narrow-band, and indigo carmine chromoendoscopy images, predicted invasion depth with over 90% accuracy; and the AI classifier developed by Goto et al. [38] precisely distinguished the pathological features of mucosal from submucosal gastric cancer, addressing the clinical difficulty of judging early gastric cancer invasion depth and, as an intelligent assistive tool, effectively improving endoscopists' diagnostic accuracy.
AI applications in early gastric cancer diagnosis target two clinical difficulties: improving detection of small, flat early gastric cancers (CADe) and accurately judging invasion depth to guide treatment (CADx). Studies show that AI systems can match or even exceed expert endoscopists on specific tasks, with particular promise for raising non-expert endoscopists' diagnostic ability. This makes large-scale endoscopic screening with improved diagnostic quality technically feasible in high-incidence regions. Future research should examine model performance in real-world, consecutive cases, as well as synergy with existing staging tools such as endoscopic ultrasonography.
2.3 Early screening and diagnosis of colorectal cancer
Colonoscopy is the mainstay of colorectal cancer screening, and the adenoma detection rate (ADR) is one of its key quality indicators. As the principal precancerous lesion of colorectal cancer, the level of adenoma detection relates directly to colorectal cancer risk, and multiple studies confirm a significant association between higher ADR and lower risk [39]. Colonoscopic screening nonetheless faces a long-standing challenge, of which missed lesions are the most prominent part: nearly 60% of post-colonoscopy colorectal cancers are attributable to missed tumors, highlighting inadequate recognition of potential lesions during screening [40]. Because adenomas are the starting point of colorectal carcinogenesis, missed and misjudged lesions substantially weaken screening effectiveness; improving ADR is therefore both the core goal of screening quality and a key strategy for reducing colorectal cancer incidence and mortality.
Although most AI-based CADe research is still at an early stage, it has shown potential clinical value and may directly improve early colorectal cancer detection and reduce colorectal cancer risk [41-43]. The real-time polyp detection system of Misawa et al. [44] automatically marks suspicious polyp regions in colonoscopy video with 90.0% sensitivity. Krenzer et al. [45] trained the real-time polyp detection system ENDOMIND-Advanced on a dataset of more than 500,000 annotated images, achieving good detection performance in testing. A meta-analysis of 21 randomized trials with 18,232 patients found a higher adenoma detection rate with AI-assisted colonoscopy than with standard colonoscopy, with a 55% relative reduction in the adenoma miss rate (RR=0.45, 95% CI 0.35-0.58) [46]. The randomized controlled trial of Shaukat et al. [47], with adenomas per colonoscopy (APC) as the primary outcome, showed that AI-assisted colonoscopy raised APC by 27% (from 0.83 to 1.05). A randomized controlled trial by Chinese investigators confirmed that in an asymptomatic population, the AI-assisted colonoscopy group had a higher adenoma detection rate, advanced adenoma detection rate, and mean number of adenomas detected than the conventional colonoscopy group, with statistically significant advantages for both expert and non-expert endoscopists [48]. As such results accumulate, some AI systems have entered clinical practice: GI Genius, the first regulator-approved real-time computer-aided detection device for colonoscopy, has been shown to increase colorectal polyp detection in actual use [49].
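The two trial endpoints above, ADR and APC, can both be computed from per-procedure adenoma counts. A small sketch with hypothetical data:

```python
def quality_indicators(adenoma_counts):
    """ADR (share of colonoscopies with >= 1 adenoma) and APC (mean adenomas
    per colonoscopy) from a list of per-procedure adenoma counts."""
    n = len(adenoma_counts)
    adr = sum(1 for c in adenoma_counts if c >= 1) / n
    apc = sum(adenoma_counts) / n
    return adr, apc

# Hypothetical series of 10 colonoscopies
adr, apc = quality_indicators([0, 2, 1, 0, 0, 3, 0, 1, 0, 0])
print(f"ADR {adr:.0%}, APC {apc:.2f}")  # prints: ADR 40%, APC 0.70
```

APC is the more granular endpoint: a system that helps find a second adenoma in a procedure that already had one raises APC but leaves ADR unchanged, which is why some trials report both.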
For differential diagnosis, endoscopic accuracy in distinguishing colonic adenomas from non-adenomas often falls below clinically expected thresholds, directly affecting the precision of treatment decisions [50]. The traditional approach of resecting polyps and then examining pathological sections involves delay; when a lesion's nature cannot be determined endoscopically, unnecessary resection may follow, adding patient discomfort and examination cost. CADx can help endoscopists judge polyp histology accurately and overcome inter-observer variability. Mori et al. [51] found that a real-time CADx system predicted colorectal polyp pathology in 98.1% of cases, effectively distinguishing neoplastic from non-neoplastic polyps. The deep learning model of Byrne et al. [52] differentiates diminutive adenomas from hyperplastic polyps in real-time analysis of colonoscopy video, with sensitivity 98% (95% CI 92%-100%) and specificity 83% (95% CI 67%-93%). The narrow-band-imaging-based CADx system of Minegishi et al. [53] identified neoplastic lesions, including sessile serrated lesions, with 94.4% sensitivity and 62.5% specificity; its product EndoBRAIN®-X has been approved by Japanese regulators. The CADx system of Houwen et al. [54] performs real-time classification of diminutive colorectal polyps, identifying neoplastic lesions (including sessile serrated lesions) with 89% sensitivity and 38% specificity, further validating AI's clinical value in optimizing the assessment of polyp histology.
AI-assisted colonoscopy is the most thoroughly studied and best-evidenced direction in colorectal cancer screening. Numerous randomized controlled trials and meta-analyses consistently confirm that CADe systems reliably increase the adenoma detection rate, particularly by raising the procedural quality of non-expert endoscopists, which carries clear public health significance. CADx systems aim at "optical biopsy"; their high sensitivity suits them to a key role in diagnose-and-leave strategies, though specificity still has room to improve. Overall, AI in colorectal cancer screening has moved from technical validation to clinical rollout; the next priorities are integrating AI tools seamlessly and efficiently into routine screening workflows and evaluating their long-term cost-effectiveness.
3 AI-driven early precision treatment of gastrointestinal tumors
In endoscopic therapy, the combination of robotic endoscopy systems and AI is pushing treatment toward precision and intelligence. The EndoMaster EASE system, for example, has been applied successfully to colorectal ESD, with stable robotic-arm manipulation improving procedural stability and safety [55]; the OverStitch endoscopic suturing system uses continuous suturing to reinforce wound closure after gastrointestinal lesion resection and reduce the risk of postoperative complications [56]. AI plays the key role of "intelligent navigator" in this process. During ESD and other endoscopic resections, AI systems can automatically delineate tumor margins in real time, replacing the subjective judgment of conventional iodine or indigo carmine staining and providing a visual reference for complete resection. They can recognize anatomical layers such as the mucosa, submucosa, and muscularis propria, and can predict tumor invasion depth in real time through deep learning models for enhanced visual guidance [57]. Taking the AI-Endo model of Cao et al. [58] as an example, when a suspicious early cancer or flat lesion is found, advanced image segmentation algorithms outline the lesion extent with a contour on the endoscopy screen in real time; for confirmed early cancers, the system integrates multiple endoscopic features to predict, in real time, the depth of invasion into the gastric or intestinal wall. This intelligent, precise localization and real-time analysis helps reduce misdiagnoses and missed diagnoses driven by differences in operator experience, especially for lesions with subtle features or hidden locations (behind colonic folds, on the lesser curvature of the stomach), maximizing preservation of healthy tissue and reducing the probability of additional surgery.
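Margin delineation of the kind described can be sketched as extracting the boundary of a predicted binary segmentation mask. This toy NumPy example (hypothetical mask, not the AI-Endo method) marks pixels inside the mask that touch the outside, which is the contour a guidance overlay would draw on screen:

```python
import numpy as np

def mask_outline(mask: np.ndarray) -> np.ndarray:
    """Boundary pixels of a binary segmentation mask: a pixel is on the
    outline if it lies inside the mask but has a 4-neighbor outside it."""
    padded = np.pad(mask, 1)  # pad with False so edge pixels count as boundary
    inner = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
             padded[1:-1, :-2] & padded[1:-1, 2:])  # 4-neighborhood erosion
    return mask & ~inner

# Hypothetical 7x7 predicted lesion mask (True = lesion)
mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 2:6] = True            # 3x4 lesion region
outline = mask_outline(mask)
print(outline.sum(), "boundary pixels")  # 10 of the 12 lesion pixels
```

A production overlay would run this per video frame on the model's predicted mask and render the outline pixels in a contrasting color over the live image.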
In addition, AI provides real-time monitoring, continuously tracking instrument position and operating state; dynamic trajectory analysis can promptly detect operational deviations (insufficient cutting depth, excessive electrocoagulation) and issue warnings, helping the operator adjust strategy and keep the procedure safe. This "human-machine collaboration" model not only improves the efficiency and safety of endoscopic therapy but also advances the treatment paradigm from "experience-driven" to "data-driven", providing technical support for early precision treatment of GI tumors.
4 Challenges in clinical translation and future directions
As research on AI-assisted endoscopy continues to accumulate, AI is improving conventional digestive endoscopy on multiple fronts. Despite continual technical progress, however, several challenges must be addressed before the technology can be broadly deployed.
First, AI-empowered digestive endoscopy depends heavily on large volumes of high-quality data, yet building big datasets for GI tumors remains a serious challenge. Data collection, storage, and annotation lack unified standards, so overall quality is poor, severely constraining AI model development and deployment. Collection protocols for unstructured data such as imaging and pathology are missing, and heterogeneous device parameters yield uneven data quality; hospital information systems are mutually incompatible, with divergent data formats and terminology; and annotation lacks unified guidelines and quality-control processes, while large-scale labeling is labor-intensive and subjective, making consistency hard to guarantee. Moreover, cross-institution data sharing lacks a common platform, and barriers create "data silos": single-center samples cannot be pooled, the sample sizes available for model training and validation are insufficient, and AI's clinical translation is held back.
Second, applying AI to GI tumor diagnosis and treatment requires not only highly accurate predictions but also clinicians who can understand and trust the basis of its decisions [59-60]. Most current deep learning models, however, are algorithmic "black boxes": their complex structures and numerous parameters make the decision process hard to explain intuitively. This opacity makes it difficult for physicians to understand the diagnoses and predictions AI produces, undermining endoscopists' and patients' trust in and adoption of AI systems [61-62].
Finally, the relevant regulations have yet to be issued, and barriers to adoption remain. How to set standards and procedures for manufacturing and procuring AI-enabled medical devices, how to price the associated consumables, and how to assign liability for adverse events are the questions hospitals and patients care about most, and they remain unresolved. In addition, data collection touches patient privacy and security, which need protection under appropriate regulation.
Recent statistics suggest that fewer than 2% of models in medical AI progress beyond the prototype stage into routine clinical use [63]. In other words, the vast majority of AI systems remain in prototype development and validation, and very few have genuinely entered clinical practice. Yet every new technology must travel a long road from concept to clinical adoption, and collaboration among scientists across disciplines is accelerating this journey. Despite the many challenges, continued technical breakthroughs and institutional refinement should allow AI-assisted endoscopy to evolve step by step from an "auxiliary tool" into a "clinical partner", ultimately driving the intelligent, precise upgrading of the entire "early screening - early diagnosis - early treatment" chain for GI tumors.
-
[1] 王洛伟, 杜奕奇, 柏愚, 等. 中国消化道肿瘤的早诊早治现状及挑战[J]. 中国实用内科杂志, 2025, 45(5): 353-356. DOI: 10.19538/j.nk2025050101.
[2] 高野, 林寒, 王伟, 等. 中国食管癌早筛早诊早治高质量发展的思考[J]. 中国实用内科杂志, 2025, 45(5): 357-364. DOI: 10.19538/j.nk2025050102.
[3] 许诗涵, 周显祝, 杜奕奇, 等. 中国胃癌早筛早诊早治高质量发展的思考[J]. 中国实用内科杂志, 2025, 45(5): 365-372. DOI: 10.19538/j.nk2025050103.
[4] 赵英楠, 高君妍, 贺子轩, 等. 中国结直肠癌早筛早诊早治高质量发展的思考[J]. 中国实用内科杂志, 2025, 45(5): 373-380. DOI: 10.19538/j.nk2025050104.
[5] XIN L, GAO Y, CHENG Z, et al. Utilization and quality assessment of digestive endoscopy in China: results from 5-year consecutive nationwide surveys[J]. Chin Med J, 2022, 135(16): 2003-2010. DOI: 10.1097/CM9.0000000000002366.
[6] SUGANO K. Screening of gastric cancer in Asia[J]. Best Pract Res Clin Gastroenterol, 2015, 29(6): 895-905. DOI: 10.1016/j.bpg.2015.09.013.
[7] 国家消化系统疾病临床医学研究中心(上海), 国家消化道早癌防治中心联盟, 中华医学会消化内镜学分会, 等. 中国早期结直肠癌筛查流程专家共识意见(2019, 上海)[J]. 中华内科杂志, 2019, 58(10): 736-744. DOI: 10.3760/cma.j.issn.0578-1426.2019.10.004.
[8] HE Y S, SU J R, LI Z, et al. Application of artificial intelligence in gastrointestinal endoscopy[J]. J Dig Dis, 2019, 20(12): 623-630. DOI: 10.1111/1751-2980.12827.
[9] WISSE P H A, ERLER N S, DE BOER S Y, et al. Adenoma detection rate and risk for interval postcolonoscopy colorectal cancer in fecal immunochemical test-based screening: a population-based cohort study[J]. Ann Intern Med, 2022, 175(10): 1366-1373. DOI: 10.7326/m22-0301.
[10] PANNALA R, KRISHNAN K, MELSON J, et al. Artificial intelligence in gastrointestinal endoscopy[J]. VideoGIE, 2020, 5(12): 598-613. DOI: 10.1016/j.vgie.2020.08.013.
[11] GUO F, MENG H. Application of artificial intelligence in gastrointestinal endoscopy[J]. Arab J Gastroenterol, 2024, 25(2): 93-96. DOI: 10.1016/j.ajg.2023.12.010.
[12] ALI H, ALI MUZAMMIL M, DAHIYA D S, et al. Artificial intelligence in gastrointestinal endoscopy: a comprehensive review[J]. Ann Gastroenterol, 2024, 37(2): 133-141. DOI: 10.20524/aog.2024.0861.
[13] MISAWA M, KUDO S E. Current status of artificial intelligence use in colonoscopy[J]. Digestion, 2025, 106(2): 138-145. DOI: 10.1159/000543345.
[14] SPADACCINI M, MASSIMI D, MORI Y, et al. Artificial intelligence-aided endoscopy and colorectal cancer screening[J]. Diagnostics, 2023, 13(6): 1102. DOI: 10.3390/diagnostics13061102.
[15] SPADACCINI M, IANNONE A, MASELLI R, et al. Computer-aided detection versus advanced imaging for detection of colorectal neoplasia: a systematic review and network meta-analysis[J]. Lancet Gastroenterol Hepatol, 2021, 6(10): 793-802. DOI: 10.1016/S2468-1253(21)00215-6.
[16] BARUA I, WIESZCZY P, KUDO S E, et al. Real-time artificial intelligence-based optical diagnosis of neoplastic polyps during colonoscopy[J]. NEJM Evid, 2022, 1(6): EVIDoa2200003. DOI: 10.1056/EVIDoa2200003.
[17] PIMENTEL-NUNES P, LIBÂNIO D, BASTIAANSEN B A J, et al. Endoscopic submucosal dissection for superficial gastrointestinal lesions: European Society of Gastrointestinal Endoscopy (ESGE) guideline-update 2022[J]. Endoscopy, 2022, 54(6): 591-622. DOI: 10.1055/a-1811-7025.
[18] CHADWICK G, GROENE O, HOARE J, et al. A population-based, retrospective, cohort study of esophageal cancer missed at endoscopy[J]. Endoscopy, 2014, 46(7): 553-560. DOI: 10.1055/s-0034-1365646.
[19] GAVRIC A, HANZEL J, ZAGAR T, et al. Survival outcomes and rate of missed upper gastrointestinal cancers at routine endoscopy: a single centre retrospective cohort study[J]. Eur J Gastroenterol Hepatol, 2020, 32(10): 1312-1321. DOI: 10.1097/meg.0000000000001863.
[20] HASHIMOTO R, REQUA J, DAO T, et al. Artificial intelligence using convolutional neural networks for real-time detection of early esophageal neoplasia in Barrett's esophagus (with video)[J]. Gastrointest Endosc, 2020, 91(6): 1264-1271.e1. DOI: 10.1016/j.gie.2019.12.049.
[21] CAI S L, LI B, TAN W M, et al. Using a deep learning system in endoscopy for screening of early esophageal squamous cell carcinoma (with video)[J]. Gastrointest Endosc, 2019, 90(5): 745-753.e2. DOI: 10.1016/j.gie.2019.06.044.
[22] 王士旭, 柯岩, 刘雨蒙, 等. 内镜下早期食管癌及癌前病变识别人工智能YOLOv5l模型的建立及临床验证[J]. 中华肿瘤杂志, 2022, 44(5): 395-401. DOI: 10.3760/cma.j.cn112152-20211126-00877.
[23] DE GROOF A J, STRUYVENBERG M R, VAN DER PUTTEN J, et al. Deep-learning system detects neoplasia in patients with Barrett's esophagus with higher accuracy than endoscopists in a multistep training and validation study with benchmarking[J]. Gastroenterology, 2020, 158(4): 915-929.e4. DOI: 10.1053/j.gastro.2019.11.030.
[24] DE GROOF A J, STRUYVENBERG M R, FOCKENS K N, et al. Deep learning algorithm detection of Barrett's neoplasia with high accuracy during live endoscopic procedures: a pilot study (with video)[J]. Gastrointest Endosc, 2020, 91(6): 1242-1250. DOI: 10.1016/j.gie.2019.12.048.
[25] EVERSON M, HERRERA L, LI W, et al. Artificial intelligence for the real-time classification of intrapapillary capillary loop patterns in the endoscopic diagnosis of early oesophageal squamous cell carcinoma: a proof-of-concept study[J]. United European Gastroenterol J, 2019, 7(2): 297-306. DOI: 10.1177/2050640618821800.
[26] NAKAGAWA K, ISHIHARA R, AOYAMA K, et al. Classification for invasion depth of esophageal squamous cell carcinoma using a deep neural network compared with experienced endoscopists[J]. Gastrointest Endosc, 2019, 90(3): 407-414. DOI: 10.1016/j.gie.2019.04.245.
[27] ZHANG L, LUO R, TANG D, et al. Human-like artificial intelligent system for predicting invasion depth of esophageal squamous cell carcinoma using magnifying narrow-band imaging endoscopy: a retrospective multicenter study[J]. Clin Transl Gastroenterol, 2023, 14(10): e00606. DOI: 10.14309/ctg.0000000000000606.
[28] GUO L, XIAO X, WU C, et al. Real-time automated diagnosis of precancerous lesions and early esophageal squamous cell carcinoma using a deep learning model (with videos)[J]. Gastrointest Endosc, 2020, 91(1): 41-51. DOI: 10.1016/j.gie.2019.08.018.
[29] HORIE Y, YOSHIO T, AOYAMA K, et al. Diagnostic outcomes of esophageal cancer by artificial intelligence using convolutional neural networks[J]. Gastrointest Endosc, 2019, 89(1): 25-32. DOI: 10.1016/j.gie.2018.07.037.
[30] CHEN H, LIU S Y, HUANG S H, et al. Applications of artificial intelligence in gastroscopy: a narrative review[J]. J Int Med Res, 2024, 52(1): 03000605231223454. DOI: 10.1177/03000605231223454.
[31] MENON S, TRUDGILL N. How commonly is upper gastrointestinal cancer missed at endoscopy? A meta-analysis[J]. Endosc Int Open, 2014, 2(2): E46-E50. DOI: 10.1055/s-0034-1365524.
[32] HIRASAWA T, AOYAMA K, TANIMOTO T, et al. Application of artificial intelligence using a convolutional neural network for detecting gastric cancer in endoscopic images[J]. Gastric Cancer, 2018, 21(4): 653-660. DOI: 10.1007/s10120-018-0793-2.
[33] WU L, WANG J, HE X, et al. Deep learning system compared with expert endoscopists in predicting early gastric cancer and its invasion depth and differentiation status (with videos)[J]. Gastrointest Endosc, 2022, 95(1): 92-104.e3. DOI: 10.1016/j.gie.2021.06.033.
[34] JIANG K, JIANG X, PAN J, et al. Current evidence and future perspective of accuracy of artificial intelligence application for early gastric cancer diagnosis with endoscopy: a systematic and meta-analysis[J]. Front Med, 2021, 8: 629080. DOI: 10.3389/fmed.2021.629080.
[35] LEI C, SUN W, WANG K, et al. Artificial intelligence-assisted diagnosis of early gastric cancer: present practice and future prospects[J]. Ann Med, 2025, 57(1): 2461679. DOI: 10.1080/07853890.2025.2461679.
[36] ZHU Y, WANG Q C, XU M D, et al. Application of convolutional neural network in the diagnosis of the invasion depth of gastric cancer based on conventional endoscopy[J]. Gastrointest Endosc, 2019, 89(4): 806-815.e1. DOI: 10.1016/j.gie.2018.11.011.
[37] NAGAO S, TSUJI Y, SAKAGUCHI Y, et al. Highly accurate artificial intelligence systems to predict the invasion depth of gastric cancer: efficacy of conventional white-light imaging, nonmagnifying narrow-band imaging, and indigo-carmine dye contrast imaging[J]. Gastrointest Endosc, 2020, 92(4): 866-873.e1. DOI: 10.1016/j.gie.2020.06.047.
[38] GOTO A, KUBOTA N, NISHIKAWA J, et al. Cooperation between artificial intelligence and endoscopists for diagnosing invasion depth of early gastric cancer[J]. Gastric Cancer, 2023, 26(1): 116-122. DOI: 10.1007/s10120-022-01330-9.
[39] SCHOTTINGER J E, JENSEN C D, GHAI N R, et al. Association of physician adenoma detection rates with postcolonoscopy colorectal cancer[J]. JAMA, 2022, 327(21): 2114-2122. DOI: 10.1001/jama.2022.6644.
[40] LE CLERCQ C M C, BOUWENS M W E, RONDAGH E J A, et al. Postcolonoscopy colorectal cancers are preventable: a population-based study[J]. Gut, 2014, 63(6): 957-963. DOI: 10.1136/gutjnl-2013-304880.
[41] SUN K, WANG Y, QU R, et al. Comprehensive application of artificial intelligence in colorectal cancer: a review[J]. iScience, 2025, 28(7): 112980. DOI: 10.1016/j.isci.2025.112980.
[42] HASSAN C, BISSCHOPS R, SHARMA P, et al. Colon cancer screening, surveillance, and treatment: novel artificial intelligence driving strategies in the management of colon lesions[J]. Gastroenterology, 2025, 169(3): 444-455. DOI: 10.1053/j.gastro.2025.02.021.
[43] LUO Y, ZHANG Y, LIU M, et al. Artificial intelligence-assisted colonoscopy for detection of colon polyps: a prospective, randomized cohort study[J]. J Gastrointest Surg, 2021, 25(8): 2011-2018. DOI: 10.1007/s11605-020-04802-4.
[44] MISAWA M, KUDO S E, MORI Y, et al. Artificial intelligence-assisted polyp detection for colonoscopy: initial experience[J]. Gastroenterology, 2018, 154(8): 2027-2029.e3. DOI: 10.1053/j.gastro.2018.04.003.
[45] KRENZER A, BANCK M, MAKOWSKI K, et al. A real-time polyp-detection system with clinical application in colonoscopy using deep convolutional neural networks[J]. J Imaging, 2023, 9(2): 26. DOI: 10.3390/jimaging9020026.
[46] HASSAN C, SPADACCINI M, MORI Y, et al. Real-time computer-aided detection of colorectal neoplasia during colonoscopy: a systematic review and meta-analysis[J]. Ann Intern Med, 2023, 176(9): 1209-1220. DOI: 10.7326/M22-3678.
[47] SHAUKAT A, LICHTENSTEIN D R, SOMERS S C, et al. Computer-aided detection improves adenomas per colonoscopy for screening and surveillance colonoscopy: a randomized trial[J]. Gastroenterology, 2022, 163(3): 732-741. DOI: 10.1053/j.gastro.2022.05.028.
[48] XU H, TANG R S Y, LAM T Y T, et al. Artificial intelligence-assisted colonoscopy for colorectal cancer screening: a multicenter randomized controlled trial[J]. Clin Gastroenterol Hepatol, 2023, 21(2): 337-346.e3. DOI: 10.1016/j.cgh.2022.07.006.
[49] SAVINO A, RONDONOTTI E, ROCCHETTO S, et al. GI genius endoscopy module: a clinical profile[J]. Expert Rev Med Devices, 2024, 21(5): 359-372. DOI: 10.1080/17434440.2024.2342508.
[50] REES C J, RAJASEKHAR P T, WILSON A, et al. Narrow band imaging optical diagnosis of small colorectal polyps in routine clinical practice: the Detect Inspect Characterise Resect and Discard 2 (DISCARD 2) study[J]. Gut, 2017, 66(5): 887-895. DOI: 10.1136/gutjnl-2015-310584.
[51] MORI Y, KUDO S E, MISAWA M, et al. Real-time use of artificial intelligence in identification of diminutive polyps during colonoscopy: a prospective study[J]. Ann Intern Med, 2018, 169(6): 357-366. DOI: 10.7326/M18-0249.
[52] BYRNE M F, CHAPADOS N, SOUDAN F, et al. Real-time differentiation of adenomatous and hyperplastic diminutive colorectal polyps during analysis of unaltered videos of standard colonoscopy using a deep learning model[J]. Gut, 2019, 68(1): 94-100. DOI: 10.1136/gutjnl-2017-314547.
[53] MINEGISHI Y, KUDO S E, MIYATA Y, et al. Comprehensive diagnostic performance of real-time characterization of colorectal lesions using an artificial intelligence-assisted system: a prospective study[J]. Gastroenterology, 2022, 163(1): 323-325.e3. DOI: 10.1053/j.gastro.2022.03.053.
[54] HOUWEN B B S L, HAZEWINKEL Y, GIOTIS I, et al. Computer-aided diagnosis for optical diagnosis of diminutive colorectal polyps including sessile serrated lesions: a real-time comparison with screening endoscopists[J]. Endoscopy, 2023, 55(8): 756-765. DOI: 10.1055/a-2009-3990.
[55] CHIU P W Y, YIP H C, CHU S, et al. Prospective single-arm trial on feasibility and safety of an endoscopic robotic system for colonic endoscopic submucosal dissection[J]. Endoscopy, 2025, 57(3): 240-246. DOI: 10.1055/a-2411-0892.
[56] KEIHANIAN T, ZABAD N, KHALAF M, et al. Safety and efficacy of a novel suturing device for closure of large defects after endoscopic submucosal dissection (with video)[J]. Gastrointest Endosc, 2023, 98(3): 381-391. DOI: 10.1016/j.gie.2023.04.006.
[57] PHEE S J, LOW S C, HUYNH V A, et al. Master and slave transluminal endoscopic robot (MASTER) for natural orifice transluminal endoscopic surgery (NOTES)[J]. Annu Int Conf IEEE Eng Med Biol Soc, 2009, 2009: 1192-1195. DOI: 10.1109/IEMBS.2009.5333413.
[58] CAO J, YIP H C, CHEN Y, et al. Intelligent surgical workflow recognition for endoscopic submucosal dissection with real-time animal study[J]. Nat Commun, 2023, 14: 6676. DOI: 10.1038/s41467-023-42451-8.
[59] MARKUS A F, KORS J A, RIJNBEEK P R. The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies[J]. J Biomed Inform, 2021, 113: 103655. DOI: 10.1016/j.jbi.2020.103655.
[60] HATHERLEY J, SPARROW R, HOWARD M. The virtues of interpretable medical AI[J]. Camb Q Healthc Ethics, 2024, 33(3): 323-332. DOI: 10.1017/s0963180122000664.
[61] RUDIN C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead[J]. Nat Mach Intell, 2019, 1(5): 206-215. DOI: 10.1038/s42256-019-0048-x.
[62] DURÁN J M, JONGSMA K R. Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI[J]. J Med Ethics, 2021: medethics-2020-106820. DOI: 10.1136/medethics-2020-106820.
[63] SCHOUTEN J S, KALDEN M A C M, VAN TWIST E, et al. From bytes to bedside: a systematic review on the use and readiness of artificial intelligence in the neonatal and pediatric intensive care unit[J]. Intensive Care Med, 2024, 50(11): 1767-1777. DOI: 10.1007/s00134-024-07629-8.