WO2023283980A1 - An artificial intelligence medical image quality control method applied to clinical images - Google Patents

An artificial intelligence medical image quality control method applied to clinical images

Info

Publication number
WO2023283980A1
WO2023283980A1 (PCT/CN2021/107749)
Authority
WO
WIPO (PCT)
Prior art keywords
quality control
artificial intelligence
lung field
image quality
medical image
Prior art date
Application number
PCT/CN2021/107749
Other languages
English (en)
French (fr)
Inventor
连泽宇
胡安宁
Original Assignee
江苏宏创信息科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 江苏宏创信息科技有限公司
Publication of WO2023283980A1

Links

Images

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30008 Bone
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30061 Lung

Definitions

  • the invention relates to the technical field of medical image quality control, in particular to an artificial intelligence medical image quality control method applied to clinical images.
  • DR chest radiograph, i.e. Digital Radiography X-ray image
  • the DR chest radiograph has in recent years become the preferred clinical method for the initial screening of chest and lung diseases because of its very low radiation dose, inexpensive equipment, high density resolution and fast imaging. With growing patient numbers and clinical demand and the trend toward digital, intelligent transformation, intelligent quality control of DR chest radiographs is particularly important.
  • application CN109741317A discloses an intelligent medical image evaluation method that judges medical image quality automatically with the help of several convolutional neural network models and adopts a U-Net model; after U-Net segmentation, however, because the lung field region is usually dark, the network sometimes mistakes the surrounding dark areas for lung field, so white noise blocks appear outside the lung field region; similarly, black noise occasionally appears inside the actual lung field, as if there were "holes" in it;
  • its foreign body classification module determines the type of every foreign body on the images that meet the requirements and detects all foreign bodies, which is a relatively large workload.
  • the purpose of the present invention is to provide an artificial intelligence medical image quality control method applied to clinical images that reduces the time technicians spend evaluating films, avoids subjective deviation and greatly improves the efficiency of the chest radiography workflow.
  • the present invention provides the following technical solution:
  • an artificial intelligence medical image quality control method applied to clinical images: a technician acquires patient images and transmits them to an artificial intelligence medical image quality control management system.
  • the artificial intelligence medical image quality control management system performs semantic segmentation, classification and quality control scoring on the images; the score is displayed on the interface where the technician previews the image, and the technician decides, from the quality control score and the patient's condition, whether to add a note or re-acquire the image, so as to reduce the production of low-scoring images.
  • the method of the artificial intelligence medical image quality control management system for image processing and performing quality control scoring includes the following steps:
  • the binary cross-entropy method is used to perform binary classification on the patient images acquired by the technician, to detect whether an extracorporeal foreign body is present within the lung field; if so, the corresponding points are deducted; if not, no points are deducted;
  • step S3: from the connected domains of the three regions obtained in step S1, derive the body position offset, shoulder shrug, scapula/lung field overlap ratio, lung field completeness judgment, presence of left/right identification codes and chest radiograph contrast; score these data according to the preset quality control scoring rules, and combine them with step S2 to obtain the final quality control score.
  • the calculation of each datum in step S3 and the corresponding quality control scoring rules are as follows:
  • the body position offset is Δx = |x_p - x_im|, where Δx is the body position offset, x_im is the x coordinate of the image centre point and x_p is the x coordinate of the centre point of the patient's body position, computed as x_p = (x_1 + x_2) / 2, where x_1 is the maximum of the midpoint coordinates of the left clavicle connected domain and x_2 is the minimum of the midpoint coordinates of the right clavicle connected domain;
  • the shoulder shrug is calculated as follows: circumscribed rectangles are fitted to the left and right clavicles obtained in step S1, and the horizontal angle of each rectangle's diagonal is the left or right shrug amount; if the shrug amount on either side exceeds the threshold, a shrug is deemed present and one point is deducted;
  • the left and right scapula/lung field overlap ratios are O_1 = U_1 / S_1 and O_2 = U_2 / S_2, where S_1 is the area of the left scapula, S_2 the area of the right scapula, U_1 the overlap area of the left scapula and the lung field, and U_2 the overlap area of the right scapula and the lung field
  • lung field completeness is judged by drawing a square border 2 cm-2.5 cm from the image edge on the lung field connected domain mask image; if both lung field regions lie entirely within the border, the lung field region is considered complete, otherwise 1 point is deducted.
  • the presence of left/right identification codes is judged as follows: OCR is applied to the designated marking region, and if no identification code can be recognized on either side, one point is deducted.
  • chest radiograph contrast is calculated with the Michelson contrast formula; when the contrast exceeds the set maximum or falls below the set minimum, the image is judged overexposed or underexposed and 1 point is deducted.
  • the establishment method of the image semantic segmentation model described in step S1 is as follows:
  • a U-Net framework is trained on a graphics card for each of the three regions of the frontal chest radiograph; through upsampling, the abstract features in the image are converted into grey values of 0 and 1 at each pixel, where 1 denotes the lung field region and 0 the region outside the lung field, to obtain an image semantic segmentation model based on the U-Net framework.
  • the white noise blocks in the image are eliminated through the opening operation in the morphology, and the black holes in the target area are eliminated through the closing operation, so that the connected domain is closed.
  • the artificial intelligence medical image quality control management system includes a PACS subsystem, an AI subsystem and a quality control management subsystem, and the PACS subsystem is used to connect imaging equipment and a database.
  • the present invention performs front-end quality control on images through the artificial intelligence medical image quality control management system, i.e. the operation interface at the image acquisition end performs quality control in real time according to a standardized process, and the acquiring doctor decides from the quality control score and details whether re-acquisition is required, reducing the production of low-scoring images, greatly cutting the time technicians spend evaluating films and avoiding subjective bias; a binary classifier divides images into those without and those with an extracorporeal foreign body in the lung field, which greatly reduces the difficulty of the classification task; the invention scores chest radiographs by computing the body position offset, shoulder shrug, scapula/lung field overlap ratio, lung field completeness, presence of left/right identification codes and chest radiograph contrast, and this quality control scoring is fast, efficient and objective, greatly improving the efficiency of the chest radiography workflow.
  • Fig. 1 is the operational flow chart of the system of the present invention
  • Fig. 2 is a schematic of an original chest radiograph used for labelme annotation
  • Fig. 3 is the lung field region mask image generated after labelme annotation
  • Fig. 4 is the scapula region mask image generated after labelme annotation
  • Fig. 5 is the clavicle region mask image generated after labelme annotation
  • Fig. 6 is a U-Net test chest radiograph
  • Fig. 7 is the manually annotated lung field region mask image
  • Fig. 8 is the manually annotated scapula region mask image
  • Fig. 9 is the manually annotated clavicle region mask image
  • Fig. 10 is the lung field region mask image predicted by U-Net
  • Fig. 11 is the scapula region mask image predicted by U-Net
  • Fig. 12 is the clavicle region mask image predicted by U-Net
  • Fig. 13 is a chest radiograph without an extracorporeal foreign body in the lung field
  • Fig. 14 is a chest radiograph with an extracorporeal foreign body in the lung field
  • Fig. 15 is the lung field region mask image after morphological operations
  • Fig. 16 is the scapula region mask image after morphological operations
  • Fig. 17 is the clavicle region mask image after morphological operations.
  • an artificial intelligence medical image quality control method applied to clinical images comprises: the technician acquires patient images and transmits them to the artificial intelligence medical image quality control management system, which performs semantic segmentation, classification and quality control scoring on the images.
  • the score is displayed on the interface where the technician previews the image.
  • the technician decides, from the quality control score and the patient's condition, whether to add a note or re-acquire the image, to reduce the production of low-scoring images.
  • this front-end quality control approach greatly cuts the time technicians spend reviewing films, avoids subjective bias, applies precise and effective quality control to the whole workflow from image data acquisition to diagnosis and, combined with the management of junior technicians and physicians, establishes a standardized, intelligent image quality control system.
  • the artificial intelligence medical imaging quality control management system comprises the PACS subsystem, the AI subsystem and the quality control management subsystem.
  • the PACS subsystem connects the imaging equipment and the database; in the department's daily quality control work, when past images need quality control evaluation, they are retrieved from the PACS system for quality assessment.
  • the method for image processing and quality control scoring by the artificial intelligence medical image quality control management system includes the following steps:
  • the binary cross-entropy method is used to perform binary classification on the patient images acquired by the technician, to detect whether an extracorporeal foreign body is present within the lung field; if so, the corresponding points are deducted; if not, no points are deducted;
  • step S3: from the connected domains of the three regions obtained in step S1, derive the body position offset, shoulder shrug, scapula/lung field overlap ratio, lung field completeness judgment, presence of left/right identification codes and chest radiograph contrast; score these data according to the preset quality control scoring rules, and combine them with step S2 to obtain the final quality control score.
  • regarding step S2: foreign bodies inside the body, such as cardiac pacemakers, cardiac stents and CVC venous catheters, never incur deductions, since such objects cannot be removed and have nothing to do with the technician's operation; extracorporeal foreign bodies outside the lung field, such as earrings, usually pose no obstacle to reading or reviewing the film, so they are likewise not deducted; extracorporeal foreign bodies within the lung field, such as underwear metal fasteners or necklaces, occlude the core region and seriously affect reading, so points are deducted. The image classification task is therefore divided into two classes of chest radiograph, with and without an extracorporeal foreign body in the lung field (as shown in Figs. 13-14). Because the patient is usually asked to remove all metal objects, including underwear and jewellery, chest radiographs with an extracorporeal foreign body in the lung field are rare; after careful screening, 500 images meeting each of the class-1 and class-2 conditions were finally selected as the training/test set.
  • the VGG16 network consists of 16 parameter layers, of which 13 are convolutional layers and 3 are fully connected (dense) layers. Like U-Net, each convolutional layer uses a 3x3 convolution kernel and a ReLU activation function. VGG16 also contains 5 pooling layers, likewise using max pooling, i.e. the Max-Pooling method.
  • the traditional VGG16 is designed for multi-class tasks and therefore uses the softmax loss function; for the binary task here, binary cross-entropy is used instead.
  • the detailed training parameters are as follows: input image size: 1024x1024, batch size: 4, training iterations: 00, learning rate: 1e-6. The final test accuracy is 87.58%.
  • the establishment method of the image semantic segmentation model in step S1 of this embodiment is as follows:
  • a U-Net framework is trained on a graphics card for each of the three regions of the frontal chest radiograph; through upsampling, the abstract features in the image are converted into grey values of 0 and 1 at each pixel, where 1 denotes the lung field region and 0 the region outside the lung field, to obtain an image semantic segmentation model based on the U-Net framework.
  • U-Net comprises two parts: feature extraction (left side) and upsampling (right side).
  • the feature extraction path is also called the contraction path, because each time this path passes through a pooling layer the length and width of the image are halved and the number of feature channels is doubled. For example, for an input image of 572x572 pixels, after two convolutional layers we obtain a 64-channel feature map; because the convolutions use no edge padding, each feature map shrinks to 568x568. After the first pooling layer the number of channels expands from 64 to 128 and the feature map size is halved to 284x284. The whole contraction path is very similar to the VGG architecture; both use feed-forward stacks of convolutional layers with ReLU activation and 3x3 kernels, plus Max-Pooling pooling layers.
  • each 3x3 convolution kernel records some kind of "feature": for example, when U-Net is trained on the lung field region, some kernels try to match the edge contour of the lung field, while others try to learn the grey-value gradient inside it, and so on. These "features" become more abstract as the network deepens; finally, upsampling converts these abstract features into grey values of 0 and 1 at each pixel, where 1 denotes the lung field region and 0 the region outside the lung field.
  • the upsampling path also known as the expansion path, realizes upsampling through the deconvolution layer, which can be understood as the inverse operation of the above shrinkage path. That is, each time an upsampling layer is passed, the number of feature map channels is halved, and the size of the feature map is doubled.
  • the loss function of U-Net is binary cross-entropy, which is binary cross entropy.
  • the training process is a convex optimization problem of finding the local minimum value of cross-entropy. When the cross-entropy tends to 0, the prediction result of the lung field area and the gray value of the corresponding training mask will be exactly the same at each pixel.
  • the dataset finally used in this project comprises 645 chest radiograph-mask pairs as the training set and 50 pairs as the test set.
  • the training parameters are as follows: input image size: 512x512, pooling layer moving step: 2, number of training samples in a single batch: 2, training times: 50.
  • the point-wise test accuracy is: lung field 96.73%, scapula 98.02%, clavicle 98.71%.
  • the test results are shown in Figure 10- Figure 12.
  • the body position offset is Δx = |x_p - x_im|, where Δx is the body position offset, x_im is the x coordinate of the image centre point and x_p is the x coordinate of the centre point of the patient's body position, computed as x_p = (x_1 + x_2) / 2, where x_1 is the maximum of the midpoint coordinates of the left clavicle connected domain and x_2 is the minimum of the midpoint coordinates of the right clavicle connected domain;
  • the shoulder shrug is calculated as follows: circumscribed rectangles are fitted to the left and right clavicles in the image obtained in step S1, and the horizontal angle of each rectangle's diagonal is the left or right shrug amount; when the shrug amount on either side exceeds the threshold, a shrug is deemed present and one point is deducted;
  • the left and right scapula/lung field overlap ratios are O_1 = U_1 / S_1 and O_2 = U_2 / S_2, where S_1 is the area of the left scapula, S_2 the area of the right scapula, U_1 the overlap area of the left scapula and the lung field, and U_2 the overlap area of the right scapula and the lung field
  • the method of judging the integrity of the lung field area is to draw a square border 2cm-2.5cm away from the edge of the image in the mask image of the connected domain of the lung field. If the bilateral lung field areas are completely within the border, the lung field area is considered complete , otherwise deduct 1 point.
  • the method of judging whether there are left and right identification codes is as follows: use OCR technology to perform text recognition on the area of the specified mark, and if the identification code cannot be recognized on both sides, one point will be deducted.
  • Chest film contrast is calculated using the Michelson contrast calculation formula. When the contrast exceeds the set maximum value or is less than the set minimum value, it is judged as overexposure or underexposure, and 1 point is deducted.
  • the Michelson contrast calculation formula is (Imax-Imin)/(Imax+Imin), wherein, the maximum gray value in the image is Imax, and the minimum gray value is Imin.
  • the white noise blocks in the image are eliminated through the morphological opening operation, and the black holes in the target area are eliminated through the closing operation, so that the connected domain is closed.
  • the processing steps are as follows: For the lung field, scapula, and clavicle area, respectively use 14x14, 9x9, and 3x3 unit matrices as structural elements to perform opening and closing operations in sequence.
  • the post-processing results are shown in Figure 15-17.
  • the present invention performs front-end quality control on the image through the artificial intelligence medical image quality control management system, that is, the operation interface at the image acquisition end performs quality control in real time according to a standardized process, and the image acquisition doctor judges whether re-acquisition is required based on the quality control score and details.
  • the present invention performs quality control scoring on chest radiographs by computing the body position offset, shoulder shrug, scapula/lung field overlap ratio, lung field completeness judgment, presence of left/right identification codes and chest radiograph contrast, which is fast, efficient and objective and greatly improves the efficiency of the chest radiography workflow.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

An artificial intelligence medical image quality control method applied to clinical images, comprising: a technician acquires patient images and transmits them to an artificial intelligence medical image quality control management system, which performs semantic segmentation, classification and quality control scoring on the images; the score is displayed on the interface where the technician previews the image, and the technician decides, from the quality control score and the patient's condition, whether to add a note or re-acquire the image, thereby reducing the production of low-scoring images. The method overcomes the limitations of traditional quality control, namely low efficiency, heavy workload, small quality control sample sizes and strong subjectivity; it applies precise and effective quality control to the whole workflow from image data acquisition to diagnosis, combines it with the management of junior technicians and physicians, and establishes a standardized, intelligent image quality control system.

Description

An Artificial Intelligence Medical Image Quality Control Method Applied to Clinical Images

Technical Field

The present invention relates to the technical field of medical image quality control, and in particular to an artificial intelligence medical image quality control method applied to clinical images.

Background Art

The DR chest radiograph, i.e. the Digital Radiography X-ray image, has in recent years become the preferred clinical method for the initial screening of chest and lung diseases because of its very low radiation dose, inexpensive equipment, high density resolution and fast imaging. With growing patient numbers and clinical demand and the trend toward digital, intelligent transformation, intelligent quality control of DR chest radiographs is particularly important.

Looking at discussions and research across the country on the future of radiology quality control, most focus on unifying work standards, harmonizing specifications and making manual scoring objective, and formulate solutions accordingly. Existing DR quality control standards are set separately by the quality control centre of the Chinese Society of Radiology and by local radiological quality control centres, so the degree of standardization is low and the standards referenced are inconsistent; the quality of images and reports is judged almost entirely by hand, which lacks objectivity, consumes time and labour, and suffers from subjective differences in judgment.

Today, when artificial intelligence (AI) empowers every industry, public attention still rests on using various new technologies to evaluate the radiology department's existing medical images intelligently, i.e. quality management at the image application level, while quality control over the whole medical imaging workflow is neglected. In fact, it is very necessary to apply precise and effective quality control to the entire process from image data acquisition to diagnosis.

Application CN109741317A discloses an intelligent medical image evaluation method that judges medical image quality automatically with the help of several convolutional neural network models and adopts a U-Net model. After U-Net segmentation, however, because the lung field region is usually dark, the network sometimes mistakes the surrounding dark areas for lung field during prediction, so white noise blocks appear outside the lung field region; similarly, black noise occasionally appears inside the actual lung field, as if there were "holes" in it. As for foreign bodies, its classification module determines the type of every foreign body on images that meet the requirements and detects all of them, a large workload; yet foreign bodies inside the body, such as cardiac pacemakers, cardiac stents and CVC venous catheters, never incur deductions, since such objects cannot be removed and should not serve as a foreign-body criterion of chest radiograph quality. That patent is therefore insufficiently precise in foreign-body discrimination and prone to misjudgment.

Summary of the Invention

The object of the present invention is to provide an artificial intelligence medical image quality control method applied to clinical images that reduces the time technicians spend evaluating films, avoids subjective deviation and greatly improves the efficiency of the chest radiography workflow.

The present invention provides the following technical solution:

An artificial intelligence medical image quality control method applied to clinical images: a technician acquires patient images and transmits them to an artificial intelligence medical image quality control management system; the system performs semantic segmentation, classification and quality control scoring on the images; the score is displayed on the interface where the technician previews the image; and the technician decides, from the quality control score and the patient's condition, whether to add a note or re-acquire the image, so as to reduce the production of low-scoring images.
Preferably, the method by which the artificial intelligence medical image quality control management system processes images and performs quality control scoring comprises the following steps:

S1. Segment the patient images acquired by the technician with an image semantic segmentation model built on the U-Net framework, to obtain the final connected domains of the left and right lung fields, the scapulae and the clavicles;

S2. Use a binary classifier based on the VGG16 architecture, with binary cross-entropy, to classify the patient images acquired by the technician and detect whether an extracorporeal foreign body is present within the lung field; if so, deduct the corresponding points; if not, deduct nothing;

S3. From the connected domains of the three regions obtained in step S1, derive the body position offset, shoulder shrug, scapula/lung field overlap ratio, lung field completeness judgment, presence of left/right identification codes and chest radiograph contrast; score these data according to the preset quality control scoring rules; and combine them with step S2 to obtain the final quality control score.
Preferably, the calculation of each datum in step S3 and the corresponding quality control scoring rules are as follows:

The body position offset is calculated as:

Δx = |x_p - x_im|

where Δx is the body position offset, x_im is the x coordinate of the image centre point, and x_p is the x coordinate of the centre point of the patient's body position, computed as

x_p = (x_1 + x_2) / 2

where x_1 is the maximum of the midpoint coordinates of the left clavicle connected domain and x_2 is the minimum of the midpoint coordinates of the right clavicle connected domain;

When Δx > 50, a body position offset is deemed present and 1 point is deducted.
Preferably, the shoulder shrug is calculated as follows: circumscribed rectangles are fitted to the left and right clavicles in the image obtained in step S1, and the horizontal angle of each rectangle's diagonal is the left or right shrug amount; when the shrug amount on either side exceeds the threshold, a shrug is deemed present and one point is deducted;
The scapula/lung field overlap ratio is calculated as:

left scapula/lung field overlap ratio: O_1 = U_1 / S_1

right scapula/lung field overlap ratio: O_2 = U_2 / S_2

where S_1 is the area of the left scapula, S_2 the area of the right scapula, U_1 the overlap area of the left scapula and the lung field, and U_2 the overlap area of the right scapula and the lung field;

When either O_1 or O_2 is greater than 2/3, the scapula is deemed to overlap the lung field severely and three points are deducted; when either is greater than 1/3 and both are less than 2/3, the overlap is deemed slight and one point is deducted; otherwise no points are deducted.
Preferably, lung field completeness is judged by drawing a square border 2 cm-2.5 cm from the image edge on the lung field connected domain mask image; if both lung field regions lie entirely within the border, the lung field region is considered complete, otherwise 1 point is deducted.
Preferably, the presence of left/right identification codes is judged as follows: OCR is applied to the designated marking region, and if no identification code can be recognized on either side, one point is deducted.
Preferably, chest radiograph contrast is calculated with the Michelson contrast formula; when the contrast exceeds the set maximum or falls below the set minimum, the image is judged overexposed or underexposed and 1 point is deducted.
Preferably, the image semantic segmentation model in step S1 is built as follows:

From the patient chest radiograph database, a number of images are selected as training and test sets; with the labelme software, polygonal point annotation of the left and right lung fields, scapulae and clavicles on the selected frontal chest radiographs is completed; the annotations are saved and then batch-processed with a python script to generate binary mask images of each region, in which the target region is white with grey value 1 and the background is black with grey value 0;

A U-Net framework is trained on a graphics card for each of the three regions of the frontal chest radiograph; through upsampling, the abstract features in the image are converted into grey values of 0 and 1 at each pixel, where 1 denotes the lung field region and 0 the region outside it, to obtain an image semantic segmentation model based on the U-Net framework.
Preferably, for the connected domain images obtained in step S1, the white noise blocks in the image are eliminated with the morphological opening operation and the black holes inside the target region are eliminated with the closing operation, so that the connected domains are closed.
Preferably, the artificial intelligence medical image quality control management system comprises a PACS subsystem, an AI subsystem and a quality control management subsystem; the PACS subsystem connects the imaging equipment and the database.
The beneficial effects of the present invention are as follows. The invention performs front-end quality control on images through the artificial intelligence medical image quality control management system, i.e. the operation interface at the image acquisition end performs quality control in real time according to a standardized process, and the acquiring doctor decides from the quality control score and details whether re-acquisition is needed, reducing the production of low-scoring images, greatly cutting the time technicians spend evaluating films and avoiding subjective bias. A binary classifier divides images into those without and those with an extracorporeal foreign body in the lung field, which greatly reduces the difficulty of the classification task. The invention scores chest radiographs by computing the body position offset, shoulder shrug, scapula/lung field overlap ratio, lung field completeness judgment, presence of left/right identification codes and chest radiograph contrast; the scoring is fast, efficient and objective and greatly improves the efficiency of the chest radiography workflow.
Brief Description of the Drawings

The drawings provide a further understanding of the invention and form part of the description; together with the embodiments they serve to explain the invention and do not limit it. In the drawings:

Fig. 1 is the operational flow chart of the system of the present invention;

Fig. 2 is a schematic of an original chest radiograph used for labelme annotation;

Fig. 3 is the lung field region mask image generated after labelme annotation;

Fig. 4 is the scapula region mask image generated after labelme annotation;

Fig. 5 is the clavicle region mask image generated after labelme annotation;

Fig. 6 is a U-Net test chest radiograph;

Fig. 7 is the manually annotated lung field region mask image;

Fig. 8 is the manually annotated scapula region mask image;

Fig. 9 is the manually annotated clavicle region mask image;

Fig. 10 is the lung field region mask image predicted by U-Net;

Fig. 11 is the scapula region mask image predicted by U-Net;

Fig. 12 is the clavicle region mask image predicted by U-Net;

Fig. 13 is a chest radiograph without an extracorporeal foreign body in the lung field;

Fig. 14 is a chest radiograph with an extracorporeal foreign body in the lung field;

Fig. 15 is the lung field region mask image after morphological operations;

Fig. 16 is the scapula region mask image after morphological operations;

Fig. 17 is the clavicle region mask image after morphological operations.
Detailed Description of the Embodiments

Embodiment 1

As shown in Fig. 1, an artificial intelligence medical image quality control method applied to clinical images comprises: a technician acquires patient images and transmits them to an artificial intelligence medical image quality control management system; the system performs semantic segmentation, classification and quality control scoring on the images; the score is displayed on the interface where the technician previews the image; and the technician decides, from the quality control score and the patient's condition, whether to add a note or re-acquire the image, so as to reduce the production of low-scoring images. This front-end quality control approach greatly cuts the time technicians spend evaluating films, avoids subjective bias, applies precise and effective quality control to the whole workflow from image data acquisition to diagnosis and, combined with the management of junior technicians and physicians, establishes a standardized, intelligent image quality control system.

The artificial intelligence medical image quality control management system comprises a PACS subsystem, an AI subsystem and a quality control management subsystem; the PACS subsystem connects the imaging equipment and the database. In the department's daily quality control work, when past images need quality control evaluation, they are retrieved from the PACS system for quality assessment.

The method by which the artificial intelligence medical image quality control management system processes images and performs quality control scoring comprises the following steps:

S1. Segment the patient images acquired by the technician with an image semantic segmentation model built on the U-Net framework, to obtain the final connected domains of the left and right lung fields, the scapulae and the clavicles;

S2. Use a binary classifier based on the VGG16 architecture, with binary cross-entropy, to classify the patient images acquired by the technician and detect whether an extracorporeal foreign body is present within the lung field; if so, deduct the corresponding points; if not, deduct nothing;

S3. From the connected domains of the three regions obtained in step S1, derive the body position offset, shoulder shrug, scapula/lung field overlap ratio, lung field completeness judgment, presence of left/right identification codes and chest radiograph contrast; score these data according to the preset quality control scoring rules; and combine them with step S2 to obtain the final quality control score.

Regarding step S2: foreign bodies inside the body, such as cardiac pacemakers, cardiac stents and CVC venous catheters, never incur deductions, since such objects cannot be removed and have nothing to do with the technician's operation; extracorporeal foreign bodies outside the lung field, such as earrings, usually pose no obstacle to reading or reviewing the film, so they are likewise not deducted; extracorporeal foreign bodies within the lung field, such as underwear metal fasteners or necklaces, occlude the core region and seriously affect reading, so points are deducted. The image classification task is therefore divided into two classes of chest radiograph, with and without an extracorporeal foreign body in the lung field (as shown in Figs. 13-14). The training data were likewise taken from the ChestX-ray8 open dataset; because the technician usually asks the patient to remove all metal objects, including underwear and jewellery, before the chest radiograph is taken, radiographs with an extracorporeal foreign body in the lung field are rare. After careful screening, 500 images meeting each of the class-1 and class-2 conditions were finally selected as the training/test set.
The VGG16 network, as its name suggests, consists of 16 parameter layers, of which 13 are convolutional layers and 3 are fully connected (dense) layers. Like U-Net, each convolutional layer uses a 3x3 convolution kernel and a ReLU activation function. VGG16 also contains 5 pooling layers, likewise using max pooling, i.e. the Max-Pooling method. The traditional VGG16 is designed for multi-class tasks and therefore uses the softmax loss function; for the binary task we use binary cross-entropy. The detailed training parameters are as follows: input image size: 1024x1024, batch size: 4, training iterations: 00, learning rate: 1e-6. The final test accuracy is 87.58%.
Embodiment 2

Compared with Embodiment 1, the image semantic segmentation model in step S1 of this embodiment is built as follows:

From the patient chest radiograph database, a number of images are selected as training and test sets; with the labelme software, polygonal point annotation of the left and right lung fields, scapulae and clavicles on the selected frontal chest radiographs is completed; the annotations are saved (in json format) and then batch-processed with a python script to generate binary mask images of each region, in which the target region is white with grey value 1 and the background is black with grey value 0. The original chest radiograph used for labelme annotation is shown in Fig. 2; the lung field, scapula and clavicle mask images generated after labelme annotation are shown in Figs. 3-5 respectively.

In this embodiment, all classification framework training for this project was based on the ChestX-ray8 open dataset, which contains 108,948 frontal chest radiographs of 32,717 patients. At the start of the project we selected nearly 1,000 images as training and test sets and annotated them manually with high precision.

A U-Net framework is trained on a graphics card for each of the three regions of the frontal chest radiograph; through upsampling, the abstract features in the image are converted into grey values of 0 and 1 at each pixel, where 1 denotes the lung field region and 0 the region outside it, to obtain an image semantic segmentation model based on the U-Net framework.

Specifically, U-Net comprises two parts: feature extraction (left side) and upsampling (right side).

The feature extraction path is also called the contraction path, because each time this path passes through a pooling layer the length and width of the image are halved and the number of feature channels is doubled. For example, for an input image of 572x572 pixels, after two convolutional layers we obtain a 64-channel feature map; because the convolutions use no edge padding, each feature map shrinks to 568x568. After the first pooling layer the number of channels expands from 64 to 128 and the feature map size is halved to 284x284. The whole contraction path is very similar to the VGG architecture; both use feed-forward stacks of convolutional layers with ReLU activation and 3x3 kernels, plus Max-Pooling pooling layers.

The purpose of the contraction path is essentially feature extraction: each 3x3 convolution kernel records some kind of "feature". For example, when U-Net is trained on the lung field region, some kernels try to match the edge contour of the lung field, while others try to learn the grey-value gradient inside it, and so on. These "features" become more abstract as the network deepens; finally, upsampling converts them into grey values of 0 and 1 at each pixel, where 1 denotes the lung field region and 0 the region outside it.

The upsampling path, also called the expansion path, achieves upsampling through deconvolution layers and can be understood as the inverse of the contraction path: each time an upsampling layer is passed, the number of feature map channels is halved and the feature map size is doubled.

The loss function of U-Net is binary cross-entropy. The training process is the optimization problem of finding a local minimum of the cross-entropy; when the cross-entropy tends to 0, the predicted lung field and the grey values of the corresponding training mask become identical at every pixel.
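The per-pixel binary cross-entropy loss described above can be sketched in a few lines of NumPy. This is an illustration only, not the patent's training code; the function name and the eps clipping are choices of this sketch:

```python
import numpy as np

def binary_cross_entropy(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Mean per-pixel binary cross-entropy between a predicted probability
    map and a 0/1 ground-truth mask. Predictions are clipped away from
    0 and 1 so the logarithms stay finite."""
    p = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(p) + (1.0 - target) * np.log(1.0 - p)))
```

As the description says, driving this value toward 0 forces the predicted mask to agree with the training mask at every pixel.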
A large body of research shows that U-Net achieves excellent accuracy even on very small training sets. The dataset finally used in this project comprises 645 chest radiograph-mask pairs as the training set and 50 pairs as the test set. Three independent U-Nets were trained on 2 Tesla V100 graphics cards. The training parameters were: input image size: 512x512, pooling stride: 2, batch size: 2, training iterations: 50. The point-wise test accuracy is: lung field 96.73%, scapula 98.02%, clavicle 98.71%. The test results are shown in Figs. 10-12.

Figs. 7-12 show that the U-Net predictions are extremely similar to the manually annotated regions. U-Net even reveals local details that manual annotation cannot capture precisely, for example the medial contour of the left clavicle on the right side of Fig. 12.

The methods of the other modules of this embodiment are the same as in Embodiment 1.
Embodiment 3

Compared with Embodiment 2, the calculation of each datum in step S3 of this embodiment and the corresponding quality control scoring rules are as follows:

The body position offset is calculated as:

Δx = |x_p - x_im|

where Δx is the body position offset, x_im is the x coordinate of the image centre point, and x_p is the x coordinate of the centre point of the patient's body position, computed as

x_p = (x_1 + x_2) / 2

where x_1 is the maximum of the midpoint coordinates of the left clavicle connected domain and x_2 is the minimum of the midpoint coordinates of the right clavicle connected domain;

When Δx > 50, a body position offset is deemed present and 1 point is deducted.
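A minimal sketch of the body position offset computation. It assumes the clavicle connected domains are given as binary masks and that image columns serve as x coordinates (both assumptions of this sketch); the 50-pixel threshold is the one stated in the text:

```python
import numpy as np

def body_position_offset(left_clav: np.ndarray, right_clav: np.ndarray) -> float:
    """Δx = |x_p - x_im| from the two clavicle connected-domain masks.

    x1 is the largest x (column) covered by the left clavicle region, x2 the
    smallest x covered by the right clavicle region, and x_p = (x1 + x2) / 2
    is taken as the patient's midline; x_im is the image-centre column."""
    x_im = left_clav.shape[1] / 2.0
    x1 = np.flatnonzero(left_clav.any(axis=0)).max()   # max x of left clavicle
    x2 = np.flatnonzero(right_clav.any(axis=0)).min()  # min x of right clavicle
    x_p = (x1 + x2) / 2.0
    return abs(x_p - x_im)

def offset_deduction(delta_x: float) -> int:
    # Deduct 1 point when Δx exceeds the 50-pixel threshold in the text.
    return 1 if delta_x > 50 else 0
```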
The shoulder shrug is calculated as follows: circumscribed rectangles are fitted to the left and right clavicles in the image obtained in step S1, and the horizontal angle of each rectangle's diagonal is the left or right shrug amount; when the shrug amount on either side exceeds the threshold, a shrug is deemed present and one point is deducted;
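The shrug computation can be sketched as below. The text does not state the angle threshold, so it is left as a parameter, and the axis-aligned circumscribed rectangle is an interpretation of this sketch (the text does not say whether the rectangle may be rotated):

```python
import numpy as np

def shrug_angle_deg(clavicle_mask: np.ndarray) -> float:
    """Horizontal inclination (degrees) of the diagonal of the axis-aligned
    circumscribed rectangle of a clavicle connected domain, used as the
    shrug amount for that side."""
    rows = np.flatnonzero(clavicle_mask.any(axis=1))
    cols = np.flatnonzero(clavicle_mask.any(axis=0))
    height = rows.max() - rows.min()
    width = cols.max() - cols.min()
    return float(np.degrees(np.arctan2(height, width)))

def shrug_deduction(left_deg: float, right_deg: float, threshold_deg: float) -> int:
    # One point deducted when either side exceeds the (unstated) threshold.
    return 1 if max(left_deg, right_deg) > threshold_deg else 0
```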
The scapula/lung-field overlap ratios are computed as:

left scapula/lung-field overlap ratio O_1 = U_1/S_1;

right scapula/lung-field overlap ratio O_2 = U_2/S_2;

where S_1 is the area of the left scapula, S_2 is the area of the right scapula, U_1 is the overlap area between the left scapula and the lung field, and U_2 is the overlap area between the right scapula and the lung field.

If either O_1 or O_2 exceeds 2/3, the scapulae are deemed to overlap the lung fields severely and 3 points are deducted; if either exceeds 1/3 while both remain below 2/3, the overlap is deemed slight and 1 point is deducted; otherwise no points are deducted.
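On binary masks, the ratios and the tiered deduction can be written directly; this is a sketch, and representing masks as nested 0/1 lists is our choice, not the patent's:

```python
def overlap_ratio(scapula_mask, lung_mask):
    """O = |scapula AND lung| / |scapula| for one side, on 0/1 masks."""
    area = sum(sum(row) for row in scapula_mask)
    inter = sum(a & b
                for ra, rb in zip(scapula_mask, lung_mask)
                for a, b in zip(ra, rb))
    return inter / area if area else 0.0

def overlap_deduction(o1, o2):
    """3 points if either ratio exceeds 2/3, 1 point if either
    exceeds 1/3 (both below 2/3), otherwise 0."""
    worst = max(o1, o2)
    if worst > 2 / 3:
        return 3
    if worst > 1 / 3:
        return 1
    return 0
```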
Lung-field completeness is judged by drawing a square border 2 cm-2.5 cm from the image edges on the lung-field connected-component mask image; if both lung fields lie entirely inside the border, the lung fields are deemed complete, otherwise 1 point is deducted.
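Once the 2 cm-2.5 cm margin is converted to pixels (the conversion factor depends on detector resolution and is an assumption here), the completeness test reduces to checking that no lung pixel falls inside the border band:

```python
def lung_fields_complete(lung_mask, margin_px):
    """True when every 1-pixel of the lung mask lies strictly inside the
    square frame drawn margin_px from the image edges."""
    h, w = len(lung_mask), len(lung_mask[0])
    for y, row in enumerate(lung_mask):
        for x, v in enumerate(row):
            if v and not (margin_px <= x < w - margin_px
                          and margin_px <= y < h - margin_px):
                return False
    return True
```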
The presence of left/right identification markers is determined by applying OCR text recognition to the designated marker regions; if no marker can be recognized on either side, 1 point is deducted.
Radiograph contrast is computed with the Michelson contrast formula; when the contrast exceeds the set maximum or falls below the set minimum, the image is judged over-exposed or under-exposed and 1 point is deducted.

The Michelson contrast is (Imax − Imin)/(Imax + Imin), where Imax and Imin are the maximum and minimum grey values in the image.
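The Michelson formula and the exposure check translate directly; the threshold values are configuration, not given in the patent:

```python
def michelson_contrast(image):
    """(Imax - Imin) / (Imax + Imin) over a 2-D list of grey values."""
    i_max = max(max(row) for row in image)
    i_min = min(min(row) for row in image)
    if i_max + i_min == 0:
        return 0.0
    return (i_max - i_min) / (i_max + i_min)

def exposure_deduction(contrast, c_min, c_max):
    """1 point when the contrast leaves the configured [c_min, c_max] band."""
    return 1 if (contrast < c_min or contrast > c_max) else 0
```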
The methods of the other modules of this embodiment are the same as in Embodiment 2.
Embodiment 4
Compared with Embodiment 3, in this embodiment the connected-component images obtained in step S1 are post-processed with morphological operations: opening removes white noise blobs from the image and closing removes black holes inside the target regions, so that the connected components become closed.
The processing steps are as follows: for the lung-field, scapula and clavicle regions, opening followed by closing is applied with square structuring elements of size 14x14, 9x9 and 3x3 respectively. The post-processing results are shown in Figs. 15-17.
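Opening and closing with a square structuring element can be sketched with plain nested loops; libraries such as OpenCV provide the same operations far more efficiently, and the edge handling below simply clamps the neighbourhood at the image border:

```python
def dilate(mask, k):
    """Binary dilation with a k x k square structuring element."""
    h, w = len(mask), len(mask[0])
    r = k // 2
    return [[1 if any(mask[yy][xx]
                      for yy in range(max(0, y - r), min(h, y + r + 1))
                      for xx in range(max(0, x - r), min(w, x + r + 1)))
             else 0
             for x in range(w)] for y in range(h)]

def erode(mask, k):
    """Binary erosion with a k x k square structuring element."""
    h, w = len(mask), len(mask[0])
    r = k // 2
    return [[1 if all(mask[yy][xx]
                      for yy in range(max(0, y - r), min(h, y + r + 1))
                      for xx in range(max(0, x - r), min(w, x + r + 1)))
             else 0
             for x in range(w)] for y in range(h)]

def opening(mask, k):
    """Erosion then dilation: removes small white specks."""
    return dilate(erode(mask, k), k)

def closing(mask, k):
    """Dilation then erosion: fills small black holes."""
    return erode(dilate(mask, k), k)
```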
The methods of the other modules of this embodiment are the same as in Embodiment 3.
In the present invention, the artificial-intelligence medical-image quality-control management system performs front-end quality control: quality control is carried out immediately, through a standardized workflow, in the operator interface at the image-acquisition end. The acquiring physician decides from the quality-control score and its details whether the image must be re-acquired, which reduces the production of low-scoring images, greatly cuts the time technicians spend grading films, and avoids subjective bias. A binary classifier separates images with and without foreign objects inside the lung fields, greatly simplifying the classification task. The invention scores each radiograph by computing the positional offset, the shoulder-shrug amount, the scapula/lung-field overlap ratios, lung-field completeness, the presence of left/right identification markers, and the radiograph contrast; this is fast, efficient and objective, and greatly improves the efficiency of the chest-radiography workflow.
The above are merely preferred embodiments of the present invention and are not intended to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described therein or replace some of their technical features with equivalents. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (10)

  1. An artificial-intelligence medical-image quality-control method for clinical imaging, characterized in that a technician acquires an image of a patient and transmits it to an artificial-intelligence medical-image quality-control management system; the system performs semantic segmentation, classification and quality-control scoring on the image and displays the score in the interface where the technician previews the image, and the technician decides from the quality-control score and the patient's condition whether to add a remark or re-acquire the image, so as to reduce the production of low-scoring images.
  2. The artificial-intelligence medical-image quality-control method for clinical imaging according to claim 1, characterized in that the system processes and scores an image in the following steps:
    S1. segmenting the patient image acquired by the technician with an image semantic segmentation model built on the U-Net framework, to obtain the final connected components of the left and right lung fields, the scapulae and the clavicles;
    S2. classifying the patient image acquired by the technician with a binary classifier based on the VGG16 architecture and trained with binary cross-entropy, to detect whether a foreign object is present inside the lung fields; if so, the corresponding points are deducted, otherwise no points are deducted;
    S3. deriving from the three connected-component regions obtained in step S1 the positional offset, the shoulder-shrug amount, the scapula/lung-field overlap ratios, the lung-field completeness judgement, the presence of left/right identification markers and the radiograph contrast, scoring these data against preset quality-control criteria, and combining the result with step S2 to obtain the final quality-control score.
  3. The artificial-intelligence medical-image quality-control method for clinical imaging according to claim 2, characterized in that the quantities in step S3 are computed, and scored against the quality-control criteria, as follows:
    the positional offset is computed as
    Δx = |x_p − x_im|
    where Δx is the positional offset, x_im is the x-coordinate of the image centre, and x_p is the x-coordinate of the patient's body midline, computed as
    x_p = (x_1 + x_2) / 2
    where x_1 is the largest midpoint x-coordinate of the left clavicle connected component and x_2 is the smallest midpoint x-coordinate of the right clavicle connected component;
    when Δx > 50, a positional offset is deemed present and 1 point is deducted.
  4. The artificial-intelligence medical-image quality-control method for clinical imaging according to claim 2, characterized in that the shoulder-shrug amount is computed as follows: a bounding rectangle is fitted to each of the left and right clavicles obtained in step S1, and the angle between the rectangle's diagonal and the horizontal is the shrug amount for that side; when the shrug amount on either side exceeds the set threshold, shrugging is deemed present and 1 point is deducted;
    the scapula/lung-field overlap ratios are computed as:
    left scapula/lung-field overlap ratio O_1 = U_1/S_1;
    right scapula/lung-field overlap ratio O_2 = U_2/S_2;
    where S_1 is the area of the left scapula, S_2 is the area of the right scapula, U_1 is the overlap area between the left scapula and the lung field, and U_2 is the overlap area between the right scapula and the lung field;
    if either O_1 or O_2 exceeds 2/3, the scapulae are deemed to overlap the lung fields severely and 3 points are deducted; if either exceeds 1/3 while both remain below 2/3, the overlap is deemed slight and 1 point is deducted; otherwise no points are deducted.
  5. The artificial-intelligence medical-image quality-control method for clinical imaging according to claim 2, characterized in that lung-field completeness is judged by drawing a square border 2 cm-2.5 cm from the image edges on the lung-field connected-component mask image; if both lung fields lie entirely inside the border, the lung fields are deemed complete, otherwise 1 point is deducted.
  6. The artificial-intelligence medical-image quality-control method for clinical imaging according to claim 2, characterized in that the presence of left/right identification markers is determined by applying OCR text recognition to the designated marker regions; if no marker can be recognized on either side, 1 point is deducted.
  7. The artificial-intelligence medical-image quality-control method for clinical imaging according to claim 2, characterized in that the radiograph contrast is computed with the Michelson contrast formula; when the contrast exceeds the set maximum or falls below the set minimum, the image is judged over-exposed or under-exposed and 1 point is deducted.
  8. The artificial-intelligence medical-image quality-control method for clinical imaging according to claim 2, characterized in that the image semantic segmentation model of step S1 is built as follows:
    from a database of patient chest radiographs, a number of images are selected as training and test sets; using the labelme software, polygon annotations of the left and right lung fields, the scapulae and the clavicles are drawn on the selected frontal chest radiographs and the annotation results are saved, and a Python script then batch-generates a binary mask image for each region, in which the target region is white (grey value 1) and the background is black (grey value 0);
    a U-Net is trained on GPU for each of the three regions of the frontal chest radiograph, and upsampling converts the abstract image features into a grey value of 0 or 1 at every pixel, where 1 denotes the lung-field region and 0 the region outside it, yielding the U-Net-based image semantic segmentation model.
  9. The artificial-intelligence medical-image quality-control method for clinical imaging according to claim 8, characterized in that the connected-component images obtained in step S1 are post-processed: morphological opening removes white noise blobs from the image and closing removes black holes inside the target regions, so that the connected components become closed.
  10. The artificial-intelligence medical-image quality-control method for clinical imaging according to any one of claims 1-9, characterized in that the artificial-intelligence medical-image quality-control management system comprises a PACS subsystem, an AI subsystem and a quality-control management subsystem, the PACS subsystem connecting the imaging equipment with the database.
PCT/CN2021/107749 2021-07-14 2021-07-22 一种应用于临床影像的人工智能医学影像质控方法 WO2023283980A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110799266.7A CN113555089A (zh) 2021-07-14 2021-07-14 一种应用于临床影像的人工智能医学影像质控方法
CN202110799266.7 2021-07-14

Publications (1)

Publication Number Publication Date
WO2023283980A1 (zh)

Family

ID=78103184

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/107749 WO2023283980A1 (zh) 2021-07-14 2021-07-22 一种应用于临床影像的人工智能医学影像质控方法

Country Status (2)

Country Link
CN (1) CN113555089A (zh)
WO (1) WO2023283980A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116309503A (zh) * 2023-03-20 2023-06-23 瑞泰影像科技(深圳)有限公司 基于语义分割的医学影像图像质量评价系统
CN116596919A (zh) * 2023-07-11 2023-08-15 浙江华诺康科技有限公司 内镜图像质控方法、装置、系统、计算机设备和存储介质
CN117389529A (zh) * 2023-12-12 2024-01-12 神州医疗科技股份有限公司 基于pacs系统的ai接口调用方法及系统
CN117455925A (zh) * 2023-12-26 2024-01-26 杭州健培科技有限公司 一种胸部多器官和肋骨分割方法及装置

Families Citing this family (4)

Publication number Priority date Publication date Assignee Title
CN114596251A (zh) * 2021-11-23 2022-06-07 杭州深睿博联科技有限公司 一种膝关节x光影像质控方法及装置
CN115713526A (zh) * 2022-11-28 2023-02-24 南方医科大学珠江医院 一种基于人工智能的影像质量控制系统
CN117437207A (zh) * 2023-11-09 2024-01-23 重庆师范大学 一种多专家融合胸部x线影像辅助诊断系统及方法
CN117653163A (zh) * 2023-12-05 2024-03-08 上海长征医院 一种肝脏图像采集处理的方法、系统、计算机及终端

Citations (6)

Publication number Priority date Publication date Assignee Title
CN108511085A (zh) * 2018-04-10 2018-09-07 川北医学院附属医院 一种远程医学影像会诊中心质量控制方法及系统
CN109658400A (zh) * 2018-12-14 2019-04-19 首都医科大学附属北京天坛医院 一种基于头颅ct影像的评分方法及系统
US20190122360A1 (en) * 2017-10-24 2019-04-25 General Electric Company Deep convolutional neural network with self-transfer learning
CN109741316A (zh) * 2018-12-29 2019-05-10 成都金盘电子科大多媒体技术有限公司 医学影像智能评片系统
CN111583285A (zh) * 2020-05-12 2020-08-25 武汉科技大学 一种基于边缘关注策略的肝脏影像语义分割方法
CN111986182A (zh) * 2020-08-25 2020-11-24 卫宁健康科技集团股份有限公司 辅助诊断方法、系统、电子设备及存储介质

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
CN109191457B (zh) * 2018-09-21 2022-07-01 中国人民解放军总医院 一种病理图像质量有效性识别方法
CN113052795B (zh) * 2018-12-28 2023-12-08 上海联影智能医疗科技有限公司 一种x射线胸片图像质量确定方法及装置
CN110555825A (zh) * 2019-07-23 2019-12-10 北京赛迈特锐医疗科技有限公司 胸部x线影像智能诊断系统及其诊断方法
CN110930391A (zh) * 2019-11-26 2020-03-27 北京华医共享医疗科技有限公司 一种基于VggNet网络模型实现医学影像辅助诊断的方法、装置、设备及存储介质
CN111080579B (zh) * 2019-11-28 2023-05-26 杭州电子科技大学 基于深度学习实现图像分割和分类的骨龄评估方法
CN111127504B (zh) * 2019-12-28 2024-02-09 中国科学院深圳先进技术研究院 心房间隔闭塞患者心脏医学影像分割方法及系统
CN111476777B (zh) * 2020-04-07 2023-08-22 上海联影智能医疗科技有限公司 胸片图像处理方法、系统、可读存储介质和设备
CN111916186A (zh) * 2020-08-17 2020-11-10 北京赛迈特锐医疗科技有限公司 序贯型ai诊断模型对胸部x线智能诊断系统及方法
CN112308853A (zh) * 2020-10-20 2021-02-02 平安科技(深圳)有限公司 电子设备、医学图像指标生成方法、装置及存储介质

Cited By (7)

Publication number Priority date Publication date Assignee Title
CN116309503A (zh) * 2023-03-20 2023-06-23 瑞泰影像科技(深圳)有限公司 基于语义分割的医学影像图像质量评价系统
CN116309503B (zh) * 2023-03-20 2024-06-18 瑞泰影像科技(深圳)有限公司 基于语义分割的医学影像图像质量评价系统
CN116596919A (zh) * 2023-07-11 2023-08-15 浙江华诺康科技有限公司 内镜图像质控方法、装置、系统、计算机设备和存储介质
CN116596919B (zh) * 2023-07-11 2023-11-07 浙江华诺康科技有限公司 内镜图像质控方法、装置、系统、计算机设备和存储介质
CN117389529A (zh) * 2023-12-12 2024-01-12 神州医疗科技股份有限公司 基于pacs系统的ai接口调用方法及系统
CN117455925A (zh) * 2023-12-26 2024-01-26 杭州健培科技有限公司 一种胸部多器官和肋骨分割方法及装置
CN117455925B (zh) * 2023-12-26 2024-05-17 杭州健培科技有限公司 一种胸部多器官和肋骨分割方法及装置

Also Published As

Publication number Publication date
CN113555089A (zh) 2021-10-26


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21949760

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE