WO2023005634A1 - Method and device for diagnosing benign and malignant pulmonary nodules based on CT images - Google Patents

Method and device for diagnosing benign and malignant pulmonary nodules based on CT images

Info

Publication number
WO2023005634A1
WO2023005634A1 (PCT/CN2022/104347)
Authority
WO
WIPO (PCT)
Prior art keywords
feature
self
representation
pulmonary
module
Prior art date
Application number
PCT/CN2022/104347
Other languages
English (en)
French (fr)
Inventor
张番栋
俞益洲
李一鸣
乔昕
Original Assignee
杭州深睿博联科技有限公司
北京深睿博联科技有限责任公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州深睿博联科技有限公司, 北京深睿博联科技有限责任公司
Publication of WO2023005634A1

Links

Images

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30061 Lung
    • G06T2207/30064 Lung nodule

Definitions

  • The present application claims priority to the application filed by the applicant on July 26, 2021, with application number CN2021108433928, entitled "Method and device for diagnosing benign and malignant lesions based on CT images".
  • The entire content of the above application is incorporated herein by reference.
  • The present application relates to the technical field of medical imaging, and in particular to a method and device for diagnosing benign and malignant lesions based on CT images.
  • Lung cancer is one of the most common cancers in the world. Pulmonary nodules are the main lesions that lead to lung cancer, and computed tomography (CT) is the most common means of screening pulmonary nodules for benignity or malignancy. A computer-aided diagnosis (CAD) system for benign and malignant pulmonary nodules in CT is therefore very important for reducing the reading workload of doctors and improving the accuracy of malignant pulmonary nodule screening.
  • In clinical diagnosis, besides a nodule's own characteristics (such as texture and shape), the contextual characteristics around a pulmonary nodule (such as pleural traction and vessel deformation) can also provide important clues for benign/malignant diagnosis.
  • However, existing CAD systems generally judge benignity or malignancy only from the nodule's own characteristics and fail to make full use of the contextual characteristics around the nodule; as a result, existing methods for diagnosing benign and malignant pulmonary nodules suffer from problems such as low diagnostic accuracy.
  • The present invention aims to provide a CT-image-based method and device for diagnosing benign and malignant lesions that overcomes, or at least partially solves, the above problems, so as to improve the accuracy of benign/malignant pulmonary nodule detection.
  • In a first aspect, the present invention provides a method for diagnosing benign and malignant pulmonary nodules based on CT images, comprising the following steps:
  • detecting the positions and sizes of all pulmonary nodules in an input CT image based on a pulmonary nodule detection network, and segmenting an image region containing each pulmonary nodule;
  • extracting a surrounding feature map of the pulmonary nodule from the segmented nodule image based on a feature extraction network, and applying region-of-interest pooling to it to obtain the nodule's own-sign feature representation;
  • feeding the surrounding feature map and the own-sign feature representation of the pulmonary nodule into an attention-based contextual-sign extraction module to obtain the nodule's contextual-sign feature representation;
  • fusing the own-sign feature representation with the contextual-sign feature representation, and feeding the fused features into a logistic regression layer to obtain the probability that the nodule is benign or malignant.
  • Further, the method also includes a preprocessing step on the input CT image:
  • resampling the input CT image to a voxel size of 1 mm × 1 mm × 1 mm using the nearest-neighbour method;
  • adjusting the window width and window level to the lung window, with a window center of -600 HU and a window width of 1600 HU;
  • generating the mask required for lung segmentation: binarizing the resampled image with an HU threshold of -320, computing the connected regions of the binarized image and keeping the largest connected region as the lung mask, and multiplying the lung mask with the CT image pixel by pixel to obtain the CT image of the segmented lungs, with the remaining regions uniformly filled with grey value 170.
  • Further, the feature extraction network is a convolutional neural network, namely a 3D U-Net or a 3D ResNet.
  • Further, the attention-based contextual-sign extraction module includes a self-attention module and a mutual-attention module. The self-attention module takes the surrounding feature map of the pulmonary nodule as input, generates, through a surrounding-feature encoding module, a self-attention vector whose size equals the number of feature-map channels, and outputs the fused features of the surrounding feature map to the mutual-attention encoding module of the mutual-attention module. The mutual-attention module takes the nodule's own-sign feature representation as input, generates, through the mutual-attention encoding module, a mutual-attention vector whose size equals the number of feature-map channels, and outputs the contextual-sign feature representation of the pulmonary nodule.
  • Further, the method of fusing the own-sign feature representation and the contextual-sign feature representation includes:
  • taking the own-sign feature representation and the contextual-sign feature representation as inputs, generating a set of feature fusion coefficients with two different fully connected layers and a Softmax layer;
  • multiplying the feature fusion coefficients with the own-sign feature representation and the contextual-sign feature representation respectively, and then concatenating the multiplied representations.
  • In a second aspect, the present invention provides a device for diagnosing benign and malignant pulmonary nodules based on CT images, including:
  • a pulmonary nodule segmentation module, used to detect the positions and sizes of all pulmonary nodules in the input CT image based on a pulmonary nodule detection network, and to segment an image region containing each pulmonary nodule;
  • an own-sign extraction module, used to extract the surrounding feature map of the pulmonary nodule from the segmented nodule image based on a feature extraction network, and to apply region-of-interest pooling to it to obtain the nodule's own-sign feature representation;
  • a contextual-sign extraction module, used to feed the surrounding feature map and the own-sign feature representation of the pulmonary nodule into an attention-based contextual-sign extraction module to obtain the nodule's contextual-sign feature representation;
  • a feature fusion and diagnosis module, used to fuse the own-sign feature representation and the contextual-sign feature representation, and to feed the fused features into a logistic regression layer to obtain the probability that the nodule is benign or malignant.
  • Further, the device also includes a preprocessing module for performing the following operations on the input CT image:
  • resampling the input CT image to a voxel size of 1 mm × 1 mm × 1 mm using the nearest-neighbour method;
  • adjusting the window width and window level to the lung window, with a window center of -600 HU and a window width of 1600 HU;
  • generating the mask required for lung segmentation: binarizing the resampled image with an HU threshold of -320, computing the connected regions of the binarized image and keeping the largest connected region as the lung mask, and multiplying the lung mask with the CT image pixel by pixel to obtain the CT image of the segmented lungs, with the remaining regions uniformly filled with grey value 170.
  • Further, the feature extraction network is a convolutional neural network, namely a 3D U-Net or a 3D ResNet.
  • Further, the attention-based contextual-sign extraction module includes a self-attention module and a mutual-attention module. The self-attention module takes the surrounding feature map of the pulmonary nodule as input, generates, through a surrounding-feature encoding module, a self-attention vector whose size equals the number of feature-map channels, and outputs the fused features of the surrounding feature map to the mutual-attention encoding module of the mutual-attention module. The mutual-attention module takes the nodule's own-sign feature representation as input, generates, through the mutual-attention encoding module, a mutual-attention vector whose size equals the number of feature-map channels, and outputs the contextual-sign feature representation of the pulmonary nodule.
  • Further, the method of fusing the own-sign feature representation and the contextual-sign feature representation includes:
  • taking the own-sign feature representation and the contextual-sign feature representation as inputs, generating a set of feature fusion coefficients with two different fully connected layers and a Softmax layer;
  • multiplying the feature fusion coefficients with the own-sign feature representation and the contextual-sign feature representation respectively, and then concatenating the multiplied representations.
  • Compared with the prior art, the present invention has the following beneficial effects:
  • By constructing an attention-based contextual-sign extraction module, the present invention can better capture the contextual signs of pulmonary nodules; by further fusing the nodule's own-sign feature representation with its contextual-sign feature representation, the fused features contain both the nodule's own sign information and the contextual sign information.
  • Compared with existing benign/malignant pulmonary nodule diagnosis systems, which judge mainly from the nodule's own signs, this effectively improves the accuracy of benign/malignant pulmonary nodule detection.
  • Fig. 1 is a flow chart of a method for diagnosing benign and malignant pulmonary nodules based on CT images according to an embodiment of the present invention;
  • Fig. 2 is a block diagram of a device for diagnosing benign and malignant pulmonary nodules based on CT images according to an embodiment of the present invention.
  • Although the terms first, second, third, etc. may be used in this application to describe various information, such information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present application, first information may also be called second information and, similarly, second information may also be called first information. Depending on the context, the word "if" as used herein may be interpreted as "upon", "when", or "in response to determining".
  • Fig. 1 is a flow chart of a method for diagnosing benign and malignant pulmonary nodules based on CT images according to an embodiment of the present invention, comprising the following steps:
  • Step 101: based on a pulmonary nodule detection network, detect the positions and sizes of all pulmonary nodules in the input CT image, and segment the image region containing each pulmonary nodule;
  • Step 102: based on a feature extraction network, extract the surrounding feature map of the pulmonary nodule from the segmented nodule image, and apply region-of-interest pooling to it to obtain the nodule's own-sign feature representation;
  • Step 103: feed the surrounding feature map and the own-sign feature representation of the pulmonary nodule into an attention-based contextual-sign extraction module to obtain the nodule's contextual-sign feature representation;
  • Step 104: fuse the own-sign feature representation with the contextual-sign feature representation, and feed the fused features into a logistic regression layer to obtain the probability that the nodule is benign or malignant.
  • In this embodiment, step 101 is mainly used to detect and segment the pulmonary nodule image regions.
  • A trained pulmonary nodule detection network is used to detect the positions and sizes of all pulmonary nodules in the input CT image; then, according to the detected positions and sizes, image blocks of all pulmonary nodules are cropped from the input CT image.
  • In order to retain the contextual signs present around the nodules, the crop size is set to a fixed size larger than the actual nodule size, such as 96 mm × 96 mm × 96 mm (see the illustrative sketch below).
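A rough illustration of the fixed-size cropping described above is given below. It assumes the CT volume has already been resampled to 1 mm isotropic spacing (so 96 mm corresponds to 96 voxels) and that the part of the cube falling outside the scan is filled with the same grey value 170 used in the preprocessing step; the function name and array layout are illustrative, not part of the patent.

```python
import numpy as np

def crop_nodule_block(ct_volume, center_zyx, size_vox=96, pad_value=170):
    """Cut a fixed-size cube around a detected nodule center.

    ct_volume  : 3D array (z, y, x), already resampled to 1 mm spacing.
    center_zyx : nodule center in voxel coordinates.
    size_vox   : cube edge length; 96 voxels is about 96 mm at 1 mm spacing.
    pad_value  : fill value for the part of the cube outside the scan.
    """
    half = size_vox // 2
    block = np.full((size_vox,) * 3, pad_value, dtype=ct_volume.dtype)
    src, dst = [], []
    for c, dim in zip(center_zyx, ct_volume.shape):
        lo, hi = int(c) - half, int(c) + half
        src.append(slice(max(lo, 0), min(hi, dim)))                    # part inside the scan
        dst.append(slice(max(lo, 0) - lo, size_vox - (hi - min(hi, dim))))
    block[tuple(dst)] = ct_volume[tuple(src)]
    return block
```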
  • In this embodiment, step 102 is mainly used to obtain the nodule's own-sign feature representation.
  • A feature extraction network is used to extract, from each segmented nodule image, a deep feature map containing the features around the pulmonary nodule (referred to as the surrounding feature map of the nodule), and a region-of-interest pooling (RoI pooling) operation, here max pooling, is applied to the surrounding feature map to obtain the nodule's own-sign feature representation (see the sketch below).
  • Region-of-interest pooling is an operation widely used in object detection with convolutional neural networks, for example when detecting multiple cars and pedestrians in a single image.
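A minimal sketch of the region-of-interest max pooling mentioned above follows; it assumes the nodule's bounding box is already expressed in feature-map coordinates, and the output size is an illustrative choice.

```python
import torch
import torch.nn.functional as F

def roi_max_pool_3d(feature_map, roi, out_size=2):
    """Max-pool the part of a 3D feature map covered by one nodule RoI.

    feature_map : tensor (C, D, H, W) produced by the feature extraction network.
    roi         : (z1, y1, x1, z2, y2, x2) box in feature-map coordinates.
    out_size    : output spatial size per axis (illustrative).
    Returns a flat vector used as the nodule's own-sign feature representation.
    """
    z1, y1, x1, z2, y2, x2 = [int(v) for v in roi]
    region = feature_map[:, z1:z2, y1:y2, x1:x2]
    pooled = F.adaptive_max_pool3d(region, out_size)   # max pooling, as described above
    return pooled.flatten()
```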
  • In this embodiment, step 103 is mainly used to obtain the contextual-sign feature representation of the pulmonary nodule.
  • An attention-based contextual-sign acquisition module is constructed to obtain this representation. Since the surrounding feature map of a nodule contains the correlation information between different pulmonary nodules, i.e., the contextual signs, applying the attention operation to the surrounding feature map obtained in the previous step together with the own-sign feature representation yields the contextual-sign feature representation of the nodule.
  • In this embodiment, step 104 is mainly used to fuse the own-sign feature representation with the contextual-sign feature representation and to obtain the benign/malignant probability of the nodule from the fused features.
  • Existing pulmonary nodule detection systems generally judge benignity or malignancy only from the extracted own-sign information of the nodule, without considering the correlation between different nodules, so their detection accuracy is limited.
  • This embodiment therefore obtains both the own-sign feature representation and the contextual-sign feature representation of the nodule and fuses the two (weighted and then concatenated together), so that the fused features contain both the nodule's own sign information and the contextual sign information.
  • The fused features are then fed into a logistic regression layer to obtain the benign/malignant probability, which effectively improves the accuracy of benign/malignant pulmonary nodule detection compared with the prior art (a minimal sketch of this layer follows below).
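The "logistic regression layer" referred to above can be read as a single linear layer followed by a sigmoid; the sketch below is one minimal way to write it, with the fused feature dimension left as an assumption.

```python
import torch
import torch.nn as nn

class MalignancyHead(nn.Module):
    """Logistic-regression layer: fused feature vector -> malignancy probability."""
    def __init__(self, fused_dim):
        super().__init__()
        self.fc = nn.Linear(fused_dim, 1)

    def forward(self, fused_feat):                              # fused_feat: (N, fused_dim)
        return torch.sigmoid(self.fc(fused_feat)).squeeze(-1)   # (N,) probabilities in [0, 1]
```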
  • As an optional embodiment, the method also includes a preprocessing step on the input CT image:
  • resampling the input CT image to a voxel size of 1 mm × 1 mm × 1 mm using the nearest-neighbour method;
  • adjusting the window width and window level to the lung window, with a window center of -600 HU and a window width of 1600 HU;
  • generating the mask required for lung segmentation, and filling the regions outside the lungs uniformly with grey value 170.
  • This embodiment gives a technical solution for image preprocessing. In order to detect benign and malignant pulmonary nodules effectively from CT images, the input CT image needs to be preprocessed first. The preprocessing mainly includes three parts: first, the input CT image is resampled with the nearest-neighbour method to the set voxel size (e.g., 1 mm × 1 mm × 1 mm) to improve the resolution; second, the window width and window level are adjusted to the lung window, with a window center of -600 HU and a window width of 1600 HU after adjustment, so that the lung CT image can be segmented accurately; third, the generated lung mask is used to segment the lung CT image.
  • The lung mask can be obtained by binarizing the resampled image with a set HU threshold (e.g., -320), computing the connected regions of the binarized image, and keeping the largest connected region.
  • With the lung mask, the lung CT image can be segmented by multiplying the mask with the CT image pixel by pixel (see the sketch below).
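A minimal preprocessing sketch following the three parts above is shown below (nearest-neighbour resampling to 1 mm, the -320 HU threshold with largest-connected-region masking, the -600/1600 HU lung window, and grey value 170 outside the lungs). It uses SimpleITK and SciPy; real lung masking usually also removes air connected to the image border, which is omitted here, and the exact ordering of the windowing and masking steps is an assumption.

```python
import numpy as np
import SimpleITK as sitk
from scipy import ndimage

def preprocess_ct(image, new_spacing=(1.0, 1.0, 1.0),
                  window_center=-600.0, window_width=1600.0,
                  lung_threshold=-320.0, fill_value=170):
    # 1. Nearest-neighbour resampling to 1 mm x 1 mm x 1 mm.
    spacing, size = np.array(image.GetSpacing()), np.array(image.GetSize())
    new_size = [int(s) for s in np.round(size * spacing / np.array(new_spacing))]
    resampled = sitk.Resample(image, new_size, sitk.Transform(),
                              sitk.sitkNearestNeighbor, image.GetOrigin(),
                              new_spacing, image.GetDirection(), 0,
                              image.GetPixelID())
    hu = sitk.GetArrayFromImage(resampled).astype(np.float32)   # (z, y, x) in HU

    # 2. Lung mask: binarize at -320 HU, keep the largest connected region.
    binary = hu < lung_threshold
    labels, n = ndimage.label(binary)
    if n > 0:
        sizes = ndimage.sum(binary, labels, range(1, n + 1))
        mask = labels == (int(np.argmax(sizes)) + 1)
    else:
        mask = binary

    # 3. Lung window: center -600 HU, width 1600 HU, mapped to 0-255 grey values.
    lo, hi = window_center - window_width / 2, window_center + window_width / 2
    grey = np.clip((hu - lo) / (hi - lo), 0.0, 1.0) * 255.0

    # 4. Keep the lungs; fill the remaining regions with a constant grey value.
    return np.where(mask, grey, fill_value).astype(np.uint8)
```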
  • As an optional embodiment, the feature extraction network is a convolutional neural network, namely a 3D U-Net or a 3D ResNet.
  • This embodiment gives the network structure of the feature extraction network.
  • In principle, the feature extraction network can be any neural network model, but since the CT images involved in this embodiment are three-dimensional, a 3D U-Net or 3D ResNet is used as the feature extraction network for more effective feature extraction. It should be noted that this embodiment only gives one or two preferred implementations of the feature extraction network and does not negate or exclude other feasible implementations.
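Since the text leaves the backbone open (any 3D U-Net or 3D ResNet), the sketch below simply reuses torchvision's video 3D ResNet-18 as a stand-in feature extractor, truncated before its pooling and classification layers so that it returns a spatial feature map; the choice of this particular backbone and the single-channel handling are assumptions, not part of the patent.

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18   # torchvision >= 0.13 for the weights argument

class NoduleBackbone(nn.Module):
    """Illustrative 3D-ResNet feature extractor returning a spatial feature map."""
    def __init__(self):
        super().__init__()
        base = r3d_18(weights=None)
        # Drop avgpool and fc so the output stays a feature map, not a vector.
        self.features = nn.Sequential(base.stem, base.layer1, base.layer2,
                                      base.layer3, base.layer4)

    def forward(self, x):                 # x: (N, 1, D, H, W) CT block
        x = x.repeat(1, 3, 1, 1, 1)       # the video model expects 3 input channels
        return self.features(x)           # coarse (N, 512, d, h, w) surrounding feature map
```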
  • As an optional embodiment, the attention-based contextual-sign extraction module includes a self-attention module and a mutual-attention module;
  • the self-attention module takes the surrounding feature map of the pulmonary nodule as input, generates, through a surrounding-feature encoding module, a self-attention vector whose size equals the number of feature-map channels, and outputs the fused features of the surrounding feature map to the mutual-attention encoding module of the mutual-attention module;
  • the mutual-attention module takes the nodule's own-sign feature representation as input, generates, through the mutual-attention encoding module, a mutual-attention vector whose size equals the number of feature-map channels, and outputs the contextual-sign feature representation of the pulmonary nodule.
  • The contextual-sign extraction module of this embodiment mainly consists of a self-attention module and a mutual-attention module.
  • The mutual-attention module is an ordinary attention module.
  • The main purpose of the self-attention module is to optimize the model's learning of contextual signs related to benignity and malignancy.
  • The self-attention module takes the surrounding feature map of the nodule as input and, through a surrounding-feature encoding module, generates a self-attention vector whose size equals the number of feature-map channels, thereby outputting the fused features of the surrounding feature map.
  • The surrounding-feature encoding module can be any encoding module with a strong ability to represent global features.
  • The purpose of the mutual-attention module is to use the nodule's own signs as guidance to promote the model's learning of contextual features.
  • The mutual-attention module takes the nodule's own-sign feature representation as input, and a mutual-attention encoding module generates a mutual-attention vector whose size equals the number of feature-map channels.
  • The fused features of the surrounding feature map output by the self-attention module contain the contextual sign information; using them as an input of the mutual-attention encoding module when generating the mutual-attention vector allows the mutual-attention module to output the contextual-sign feature representation of the pulmonary nodule (see the sketch below).
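The sketch below is one way to realize the two modules with the interfaces described above: a surrounding-feature encoder that yields a channel-sized self-attention vector and a fused surrounding feature, and a mutual-attention encoder that combines the nodule's own-sign representation with that fused feature to produce a channel-sized mutual-attention vector and the contextual-sign representation. The concrete encoders (global pooling plus small fully connected layers) are assumptions; the patent only fixes the inputs, outputs, and vector sizes.

```python
import torch
import torch.nn as nn

class ContextSignExtractor(nn.Module):
    """Sketch of the attention-based contextual-sign extraction module."""
    def __init__(self, channels, own_dim):
        super().__init__()
        # Surrounding-feature encoding: feature map -> self-attention vector of size C.
        self.self_encoder = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(channels, channels), nn.Sigmoid())
        # Mutual-attention encoding: own-sign representation plus fused surrounding
        # feature -> mutual-attention vector of size C.
        self.mutual_encoder = nn.Sequential(
            nn.Linear(own_dim + channels, channels), nn.Sigmoid())

    def forward(self, surround_map, own_feat):
        # surround_map: (N, C, D, H, W); own_feat: (N, own_dim)
        self_att = self.self_encoder(surround_map)                   # (N, C)
        weighted = surround_map * self_att[:, :, None, None, None]   # channel re-weighting
        fused_surround = weighted.mean(dim=(2, 3, 4))                # fused surrounding feature (N, C)
        mutual_att = self.mutual_encoder(
            torch.cat([own_feat, fused_surround], dim=1))            # (N, C)
        return mutual_att * fused_surround                           # contextual-sign representation
```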
  • As an optional embodiment, the method for fusing the own-sign feature representation and the contextual-sign feature representation includes:
  • taking the own-sign feature representation and the contextual-sign feature representation as inputs, generating a set of feature fusion coefficients with two different fully connected layers and a Softmax layer;
  • multiplying the feature fusion coefficients with the own-sign feature representation and the contextual-sign feature representation respectively, and then concatenating the multiplied representations.
  • This embodiment gives a technical solution for fusing the own-sign feature representation with the contextual-sign feature representation.
  • There are many methods of feature fusion. In this embodiment, two different fully connected layers and a Softmax layer are first used to generate a set of feature fusion coefficients.
  • The fusion coefficients are the weighting coefficients of the contextual signs and the nodule's own signs, and their magnitudes are related to how rich the contextual sign information around different pulmonary nodules is.
  • The softmax function is generally used in multi-class classifiers to output the probabilities of multiple categories, the sum of all category probabilities being 1, which is why it is also called the normalized exponential function. Therefore, the feature fusion coefficients generated by the Softmax layer in this embodiment also sum to 1.
  • The fusion coefficients are then multiplied with the own-sign feature representation and the contextual-sign feature representation respectively, and the products are concatenated to obtain the fused features of the two representations (see the sketch below).
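A minimal sketch of this fusion step follows: two different fully connected layers score the two representations, a softmax over the two scores gives fusion coefficients that sum to 1, and the weighted representations are concatenated. Feature dimensions are assumptions; the concatenated output would then be fed to the logistic regression layer sketched earlier.

```python
import torch
import torch.nn as nn

class SignFusion(nn.Module):
    """Fuse the own-sign and contextual-sign feature representations."""
    def __init__(self, own_dim, ctx_dim):
        super().__init__()
        self.fc_own = nn.Linear(own_dim, 1)   # two different fully connected layers
        self.fc_ctx = nn.Linear(ctx_dim, 1)

    def forward(self, own_feat, ctx_feat):
        # Softmax over the two scores -> fusion coefficients summing to 1.
        coeff = torch.softmax(
            torch.cat([self.fc_own(own_feat), self.fc_ctx(ctx_feat)], dim=1), dim=1)
        w_own, w_ctx = coeff[:, :1], coeff[:, 1:]
        # Weight each representation by its coefficient, then concatenate.
        return torch.cat([w_own * own_feat, w_ctx * ctx_feat], dim=1)
```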
  • Fig. 2 is a schematic diagram of the composition of a device for diagnosing benign and malignant pulmonary nodules based on CT images according to an embodiment of the present invention, the device comprising:
  • a pulmonary nodule segmentation module 11, used to detect the positions and sizes of all pulmonary nodules in the input CT image based on a pulmonary nodule detection network, and to segment an image region containing each pulmonary nodule;
  • an own-sign extraction module 12, used to extract the surrounding feature map of the pulmonary nodule from the segmented nodule image based on a feature extraction network, and to apply region-of-interest pooling to it to obtain the nodule's own-sign feature representation;
  • a contextual-sign extraction module 13, used to feed the surrounding feature map and the own-sign feature representation of the pulmonary nodule into an attention-based contextual-sign extraction module to obtain the nodule's contextual-sign feature representation;
  • a feature fusion and diagnosis module 14, used to fuse the own-sign feature representation and the contextual-sign feature representation, and to feed the fused features into a logistic regression layer to obtain the probability that the nodule is benign or malignant.
  • The device of this embodiment can be used to execute the technical solution of the method embodiment shown in Fig. 1; its implementation principle and technical effects are similar and are not repeated here. The same applies to the following embodiments, which are not described further.
  • As an optional embodiment, the device further includes a preprocessing module, configured to perform the following operations on the input CT image:
  • resampling the input CT image to a voxel size of 1 mm × 1 mm × 1 mm using the nearest-neighbour method;
  • adjusting the window width and window level to the lung window, with a window center of -600 HU and a window width of 1600 HU;
  • generating the mask required for lung segmentation: binarizing the resampled image with an HU threshold of -320, computing the connected regions of the binarized image and keeping the largest connected region as the lung mask, and multiplying the lung mask with the CT image pixel by pixel to obtain the CT image of the segmented lungs, with the remaining regions uniformly filled with grey value 170.
  • As an optional embodiment, the feature extraction network is a convolutional neural network, namely a 3D U-Net or a 3D ResNet.
  • As an optional embodiment, the attention-based contextual-sign extraction module includes a self-attention module and a mutual-attention module;
  • the self-attention module takes the surrounding feature map of the pulmonary nodule as input, generates, through a surrounding-feature encoding module, a self-attention vector whose size equals the number of feature-map channels, and outputs the fused features of the surrounding feature map to the mutual-attention encoding module of the mutual-attention module;
  • the mutual-attention module takes the nodule's own-sign feature representation as input, generates, through the mutual-attention encoding module, a mutual-attention vector whose size equals the number of feature-map channels, and outputs the contextual-sign feature representation of the pulmonary nodule.
  • As an optional embodiment, the method for fusing the own-sign feature representation and the contextual-sign feature representation includes:
  • taking the own-sign feature representation and the contextual-sign feature representation as inputs, generating a set of feature fusion coefficients with two different fully connected layers and a Softmax layer;
  • multiplying the feature fusion coefficients with the own-sign feature representation and the contextual-sign feature representation respectively, and then concatenating the multiplied representations.
  • The CT-image-based method and device for diagnosing benign and malignant pulmonary nodules provided by the present invention make the fused features contain both the lesion's own sign information and the contextual sign information. Compared with existing benign/malignant diagnosis systems, which judge mainly from the lesion's own signs, this greatly improves the accuracy of benign/malignant pulmonary nodule detection.
  • The resulting products can be mass-produced and quickly applied to systems or scenarios with high requirements on the accuracy of benign/malignant pulmonary nodule detection.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Mathematical Physics (AREA)
  • Public Health (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Quality & Reliability (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

A method and device for diagnosing benign and malignant pulmonary nodules based on CT images. The method comprises: detecting the positions and sizes of all pulmonary nodules in an input CT image and segmenting an image region containing each pulmonary nodule (101); extracting a surrounding feature map of the pulmonary nodule from the segmented nodule image and applying region-of-interest pooling to it to obtain the nodule's own-sign feature representation (102); obtaining the nodule's contextual-sign feature representation by means of an attention-based contextual-sign extraction module (103); and fusing the own-sign feature representation with the contextual-sign feature representation and diagnosing the nodule as benign or malignant based on the fused features (104).

Description

Method and device for diagnosing benign and malignant pulmonary nodules based on CT images
The present invention claims priority to the application filed by the applicant on July 26, 2021, with application number CN2021108433928, entitled "Method and device for diagnosing benign and malignant lesions based on CT images". The entire content of the above application is incorporated herein by reference.
Technical Field
The present application relates to the technical field of medical imaging, and in particular to a method and device for diagnosing benign and malignant lesions based on CT images.
Background of the Invention
Lung cancer is one of the cancers with the highest incidence in the world. Pulmonary nodules are the main lesions leading to lung cancer, and computed tomography (CT) is the most common means of screening pulmonary nodules for benignity or malignancy. Designing a computer-aided diagnosis (CAD) system for benign and malignant pulmonary nodules in CT is therefore very important for reducing the reading workload of doctors and improving the accuracy of malignant pulmonary nodule screening.
In clinical diagnosis, besides a pulmonary nodule's own characteristics (such as texture and shape), the contextual characteristics around the nodule (such as pleural traction and vessel deformation) can also provide important clues for benign/malignant diagnosis. However, existing CAD systems generally judge benignity or malignancy only from the nodule's own characteristics and fail to make full use of the contextual characteristics around the nodule; as a result, existing methods for diagnosing benign and malignant pulmonary nodules suffer from problems such as low diagnostic accuracy.
Summary of the Invention
The present invention aims to provide a CT-image-based method and device for diagnosing benign and malignant lesions that overcomes, or at least partially solves, the above problems, so as to improve the accuracy of benign/malignant pulmonary nodule detection.
To achieve the above objective, the technical solution of the present invention is specifically implemented as follows:
In a first aspect, the present invention provides a method for diagnosing benign and malignant pulmonary nodules based on CT images, comprising the following steps:
detecting the positions and sizes of all pulmonary nodules in an input CT image based on a pulmonary nodule detection network, and segmenting an image region containing each pulmonary nodule;
extracting a surrounding feature map of the pulmonary nodule from the segmented nodule image based on a feature extraction network, and applying region-of-interest pooling to it to obtain the nodule's own-sign feature representation;
feeding the surrounding feature map and the own-sign feature representation of the pulmonary nodule into an attention-based contextual-sign extraction module to obtain the nodule's contextual-sign feature representation;
fusing the own-sign feature representation with the contextual-sign feature representation, and feeding the fused features into a logistic regression layer to obtain the probability that the nodule is benign or malignant.
Further, the method also includes a preprocessing step on the input CT image:
resampling the input CT image to a voxel size of 1 mm × 1 mm × 1 mm using the nearest-neighbour method;
adjusting the window width and window level to the lung window, with a window center of -600 HU and a window width of 1600 HU;
generating the mask required for lung segmentation: binarizing the resampled image with an HU threshold of -320; computing the connected regions of the binarized image and keeping the largest connected region as the lung mask; multiplying the lung mask with the CT image pixel by pixel to obtain the CT image of the segmented lungs, with the remaining regions uniformly filled with grey value 170.
Further, the feature extraction network is a convolutional neural network, namely a 3D U-Net or a 3D ResNet.
Further, the attention-based contextual-sign extraction module includes a self-attention module and a mutual-attention module; the self-attention module takes the surrounding feature map of the pulmonary nodule as input, generates, through a surrounding-feature encoding module, a self-attention vector whose size equals the number of feature-map channels, and outputs the fused features of the surrounding feature map to the mutual-attention encoding module of the mutual-attention module; the mutual-attention module takes the nodule's own-sign feature representation as input, generates, through the mutual-attention encoding module, a mutual-attention vector whose size equals the number of feature-map channels, and outputs the contextual-sign feature representation of the pulmonary nodule.
Further, the method of fusing the own-sign feature representation and the contextual-sign feature representation includes:
taking the own-sign feature representation and the contextual-sign feature representation as inputs, generating a set of feature fusion coefficients with two different fully connected layers and a Softmax layer;
multiplying the feature fusion coefficients with the own-sign feature representation and the contextual-sign feature representation respectively, and then concatenating the multiplied representations.
In a second aspect, the present invention provides a device for diagnosing benign and malignant pulmonary nodules based on CT images, including:
a pulmonary nodule segmentation module, configured to detect the positions and sizes of all pulmonary nodules in an input CT image based on a pulmonary nodule detection network, and to segment an image region containing each pulmonary nodule;
an own-sign extraction module, configured to extract a surrounding feature map of the pulmonary nodule from the segmented nodule image based on a feature extraction network, and to apply region-of-interest pooling to it to obtain the nodule's own-sign feature representation;
a contextual-sign extraction module, configured to feed the surrounding feature map and the own-sign feature representation of the pulmonary nodule into an attention-based contextual-sign extraction module to obtain the nodule's contextual-sign feature representation;
a feature fusion and diagnosis module, configured to fuse the own-sign feature representation with the contextual-sign feature representation, and to feed the fused features into a logistic regression layer to obtain the probability that the nodule is benign or malignant.
Further, the device also includes a preprocessing module for performing the following operations on the input CT image:
resampling the input CT image to a voxel size of 1 mm × 1 mm × 1 mm using the nearest-neighbour method;
adjusting the window width and window level to the lung window, with a window center of -600 HU and a window width of 1600 HU;
generating the mask required for lung segmentation: binarizing the resampled image with an HU threshold of -320; computing the connected regions of the binarized image and keeping the largest connected region as the lung mask; multiplying the lung mask with the CT image pixel by pixel to obtain the CT image of the segmented lungs, with the remaining regions uniformly filled with grey value 170.
Further, the feature extraction network is a convolutional neural network, namely a 3D U-Net or a 3D ResNet.
Further, the attention-based contextual-sign extraction module includes a self-attention module and a mutual-attention module; the self-attention module takes the surrounding feature map of the pulmonary nodule as input, generates, through a surrounding-feature encoding module, a self-attention vector whose size equals the number of feature-map channels, and outputs the fused features of the surrounding feature map to the mutual-attention encoding module of the mutual-attention module; the mutual-attention module takes the nodule's own-sign feature representation as input, generates, through the mutual-attention encoding module, a mutual-attention vector whose size equals the number of feature-map channels, and outputs the contextual-sign feature representation of the pulmonary nodule.
Further, the method of fusing the own-sign feature representation and the contextual-sign feature representation includes:
taking the own-sign feature representation and the contextual-sign feature representation as inputs, generating a set of feature fusion coefficients with two different fully connected layers and a Softmax layer;
multiplying the feature fusion coefficients with the own-sign feature representation and the contextual-sign feature representation respectively, and then concatenating the multiplied representations.
Compared with the prior art, the present invention has the following beneficial effects:
By constructing an attention-based contextual-sign extraction module, the present invention can better capture the contextual signs of pulmonary nodules; by further fusing the nodule's own-sign feature representation with its contextual-sign feature representation, the fused features contain both the nodule's own sign information and the contextual sign information. Compared with existing benign/malignant pulmonary nodule diagnosis systems, which judge mainly from the nodule's own signs, this effectively improves the accuracy of benign/malignant pulmonary nodule detection.
Further effects of the above non-conventional implementations will be described below in combination with the specific embodiments.
Brief Description of the Drawings
In order to explain the embodiments of the present application or the existing technical solutions more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments recorded in the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of a method for diagnosing benign and malignant pulmonary nodules based on CT images according to an embodiment of the present invention;
Fig. 2 is a block diagram of a device for diagnosing benign and malignant pulmonary nodules based on CT images according to an embodiment of the present invention.
Modes for Carrying Out the Invention
Exemplary embodiments are described in detail here, and examples thereof are shown in the drawings. When the following description refers to the drawings, unless otherwise indicated, the same numbers in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of devices and methods consistent with some aspects of the present application as detailed in the appended claims.
The terms used in the present application are for the purpose of describing particular embodiments only and are not intended to limit the present application. The singular forms "a", "said" and "the" used in the present application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any or all possible combinations of one or more associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the present application to describe various information, such information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present application, first information may also be called second information and, similarly, second information may also be called first information. Depending on the context, the word "if" as used herein may be interpreted as "upon", "when" or "in response to determining".
Fig. 1 is a flow chart of a method for diagnosing benign and malignant pulmonary nodules based on CT images according to an embodiment of the present invention, comprising the following steps:
Step 101: based on a pulmonary nodule detection network, detect the positions and sizes of all pulmonary nodules in the input CT image, and segment the image region containing each pulmonary nodule;
Step 102: based on a feature extraction network, extract the surrounding feature map of the pulmonary nodule from the segmented nodule image, and apply region-of-interest pooling to it to obtain the nodule's own-sign feature representation;
Step 103: feed the surrounding feature map and the own-sign feature representation of the pulmonary nodule into an attention-based contextual-sign extraction module to obtain the nodule's contextual-sign feature representation;
Step 104: fuse the own-sign feature representation with the contextual-sign feature representation, and feed the fused features into a logistic regression layer to obtain the probability that the nodule is benign or malignant.
In this embodiment, step 101 is mainly used to detect and segment the pulmonary nodule image regions. This embodiment uses a trained pulmonary nodule detection network to detect the positions and sizes of all pulmonary nodules in the input CT image, and then crops the image blocks of all nodules from the input CT image according to the detected positions and sizes. In order to retain the contextual signs present around the nodules, the crop size is set to a fixed size larger than the actual nodule size, such as 96 mm × 96 mm × 96 mm.
In this embodiment, step 102 is mainly used to obtain the nodule's own-sign feature representation. This embodiment uses a feature extraction network to extract, from each segmented nodule image, a deep feature map containing the features around the pulmonary nodule (referred to as the surrounding feature map of the nodule), and applies a region-of-interest pooling (RoI pooling) operation (max pooling) to the surrounding feature map to obtain the nodule's own-sign feature representation. Region-of-interest pooling is an operation widely used in object detection with convolutional neural networks, for example when detecting multiple cars and pedestrians in a single image.
In this embodiment, step 103 is mainly used to obtain the contextual-sign feature representation of the pulmonary nodule. This embodiment constructs an attention-based contextual-sign acquisition module to obtain that representation. Since the surrounding feature map of a nodule contains the correlation information between different pulmonary nodules, i.e., the contextual signs, applying the attention operation to the surrounding feature map obtained in the previous step together with the own-sign feature representation yields the contextual-sign feature representation of the nodule.
In this embodiment, step 104 is mainly used to fuse the own-sign feature representation with the contextual-sign feature representation and to obtain the benign/malignant probability of the nodule based on the fused features. Existing pulmonary nodule detection systems generally judge benignity or malignancy only from the extracted own-sign information of the nodule, without considering the correlation between different nodules, so their detection accuracy is limited. This embodiment therefore obtains both the own-sign feature representation and the contextual-sign feature representation of the nodule and fuses the two (weighted and then concatenated together), so that the fused features contain both the nodule's own sign information and the contextual sign information; the fused features are then fed into a logistic regression layer to obtain the benign/malignant probability, which effectively improves the accuracy of benign/malignant pulmonary nodule detection compared with the prior art.
As an optional embodiment, the method also includes a preprocessing step on the input CT image:
resampling the input CT image to a voxel size of 1 mm × 1 mm × 1 mm using the nearest-neighbour method;
adjusting the window width and window level to the lung window, with a window center of -600 HU and a window width of 1600 HU;
generating the mask required for lung segmentation: binarizing the resampled image with an HU threshold of -320; computing the connected regions of the binarized image and keeping the largest connected region as the lung mask; multiplying the lung mask with the CT image pixel by pixel to obtain the CT image of the segmented lungs, with the remaining regions uniformly filled with grey value 170.
This embodiment gives a technical solution for image preprocessing. In order to effectively detect benign and malignant pulmonary nodules from CT images, the input CT image needs to be preprocessed first. The preprocessing mainly includes three parts: first, the input CT image is resampled with the nearest-neighbour method to the set voxel size (e.g., 1 mm × 1 mm × 1 mm) to improve the resolution; second, the window width and window level are adjusted to the lung window, with a window center of -600 HU and a window width of 1600 HU after adjustment, so that the lung CT image can be segmented accurately; third, the generated lung mask is used to segment the lung CT image. The lung mask can be obtained by binarizing the resampled image with a set HU threshold (e.g., -320), computing the connected regions of the binarized image, and keeping the largest connected region. With the lung mask, the lung CT image can be segmented by multiplying the mask with the CT image pixel by pixel.
As an optional embodiment, the feature extraction network is a convolutional neural network, namely a 3D U-Net or a 3D ResNet.
This embodiment gives the network structure of the feature extraction network. In principle, the feature extraction network can be any neural network model, but since the CT images involved in this embodiment are three-dimensional, a 3D U-Net or 3D ResNet is used as the feature extraction network for more effective feature extraction. It should be noted that this embodiment only gives one or two preferred implementations of the feature extraction network and does not negate or exclude other feasible implementations.
As an optional embodiment, the attention-based contextual-sign extraction module includes a self-attention module and a mutual-attention module; the self-attention module takes the surrounding feature map of the pulmonary nodule as input, generates, through a surrounding-feature encoding module, a self-attention vector whose size equals the number of feature-map channels, and outputs the fused features of the surrounding feature map to the mutual-attention encoding module of the mutual-attention module; the mutual-attention module takes the nodule's own-sign feature representation as input, generates, through the mutual-attention encoding module, a mutual-attention vector whose size equals the number of feature-map channels, and outputs the contextual-sign feature representation of the pulmonary nodule.
This embodiment gives a technical solution for the contextual-sign extraction module, which mainly consists of a self-attention module and a mutual-attention module. The mutual-attention module is an ordinary attention module. The main purpose of the self-attention module is to optimize the model's learning of contextual signs related to benignity and malignancy. The self-attention module takes the surrounding feature map of the nodule as input and, through a surrounding-feature encoding module, generates a self-attention vector whose size equals the number of feature-map channels, thereby outputting the fused features of the surrounding feature map. The surrounding-feature encoding module can be any encoding module with a strong ability to represent global features. The purpose of the mutual-attention module is to use the nodule's own signs as guidance to promote the model's learning of contextual features. The mutual-attention module takes the nodule's own-sign feature representation as input, and a mutual-attention encoding module generates a mutual-attention vector whose size equals the number of feature-map channels. The fused features of the surrounding feature map output by the self-attention module contain the contextual sign information; using them as an input of the mutual-attention encoding module when generating the mutual-attention vector allows the mutual-attention module to output the contextual-sign feature representation of the pulmonary nodule.
As an optional embodiment, the method of fusing the own-sign feature representation and the contextual-sign feature representation includes:
taking the own-sign feature representation and the contextual-sign feature representation as inputs, generating a set of feature fusion coefficients with two different fully connected layers and a Softmax layer;
multiplying the feature fusion coefficients with the own-sign feature representation and the contextual-sign feature representation respectively, and then concatenating the multiplied representations.
This embodiment gives a technical solution for fusing the own-sign feature representation with the contextual-sign feature representation. There are many methods of feature fusion. In this embodiment, two different fully connected layers and a Softmax layer are first used to generate a set of feature fusion coefficients; the fusion coefficients are the weighting coefficients of the contextual signs and the nodule's own signs, and their magnitudes are related to how rich the contextual sign information around different pulmonary nodules is. The softmax function is generally used in multi-class classifiers to output the probabilities of multiple categories, the sum of all category probabilities being 1, which is why it is also called the normalized exponential function. Therefore, the feature fusion coefficients generated by the Softmax layer in this embodiment also sum to 1. The fusion coefficients are then multiplied with the own-sign feature representation and the contextual-sign feature representation respectively, and the products are concatenated to obtain the fused features of the two representations.
Fig. 2 is a schematic diagram of the composition of a device for diagnosing benign and malignant pulmonary nodules based on CT images according to an embodiment of the present invention, the device comprising:
a pulmonary nodule segmentation module 11, configured to detect the positions and sizes of all pulmonary nodules in the input CT image based on a pulmonary nodule detection network, and to segment an image region containing each pulmonary nodule;
an own-sign extraction module 12, configured to extract the surrounding feature map of the pulmonary nodule from the segmented nodule image based on a feature extraction network, and to apply region-of-interest pooling to it to obtain the nodule's own-sign feature representation;
a contextual-sign extraction module 13, configured to feed the surrounding feature map and the own-sign feature representation of the pulmonary nodule into an attention-based contextual-sign extraction module to obtain the nodule's contextual-sign feature representation;
a feature fusion and diagnosis module 14, configured to fuse the own-sign feature representation with the contextual-sign feature representation, and to feed the fused features into a logistic regression layer to obtain the probability that the nodule is benign or malignant.
The device of this embodiment can be used to execute the technical solution of the method embodiment shown in Fig. 1; its implementation principle and technical effects are similar and are not repeated here. The same applies to the following embodiments, which are not described further.
As an optional embodiment, the device also includes a preprocessing module for performing the following operations on the input CT image:
resampling the input CT image to a voxel size of 1 mm × 1 mm × 1 mm using the nearest-neighbour method;
adjusting the window width and window level to the lung window, with a window center of -600 HU and a window width of 1600 HU;
generating the mask required for lung segmentation: binarizing the resampled image with an HU threshold of -320; computing the connected regions of the binarized image and keeping the largest connected region as the lung mask; multiplying the lung mask with the CT image pixel by pixel to obtain the CT image of the segmented lungs, with the remaining regions uniformly filled with grey value 170.
As an optional embodiment, the feature extraction network is a convolutional neural network, namely a 3D U-Net or a 3D ResNet.
As an optional embodiment, the attention-based contextual-sign extraction module includes a self-attention module and a mutual-attention module; the self-attention module takes the surrounding feature map of the pulmonary nodule as input, generates, through a surrounding-feature encoding module, a self-attention vector whose size equals the number of feature-map channels, and outputs the fused features of the surrounding feature map to the mutual-attention encoding module of the mutual-attention module; the mutual-attention module takes the nodule's own-sign feature representation as input, generates, through the mutual-attention encoding module, a mutual-attention vector whose size equals the number of feature-map channels, and outputs the contextual-sign feature representation of the pulmonary nodule.
As an optional embodiment, the method of fusing the own-sign feature representation and the contextual-sign feature representation includes:
taking the own-sign feature representation and the contextual-sign feature representation as inputs, generating a set of feature fusion coefficients with two different fully connected layers and a Softmax layer;
multiplying the feature fusion coefficients with the own-sign feature representation and the contextual-sign feature representation respectively, and then concatenating the multiplied representations.
The above are only embodiments of the present application and are not intended to limit it. Various modifications and changes may be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall fall within the scope of the claims of the present application.
Industrial Applicability
The method and device for diagnosing benign and malignant pulmonary nodules based on CT images provided by the present invention make the fused features contain both the lesion's own sign information and the contextual sign information. Compared with existing benign/malignant lesion diagnosis systems, which judge mainly from the lesion's own signs, this greatly improves the accuracy of benign/malignant pulmonary nodule detection. The resulting products can be mass-produced and quickly applied to systems or scenarios with high requirements on the accuracy of benign/malignant pulmonary nodule detection.

Claims (10)

  1. A method for diagnosing benign and malignant pulmonary nodules based on CT images, characterized by comprising the following steps:
    detecting the positions and sizes of all pulmonary nodules in an input CT image based on a pulmonary nodule detection network, and segmenting an image region containing each pulmonary nodule;
    extracting a surrounding feature map of the pulmonary nodule from the segmented nodule image based on a feature extraction network, and applying region-of-interest pooling to it to obtain the nodule's own-sign feature representation;
    feeding the surrounding feature map and the own-sign feature representation of the pulmonary nodule into an attention-based contextual-sign extraction module to obtain the nodule's contextual-sign feature representation;
    fusing the own-sign feature representation with the contextual-sign feature representation, and feeding the fused features into a logistic regression layer to obtain the probability that the nodule is benign or malignant.
  2. The method for diagnosing benign and malignant pulmonary nodules based on CT images according to claim 1, characterized in that the method further comprises a preprocessing step on the input CT image:
    resampling the input CT image to a voxel size of 1 mm × 1 mm × 1 mm using the nearest-neighbour method;
    adjusting the window width and window level to the lung window, with a window center of -600 HU and a window width of 1600 HU;
    generating the mask required for lung segmentation: binarizing the resampled image with an HU threshold of -320; computing the connected regions of the binarized image and keeping the largest connected region as the lung mask; multiplying the lung mask with the CT image pixel by pixel to obtain the CT image of the segmented lungs, with the remaining regions uniformly filled with grey value 170.
  3. The method for diagnosing benign and malignant pulmonary nodules based on CT images according to claim 1, characterized in that the feature extraction network is a convolutional neural network, namely a 3D U-Net or a 3D ResNet.
  4. The method for diagnosing benign and malignant pulmonary nodules based on CT images according to claim 1, characterized in that the attention-based contextual-sign extraction module comprises a self-attention module and a mutual-attention module; the self-attention module takes the surrounding feature map of the pulmonary nodule as input, generates, through a surrounding-feature encoding module, a self-attention vector whose size equals the number of feature-map channels, and outputs the fused features of the surrounding feature map to the mutual-attention encoding module of the mutual-attention module; the mutual-attention module takes the nodule's own-sign feature representation as input, generates, through the mutual-attention encoding module, a mutual-attention vector whose size equals the number of feature-map channels, and outputs the contextual-sign feature representation of the pulmonary nodule.
  5. The method for diagnosing benign and malignant pulmonary nodules based on CT images according to claim 1, characterized in that the method of fusing the own-sign feature representation and the contextual-sign feature representation comprises:
    taking the own-sign feature representation and the contextual-sign feature representation as inputs, generating a set of feature fusion coefficients with two different fully connected layers and a Softmax layer;
    multiplying the feature fusion coefficients with the own-sign feature representation and the contextual-sign feature representation respectively, and then concatenating the multiplied representations.
  6. A device for diagnosing benign and malignant pulmonary nodules based on CT images, characterized by comprising:
    a pulmonary nodule segmentation module, configured to detect the positions and sizes of all pulmonary nodules in an input CT image based on a pulmonary nodule detection network, and to segment an image region containing each pulmonary nodule;
    an own-sign extraction module, configured to extract a surrounding feature map of the pulmonary nodule from the segmented nodule image based on a feature extraction network, and to apply region-of-interest pooling to it to obtain the nodule's own-sign feature representation;
    a contextual-sign extraction module, configured to feed the surrounding feature map and the own-sign feature representation of the pulmonary nodule into an attention-based contextual-sign extraction module to obtain the nodule's contextual-sign feature representation;
    a feature fusion and diagnosis module, configured to fuse the own-sign feature representation with the contextual-sign feature representation, and to feed the fused features into a logistic regression layer to obtain the probability that the nodule is benign or malignant.
  7. The device for diagnosing benign and malignant pulmonary nodules based on CT images according to claim 6, characterized in that the device further comprises a preprocessing module configured to perform the following operations on the input CT image:
    resampling the input CT image to a voxel size of 1 mm × 1 mm × 1 mm using the nearest-neighbour method;
    adjusting the window width and window level to the lung window, with a window center of -600 HU and a window width of 1600 HU;
    generating the mask required for lung segmentation: binarizing the resampled image with an HU threshold of -320; computing the connected regions of the binarized image and keeping the largest connected region as the lung mask; multiplying the lung mask with the CT image pixel by pixel to obtain the CT image of the segmented lungs, with the remaining regions uniformly filled with grey value 170.
  8. The device for diagnosing benign and malignant pulmonary nodules based on CT images according to claim 6, characterized in that the feature extraction network is a convolutional neural network, namely a 3D U-Net or a 3D ResNet.
  9. The device for diagnosing benign and malignant pulmonary nodules based on CT images according to claim 6, characterized in that the attention-based contextual-sign extraction module comprises a self-attention module and a mutual-attention module; the self-attention module takes the surrounding feature map of the pulmonary nodule as input, generates, through a surrounding-feature encoding module, a self-attention vector whose size equals the number of feature-map channels, and outputs the fused features of the surrounding feature map to the mutual-attention encoding module of the mutual-attention module; the mutual-attention module takes the nodule's own-sign feature representation as input, generates, through the mutual-attention encoding module, a mutual-attention vector whose size equals the number of feature-map channels, and outputs the contextual-sign feature representation of the pulmonary nodule.
  10. The device for diagnosing benign and malignant pulmonary nodules based on CT images according to claim 6, characterized in that the method for fusing the own-sign feature representation and the contextual-sign feature representation comprises:
    taking the own-sign feature representation and the contextual-sign feature representation as inputs, generating a set of feature fusion coefficients with two different fully connected layers and a Softmax layer;
    multiplying the feature fusion coefficients with the own-sign feature representation and the contextual-sign feature representation respectively, and then concatenating the multiplied representations.
PCT/CN2022/104347 2021-07-26 2022-07-07 Method and device for diagnosing benign and malignant pulmonary nodules based on CT images WO2023005634A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110843392.8 2021-07-26
CN202110843392.8A CN113782181A (zh) 2021-07-26 2021-07-26 Method and device for diagnosing benign and malignant pulmonary nodules based on CT images

Publications (1)

Publication Number Publication Date
WO2023005634A1 (zh)

Family

ID=78836373

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/104347 WO2023005634A1 (zh) 2021-07-26 2022-07-07 一种基于ct图像的肺结节良恶性诊断方法及装置

Country Status (2)

Country Link
CN (1) CN113782181A (zh)
WO (1) WO2023005634A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117542527A (zh) * 2024-01-09 2024-02-09 百洋智能科技集团股份有限公司 Pulmonary nodule tracking and trend prediction method, apparatus, device and storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113782181A (zh) * 2021-07-26 2021-12-10 杭州深睿博联科技有限公司 Method and device for diagnosing benign and malignant pulmonary nodules based on CT images
CN115358976B (zh) * 2022-08-10 2023-04-07 北京医准智能科技有限公司 Image recognition method, apparatus, device and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003070102A2 (en) * 2002-02-15 2003-08-28 The Regents Of The University Of Michigan Lung nodule detection and classification
US20200160997A1 (en) * 2018-11-02 2020-05-21 University Of Central Florida Research Foundation, Inc. Method for detection and diagnosis of lung and pancreatic cancers from imaging scans
CN111415342A (zh) * 2020-03-18 2020-07-14 北京工业大学 Automatic pulmonary nodule image detection method using a three-dimensional convolutional neural network with an integrated attention mechanism
CN112232433A (zh) * 2020-10-27 2021-01-15 河北工业大学 Benign/malignant pulmonary nodule classification method based on a dual-path network
CN112419307A (zh) * 2020-12-11 2021-02-26 长春工业大学 Benign/malignant pulmonary nodule identification method based on an attention mechanism
US20210224603A1 (en) * 2020-01-17 2021-07-22 Ping An Technology (Shenzhen) Co., Ltd. Device and method for universal lesion detection in medical images
CN113782181A (zh) * 2021-07-26 2021-12-10 杭州深睿博联科技有限公司 Method and device for diagnosing benign and malignant pulmonary nodules based on CT images

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220971A (zh) * 2017-06-02 2017-09-29 太原理工大学 Pulmonary nodule feature extraction method based on a convolutional neural network and principal component analysis
CN109711315A (zh) * 2018-12-21 2019-05-03 四川大学华西医院 Method and device for pulmonary nodule analysis
CN110175979B (zh) * 2019-04-08 2021-07-27 杭州电子科技大学 Pulmonary nodule classification method based on a collaborative attention mechanism
CN110534192B (zh) * 2019-07-24 2023-12-26 大连理工大学 Deep-learning-based benign/malignant pulmonary nodule identification method


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117542527A (zh) * 2024-01-09 2024-02-09 百洋智能科技集团股份有限公司 Pulmonary nodule tracking and trend prediction method, apparatus, device and storage medium
CN117542527B (zh) * 2024-01-09 2024-04-26 百洋智能科技集团股份有限公司 Pulmonary nodule tracking and trend prediction method, apparatus, device and storage medium

Also Published As

Publication number Publication date
CN113782181A (zh) 2021-12-10

Similar Documents

Publication Publication Date Title
CN110599448B Transfer-learning lung lesion tissue detection system based on the Mask Scoring R-CNN network
WO2023005634A1 (zh) Method and device for diagnosing benign and malignant pulmonary nodules based on CT images
CN109685060B Image processing method and apparatus
CN111325739B Method and apparatus for lung lesion detection, and training method for an image detection model
Shen et al. An automated lung segmentation approach using bidirectional chain codes to improve nodule detection accuracy
US11562491B2 (en) Automatic pancreas CT segmentation method based on a saliency-aware densely connected dilated convolutional neural network
CN110807788A Medical image processing method and apparatus, electronic device, and computer storage medium
CN109087703B Peritoneal metastasis labeling method for abdominal CT images based on a deep convolutional neural network
CN111429473B Chest radiograph lung field segmentation model construction and segmentation method based on multi-scale feature fusion
JP2023550844A Automatic liver CT segmentation method based on deep shape learning
Chang et al. Graph-based learning for segmentation of 3D ultrasound images
US10706534B2 (en) Method and apparatus for classifying a data point in imaging data
CN111275712B Residual semantic network training method for large-scale image data
CN111667478A Carotid plaque identification method and system based on CTA-to-MRA cross-modal prediction
WO2023071154A1 Image segmentation method, and training method, apparatus and device for related model
CN111553892A Pulmonary nodule segmentation calculation method, apparatus and system based on deep learning
Yang et al. AlignShift: bridging the gap of imaging thickness in 3D anisotropic volumes
CN112365973A Pulmonary nodule auxiliary diagnosis system based on adversarial networks and Faster R-CNN
CN110738702B Three-dimensional ultrasound image processing method, apparatus, device and storage medium
Suinesiaputra et al. Deep learning analysis of cardiac MRI in legacy datasets: multi-ethnic study of atherosclerosis
Salih et al. The local ternary pattern encoder–decoder neural network for dental image segmentation
CN116777893B Nodule segmentation and identification method based on transverse and longitudinal breast ultrasound section features
Zhu et al. Attention-Unet: A Deep Learning Approach for Fast and Accurate Segmentation in Medical Imaging
Liu et al. RPLS-Net: pulmonary lobe segmentation based on 3D fully convolutional networks and multi-task learning
US20230110263A1 (en) Computer-implemented systems and methods for analyzing examination quality for an endoscopic procedure

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22848239

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22848239

Country of ref document: EP

Kind code of ref document: A1