CN115082909B - Method and system for identifying lung lesions - Google Patents

Method and system for identifying lung lesions

Info

Publication number
CN115082909B
CN115082909B
Authority
CN
China
Prior art keywords
network
lesion
features
lung
recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111296412.0A
Other languages
Chinese (zh)
Other versions
CN115082909A (en)
Inventor
卞修武
姚小红
赵泽
何志承
郑烨
陈伟
王晓雯
时雨
平轶芳
肖诗奇
崔莉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhijian Life Technology Co ltd
Chongqing Zhijian Life Technology Co ltd
First Affiliated Hospital of Army Medical University
Original Assignee
Beijing Zhijian Life Technology Co ltd
Chongqing Zhijian Life Technology Co ltd
First Affiliated Hospital of Army Medical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhijian Life Technology Co ltd, Chongqing Zhijian Life Technology Co ltd, First Affiliated Hospital of Army Medical University
Priority to CN202111296412.0A priority Critical patent/CN115082909B/en
Publication of CN115082909A publication Critical patent/CN115082909A/en
Application granted granted Critical
Publication of CN115082909B publication Critical patent/CN115082909B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Abstract

The invention provides a lung lesion recognition method and system. Ground-glass opacity regions and bronchiectasis regions in lung medical images are detected by an object detection and instance segmentation algorithm in order to identify lung lesions. The method extracts basic visual features of the lung medical image with a multi-scale, multi-depth convolutional neural network and fuses features of different scales and depths through a feature pyramid network. A lesion candidate region identification network extracts features of regions that are likely to be targets while filtering out useless background information. The extracted target region features are then passed through a classification head network, a detection head network and a segmentation head network to obtain detection and segmentation results, and the final prediction is obtained through non-maximum suppression post-processing. The whole network is an end-to-end multi-task model, so the network can effectively learn the feature information of medical images, which enhances the robustness and generalization ability of the algorithm.

Description

Method and system for identifying lung lesions
Technical Field
The invention belongs to the technical field of target classification in image processing, and relates to a method and a system for identifying lung lesions in which an object detection and instance segmentation network structure is applied to medical images.
Background
In recent years, deep learning has attracted growing attention in the medical field. Owing to the strong nonlinear modeling capability of deep networks and the characteristics of medical images, which carry large amounts of information, rich features and multiple modalities, deep learning is now widely applied to medical imaging.
The two-stage object detection structure based on deep learning was first published at NIPS 2015 (Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks) for detecting common categories in MS-COCO natural images, that is, simultaneously classifying and localizing targets such as people, vehicles and animals. In 2017, Mask R-CNN, published at ICCV, improved on Faster R-CNN to perform instance segmentation of predefined category targets in natural images and achieved the best results at the time. Its principle is mainly a detect-then-segment pipeline: each target instance is first detected and then segmented. Another important contribution was the experimental finding that adding a segmentation task is very helpful for improving detection accuracy.
With the continuous progress of deep learning, object detection and instance segmentation algorithms have begun to appear in various industries, such as industrial flaw detection and pedestrian detection, but their potential in the field of medical imaging remains largely untapped.
Disclosure of Invention
The present invention is directed to solving the above problems of the prior art. It provides an object detection and instance segmentation network structure combined with multi-task learning on medical images.
Aiming at the defects of the prior art, the invention provides a lung lesion identification method, which comprises the following steps:
step 1, constructing a lesion recognition model comprising a convolutional neural network, a feature pyramid network, a lesion candidate region extraction network and a joint recognition network, wherein the joint recognition network comprises a classification head network, a detection head network and a segmentation head network; acquiring a lung medical image as training data, wherein the training data is marked with a lesion area and a lesion category;
step 2, extracting multi-scale image features of the training data through the convolutional neural network; the feature pyramid network performs up-sampling and summation on adjacent features in the image features and fuses features of different scales and depths to obtain enhanced features; the enhanced features are input into the lesion candidate region extraction network to obtain all candidate region features containing lesions in the enhanced features;
step 3, inputting all the candidate region features into the joint recognition network to obtain the lesion category and lesion region of the training data, constructing a loss function by combining the lesion category and lesion region marked on the training data, iteratively training the lesion recognition model with the loss function until the loss function converges or a preset number of iterations is reached, and recognizing the lesion region of a specified lung medical image with the current lesion recognition model.
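The iterative training described in step 3 can be illustrated with a short training loop. The following is a minimal sketch assuming PyTorch as the framework and a model that, in training mode, returns a dictionary of per-task losses; the names model and train_loader, the learning rate and the convergence tolerance are illustrative assumptions and are not specified by the patent.

    import torch

    def train(model, train_loader, max_iters=10000, tol=1e-4, lr=1e-3):
        # Optimise the summed multi-task loss until it stops changing (convergence)
        # or a preset number of iterations is reached.
        optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        model.train()
        prev_loss, it = float("inf"), 0
        while it < max_iters:
            for images, targets in train_loader:
                loss_dict = model(images, targets)   # e.g. classification, box and mask losses
                loss = sum(loss_dict.values())
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
                it += 1
                if abs(prev_loss - loss.item()) < tol or it >= max_iters:
                    return model
                prev_loss = loss.item()
        return model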
The lung lesion recognition method is characterized in that the lung medical image is a CT image.
The lung lesion identification method, wherein step 2 comprises: performing information interaction and fusion on features of different scales or depths among the image features through the multi-scale feature pyramid network.
The lung lesion identification method, wherein the lesion categories comprise ground-glass opacity and bronchiectasis;
step 3 comprises: performing the corresponding classification, coordinate-frame regression and semantic segmentation tasks respectively through the classification head, detection head and segmentation head networks in the joint recognition network to obtain the recognition result of the ground-glass opacity and bronchiectasis regions in the lung medical image.
The invention also provides a lung lesion recognition system, which comprises:
the model construction module is used for constructing a lesion recognition model comprising a convolutional neural network, a feature pyramid network, a lesion candidate region extraction network and a joint recognition network, wherein the joint recognition network comprises a classification head network, a detection head network and a segmentation head network; and for acquiring a lung medical image as training data, wherein the training data is marked with a lesion area and a lesion category;
the image feature extraction module is used for extracting multi-scale image features of the training data through the convolutional neural network; the feature pyramid network performs up-sampling and summation on adjacent features in the image features and fuses features of different scales and depths to obtain enhanced features; the enhanced features are input into the lesion candidate region extraction network to obtain all candidate region features containing lesions in the enhanced features;
and the lesion area identification module is used for inputting all the candidate region features into the joint recognition network to obtain the lesion category and lesion area of the training data, constructing a loss function by combining the lesion area and lesion category marked on the training data, iteratively training the lesion recognition model with the loss function until the loss function converges or a preset number of iterations is reached, and recognizing the lesion area of a specified lung medical image with the current lesion recognition model.
The lung lesion recognition system is characterized in that the lung medical image is a CT image.
The lung lesion recognition system, wherein the image feature extraction module is configured to: perform information interaction and fusion on features of different scales or depths among the image features through the multi-scale feature pyramid network.
The lung lesion recognition system, wherein the lesion categories include ground-glass opacity and bronchiectasis;
the lesion area identification module is configured to: perform the corresponding classification, coordinate-frame regression and semantic segmentation tasks respectively through the classification head, detection head and segmentation head networks in the joint recognition network to obtain the recognition result of the ground-glass opacity and bronchiectasis regions in the lung medical image.
The invention also provides a storage medium for storing a program for executing the lung lesion recognition method.
The invention also provides a client for use with the lung lesion recognition system.
The advantages of the invention are as follows:
the invention uses medical images to identify lung lesions by segmenting the network structure through the target detection instance. Basic visual features of the lung medical image are extracted by using a multi-scale multi-depth convolutional neural network structure, and features with different scales and depths are fused through a feature pyramid network. Features of regions that are likely to be targets are extracted by the lesion candidate region identification network while useless background information is filtered out. And then, the extracted target region features are subjected to detection and segmentation results through a classification head network, a detection network and a segmentation network, and finally, a final prediction result is obtained through non-maximum suppression post-processing. The whole network structure adopts an end-to-end multitasking model, so that the network can effectively learn the characteristic information of medical images, and the robustness and generalization capability of an algorithm are enhanced.
Drawings
FIG. 1 is a flow chart of a method according to a preferred embodiment of the present invention;
FIG. 2 shows the overall network structure.
Detailed Description
In order to make the above features and effects of the present invention more clearly understood, the following specific examples are given with reference to the accompanying drawings.
The method first fuses multi-scale medical image features through the feature pyramid network so that targets of different sizes are detected effectively. At the same time, joint learning of the three tasks of classification, detection and segmentation improves the overall recognition effect. The feature pyramid network structure therefore extracts medical image features effectively, multi-task learning improves the final recognition effect, and the method has broad application prospects in the medical field.
The invention provides a lung lesion recognition method combined with medical images, which comprises the following steps:
s1, acquiring an original image of a medical image, and then carrying out corresponding normalization on the image by using a fixed mean value and a fixed variance.
S2, inputting the medical image after normalization processing into a basic network to extract a basic feature map.
S3, inputting the basic feature map into a feature pyramid network to extract and fuse the multi-scale medical image feature map, wherein the feature map can be used for identifying lesion targets with different sizes at the same time.
S4, inputting the multi-scale features obtained in the previous step into a lesion candidate region extraction network to obtain a plurality of features which are possibly lesions.
And S5, sending all lesion candidate features into a classification head network, a detection head network and a segmentation head network to perform corresponding classification, coordinate frame regression and semantic segmentation tasks, and mutually improving the effect by adopting a joint learning mode. Joint learning herein refers to classification, detection and segmentation of three networks with respective corresponding loss functions, which are optimized simultaneously during training.
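A minimal sketch of how the three per-task losses could be formed and summed follows. The particular loss choices (cross-entropy for classification, smooth L1 for coordinate-frame regression, binary cross-entropy for masks) follow common Mask R-CNN practice and are an assumption; the patent only states that the three losses are optimized simultaneously.

    import torch
    import torch.nn.functional as F

    def joint_loss(cls_logits, cls_targets, box_preds, box_targets, mask_logits, mask_targets):
        # Classification head: ground-glass opacity / bronchiectasis / background.
        loss_cls = F.cross_entropy(cls_logits, cls_targets)
        # Detection head: coordinate-frame (bounding-box) regression.
        loss_box = F.smooth_l1_loss(box_preds, box_targets)
        # Segmentation head: per-pixel mask prediction.
        loss_mask = F.binary_cross_entropy_with_logits(mask_logits, mask_targets)
        # The three tasks are optimized simultaneously through their sum.
        return loss_cls + loss_box + loss_mask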
The technical scheme for solving the technical problems is as follows:
the process flow of the present invention is shown in figure 1.
The present invention relates to a feature attention network architecture example such as that of fig. 2.
The method comprises the following specific steps:
step S1: firstly, a lung medical image of a person to be tested is obtained. Because of more medical image pictures, the medical image pictures need to be screened, and the medical image pictures with obvious lung lesion characteristics are selected as a training set. Because the numerical distribution ranges of the image images are different, the image images need to be normalized by adopting a fixed mean variance for better training of the network, and then are put into the network for training and testing.
Step S2: the normalized medical image is passed to the base feature extraction network to extract a base feature map. Medical images are generally single-channel gray-scale images; because organ structures are fixed and the semantic information is not especially rich, both high-level semantic information and low-level features matter, so ResNet-50 is used as the base network structure to extract the semantic information of the image.
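Step S2 specifies ResNet-50 as the base network but not a particular implementation. A sketch using the torchvision ResNet-50, truncated before its classification layers, might look as follows; replicating the gray-scale slice to three channels is an assumption made to match the standard ResNet input convention.

    import torch
    import torchvision

    resnet = torchvision.models.resnet50(weights=None)
    backbone = torch.nn.Sequential(*list(resnet.children())[:-2])  # drop avgpool and fc

    # One normalized single-channel slice, replicated to 3 channels.
    x = torch.randn(1, 1, 512, 512).repeat(1, 3, 1, 1)
    features = backbone(x)
    print(features.shape)  # torch.Size([1, 2048, 16, 16]) for a 512x512 input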
Step S3: the base medical image feature maps are passed to the feature pyramid network structure, which effectively fuses feature information of different scales by up-sampling and summing feature maps of adjacent scales, making it convenient to identify lesion areas of different sizes simultaneously.
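The up-sample-and-sum fusion of adjacent scales in step S3 corresponds to the standard feature pyramid top-down pathway. A minimal sketch is shown below; the channel sizes match a ResNet-50 backbone (C2 to C5 with 256/512/1024/2048 channels) and the 256-channel output width follows common FPN practice, neither of which is stated in the patent.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SimpleFPN(nn.Module):
        # 1x1 lateral convs align channels; each coarser map is up-sampled and
        # summed with the next finer lateral map; 3x3 convs smooth the result.
        def __init__(self, in_channels=(256, 512, 1024, 2048), out_channels=256):
            super().__init__()
            self.lateral = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in in_channels)
            self.smooth = nn.ModuleList(nn.Conv2d(out_channels, out_channels, 3, padding=1)
                                        for _ in in_channels)

        def forward(self, feats):  # feats: [C2, C3, C4, C5], finest to coarsest
            laterals = [conv(f) for conv, f in zip(self.lateral, feats)]
            for i in range(len(laterals) - 1, 0, -1):
                laterals[i - 1] = laterals[i - 1] + F.interpolate(
                    laterals[i], size=laterals[i - 1].shape[-2:], mode="nearest")
            return [conv(l) for conv, l in zip(self.smooth, laterals)]  # [P2, P3, P4, P5]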
Step S4: the multi-scale medical image feature maps obtained in step S3 are input into the lesion candidate region extraction network to obtain the features of a plurality of lesion candidate regions, preliminarily screening out potential lesion regions while filtering out irrelevant background.
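The candidate-region extraction in step S4 is typically realised with a region proposal network that scores anchors as possible lesion versus background. The tiny head below is only a conceptual sketch of that scoring; the anchor count, score threshold and channel width are illustrative assumptions.

    import torch
    import torch.nn as nn

    class TinyRPNHead(nn.Module):
        # Predicts, per anchor position, an objectness score (possible lesion vs.
        # background) and 4 box offsets; low-score anchors are discarded so that
        # background features never reach the recognition heads.
        def __init__(self, in_channels=256, num_anchors=3):
            super().__init__()
            self.conv = nn.Conv2d(in_channels, in_channels, 3, padding=1)
            self.objectness = nn.Conv2d(in_channels, num_anchors, 1)
            self.bbox_deltas = nn.Conv2d(in_channels, num_anchors * 4, 1)

        def forward(self, feature_map):
            t = torch.relu(self.conv(feature_map))
            return self.objectness(t), self.bbox_deltas(t)

    head = TinyRPNHead()
    scores, deltas = head(torch.randn(1, 256, 64, 64))
    keep = torch.sigmoid(scores) > 0.5  # keep only anchors likely to contain a lesion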
Step S5: the medical image feature maps of potential lesions obtained in step S4 are input into the classification head network, detection head network and segmentation head network to obtain the recognition results for the ground-glass opacity region and the bronchiectasis region. Classification distinguishes ground-glass opacity from background and bronchiectasis from background, detection obtains the specific locations of the ground-glass opacity and bronchiectasis, and segmentation makes a fine pixel-level distinction within the obtained locations.
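The overall classify/detect/segment pipeline of steps S2 to S5 follows the Mask R-CNN family discussed in the background. As a reference sketch only, an off-the-shelf torchvision model with three classes (background, ground-glass opacity, bronchiectasis) already bundles the backbone, feature pyramid, proposal network and the three heads; the patent does not state that this library or these exact heads are used.

    import torch
    from torchvision.models.detection import maskrcnn_resnet50_fpn

    # Classes: 0 = background, 1 = ground-glass opacity, 2 = bronchiectasis (assumed mapping).
    model = maskrcnn_resnet50_fpn(weights=None, num_classes=3)
    model.eval()

    # One normalized CT slice replicated to 3 channels (assumption, as above).
    image = torch.randn(3, 512, 512)
    with torch.no_grad():
        pred = model([image])[0]
    # 'labels'/'scores' come from the classification head, 'boxes' from the detection
    # head and 'masks' from the segmentation head, one entry per surviving candidate.
    print(pred["boxes"].shape, pred["labels"].shape, pred["masks"].shape)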
The following is a system example corresponding to the above method example, and this embodiment may be implemented in cooperation with the above embodiment. The related technical details mentioned in the above embodiment remain valid in this embodiment and, to reduce repetition, are not repeated here. Accordingly, the related technical details mentioned in this embodiment can also be applied to the above embodiment.
The invention also provides a lung lesion recognition system, which comprises:
the model construction module is used for constructing a lesion recognition model comprising a convolutional neural network, a feature pyramid network, a lesion candidate region extraction network and a joint recognition network, wherein the joint recognition network comprises a classification head network, a detection head network and a segmentation head network; and for acquiring a lung medical image as training data, wherein the training data is marked with a lesion area and a lesion category;
the image feature extraction module is used for extracting multi-scale image features of the training data through the convolutional neural network; the feature pyramid network performs up-sampling and summation on adjacent features in the image features and fuses features of different scales and depths to obtain enhanced features; the enhanced features are input into the lesion candidate region extraction network to obtain all candidate region features containing lesions in the enhanced features;
and the lesion area identification module is used for inputting all the candidate region features into the joint recognition network to obtain the lesion category and lesion area of the training data, constructing a loss function by combining the lesion area and lesion category marked on the training data, iteratively training the lesion recognition model with the loss function until the loss function converges or a preset number of iterations is reached, and recognizing the lesion area of a specified lung medical image with the current lesion recognition model.
The lung lesion recognition system is characterized in that the lung medical image is a CT image.
The lung lesion recognition system, wherein the image feature extraction module is configured to: perform information interaction and fusion on features of different scales or depths among the image features through the multi-scale feature pyramid network.
The lung lesion recognition system, wherein the lesion categories include ground-glass opacity and bronchiectasis;
the lesion area identification module is configured to: perform the corresponding classification, coordinate-frame regression and semantic segmentation tasks respectively through the classification head, detection head and segmentation head networks in the joint recognition network to obtain the recognition result of the ground-glass opacity and bronchiectasis regions in the lung medical image.
The invention also provides a storage medium for storing a program for executing the lung lesion recognition method.
The invention also provides a client for use with the lung lesion recognition system.

Claims (10)

1. A method of identifying a lung lesion, comprising:
step 1, constructing a lesion recognition model comprising a convolutional neural network, a feature pyramid network, a lesion candidate region extraction network and a joint recognition network, wherein the joint recognition network comprises a classification head network, a detection head network and a segmentation head network; acquiring a lung medical image as training data, wherein the training data is marked with a lesion area and a lesion category;
step 2, extracting multi-scale image features of the training data through the convolutional neural network; the feature pyramid network performs up-sampling and summation on adjacent features in the image features and fuses features of different scales and depths to obtain enhanced features; inputting the enhanced features into the lesion candidate region extraction network to obtain all candidate region features containing lesions in the enhanced features;
step 3, inputting all the candidate region features into the joint recognition network to obtain a lesion category and a lesion region of the training data, constructing a loss function by combining the lesion category and the lesion region marked on the training data, iteratively training the lesion recognition model with the loss function until the loss function converges or a preset number of iterations is reached, and recognizing the lesion region of a specified lung medical image with the current lesion recognition model;
wherein the step 3 comprises: performing the corresponding classification, coordinate-frame regression and semantic segmentation tasks respectively through the classification head, detection head and segmentation head networks in the joint recognition network to obtain a recognition result of the ground-glass opacity and bronchiectasis regions in the lung medical image; and the joint recognition network is jointly trained by simultaneously optimizing the loss functions corresponding to the classification head, the detection head and the segmentation head during training.
2. The method of claim 1, wherein the medical image of the lung is a CT image.
3. The method of claim 1, wherein step 2 comprises: performing information interaction and fusion on features of different scales or depths among the image features through the multi-scale feature pyramid network.
4. The method of claim 1, wherein the lesion categories comprise ground-glass opacity and bronchiectasis.
5. A pulmonary lesion recognition system, comprising:
the model construction module is used for constructing a lesion recognition model comprising a convolutional neural network, a feature pyramid network, a lesion candidate region extraction network and a joint recognition network, wherein the joint recognition network comprises a classification head network, a detection head network and a segmentation head network; and for acquiring a lung medical image as training data, wherein the training data is marked with a lesion area and a lesion category;
the image feature extraction module is used for extracting multi-scale image features of the training data through the convolutional neural network; the feature pyramid network performs up-sampling and summation on adjacent features in the image features and fuses features of different scales and depths to obtain enhanced features; inputting the enhanced features into the lesion candidate region extraction network to obtain all candidate region features containing lesions in the enhanced features;
the lesion area identification module is used for inputting all the candidate region features into the joint recognition network to obtain a lesion category and a lesion area of the training data, constructing a loss function by combining the lesion area and the lesion category marked on the training data, iteratively training the lesion recognition model with the loss function until the loss function converges or a preset number of iterations is reached, and recognizing the lesion area of a specified lung medical image with the current lesion recognition model;
wherein the lesion area identification module is configured to: perform the corresponding classification, coordinate-frame regression and semantic segmentation tasks respectively through the classification head, detection head and segmentation head networks in the joint recognition network to obtain a recognition result of the ground-glass opacity and bronchiectasis regions in the lung medical image; and the joint recognition network is jointly trained by simultaneously optimizing the loss functions corresponding to the classification head, the detection head and the segmentation head during training.
6. The pulmonary lesion recognition system of claim 5, wherein the medical image of the lung is a CT image.
7. The pulmonary lesion recognition system of claim 5, wherein the image feature extraction module is configured to: perform information interaction and fusion on features of different scales or depths among the image features through the multi-scale feature pyramid network.
8. The pulmonary lesion recognition system of claim 5, wherein the lesion categories include ground-glass opacity and bronchiectasis.
9. A storage medium storing a program for executing the lung lesion recognition method according to any one of claims 1 to 4.
10. A client for a pulmonary lesion recognition system according to any of claims 5 to 8.
CN202111296412.0A 2021-11-03 2021-11-03 Method and system for identifying lung lesions Active CN115082909B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111296412.0A CN115082909B (en) 2021-11-03 2021-11-03 Method and system for identifying lung lesions

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111296412.0A CN115082909B (en) 2021-11-03 2021-11-03 Method and system for identifying lung lesions

Publications (2)

Publication Number Publication Date
CN115082909A CN115082909A (en) 2022-09-20
CN115082909B true CN115082909B (en) 2024-04-12

Family

ID=83245545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111296412.0A Active CN115082909B (en) 2021-11-03 2021-11-03 Method and system for identifying lung lesions

Country Status (1)

Country Link
CN (1) CN115082909B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3171297A1 (en) * 2015-11-18 2017-05-24 CentraleSupélec Joint boundary detection image segmentation and object recognition using deep learning
WO2019144575A1 (en) * 2018-01-24 2019-08-01 中山大学 Fast pedestrian detection method and device
CN110599448A (en) * 2019-07-31 2019-12-20 浙江工业大学 Migratory learning lung lesion tissue detection system based on MaskScoring R-CNN network
CN111126202A (en) * 2019-12-12 2020-05-08 天津大学 Optical remote sensing image target detection method based on void feature pyramid network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Instance segmentation based on Mask R-CNN and multi-feature fusion; 姜世浩; 齐苏敏; 王来花; 贾惠; Computer Technology and Development; 2020-09-10 (Issue 09); full text *
Research on lung radiography detection based on deep learning image processing; 李维嘉; 陈爽; 张雷; 吴正灏; Automation & Instrumentation; 2019-12-25 (Issue 12); full text *
Pulmonary nodule detection based on feature pyramid network; 高智勇; 黄金镇; 杜程刚; Journal of Computer Applications; 2020-12-31 (Issue 09); full text *

Also Published As

Publication number Publication date
CN115082909A (en) 2022-09-20

Similar Documents

Publication Publication Date Title
Ali et al. Structural crack detection using deep convolutional neural networks
Zhang et al. CrackGAN: Pavement crack detection using partially accurate ground truths based on generative adversarial learning
Tong et al. Salient object detection via bootstrap learning
CN112308826B (en) Bridge structure surface defect detection method based on convolutional neural network
Dib et al. A review on negative road anomaly detection methods
CN111257341A (en) Underwater building crack detection method based on multi-scale features and stacked full convolution network
CN112257665A (en) Image content recognition method, image recognition model training method, and medium
CN111274926A (en) Image data screening method and device, computer equipment and storage medium
CN111612747A (en) Method and system for rapidly detecting surface cracks of product
CN115239644A (en) Concrete defect identification method and device, computer equipment and storage medium
CN112733711A (en) Remote sensing image damaged building extraction method based on multi-scale scene change detection
CN115019133A (en) Method and system for detecting weak target in image based on self-training and label anti-noise
CN113989604A (en) Tire DOT information identification method based on end-to-end deep learning
Khoshboresh-Masouleh et al. Robust building footprint extraction from big multi-sensor data using deep competition network
Shit et al. An encoder‐decoder based CNN architecture using end to end dehaze and detection network for proper image visualization and detection
Athira et al. Underwater object detection model based on YOLOv3 architecture using deep neural networks
CN115082909B (en) Method and system for identifying lung lesions
CN116977249A (en) Defect detection method, model training method and device
CN115937095A (en) Printing defect detection method and system integrating image processing algorithm and deep learning
Ali et al. Asphalt Pavement Potholes Localization and Segmentation using Deep Retina Net and Conditional Random Fields
Nguyen et al. Pavement crack detection and segmentation based on deep neural network
Anitha et al. A survey on crack detection algorithms for concrete structures
CN114387496A (en) Target detection method and electronic equipment
Li et al. Wooden spoon crack detection by prior knowledge-enriched deep convolutional network
Cai et al. Unfeatured weld positioning technology based on neural network and machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant