CN111325739B - Method and device for detecting lung focus and training method of image detection model - Google Patents

Method and device for detecting lung focus and training method of image detection model

Info

Publication number
CN111325739B
CN111325739B (Application CN202010130331.2A)
Authority
CN
China
Prior art keywords
detection
lung
image
training
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010130331.2A
Other languages
Chinese (zh)
Other versions
CN111325739A (en)
Inventor
王慧芳
王瑜
班允峰
邹彤
周越
赵朝炜
李新阳
王少康
陈宽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Infervision Medical Technology Co Ltd
Original Assignee
Infervision Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Infervision Medical Technology Co Ltd filed Critical Infervision Medical Technology Co Ltd
Priority to CN202010130331.2A priority Critical patent/CN111325739B/en
Publication of CN111325739A publication Critical patent/CN111325739A/en
Application granted granted Critical
Publication of CN111325739B publication Critical patent/CN111325739B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30061Lung

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a method and a device for detecting lung lesions, and a training method of an image detection model. The method for detecting lung lesions comprises the following steps: performing segmentation processing on chest radiography image data to obtain segmented lung detection image data; inputting the lung detection image data into an image detection model to obtain multi-layer feature layer data; inputting the multi-layer feature layer data into a first detection sub-model in the image detection model, and outputting a detection box of a lesion of the lung detection image and a first prediction probability that the detection box accurately predicts the lesion; and inputting the multi-layer feature layer data into a second detection sub-model in the image detection model, and outputting a second prediction probability predicting whether a lesion of the lung detection image exists and/or a heat map of the lesion region of the lung detection image.

Description

Method and device for detecting lung focus and training method of image detection model
Technical Field
The application relates to the technical field of image processing, in particular to a method and a device for detecting lung lesions and a training method of an image detection model.
Background
Modern medical technology is well developed, and professional medical staff can diagnose various diseases by drawing on medical knowledge and clinical experience. However, this approach is inefficient: professional medical staff are in serious shortage, and the work demands a high level of medical expertise and spatial reasoning, so the workload is heavy and the pressure high. Under such high-intensity working conditions lesions may be missed, making the whole detection process time-consuming and labor-intensive, with a high missed-diagnosis rate. Moreover, because medical standards vary greatly between regions and the experience of individual doctors is uneven, traditional diagnosis by doctors is easily affected by both, leading to large diagnostic errors.
Content of application
In view of this, embodiments of the present application aim to provide a method and an apparatus for detecting lung lesions, and a training method for an image detection model, which can improve the precision of lung lesion detection, thereby reducing the missed-diagnosis rate of medical staff and improving their lesion detection efficiency.
According to a first aspect of embodiments of the present application, there is provided a method of lung lesion detection, comprising: performing segmentation processing on chest radiography image data to obtain segmented lung detection image data; inputting the lung detection image data into an image detection model to obtain multi-layer feature layer data; inputting the multi-layer feature layer data into a first detection sub-model in the image detection model, and outputting a detection box of a lesion of the lung detection image and a first prediction probability that the detection box accurately predicts the lesion; and inputting the multi-layer feature layer data into a second detection sub-model in the image detection model, and outputting a second prediction probability predicting whether a lesion of the lung detection image exists and/or a heat map of the lesion region of the lung detection image.
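The four steps of this first aspect can be sketched end to end. The sketch below is a minimal illustration only: `segment_lungs`, `extract_features`, `detection_head`, and `classification_head` are stand-in functions, not the patent's actual segmentation or detection networks, and all shapes and values are arbitrary.

```python
import numpy as np

def segment_lungs(chest_image):
    """Placeholder for the segmentation step: a trivial center crop."""
    h, w = chest_image.shape
    return chest_image[h // 8 : -(h // 8), w // 8 : -(w // 8)]

def extract_features(lung_image):
    """Placeholder multi-layer feature layer data: three pooled copies."""
    return [lung_image[::s, ::s] for s in (1, 2, 4)]

def detection_head(features):
    """First sub-model: a detection box and its first prediction probability."""
    box = (10, 10, 50, 50)   # (x1, y1, x2, y2), dummy values
    first_prob = 0.9
    return box, first_prob

def classification_head(features):
    """Second sub-model: image-level lesion probability and a heat map."""
    second_prob = 0.8
    heatmap = np.zeros_like(features[-1], dtype=float)
    return second_prob, heatmap

chest = np.random.rand(256, 256)       # chest radiography image data
lungs = segment_lungs(chest)           # lung detection image data
feats = extract_features(lungs)        # multi-layer feature layer data
box, p1 = detection_head(feats)        # detection box + first probability
p2, heat = classification_head(feats)  # second probability + heat map
```

The two heads consume the same shared feature layers, which is the structural point of the first aspect.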
In one embodiment, the method further comprises: inputting lung training image data into a first neural network for feature extraction, and outputting multi-layer feature layers; inputting the multi-layer feature layers into a second neural network for training to obtain a first detection sub-model; inputting the multi-layer feature layers into a third neural network for training to obtain a second detection sub-model; and determining the image detection model based on the first detection sub-model and the second detection sub-model.
In one embodiment, the method further comprises: acquiring the lung training image data, wherein the lung training image data is manually labeled lung training image data. The labels comprise a detection label and at least one of a category label and a segmentation label: the detection label is a manually marked detection box of a lesion of the lung training image, the category label is a manually marked classification of the lesion of the lung training image, and the segmentation label is a manually marked lesion region of the lung training image, the lesion region being the region extending outward by a preset distance from the center of the detection box of the lesion.
In one embodiment, inputting the multi-layer feature layers into a second neural network for training to obtain a first detection sub-model comprises: determining a plurality of first feature maps based on the multi-layer feature layers; generating a plurality of prior boxes on each of the plurality of first feature maps; post-processing the plurality of prior boxes to obtain a first prior box matched with the detection box, a first prediction probability that the first prior box accurately predicts the lesion of the lung training image, and a predicted position of the first prior box; calculating a first probability difference between the first prediction probability and the true probability of the detection label, and a position deviation between the predicted position of the first prior box and the true position of the detection label, and back-propagating the first probability difference and the position deviation to adjust the first neural network and the second neural network; and iteratively executing the above steps to obtain the trained first detection sub-model.
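The prior-box matching step above can be illustrated with a small IoU-based sketch. The 0.5 matching threshold and the raw coordinate offset used as the "position deviation" are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

priors = [(0, 0, 40, 40), (20, 20, 60, 60), (100, 100, 140, 140)]
gt_box = (25, 25, 65, 65)          # manually labeled detection box

ious = [iou(p, gt_box) for p in priors]
best = int(np.argmax(ious))        # the "first prior box"
matched = ious[best] >= 0.5        # assumed matching threshold

# Position deviation between the matched prior box and the label position,
# back-propagated together with the probability difference during training.
offset = np.array(gt_box) - np.array(priors[best])
```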
In one embodiment, inputting the multi-layer feature layers into a third neural network for training to obtain a second detection sub-model comprises: performing up-sampling and fusion operations on the multi-layer feature layers to generate a second feature map; convolving the second feature map in the third neural network a plurality of times to obtain a first matrix, activating the first matrix with a classifier to obtain a second prediction probability predicting whether a lesion of the lung training image exists, calculating a second probability difference between the second prediction probability and the true probability of the category label, back-propagating the second probability difference to adjust the first neural network and the third neural network, and iteratively executing the above steps to obtain the trained second detection sub-model; and/or convolving the second feature map in the third neural network a plurality of times to obtain a second matrix, activating the second matrix with a classifier to obtain a third prediction probability predicting whether each point in the second matrix lies in the lesion region of the lung training image, calculating a third probability difference between the third prediction probability and the true probability of the segmentation label, back-propagating the third probability difference to adjust the first neural network and the third neural network, and iteratively executing the above steps to obtain the trained second detection sub-model.
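The two branches of the second detection sub-model can be sketched as follows, assuming a sigmoid classifier and treating each "probability difference" as a plain subtraction against the label; both choices, and the reduction of the first matrix to one mean logit, are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

second_feature_map = np.random.randn(8, 8)  # after up-sampling and fusion

# Classification branch: reduce the feature map to one logit (the "first
# matrix" collapsed here for brevity), activate it to get the second
# prediction probability that a lesion exists.
cls_logit = second_feature_map.mean()
second_prob = sigmoid(cls_logit)

# Segmentation branch: per-point logits (the "second matrix"), activated
# to a third prediction probability for every point of the lesion region.
seg_logits = second_feature_map
third_probs = sigmoid(seg_logits)

# Probability differences against (dummy) labels, to be back-propagated.
class_label = 1.0
seg_label = np.zeros_like(seg_logits)
second_diff = second_prob - class_label
third_diff = third_probs - seg_label
```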
In one embodiment, the method further comprises: performing an image scaling operation on the second feature map to obtain a heat map of the lesion region of the lung training image.
In one embodiment, segmenting the chest radiography image data to obtain segmented lung detection image data comprises: inputting the chest radiography image data into an image segmentation model to obtain first lung field segmentation image data; post-processing the first lung field segmentation image data with a fully-connected conditional random field model to obtain second lung field segmentation image data; and generating a mask from the second lung field segmentation image data to segment out the lung detection image data.
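The final mask-generation step might look like the minimal sketch below: threshold the lung field segmentation into a binary mask and multiply it into the chest image. The 0.5 threshold and the tiny arrays are illustrative assumptions.

```python
import numpy as np

# Second lung field segmentation output: per-pixel lung probabilities
# (dummy values for illustration).
lung_field_probs = np.array([[0.9, 0.1],
                             [0.2, 0.8]])
chest_image = np.array([[10.0, 20.0],
                        [30.0, 40.0]])

mask = (lung_field_probs >= 0.5).astype(float)  # binary lung mask
lung_detection_image = chest_image * mask       # pixels outside lungs -> 0
```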
In one embodiment, the method further comprises: preprocessing the original chest radiography image data to obtain the chest radiography image data.
In one embodiment, preprocessing the original chest radiography image data to obtain the chest radiography image data comprises: windowing the original chest radiography image data to obtain windowed image data; and performing denoising and/or image enhancement on the windowed image data to obtain the chest radiography image data.
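The windowing step can be sketched as a clip-and-rescale of Hounsfield units. The lung window used here (center -600 HU, width 1500 HU) is a common radiology convention assumed for illustration; the patent does not specify the window values.

```python
import numpy as np

def apply_window(hu, center=-600.0, width=1500.0):
    """Clip HU values to the window and rescale to [0, 255]."""
    low, high = center - width / 2, center + width / 2
    clipped = np.clip(hu, low, high)
    return (clipped - low) / (high - low) * 255.0

hu_slice = np.array([[-2000.0, -600.0],
                     [0.0, 500.0]])
windowed = apply_window(hu_slice)
```

Everything below the window floor maps to 0, everything above the ceiling to 255, which is what makes lung tissue visible in the rescaled image.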
In one embodiment, the method further comprises: training a deep learning network model by using first training image data to obtain a first segmentation model; inputting first test image data into the first segmentation model to obtain segmented first test result data; obtaining second training image data, wherein the second training image data is obtained by performing manual repair on the first test result data; training the deep learning network model by using the first training image data and the second training image data to obtain a second segmentation model; and determining the image segmentation model based on the second segmentation model.
In one embodiment, determining the image segmentation model based on the second segmentation model comprises: a) inputting the (n-1)th test image data into the (n-1)th segmentation model to obtain segmented (n-1)th test result data, wherein when n is 3, the (n-1)th segmentation model is the second segmentation model; b) obtaining nth training image data, wherein the nth training image data is obtained by manually repairing the (n-1)th test result data; c) training the deep learning network model with the first through nth training image data to obtain an nth segmentation model; iteratively executing steps a), b) and c) with test image data to obtain an Nth segmentation model, wherein n is an integer greater than or equal to 3 and less than or equal to N; and determining the image segmentation model based on the Nth segmentation model.
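Steps a) to c) amount to a self-training loop in which each round's test results are manually repaired into new training data and the model is retrained on everything collected so far. The sketch below uses stub `train`, `predict`, and `manually_repair` functions purely to show the control flow; none of them is the patent's actual network or annotation process.

```python
def train(datasets):
    """Stand-in: a 'model' here is just a record of its training data."""
    return {"trained_on": list(datasets)}

def predict(model, test_data):
    return f"results({test_data})"

def manually_repair(results):
    return f"repaired({results})"

N = 4                                   # assumed number of rounds
training_sets = ["train_1"]
model = train(training_sets)            # first segmentation model
for n in range(2, N + 1):
    results = predict(model, f"test_{n - 1}")      # step a)
    training_sets.append(manually_repair(results))  # step b): nth data
    model = train(training_sets)                    # step c): nth model

image_segmentation_model = model        # determined from the Nth model
```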
According to a second aspect of the embodiments of the present application, there is provided a training method of an image detection model, comprising: inputting lung training image data into a first neural network for feature extraction, and outputting multi-layer feature layers; inputting the multi-layer feature layers into a second neural network for training to obtain a first detection sub-model; inputting the multi-layer feature layers into a third neural network for training to obtain a second detection sub-model; and determining the image detection model based on the first detection sub-model and the second detection sub-model.
In one embodiment, the training method further comprises: acquiring the lung training image data, wherein the lung training image data is manually labeled lung training image data. The labels comprise a detection label and at least one of a category label and a segmentation label: the detection label is a manually marked detection box of a lesion of the lung training image, the category label is a manually marked classification of the lesion of the lung training image, and the segmentation label is a manually marked lesion region of the lung training image, the lesion region being the region extending outward by a preset distance from the center of the detection box of the lesion.
In one embodiment, inputting the multi-layer feature layers into a second neural network for training to obtain a first detection sub-model comprises: determining a plurality of first feature maps based on the multi-layer feature layer data; generating a plurality of prior boxes on each of the plurality of first feature maps; post-processing the plurality of prior boxes to obtain a first prior box matched with the detection box, a first prediction probability that the first prior box accurately predicts the lesion of the lung training image, and a predicted position of the first prior box; calculating a first probability difference between the first prediction probability and the true probability of the detection label, and a position deviation between the predicted position of the first prior box and the true position of the detection label, and back-propagating the first probability difference and the position deviation to adjust the first neural network and the second neural network; and iteratively executing the above steps to obtain the trained first detection sub-model.
In one embodiment, inputting the multi-layer feature layers into a third neural network for training to obtain a second detection sub-model comprises: performing up-sampling and fusion operations on the multi-layer feature layers to generate a second feature map; convolving the second feature map in the third neural network a plurality of times to obtain a first matrix, activating the first matrix with a classifier to obtain a second prediction probability predicting whether a lesion of the lung training image exists, calculating a second probability difference between the second prediction probability and the true probability of the category label, back-propagating the second probability difference to adjust the first neural network and the third neural network, and iteratively executing the above steps to obtain the trained second detection sub-model; and/or convolving the second feature map in the third neural network a plurality of times to obtain a second matrix, activating the second matrix with a classifier to obtain a third prediction probability predicting whether each point in the second matrix lies in the lesion region of the lung training image, calculating a third probability difference between the third prediction probability and the true probability of the segmentation label, back-propagating the third probability difference to adjust the first neural network and the third neural network, and iteratively executing the above steps to obtain the trained second detection sub-model.
In one embodiment, the training method further comprises: performing an image scaling operation on the second feature map to obtain a heat map of the lesion region of the lung training image.
According to a third aspect of embodiments of the present application, there is provided an apparatus for lung lesion detection, comprising: a segmentation module configured to perform segmentation processing on chest radiography image data to obtain segmented lung detection image data; a feature acquisition module configured to input the lung detection image data into an image detection model to obtain multi-layer feature layer data; a first detection module configured to input the multi-layer feature layer data into a first detection sub-model in the image detection model, and output a detection box of a lesion of the lung detection image and a first prediction probability that the detection box accurately predicts the lesion; and a second detection module configured to input the multi-layer feature layer data into a second detection sub-model in the image detection model, and output a second prediction probability predicting whether a lesion of the lung detection image exists and/or a heat map of the lesion region of the lung detection image.
According to a fourth aspect of embodiments of the present application, there is provided a computer-readable storage medium storing a computer program for executing the method for lung lesion detection according to any of the above embodiments or executing the method for training an image detection model according to any of the above embodiments.
According to a fifth aspect of embodiments of the present application, there is provided an electronic apparatus, including: a processor for performing the method for lung lesion detection according to any of the above embodiments or for performing the method for training the image detection model according to any of the above embodiments; and a memory for storing the processor-executable instructions.
According to the lung lesion detection method provided by the embodiments of the application, segmented lung detection image data is obtained by segmenting chest radiography image data. The lung detection image data is then input into an image detection model to first obtain multi-layer feature layer data. The multi-layer feature layer data is input into a first detection sub-model in the image detection model, which outputs a detection box of a lesion of the lung detection image and a first prediction probability that the detection box accurately predicts the lesion; at the same time, the multi-layer feature layer data is input into a second detection sub-model in the image detection model, which outputs a second prediction probability predicting whether a lesion of the lung detection image exists and/or a heat map of the lesion region of the lung detection image. With these multiple detection results, the precision of lung lesion detection can be improved, the missed-diagnosis rate of medical staff reduced, and their lesion detection efficiency improved.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application.
Fig. 2 is a schematic flow chart of a method for lung lesion detection according to an embodiment of the present application.
Fig. 3 is a flowchart illustrating a training method of an image detection model according to an embodiment of the present application.
Fig. 4 is a flowchart illustrating a training process of an image detection model according to an embodiment of the present application.
Fig. 5 is a block diagram illustrating an apparatus for pulmonary lesion detection according to an embodiment of the present application.
Fig. 6 is a block diagram illustrating an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art from the given embodiments without creative effort shall fall within the protection scope of the present application.
Summary of the application
A neural network is a computational model formed by a large number of interconnected nodes (or neurons). Each node corresponds to a policy function, and the connection between every two nodes carries a weighted value, called a weight, for the signal passing through that connection. A neural network generally comprises a plurality of neural network layers cascaded one after another: the output of the i-th layer is connected to the input of the (i+1)-th layer, the output of the (i+1)-th layer is connected to the input of the (i+2)-th layer, and so on. After a training sample is input into the cascaded layers, each layer produces an output that serves as the input of the next layer, so the final output is obtained through the calculations of multiple layers. The prediction of the output layer is then compared with the real target value, and the weight matrix and policy function of each layer are adjusted according to the difference between the prediction and the target. The network repeats this adjustment process over the training samples, tuning its weights and other parameters, until its predictions agree with the real targets; this is called the training process of the neural network. Once trained, a neural network model is obtained.
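The training process just described, compare the prediction with the real target and adjust the weights from the difference, can be shown with a deliberately minimal one-weight network trained by gradient descent; this is a generic illustration, not the patent's model.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(100)            # training samples
target = 3.0 * x               # real target values (true weight is 3)

w = 0.0                        # weight on the single connection
lr = 0.1
for _ in range(200):
    pred = w * x                               # forward pass
    grad = 2 * np.mean((pred - target) * x)    # derivative of squared error
    w -= lr * grad                             # adjust weight from the difference
```

After enough iterations the adjusted weight converges toward the true value of 3, which is the "prediction consistent with the real target" condition in the text.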
In view of the foregoing technical problems, the basic concept of the present application is a method for detecting lung lesions that mainly uses an image segmentation model and an image detection model obtained by deep learning. Specifically, chest radiography image data is input into the image segmentation model to segment out lung detection image data; the lung detection image data is input into the image detection model to first obtain multi-layer feature layer data; the multi-layer feature layer data is input into a first detection sub-model in the image detection model, which outputs a detection box of a lesion of the lung detection image and a first prediction probability that the detection box accurately predicts the lesion; and, at the same time, the multi-layer feature layer data is input into a second detection sub-model in the image detection model, which outputs a second prediction probability predicting whether a lesion of the lung detection image exists and/or a heat map of the lesion region of the lung detection image. With these multiple detection results, the precision of lung lesion detection can be improved, the missed-diagnosis rate of medical staff reduced, and their lesion detection efficiency improved.
In addition, the application can be used to efficiently detect novel coronavirus pneumonia (Corona Virus Disease 2019, COVID-19). The image detection model can be trained on chest radiography images showing lesions at different severity grades of novel coronavirus pneumonia, such as the early pathological stage, ordinary nucleic-acid-positive patients, severe patients, and critically ill patients, so that the image detection model learns more of the characteristic features of novel coronavirus pneumonia. In this way novel coronavirus pneumonia can be detected efficiently, improving not only the speed of detection but also its precision.
Chest radiographs in the early pathological stage of novel coronavirus pneumonia often show abnormality on the plain film. Chest radiographs of ordinary nucleic-acid-positive patients mainly show localized patchy or multi-segmental shadows in the outer zones of both lungs and the subpleural regions. Chest radiographs of severe nucleic-acid-positive patients show multiple consolidation shadows in both lungs, partly fused into large consolidations, with a small amount of pleural effusion. Chest radiographs of critically ill nucleic-acid-positive patients show diffuse consolidation shadows of both lungs, presenting as "white lung", possibly accompanied by a small amount of pleural effusion.
Having described the general principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.
Exemplary System
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application. The implementation environment includes a CT scanner 130, a server 120, and a computer device 110. The computer device 110 may acquire chest image data from the CT scanner 130, and the computer device 110 may be connected to the server 120 via a communication network. Optionally, the communication network is a wired network or a wireless network.
The CT scanner 130 is used to perform X-ray scanning of human tissue to obtain CT image data of that tissue. In one embodiment, a frontal chest X-ray film, i.e. the chest radiography image data in the present application, can be obtained by scanning the lungs with the CT scanner 130. However, the present application does not limit the acquisition device of the chest radiography image data, which may be another X-ray imaging system.
The computer device 110 may be a general-purpose computer or a computer device composed of application-specific integrated circuits, which is not limited in this embodiment. For example, the computer device 110 may be a mobile terminal device such as a tablet computer, or a personal computer (PC) such as a laptop or desktop computer. Those skilled in the art will appreciate that there may be one or more computer devices 110, of the same or different types; there may be a single computer device 110, or tens, hundreds, or more. The number and type of computer devices 110 are not limited in the embodiments of the present application. An image segmentation model and an image detection model may be deployed in the computer device 110 for segmenting and detecting chest radiography image data. In some alternative embodiments, the computer device 110 may perform image segmentation on the chest radiography image data acquired from the CT scanner 130, using the image segmentation model deployed on it, to segment out lung detection image data, and may then perform image detection on the lung detection image data, using the image detection model deployed on it, to obtain a detection result related to lung lesions. Thus the precision of lung lesion detection can be improved, the missed-diagnosis rate of medical staff reduced, and their lesion detection efficiency improved.
The server 120 may be a single server, a cluster of several servers, a virtualization platform, or a cloud computing service center. In some optional embodiments, the server 120 receives training images acquired by the computer device 110 and trains the neural network with the training images to obtain an image detection model and an image segmentation model. In other alternative embodiments, the computer device 110 may transmit the chest image data obtained from the CT scanner 130 to the server 120; the server 120 performs image segmentation on the chest image data by using the image segmentation model trained thereon to segment lung detection image data, then performs image detection on the lung detection image data by using the image detection model trained thereon to obtain a detection result related to a lung lesion, and transmits the detection result to the computer device 110 for medical staff to view. Therefore, the precision of lung lesion detection can be improved, the missed-diagnosis rate of lesion detection by medical workers is reduced, and the lesion detection efficiency of the medical workers is improved.
Exemplary method
Fig. 2 is a schematic flow chart of a method for lung lesion detection according to an embodiment of the present application. The method shown in fig. 2 is performed by a computing device (e.g., a server), but the embodiments of the present application are not limited thereto. The server may be a single server, a cluster of several servers, a virtualization platform, or a cloud computing service center, which is not limited in this embodiment of the present application. As shown in fig. 2, the method includes:
s201: and carrying out segmentation operation on the chest radiography image data to obtain segmented lung detection image data.
It should be appreciated that the lung detection image data may be a matrix, which may represent pixel values for various pixel points on the image. The chest image data may be a chest image matrix, which may represent pixel values of various pixel points on the chest image. The number of elements included in the matrix may be the same as the number of pixels on the image.
It should be noted that the embodiment of the present application is not limited to the specific implementation of the segmentation operation on the chest image data, as long as the lung detection image data can be segmented, and the lung detection image corresponding to the lung detection image data can distinguish the lung field region and the extrapulmonary region. The detection of the lung focus is performed based on the lung field region, so that the phenomenon of false positive focus detection in the extrapulmonary region can be avoided when the extrapulmonary region is detected, and the reduction of the focus detection precision is avoided.
S202: and inputting the lung detection image data into an image detection model to obtain multilayer characteristic layer data.
In one embodiment, the lung detection image data is input into the image detection model and first passes through a backbone network (e.g., ResNeXt50 with an FPN) to extract multiple layers of feature layer data; in other words, step S202 is the process by which the image detection model extracts the underlying feature data.
It should be understood that each feature layer may be a matrix. If the number of feature layers is three and the input lung detection image size is 512×512, the matrix sizes of the output feature layers are batch×256×64×64, batch×256×32×32, and batch×256×16×16, respectively.
It should be noted that the number of extracted feature layers is not limited in the embodiments of the present application, and may be one layer or more layers; meanwhile, the embodiment of the present application also does not limit the specific configuration of the backbone network, which may be at least one of network structures such as a convolutional neural network, a recurrent neural network, and a deep neural network. For example, it may be composed of ResNet50 and a Feature Pyramid Network (FPN), or of DenseNet and an FPN.
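As a concrete illustration of the shape arithmetic above, the following sketch (not part of the application; the stride values 8/16/32 are the conventional FPN strides assumed here) computes the pyramid-level shapes for a 512×512 input:

```python
# Illustrative sketch (assumed FPN strides of 8/16/32, 256 channels per level):
# each pyramid level halves the spatial resolution of the previous one.
def fpn_feature_shapes(batch, input_size=512, channels=256, strides=(8, 16, 32)):
    """Return the (batch, C, H, W) shape of each feature layer."""
    return [(batch, channels, input_size // s, input_size // s) for s in strides]

print(fpn_feature_shapes(batch=1))
# -> [(1, 256, 64, 64), (1, 256, 32, 32), (1, 256, 16, 16)]
```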
S203: and inputting the multilayer feature layer data into a first detection sub-model in the image detection model, and outputting a detection frame of the focus of the lung detection image and a first prediction probability for accurately predicting the focus of the lung detection image by the detection frame.
In an embodiment, after the bottom layer features are extracted through the backbone network, the multi-layer feature layer data are continuously input into the first detection sub-model, and after the first detection sub-model, a part of detection results of lung lesions, namely a detection frame of the lesions of the lung detection image and a first prediction probability of accurately predicting the lesions of the lung detection image by the detection frame, can be output.
It should be noted that the first detection submodel may be configured by at least one of network structures such as a convolutional neural network, a cyclic neural network, and a deep neural network, and this is not particularly limited in this embodiment of the application.
It should be understood that the first detection submodel may be regarded as a sub-branch of the image detection model, used for outputting a detection frame of the region where the lesion of the lung detection image is located and a first prediction probability that the detection frame accurately predicts the lesion. The detection frame of the lesion of the lung detection image may be a circumscribed rectangle or a circumscribed circle of the lesion, used for framing the lesion region of the lung detection image; the first prediction probability refers to the probability that the detection frame accurately predicts the lesion of the lung detection image. For example, if the lung detection image actually has a lesion (i.e., the true probability of the lesion is 100%) and the first detection submodel outputs a first prediction probability of 90%, this indicates that the detection frame predicts the lesion of the lung detection image with 90% confidence.
S204: and inputting the multi-layer feature layer data into a second detection sub-model in the image detection model, and outputting a second prediction probability for predicting whether the focus of the lung detection image exists and/or a thermodynamic diagram of the focus region of the lung detection image.
In an embodiment, after the bottom layer features are extracted through the backbone network, the multi-layer feature layer data can be input into not only the first detection submodel, but also the second detection submodel, and after the second detection submodel is passed, another part of detection results of the lung focus can be output, that is, a second prediction probability for predicting whether the focus of the lung detection image exists and/or a thermodynamic diagram of the focus region of the lung detection image.
It should be noted that the second detection submodel may be configured by at least one of network structures such as a convolutional neural network, a cyclic neural network, and a deep neural network, and this is not particularly limited in this embodiment of the application.
It is to be understood that the second detection submodel may be regarded as another sub-branch of the image detection model, used for outputting a second prediction probability of whether a lesion of the lung detection image exists and/or a thermodynamic diagram of the lesion region of the lung detection image. The second prediction probability is the prediction probability of whether the input lung detection image has a lesion; the thermodynamic diagram of the lesion region is an image representation of visualized network attention, used for representing the region where the lesion of the lung detection image is located in a special highlight form, with highlights of different colors representing grades of lesion severity.
In one embodiment, the lesion of the lung detection image may include a common lesion and novel coronavirus pneumonia, and the image detection model of the present application can distinguish the features of common lesions from those of novel coronavirus pneumonia so as to detect the novel coronavirus pneumonia. In this case, the detection frame of the lesion of the lung detection image is used for framing the region of the novel coronavirus pneumonia; the first prediction probability refers to the probability that the detection frame accurately predicts the novel coronavirus pneumonia; the second prediction probability refers to the prediction probability of whether the input lung detection image has novel coronavirus pneumonia; and the thermodynamic diagram of the lesion region represents the region where the novel coronavirus pneumonia is located in highlight colors, for example, a yellow highlight represents a higher disease severity level of the novel coronavirus pneumonia, and a blue highlight represents a lower disease severity level.
It should be noted that, in the embodiment of the present application, the execution sequence of step S203 and step S204 is not limited, and after the bottom layer features of the backbone network are extracted, the multiple layers of feature layers may be input into the first detection sub-model and the second detection sub-model in parallel.
Therefore, after the detection steps are carried out, medical workers can more directly obtain a plurality of detection results of the lung focus, so that the precision of lung focus detection can be improved, the missed diagnosis rate of the medical workers for detecting the focus is reduced, and the focus detection efficiency of the medical workers is improved.
In another embodiment, the method further comprises: inputting lung training image data into a first neural network for feature extraction, and outputting a plurality of layers of feature layers; inputting the multilayer characteristic layers into a second neural network for training to obtain a first detection submodel; inputting the multilayer characteristic layers into a third neural network for training to obtain a second detection submodel; and determining the image detection model based on the first detection submodel and the second detection submodel.
It should be understood that the method for pulmonary lesion detection further includes a training process of an image detection model. Any chest image data can be detected and processed by using the trained image detection model so as to obtain the detection result of the lung focus.
In one embodiment, lung training image data is used as sample data for training the image detection model. Specifically, lung training image data is input into a first neural network to extract bottom layer features so as to output multilayer feature layers, the output multilayer feature layers can be respectively input into a second neural network and a third neural network in a parallel mode to train, a first detection submodel and a second detection submodel are respectively obtained, and at the moment, the first detection submodel and the second detection submodel jointly form an image detection model.
It should also be understood that the first neural network may be regarded as the backbone network described above, and the second neural network and the third neural network may be regarded as sub-networks behind the backbone network, but it should be noted that the embodiments of the present application do not limit the specific configurations of the backbone network and the sub-networks, and both the backbone network and the sub-networks may be configured as at least one of network structures such as a convolutional neural network, a cyclic neural network, and a deep neural network. Meanwhile, the backbone network may be the same as or different from the sub-networks, the backbone network may be formed by a neural network with more layers, and the sub-networks may be formed by a neural network with relatively fewer layers.
In another embodiment, the method further comprises: and acquiring the lung training image data, wherein the lung training image data is labeled lung training image data after manual labeling. The labels comprise detection labels and at least one of category labels and segmentation labels, the detection labels are manually marked detection frames of focuses of the lung training images, the category labels are manually marked classifications of the focuses of the lung training images, the segmentation labels are manually marked focus regions of the lung training images, and the focus regions of the lung training images are regions extending outwards from the centers of the detection frames of the focuses of the lung training images by preset distances.
It should be understood that the lung training image data may be labeled sample data that is labeled manually, and may be specifically labeled by a professional medical staff, and each lung training image data is labeled with a detection label, a category label, and a segmentation label, respectively.
In one embodiment, taking the novel coronavirus pneumonia as an example, the detection label marks a detection frame of the location of the novel coronavirus pneumonia, for example, a circumscribed rectangle or another shape framing the region of the novel coronavirus pneumonia; the category label marks the category of the lung training image, for example, a healthy lung training image is marked as 0, a lung training image of a common lesion is marked as 2, and a lung training image of the novel coronavirus pneumonia is marked as 1; the segmentation label marks each pixel point of the region where the novel coronavirus pneumonia is located. The region where the novel coronavirus pneumonia is located is a region extending outward from the center of the detection frame by a preset distance. It should be noted that the embodiment of the present application does not limit the preset distance, which may be determined empirically; for example, the preset distance may extend outward from the center of the detection frame to 80% of the length or width of the detection frame.
In another embodiment, the inputting the multi-layer feature layers into a second neural network for training to obtain a first detection submodel includes: determining a plurality of first feature maps based on the multi-layer feature layer; generating a plurality of prior boxes on each of the plurality of first feature maps; carrying out post-processing on the plurality of prior frames to obtain a first prior frame matched with the detection frame, a first prediction probability of the first prior frame for accurately predicting the focus of the lung training image and a prediction position of the first prior frame; calculating a first probability difference between the first prediction probability and the true probability of the detection tag and a position deviation between the predicted position of the first prior box and the true position of the detection tag, and back-propagating the first probability difference and the position deviation to adjust the first neural network and the second neural network; and iteratively executing the steps to obtain the first detection submodel after training is finished.
It should be understood that the multi-layer feature layer may directly serve as the plurality of first feature maps of the detection subbranch, wherein each of the plurality of first feature maps has a different size, so that the detection subbranch may implement multi-scale detection.
In an embodiment, a plurality of prior frames with different scales or aspect ratios are set for each unit in each first feature map. By learning from the manually labeled detection frame of the lesion of the lung training image, the second neural network can output, for each prior frame, the probability that the prior frame accurately predicts the lesion of the lung training image and the size adjustment that the prior frame requires relative to the manually labeled detection frame.
In another embodiment, the post-processing of the plurality of prior frames refers to removing redundant prior frames that do not match the manually labeled detection frame of the lesion of the lung training image, and keeping the matching prior frame (i.e., the first prior frame) that matches it. Whether a prior frame matches the detection frame may be judged according to the specific shape of the region where the lesion of the lung training image is located; that is, the matching prior frame is the prior frame whose shape is closest to the specific shape of the lesion region. This first prior frame can then be compared with the manually labeled detection frame to continually adjust the parameters of the first and second neural networks.
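A common way to realize such matching in practice is to score each prior frame by its overlap (intersection-over-union) with the labeled detection frame and keep the best one; the following is a minimal sketch with hypothetical box coordinates, not values taken from the application:

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def match_prior(priors, labeled_box):
    """Index of the prior frame that best matches the labeled detection frame."""
    return int(np.argmax([iou(p, labeled_box) for p in priors]))

priors = [(0, 0, 40, 40), (20, 20, 70, 60), (100, 100, 140, 140)]
labeled = (25, 18, 72, 58)           # hypothetical manually labeled detection frame
print(match_prior(priors, labeled))  # -> 1 (the second prior frame overlaps most)
```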
It should be understood that in each round of training, a first prior frame matching the detection frame of the detection label is found, and the image detection model is trained based on the first prior frame.
It should be further understood that the first prediction probability refers to the probability that the first prior frame accurately predicts the lesion of the lung training image, the true probability of the detection label refers to that if the lung training image really has the lesion, the true probability that the lesion exists in the detection frame of the artificially labeled lung training image is 100%, and the true probability that the lesion does not exist in the detection frame of the artificially labeled lung training image is 0; the predicted position of the first prior frame refers to the center coordinates and width and height of the first prior frame, and the real position of the detection label refers to the center coordinates and width and height of the detection frame of the lung training image labeled by human. Wherein the center coordinates and the width and height can both be expressed in pixel values.
In one embodiment, if the first prediction probability is 90% and the true probability that a lesion exists in the manually labeled detection frame is 100%, the first probability difference is 10%. Meanwhile, if the center coordinate of the predicted position of the first prior frame is (10, 30) with length and width of 50 and 20 respectively, and the center coordinate of the true position of the detection label is (5, 23) with length and width of 48 and 16 respectively, then the center coordinate deviation in the position deviation is (5, 7), and the deviations of the length and width are 2 and 4, respectively. The error values of 10% and (5, 7, 2, 4) are propagated back to the forward networks, i.e., the first neural network and the second neural network, so that their parameters are continuously adjusted according to the error values; the above steps are iteratively performed until convergence, at which point the trained first detection submodel is obtained.
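The arithmetic of this numeric example can be written out directly (a sketch reproducing the numbers above, not the application's actual loss function, which is not specified):

```python
# First probability difference and position deviation from the example above.
pred_prob, true_prob = 0.90, 1.00
pred_center, pred_size = (10, 30), (50, 20)   # predicted first prior frame
true_center, true_size = (5, 23), (48, 16)    # manually labeled detection frame

prob_diff = true_prob - pred_prob                        # 10%
center_dev = (pred_center[0] - true_center[0],
              pred_center[1] - true_center[1])           # (5, 7)
size_dev = (pred_size[0] - true_size[0],
            pred_size[1] - true_size[1])                 # (2, 4)
print(round(prob_diff, 2), center_dev, size_dev)         # 0.1 (5, 7) (2, 4)
```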
In another embodiment, the inputting the multi-layer feature layers into a third neural network for training to obtain a second detection submodel includes: performing an up-sampling and fusion operation on the multi-layer feature layers to generate a second feature map; convolving the second feature map in the third neural network a plurality of times to obtain a first matrix, activating the first matrix with a classifier to obtain a second prediction probability of whether the lesion of the lung training image exists, calculating a second probability difference between the second prediction probability and the true probability of the category label, back-propagating the second probability difference to adjust the first neural network and the third neural network, and iteratively performing the above steps to obtain the trained second detection submodel; and/or convolving the second feature map in the third neural network a plurality of times to obtain a second matrix, activating the second matrix with a classifier to obtain a third prediction probability of whether each point in the second matrix is in the lesion region of the lung training image, calculating a third probability difference between the third prediction probability and the true probability of the segmentation label, back-propagating the third probability difference to adjust the first neural network and the third neural network, and iteratively performing the above steps to obtain the trained second detection submodel.
It should be understood that when the multi-layer feature layers are input into the third neural network for training, at least one type of second detection submodel may be obtained: the first type may be trained according to the category labels, and the second type may be trained according to the segmentation labels. Regardless of which type is obtained, the multi-layer feature layers are first subjected to the up-sampling and fusion operation. For example, for three feature layers, if the size of the input lung training image is 512×512 and the matrix sizes of the three feature layers are batch×256×64×64, batch×256×32×32, and batch×256×16×16 respectively, the two coarser feature layers are up-sampled 2-fold and 4-fold respectively and then fused with the remaining feature layer to obtain the second feature map with matrix size batch×768×64×64.
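The up-sampling and fusion step can be sketched as follows (a minimal nearest-neighbor version in NumPy; the application does not specify the interpolation method, so that choice is an assumption):

```python
import numpy as np

def upsample_nn(x, factor):
    """Nearest-neighbor up-sampling of a (batch, C, H, W) feature layer."""
    return x.repeat(factor, axis=2).repeat(factor, axis=3)

batch = 2
f64 = np.zeros((batch, 256, 64, 64))   # finest feature layer, kept as-is
f32 = np.zeros((batch, 256, 32, 32))   # up-sampled 2-fold below
f16 = np.zeros((batch, 256, 16, 16))   # up-sampled 4-fold below

# Fuse by concatenating along the channel axis: 3 x 256 = 768 channels.
second_feature_map = np.concatenate(
    [f64, upsample_nn(f32, 2), upsample_nn(f16, 4)], axis=1)
print(second_feature_map.shape)  # (2, 768, 64, 64)
```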
In one embodiment, for the second detection submodel of the first type, a series of convolution operations are performed on the second feature map in the third neural network to obtain a first matrix of size batch×n, where n equals 1, representing only two categories, namely lesion and healthy; the classifier is then used to activate the first matrix to obtain the second prediction probability of whether a lesion exists in the lung training image.
It should be understood that the second prediction probability refers to a probability of predicting whether a lesion exists in the lung training image, and the true probability of the category label refers to that if a lesion actually exists in the lung training image, the true probability of the lung training image that a lesion exists is 100%, and the true probability of the lung training image that a lesion does not exist is 0. For example, the second prediction probability is 80%, which means that 80% of the lung training images predict the focus, and the true probability of the lung training images having the focus is 100%, then the difference between the second probabilities is 20%, and then the error value of 20% is propagated back to the forward networks, i.e. the first neural network and the third neural network, so as to continuously adjust the parameters of the first neural network and the third neural network according to the error value, and the above steps are iteratively performed until convergence, at this time, the second detection submodel of the first type after training is obtained.
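As a sketch of this activation step (the application does not name the classifier; a sigmoid is assumed here for the single-logit, lesion-vs-healthy case, and the logit value is illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical first matrix of size batch x n with n = 1: one logit per image.
first_matrix = np.array([[1.386]])               # illustrative logit value
second_pred_prob = sigmoid(first_matrix)[0, 0]   # ~0.80

true_prob = 1.0                                  # category label: lesion present
second_prob_diff = true_prob - second_pred_prob  # ~0.20, the value back-propagated
print(round(second_pred_prob, 2), round(second_prob_diff, 2))  # 0.8 0.2
```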
In one embodiment, for the second detection submodel of the second type, another series of convolution operations on the second feature map in the third neural network yields a second matrix of size batch×n×64×64, where n equals 2, indicating one 64×64 feature map each for the foreground (the lesion region) and the background; the classifier is then used to activate the second matrix, yielding a third prediction probability of whether each pixel in the second matrix is in the lesion region of the lung training image.
It should be understood that the third prediction probability refers to the prediction probability of whether each pixel point in the second matrix is in the lesion region of the lung training image. The true probability of the segmentation label means that, if the lung training image actually has a lesion, the probability that a pixel point in the lesion region is a lesion point is 100%, and the probability that a pixel point outside the lesion region is a lesion point is 0. For example, if the third prediction probability is 80%, meaning the predicted probability that a pixel point in the lesion region is a lesion point is 80%, while the true probability is 100%, then the third probability difference is 20%. This 20% error value is propagated back to the forward networks, i.e., the first neural network and the third neural network, so that their parameters are continuously adjusted according to the error value; the above steps are iteratively performed until convergence, at which point the trained second detection submodel of the second type is obtained.
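A minimal sketch of the per-pixel activation (a two-channel softmax is assumed, since the application does not name the classifier, and the logit values are toy numbers):

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical second matrix of size batch x 2 x 64 x 64:
# channel 0 = background, channel 1 = foreground (lesion region).
second_matrix = np.zeros((1, 2, 64, 64))
second_matrix[0, 1, 20:30, 20:30] = 2.0   # raise the lesion logit in one patch

# Third prediction probability: per-pixel probability of being a lesion point.
third_pred_prob = softmax(second_matrix, axis=1)[:, 1]
print(third_pred_prob.shape)               # (1, 64, 64)
print(third_pred_prob[0, 25, 25] > 0.85)   # True inside the raised patch
```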
It should be understood that in the process of training and continuously iterating, parameters of the whole network (i.e., parameters of the first neural network, parameters of the second neural network, and parameters of the third neural network) are continuously corrected, the first type of second detection submodel and the second type of second detection submodel continuously learn according to the category label and the segmentation label of each lung training image, respectively, and in the process of back propagation, the learning not only affects parameter correction of the third neural network, but also affects parameter correction of the first neural network through the three feature layers, and further affects parameter correction of the second neural network. Meanwhile, the first detection submodel can continuously learn according to the detection label of each lung training image, and in the process of back propagation, the learning not only influences the parameter correction of the second neural network, but also influences the parameter correction of the first neural network through the three feature layers, and further influences the parameter correction of the third neural network.
Therefore, the whole network can learn more feature information from chest radiographs of common lesion types and of novel coronavirus pneumonia at different disease severity levels, such as the early lesion stage, ordinary patients with positive nucleic acid tests, severe patients, and critical patients. This increase in feature information can reduce overfitting on the one hand and improve the detection precision to a certain degree on the other hand.
In another embodiment, the method further comprises: and carrying out image scaling operation on the size of the second feature map to obtain a thermodynamic diagram of a lesion region of the lung training image.
It should be understood that the method for detecting lung lesions further includes, in the process of training the image detection model, generating a thermodynamic diagram of the lesion region of the lung training image from the acquired second feature map, so that when any chest image data is detected by using the image detection model, a thermodynamic diagram of the lesion region of the lung detection image can be output, allowing medical staff to obtain lesion information more intuitively.
In one embodiment, since the second feature map is convolved a plurality of times during the training of the image detection model, the pixel size of the second feature map may be compressed or enlarged relative to the pixel size of the lung training image of the initial input neural network, and then the second feature map is scaled in order to output an image having the same pixel size as the lung training image of the initial input neural network.
It should be noted that, the embodiment of the present application does not limit the specific implementation form of the scaling operation, for example, the scaling of the pixel size of the second feature map to the pixel size of the lung training image of the initial input neural network may be implemented by using a nearest neighbor algorithm, a bilinear interpolation algorithm, a bicubic interpolation algorithm, or the like.
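For instance, a nearest-neighbor scaling of a 64×64 map back to the 512×512 input size can be sketched as follows (illustrative only; bilinear or bicubic interpolation would serve equally well, as noted above):

```python
import numpy as np

def scale_nearest(feature, out_h, out_w):
    """Nearest-neighbor scaling of a 2-D feature map to the target pixel size."""
    h, w = feature.shape
    rows = np.arange(out_h) * h // out_h   # source row index for each output row
    cols = np.arange(out_w) * w // out_w   # source column index for each output column
    return feature[rows][:, cols]

second_feature = np.random.rand(64, 64)          # per-pixel lesion scores
thermodynamic = scale_nearest(second_feature, 512, 512)
print(thermodynamic.shape)  # (512, 512), same pixel size as the input image
```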
In another embodiment, the segmenting the chest image data to obtain segmented lung detection image data includes: inputting the chest image data into the image segmentation model to obtain first lung field segmentation image data; post-processing the first lung field segmentation image data based on a fully connected conditional random field model to obtain second lung field segmentation image data; and generating a mask according to the second lung field segmentation image data to segment the lung detection image data.
It should be understood that, after the chest image data is segmented by the image segmentation model, the first lung field segmentation image data may be obtained. However, for a chest image with a larger lesion area, the corresponding first lung field segmentation image may exhibit segmentation discontinuities and broken edges, so a Fully Connected (Dense) Conditional Random Field model may be used to post-process the first lung field segmentation image data to obtain the second lung field segmentation image data. The fully connected conditional random field model considers not only the shape, texture, position, and color of the image, but also the contrast, i.e., the relationship between each pixel and all other pixels, so that a highly refined segmentation can be achieved.
In an embodiment, the image segmentation model may be a deep learning network model, and may be composed of at least one of a back propagation neural network, a convolutional neural network, a cyclic neural network, a deep neural network, and the like. The image segmentation model can be obtained by training the deep learning network model by using a plurality of sample data, and the image segmentation model obtained by training can be used for segmenting the lung fields of the chest radiography images.
It is to be understood that the second lung field segmentation image data may comprise a second image matrix, each element of which may be represented by 1 or 0, where 1 represents the lung field region and 0 represents the extrapulmonary region; i.e., the second image matrix may be regarded as a binary image. The first lung field segmentation image data may comprise a first image matrix, each element of which may also be represented by 0 or 1; however, regions of segmentation discontinuity and broken edges may exist on the first lung field segmentation image, and the values (0 or 1) of the elements corresponding to those regions may be inaccurate. By post-processing the first lung field segmentation image data with the fully connected conditional random field model, second lung field segmentation image data with continuous, clear edges can be obtained.
In an embodiment, after the second lung field segmentation image data with continuous and clear edges is obtained, a mask may be generated according to the second lung field segmentation image data to obtain lung detection image data, and then the lung detection image data is input into the image detection model to perform lung lesion detection. It should be understood that the masking may be performed by extracting the region of interest, i.e., multiplying the image to be processed (the second lung field segmentation image) by using a pre-made region of interest (i.e., lung field region) mask, so as to obtain a region of interest (i.e., lung field region) image, wherein the image values in the region of interest (i.e., lung field region) are all 1 while the image values outside the lung are all 0. However, the embodiment of the present application is not limited to the specific implementation of generating the mask, and may be any embodiment as long as the image values of the lung field region and the extrapulmonary region can be distinguished from each other.
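A minimal sketch of the masking step described above (element-wise multiplication with a binary lung-field mask; toy values, not real image data):

```python
import numpy as np

# Toy 8x8 chest image and a binary second lung field segmentation matrix,
# where 1 marks the lung field region and 0 the extrapulmonary region.
chest_image = np.full((8, 8), 100.0)
lung_field_mask = np.zeros((8, 8))
lung_field_mask[2:6, 2:6] = 1.0

# Element-wise multiplication keeps lung-field pixels and zeros out the rest,
# yielding the lung detection image data fed to the image detection model.
lung_detection_image = chest_image * lung_field_mask
print(lung_detection_image[3, 3], lung_detection_image[0, 0])  # 100.0 0.0
```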
In another embodiment, the method further comprises: preprocessing the original chest radiography image data to obtain the chest radiography image data.
It should be appreciated that the original chest radiography image data may be acquired by Computed Tomography (CT), Computed Radiography (CR), Digital Radiography (DR), Magnetic Resonance Imaging (MRI), ultrasound, or other imaging techniques. For original chest radiography image data obtained with different imaging techniques, chest radiography image data in a uniform format can be obtained through preprocessing, which facilitates the image segmentation processing performed by the image segmentation model. The original chest radiography image data in the embodiment of the present application may also be data that satisfies other standards, as long as chest radiography image data that can be processed by the image segmentation model can be obtained through preprocessing.
In another embodiment, the preprocessing of the original chest radiography image data to obtain the chest radiography image data includes: windowing the original chest radiography image data to obtain windowed image data; and performing denoising processing and/or image enhancement processing on the windowed image data to obtain the chest radiography image data.
It should be appreciated that the raw chest image data may be data that meets Digital Imaging and Communications in Medicine (DICOM) standards. The DICOM medical image includes a background region and a target region, where the target region may be a region to be diagnosed (such as a lung) of a human body, and in order to make the display of the target region clearer and facilitate the diagnosis of medical staff, display parameters, such as a window width and a window level, of the DICOM medical image need to be adjusted. The window width and level may be provided in DICOM medical image data or may be determined based on DICOM medical image data by other models.
In one embodiment, the DICOM medical image has a pixel value range of [0, 4095], and is converted to an image having a pixel value range of [0, 255] for display by a display device. The pixel value may represent luminance information of a region corresponding to the pixel point. For example, the larger the pixel value of a pixel, the brighter the pixel is. Here the pixel value may be positively correlated with the grey value or the grey value may be the pixel value.
In another embodiment, the window width represents the pixel value range of the window region: areas of the medical image above this range are all displayed as white, and areas below this range are all displayed as black. When the window width is increased, more tissue structures of different densities appear in the image finally displayed by the display device, but the contrast between structures is low and details in the image are difficult to observe; when the window width is reduced, fewer tissue structures of different densities appear in the displayed image, but the contrast between structures is high and details can be observed clearly. The window level represents the pixel value at the center of the window region. For a given window width, different window levels yield different pixel value ranges for the window region. For example, with a window width of 60, when the window level is 40, the pixel value range of the window region is [10, 70]; when the window level is 50, the pixel value range of the window region is [20, 80]. These pixel value ranges are merely exemplary and serve to explain the technical solution of the present application; in practice, the pixel value range of the window region may be selected according to the actual situation.
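The window arithmetic above (window region = level ± width/2, with values below the window rendered black and values above rendered white) can be sketched as follows. The linear mapping to [0, 255] is a common convention, not the patent's exact formula, and the example numbers mirror the width-60 / level-40 case in the text:

```python
import numpy as np

def apply_window(pixels, window_level, window_width):
    """Map raw pixel values into [0, 255] for display.

    The window covers [level - width/2, level + width/2]; values below
    the window display as black (0), values above as white (255).
    """
    low = window_level - window_width / 2.0
    high = window_level + window_width / 2.0
    clipped = np.clip(pixels, low, high)
    return ((clipped - low) / (high - low) * 255.0).astype(np.uint8)

# Window width 60, window level 40 -> window region [10, 70], as in the text.
raw = np.array([0, 10, 40, 70, 4095])
print(apply_window(raw, window_level=40, window_width=60))
# Raw value 0 falls below the window (black), 4095 above it (white).
```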
It should be understood that, when the medical image is a CT image, the larger the CT value of the human tissue corresponding to a pixel point is, the closer the color of the pixel point on the CT image is to white (or the brighter the pixel point is); the smaller the CT value of the human tissue corresponding to the pixel point is, the closer the color of the pixel point on the CT image is to black (or the darker the pixel point is). The pixel values may be positively correlated with the CT values.
It should also be appreciated that chest radiography image data can be obtained on the basis of the windowed image data, and that the lung field region can be clearly displayed on a display device through the lung detection image data obtained after segmentation by the image segmentation model, post-processing by the fully connected conditional random field model, and the masking operation. However, noise such as white noise may be introduced during acquisition of the lung image and affect clear and accurate display of the image, so the windowed image data needs to be denoised; for example, white noise in the windowed image data can be removed with a Gaussian filter.
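As a hedged illustration of this denoising step, the sketch below suppresses synthetic white noise with a Gaussian filter; the image, noise level, and `sigma` value are all invented for demonstration and would be tuned in practice:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# A smooth synthetic "lung image" corrupted with additive white Gaussian noise.
clean = np.outer(np.hanning(64), np.hanning(64)) * 100.0
noisy = clean + rng.normal(0.0, 5.0, clean.shape)

# Gaussian filtering suppresses the white noise, as the text suggests.
denoised = gaussian_filter(noisy, sigma=1.5)

# Mean squared error against the clean image drops after filtering.
noise_before = np.mean((noisy - clean) ** 2)
noise_after = np.mean((denoised - clean) ** 2)
print(noise_before, noise_after)
```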
Specifically, the image enhancement processing may include resizing, cropping, rotation, normalization, and the like. During preprocessing, one or more of the above operations can be used to enhance the image for the subsequent image segmentation and post-processing. The image enhancement processing may be performed before or after the denoising processing.
In an optional embodiment of the present application, during preprocessing, denoising may first be performed on the original chest radiography image data, followed by windowing and image enhancement on the denoised image data.
In another embodiment, the method further comprises: training a deep learning network model by using first training image data to obtain a first segmentation model; inputting the first test image data into a first segmentation model to obtain segmented first test result data; acquiring second training image data, wherein the second training image data is obtained by manually repairing the first test result data; training a deep learning network model by using the first training image data and the second training image data to obtain a second segmentation model; an image segmentation model is determined based on the second segmentation model.
Specifically, the method for detecting the lung lesion further comprises a training process of an image segmentation model. Any chest image data can be segmented by using the trained image segmentation model to obtain a lung field segmentation image with continuous and clear boundaries.
The training image data is sample data, and the training image data utilized each time the deep learning network model is trained may include a plurality of sample data. Each sample data may include sample lung image data and sample lung field segmentation image data corresponding to the sample lung image data. The sample data is used for training a deep learning network model, and a segmentation model can be obtained.
The test image data may include one or more test lung image data, the test lung image data is image data without image segmentation, and the test lung image data is input into the segmentation model, so that a segmented lung field segmentation image can be obtained.
Specifically, training the deep learning network model with the first training image data yields a first segmentation model, which can be regarded as the first round of training. The first test image data is input into the first segmentation model to obtain segmented first test result data, which includes a plurality of segmented lung field segmentation images. Because the number and variety of sample data in the first training image data are limited, the accuracy of the first segmentation model is also limited, and the lung field segmentation images in the first test result data may contain inaccurately segmented regions. The first test result data is manually repaired to correct the inaccurately segmented regions in the lung field segmentation images, and the manually repaired first test result data is used as the second training image data.
Similarly, training the deep learning network model with the first training image data and the second training image data yields a second segmentation model, which can be regarded as the second round of training. Through the testing and manual repair processes, the training image data is augmented while the manual annotation workload is reduced.
In another embodiment, determining the image segmentation model based on the second segmentation model comprises: a) inputting the (n-1)th test image data into the (n-1)th segmentation model to obtain segmented (n-1)th test result data, wherein when n is 3, the (n-1)th segmentation model is the second segmentation model; b) acquiring nth training image data, wherein the nth training image data is obtained by manually repairing the (n-1)th test result data; c) training the deep learning network model with the first training image data through the nth training image data to obtain an nth segmentation model; iteratively performing steps a), b) and c) with the test image data to obtain an Nth segmentation model, wherein n is an integer greater than or equal to 3 and less than or equal to N; and determining the image segmentation model based on the Nth segmentation model.
Specifically, the second test image data is input into the second segmentation model, and the segmented second test result data can be obtained. And manually repairing the second test result data to obtain third training image data. And training the deep learning network model by using the first training image data to the third training image data to obtain a third segmentation model, which can be regarded as a third round of training.
By analogy, the training, testing and manual repairing processes are continuously and iteratively executed, and more training image data can be obtained. The more training image data is utilized when the deep learning network model is trained, the higher the accuracy of the segmentation model obtained by training.
The Nth round of training yields the Nth segmentation model, which serves as the image segmentation model. The value of N can be set according to the actual situation; the larger the value of N, the higher the accuracy of the resulting image segmentation model. Alternatively, the accuracy of the test results of the Nth segmentation model may be used as the cutoff condition for training: for example, when the accuracy of the test results of the Nth segmentation model is greater than or equal to a threshold (e.g., 90%), the Nth segmentation model is used as the image segmentation model; when the accuracy is smaller than the threshold, the (N+1)th round of training is performed, and so on until the accuracy of the test results of a segmentation model is greater than or equal to the threshold.
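The iterative train / test / manual-repair loop with an accuracy-threshold cutoff might be organized as in the following sketch, where `train`, `evaluate_accuracy`, and `manual_repair` are hypothetical stubs standing in for real model training, test evaluation, and human correction:

```python
def train(training_sets):
    # Placeholder: accuracy grows with the amount of accumulated training data.
    return {"accuracy": min(0.99, 0.6 + 0.1 * len(training_sets))}

def evaluate_accuracy(model):
    return model["accuracy"]

def manual_repair(test_results):
    # Placeholder for human annotators fixing poorly segmented results.
    return test_results

def iterative_training(first_training_data, threshold=0.9, max_rounds=10):
    training_sets = [first_training_data]
    model = train(training_sets)
    rounds = 1
    while evaluate_accuracy(model) < threshold and rounds < max_rounds:
        test_results = f"test results of round {rounds}"   # model output on test data
        training_sets.append(manual_repair(test_results))  # repaired data joins the pool
        model = train(training_sets)                       # retrain on all rounds so far
        rounds += 1
    return model, rounds

model, rounds = iterative_training("first training image data")
print(rounds, model["accuracy"])
```

The loop stops either when the accuracy threshold is reached or after a fixed number of rounds, matching the two cutoff conditions described above.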
In this embodiment, in order to improve the segmentation accuracy of the image segmentation model for image data corresponding to various lung diseases, a plurality of lung images may be adopted as training image data in the process of training the deep learning network model. Different images of the lungs may correspond to different diseases, such as hydrothorax, pneumothorax, emphysema, pneumonia, mass, tuberculosis, etc. Therefore, in the training process, the diversity of the samples can be increased, and the adaptability of the image segmentation model is improved.
Further, when the segmentation model is used to segment lung image data during testing, poor segmentation may occur. For example, in the case of pneumonia, the pixel values of the affected lung tissue are higher than those of the surrounding tissue, and the segmented lung region may bypass the pneumonia region. In the embodiment of the present application, test result data with poor segmentation is manually repaired and added as new sample data to the training image data of the next round of training, which further increases the number of samples while increasing the variety of samples in the training image data.
According to the lung image segmentation method provided by the embodiment of the application, the training set of the image segmentation model can be rapidly increased by continuously repeating the training, testing and manual repair processes and adding the data after manual repair to the sample data of the next training, so that the segmentation accuracy of the model on the lung image which is large in lesion area and difficult to segment is improved.
According to an embodiment of the present application, the training image data and the test image data may be the image data after the preprocessing.
According to an embodiment of the application, the image segmentation model comprises a U-net network model.
In other embodiments, the image segmentation model includes any one of a Fully Convolutional Network (FCN), SegNet, and DeepLab network structure.
Fig. 3 is a schematic flowchart of a training method for an image detection model according to an embodiment of the present application, and specific implementation details of the training method are the same as those of the training process for the image detection model in the method for detecting a lung lesion in the above embodiment, and are not described herein again. The training method comprises the following steps:
s301: and inputting the lung training image data into a first neural network for feature extraction, and outputting a plurality of layers of feature layers.
S302: and inputting the multilayer characteristic layers into a second neural network for training to obtain a first detection submodel.
S303: and inputting the multilayer characteristic layers into a third neural network for training to obtain a second detection submodel.
S304: determining the image detection model based on the first detection submodel and the second detection submodel.
For a clearer understanding of the training method of the image detection model of the present application, the above training process of the image detection model may refer to a flowchart shown in fig. 4.
It should be noted that the image detection model may be obtained by using the lung training image data to complete the learning training of the lung lesion, and may also be obtained by using other medical image data to complete the learning training of the lesion at different positions, for example, the breast image data, the brain image data, or other image data related to the human body structure, which is not limited in this embodiment of the present application.
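The overall architecture of steps S301 to S304 — one shared feature extractor feeding two parallel submodels — can be caricatured with plain NumPy as below. The layer sizes, random weights, and head outputs are invented stand-ins for the real convolutional networks; this is a structural sketch only, not the patent's implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Shared "first neural network": a single random linear layer standing in
# for the convolutional feature extractor that outputs the feature layers.
W_backbone = rng.normal(size=(16, 8))

# Two parallel heads, mirroring the first and second detection submodels.
W_det = rng.normal(size=(8, 5))   # detection head: 4 box offsets + 1 confidence
W_cls = rng.normal(size=(8, 1))   # classification head: lesion present / absent

def forward(image_vector):
    features = np.tanh(image_vector @ W_backbone)   # shared feature layers
    box_and_conf = features @ W_det                 # first submodel output
    box, confidence = box_and_conf[:4], sigmoid(box_and_conf[4])
    lesion_prob = sigmoid(features @ W_cls)[0]      # second submodel output
    return box, confidence, lesion_prob

box, confidence, lesion_prob = forward(rng.normal(size=16))
print(box.shape, confidence, lesion_prob)
```

Because both heads consume the same features, gradients from either head would update the shared backbone during training, which is the coupling that steps S302 and S303 describe.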
Exemplary devices
The apparatus embodiments can be used to perform the method embodiments of the present application. For details not disclosed in the apparatus embodiments of the present application, reference is made to the method embodiments of the present application.
Fig. 5 is a block diagram of an apparatus 500 for pulmonary lesion detection according to an embodiment of the present application. As shown in fig. 5, the apparatus 500 includes:
a segmentation module 510 configured to perform segmentation processing on the chest image data to obtain segmented lung detection image data.
A feature obtaining module 520, configured to input the lung detection image data into an image detection model, so as to obtain multiple layers of feature layer data.
A first detection module 530 configured to input the multi-layer feature layer data into a first detection sub-model in the image detection model, and output a detection frame of the lesion of the lung detection image and a first prediction probability that the detection frame accurately predicts the lesion of the lung detection image.
A second detection module 540 configured to input the multi-layer feature layer data into a second detection submodel in the image detection model, and output a second prediction probability for predicting whether a lesion of the lung detection image exists and/or a thermodynamic diagram of a lesion region of the lung detection image.
Exemplary electronic device
Next, an electronic apparatus according to an embodiment of the present application is described with reference to fig. 6. Fig. 6 illustrates a block diagram of an electronic device according to an embodiment of the present application.
As shown in fig. 6, the electronic device 60 includes one or more processors 61 and a memory 62.
The processor 61 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 60 to perform desired functions.
Memory 62 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by processor 61 to implement the method for lung lesion detection, the method for training an image detection model, the method for training an image segmentation model, and/or other desired functions of the various embodiments of the present application described above. Various contents such as an input signal, a signal component, a noise component, etc. may also be stored in the computer-readable storage medium.
In one example, the electronic device 60 may further include: an input device 63 and an output device 64, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
The input device 63 may include, for example, a keyboard, a mouse, and the like.
The output device 64 may output various information including the determined distance information, direction information, and the like to the outside. The output devices 64 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, among others.
Of course, for the sake of simplicity, only some of the components of the electronic device 60 relevant to the present application are shown in fig. 6, and components such as buses, input/output interfaces, and the like are omitted. In addition, the electronic device 60 may include any other suitable components depending on the particular application.
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps of the method for lung lesion detection, the method for training an image detection model, the method for training an image segmentation model according to various embodiments of the present application described above in this specification.
Furthermore, embodiments of the present application may also be a computer readable storage medium having stored thereon computer program instructions, which, when executed by a processor, cause the processor to perform the steps of the method for lung lesion detection, the method for training an image detection model, and the method for training an image segmentation model according to various embodiments of the present application, described above in the present specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (19)

1. A method for pulmonary lesion detection, comprising:
carrying out segmentation processing on the chest radiography image data to obtain segmented lung detection image data;
inputting the lung detection image data into an image detection model to obtain multilayer characteristic layer data;
inputting the multilayer feature layer data into a first detection sub-model in the image detection model, and outputting a detection frame of a focus of the lung detection image and a first prediction probability for accurately predicting the focus of the lung detection image by the detection frame; and
inputting the multi-layer feature layer data into a second detection submodel in the image detection model, outputting a second prediction probability for predicting whether the lesion of the lung detection image exists and/or a thermodynamic diagram of a lesion region of the lung detection image,
wherein the first detection submodel and the second detection submodel are arranged in parallel.
2. The method of claim 1, further comprising:
inputting lung training image data into a first neural network for feature extraction, and outputting a plurality of layers of feature layers;
inputting the multilayer characteristic layers into a second neural network for training to obtain a first detection submodel;
inputting the multilayer characteristic layers into a third neural network for training to obtain a second detection submodel; and
determining the image detection model based on the first detection submodel and the second detection submodel.
3. The method of claim 2, further comprising:
acquiring the lung training image data, wherein the lung training image data is manually annotated lung training image data carrying a detection label, a category label and a segmentation label,
wherein the detection label is a manually annotated detection frame of a focus of the lung training image, the category label is a manually annotated category of the focus of the lung training image, the segmentation label is a manually annotated focus area of the lung training image, and the focus area of the lung training image is an area extending outwards by a preset distance from the center of the detection frame of the focus of the lung training image.
4. The method of claim 3, wherein the inputting the plurality of feature layers into a second neural network for training to obtain a first detection submodel comprises:
d) determining a plurality of first feature maps based on the multi-layer feature layer;
e) generating a plurality of prior boxes on each of the plurality of first feature maps;
f) carrying out post-processing on the plurality of prior frames to obtain a first prior frame matched with the detection frame, a first prediction probability of the first prior frame for accurately predicting the focus of the lung training image and a prediction position of the first prior frame;
g) calculating a first probability difference between the first prediction probability and the true probability of the detection tag and a position deviation between the predicted position of the first prior box and the true position of the detection tag, and back-propagating the first probability difference and the position deviation to adjust the first neural network and the second neural network; and
and iteratively performing step d), step e), step f) and step g) to obtain the first detection submodel after training is completed.
5. The method of claim 3, wherein the inputting the plurality of feature layers into a third neural network for training to obtain a second detection submodel comprises:
h) carrying out up-sampling and fusion operation on the multilayer feature layers to generate a second feature map;
i) convolving the second feature map in the third neural network a plurality of times to obtain a first matrix, activating the first matrix with a classifier to obtain a second prediction probability predicting whether a lesion of the lung training image exists, calculating a second probability difference between the second prediction probability and a true probability of the class label, and back-propagating the second probability difference to adjust the first and third neural networks, and iteratively performing the steps h) and i) to obtain the second detection submodel after training is completed,
and/or
j) Convolving the second feature map in the third neural network for a plurality of times to obtain a second matrix, activating the second matrix by using a classifier to obtain a third prediction probability for predicting whether each point in the second matrix is in a lesion region of the lung training image, calculating a third probability difference between the third prediction probability and a true probability of the segmentation label, and back-propagating the third probability difference to adjust the first neural network and the third neural network, and iteratively performing the step h) and the step j) to obtain the second detection submodel after training.
6. The method of claim 5, further comprising:
and carrying out image scaling operation on the size of the second feature map to obtain a thermodynamic diagram of a lesion region of the lung training image.
7. The method according to any one of claims 1 to 6, wherein the performing segmentation processing on the chest image data to obtain segmented lung detection image data comprises:
inputting the chest image data into an image segmentation model to obtain first lung field segmentation image data;
post-processing the first lung field segmentation image data based on a full-connection conditional random field model to obtain second lung field segmentation image data; and
and generating a mask according to the second lung field segmentation image data to obtain the lung detection image data.
8. The method of any one of claims 1 to 6, further comprising:
preprocessing the original chest radiography image data to obtain the chest radiography image data.
9. The method of claim 8, wherein said preprocessing the raw chest image data to obtain the chest image data comprises:
windowing the original chest radiography image data to obtain windowed image data; and
and performing denoising processing and/or image enhancement processing on the windowed image data to obtain the chest radiography image data.
10. The method of any one of claims 1 to 6, further comprising:
training a deep learning network model by using first training image data to obtain a first segmentation model;
inputting first test image data into the first segmentation model to obtain segmented first test result data;
obtaining second training image data, wherein the second training image data is obtained by performing manual repair on the first test result data;
training the deep learning network model by using the first training image data and the second training image data to obtain a second segmentation model; and
an image segmentation model is determined based on the second segmentation model.
11. The method of claim 10, wherein determining an image segmentation model based on the second segmentation model comprises:
a) inputting the (n-1)th test image data into the (n-1)th segmentation model to obtain segmented (n-1)th test result data, wherein when n is 3, the (n-1)th segmentation model is the second segmentation model;
b) obtaining nth training image data, wherein the nth training image data is obtained by manually repairing the (n-1)th test result data;
c) training the deep learning network model by using the first training image data to the nth training image data to obtain an nth segmentation model;
iteratively executing the steps a), b) and c) by using test image data to obtain an Nth segmentation model, wherein n is an integer greater than or equal to 3 and less than or equal to N; and
determining the image segmentation model based on the Nth segmentation model.
12. A training method of an image detection model is characterized by comprising the following steps:
inputting lung training image data into a first neural network for feature extraction, and outputting a plurality of layers of feature layers;
inputting the multilayer characteristic layers into a second neural network for training to obtain a first detection submodel;
inputting the multilayer characteristic layers into a third neural network for training to obtain a second detection submodel; and
determining the image detection model based on the first detection submodel and the second detection submodel,
wherein the first detection submodel and the second detection submodel are arranged in parallel.
13. The training method of claim 12, further comprising:
acquiring the lung training image data, wherein the lung training image data is manually annotated lung training image data carrying a detection label, a category label and a segmentation label,
wherein the detection label is a manually annotated detection frame of a focus of the lung training image, the category label is a manually annotated category of the focus of the lung training image, the segmentation label is a manually annotated focus area of the lung training image, and the focus area of the lung training image is an area extending outwards by a preset distance from the center of the detection frame of the focus of the lung training image.
14. The training method of claim 13, wherein the inputting the plurality of feature layers into a second neural network for training to obtain a first detection submodel comprises:
k) determining a plurality of first feature maps based on the multi-layer feature layer data;
l) generating a plurality of prior boxes on each of the plurality of first feature maps;
m) post-processing the plurality of prior boxes to obtain a first prior box matched with the detection frame, a first prediction probability that the first prior box accurately predicts the lesion of the lung training image, and a predicted position of the first prior box;
n) calculating a first probability difference between the first prediction probability and the true probability of the detection label and a position deviation between the predicted position of the first prior box and the true position of the detection label, and back-propagating the first probability difference and the position deviation to adjust the first neural network and the second neural network; and
iteratively performing step k), step l), step m) and step n) to obtain the first detection submodel after training is completed.
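The matching of prior boxes to the detection frame in step m) is commonly done by intersection-over-union (IoU) overlap, as in SSD-style detectors. The sketch below illustrates that matching under this assumption; the claim does not specify the matching criterion, and the threshold value is hypothetical.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def match_priors(priors, gt_box, threshold=0.5):
    """Indices of prior boxes whose IoU with the ground-truth detection
    frame exceeds the threshold (candidate 'first prior boxes')."""
    return [i for i, p in enumerate(priors) if iou(p, gt_box) > threshold]

priors = [(0, 0, 10, 10), (4, 4, 14, 14), (20, 20, 30, 30)]
gt = (5, 5, 15, 15)
print(match_priors(priors, gt))  # [1] — only the second prior overlaps enough
```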
15. The training method of claim 13, wherein the inputting the plurality of feature layers into a third neural network for training to obtain a second detection submodel comprises:
o) performing upsampling and fusion operations on the multi-layer feature layers to generate a second feature map;
p) convolving the second feature map in the third neural network a plurality of times to obtain a first matrix, activating the first matrix with a classifier to obtain a second prediction probability predicting whether a lesion of the lung training image exists, calculating a second probability difference between the second prediction probability and a true probability of the class label, and back-propagating the second probability difference to adjust the first and third neural networks, and iteratively performing the step o) and the step p) to obtain the second detection submodel after training is completed,
and/or
q) convolving the second feature map in the third neural network for a plurality of times to obtain a second matrix, activating the second matrix by using a classifier to obtain a third prediction probability for predicting whether each point in the second matrix is in a lesion region of the lung training image, calculating a third probability difference between the third prediction probability and a true probability of the segmentation label, and back-propagating the third probability difference to adjust the first neural network and the third neural network, and iteratively performing the step o) and the step q) to obtain the second detection submodel after training.
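Step o)'s upsample-and-fuse operation resembles feature-pyramid fusion, and steps p)/q) activate a convolved matrix with a classifier to get per-image or per-point probabilities. The toy sketch below, using nearest-neighbour upsampling and a sigmoid as the classifier, illustrates the data flow only; the actual network operations and layer sizes are not specified by the claim.

```python
import numpy as np

def upsample2x(feat):
    """Nearest-neighbour 2x upsampling of a 2-D feature map."""
    return np.repeat(np.repeat(feat, 2, axis=0), 2, axis=1)

def fuse(coarse, fine):
    """Upsample the coarser layer and add it to the finer layer (FPN-style)."""
    return upsample2x(coarse) + fine

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy multi-layer feature layers: a 2x2 coarse map and a 4x4 fine map
coarse = np.ones((2, 2))
fine = np.zeros((4, 4))
second_feature_map = fuse(coarse, fine)       # step o): fused second feature map
per_point_prob = sigmoid(second_feature_map)  # step q): per-point lesion probability
print(per_point_prob.shape)  # (4, 4)
```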
16. The training method of claim 15, further comprising:
performing an image scaling operation on the second feature map to obtain a thermodynamic diagram of the lesion region of the lung training image.
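The scaling operation of claim 16 — resizing a low-resolution probability map back to image size to form the lesion heat map — can be sketched with nearest-neighbour interpolation. The interpolation mode is an assumption; the claim only recites an image scaling operation.

```python
import numpy as np

def scale_to_image(prob_map, out_h, out_w):
    """Nearest-neighbour scaling of a low-resolution probability map to the
    original image size, yielding a lesion heat map of shape (out_h, out_w)."""
    in_h, in_w = prob_map.shape
    rows = np.arange(out_h) * in_h // out_h  # source row for each output row
    cols = np.arange(out_w) * in_w // out_w  # source column for each output column
    return prob_map[np.ix_(rows, cols)]

probs = np.array([[0.1, 0.9],
                  [0.2, 0.8]])
heatmap = scale_to_image(probs, 4, 4)
print(heatmap.shape)  # (4, 4)
```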
17. An apparatus for pulmonary lesion detection, comprising:
a segmentation module configured to perform segmentation processing on chest radiography image data to obtain segmented lung detection image data;
a feature acquisition module configured to input the lung detection image data into an image detection model to obtain multi-layer feature layer data;
a first detection module configured to input the multi-layer feature layer data into a first detection submodel in the image detection model, and output a detection frame of a lesion of the lung detection image and a first prediction probability that the detection frame accurately predicts the lesion of the lung detection image; and
a second detection module configured to input the multi-layer feature layer data into a second detection submodel in the image detection model, output a second prediction probability predicting whether a lesion of the lung detection image exists and/or a thermodynamic diagram of a lesion region of the lung detection image,
wherein the first detection submodel and the second detection submodel are arranged in parallel.
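The parallel arrangement of the two detection submodels in claim 17 — both consuming the same multi-layer feature layers from a shared backbone — can be sketched as a forward pass with two independent heads. All functions below are stand-ins illustrating the wiring only; the real submodels are trained neural networks, not these placeholders.

```python
import numpy as np

def backbone(image):
    """Stand-in first neural network: produces multi-layer feature layers
    (here, simple 2x and 4x downsamplings of the input)."""
    return [image[::2, ::2], image[::4, ::4]]

def detection_head(features):
    """Stand-in first detection submodel: a detection frame for the lesion
    plus the probability that the frame predicts it accurately."""
    return (4, 4, 12, 12), 0.87

def classification_head(features):
    """Stand-in second detection submodel: probability that a lesion exists."""
    return float(features[-1].mean() > 0)

image = np.ones((16, 16))
features = backbone(image)                 # shared multi-layer feature layers
box, p_box = detection_head(features)      # first submodel
p_lesion = classification_head(features)   # second submodel, run in parallel
print(box, p_box, p_lesion)
```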
18. A computer-readable storage medium storing a computer program for performing the method for lung lesion detection of any one of claims 1 to 11 or for performing the method for training an image detection model of any one of claims 12 to 16.
19. An electronic device, comprising:
a processor for performing the method for pulmonary lesion detection of any one of claims 1 to 11 or for performing the method for training the image detection model of any one of claims 12 to 16; and
a memory for storing the processor-executable instructions.
CN202010130331.2A 2020-02-28 2020-02-28 Method and device for detecting lung focus and training method of image detection model Active CN111325739B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010130331.2A CN111325739B (en) 2020-02-28 2020-02-28 Method and device for detecting lung focus and training method of image detection model

Publications (2)

Publication Number Publication Date
CN111325739A CN111325739A (en) 2020-06-23
CN111325739B true CN111325739B (en) 2020-12-29

Family

ID=71171345

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010130331.2A Active CN111325739B (en) 2020-02-28 2020-02-28 Method and device for detecting lung focus and training method of image detection model

Country Status (1)

Country Link
CN (1) CN111325739B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111768382B (en) * 2020-06-30 2023-08-15 重庆大学 Interactive segmentation method based on lung nodule growth morphology
CN111862075A (en) * 2020-07-30 2020-10-30 西南医科大学 Lung image analysis system and method based on deep learning
CN112037173B (en) * 2020-08-04 2024-04-05 湖南自兴智慧医疗科技有限公司 Chromosome detection method and device and electronic equipment
CN112116009B (en) * 2020-09-21 2024-04-26 长沙理工大学 New coronal pneumonia X-ray image identification method and system based on convolutional neural network
CN112308853A (en) * 2020-10-20 2021-02-02 平安科技(深圳)有限公司 Electronic equipment, medical image index generation method and device and storage medium
CN112435242A (en) * 2020-11-25 2021-03-02 江西中科九峰智慧医疗科技有限公司 Lung image processing method and device, electronic equipment and storage medium
CN112434612A (en) * 2020-11-25 2021-03-02 创新奇智(上海)科技有限公司 Smoking detection method and device, electronic equipment and computer readable storage medium
CN112349429B (en) * 2020-12-01 2021-09-21 苏州体素信息科技有限公司 Disease prediction method, disease prediction model training method and device, and storage medium
CN112633348B (en) * 2020-12-17 2022-03-15 首都医科大学附属北京天坛医院 Method and device for detecting cerebral arteriovenous malformation and judging dispersion property of cerebral arteriovenous malformation
CN112560964A (en) * 2020-12-18 2021-03-26 深圳赛安特技术服务有限公司 Method and system for training Chinese herbal medicine pest and disease identification model based on semi-supervised learning
CN114220163B (en) * 2021-11-18 2023-01-06 北京百度网讯科技有限公司 Human body posture estimation method and device, electronic equipment and storage medium
CN115082405B (en) * 2022-06-22 2024-05-14 强联智创(北京)科技有限公司 Training method, detection method, device and equipment for intracranial focus detection model
CN116188469A (en) * 2023-04-28 2023-05-30 之江实验室 Focus detection method, focus detection device, readable storage medium and electronic equipment
CN116912258B (en) * 2023-09-14 2023-12-08 天津市胸科医院 Self-efficient estimation method for focus parameters of lung CT image

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108549926A (en) * 2018-03-09 2018-09-18 中山大学 A kind of deep neural network and training method for refining identification vehicle attribute
CN109147940A (en) * 2018-07-05 2019-01-04 北京昆仑医云科技有限公司 From the device and system of the medical image automatic Prediction physiological status of patient
CN109583321A (en) * 2018-11-09 2019-04-05 同济大学 The detection method of wisp in a kind of structured road based on deep learning
CN110827294A (en) * 2019-10-31 2020-02-21 北京推想科技有限公司 Network model training method and device and focus area determination method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109903297B (en) * 2019-03-08 2020-12-29 数坤(北京)网络科技有限公司 Coronary artery segmentation method and system based on classification model
CN110175993A (en) * 2019-05-27 2019-08-27 西安交通大学医学院第一附属医院 A kind of Faster R-CNN pulmonary tuberculosis sign detection system and method based on FPN
CN110428426A (en) * 2019-07-02 2019-11-08 温州医科大学 A kind of MRI image automatic division method based on improvement random forests algorithm
CN110599448B (en) * 2019-07-31 2022-03-15 浙江工业大学 Migratory learning lung lesion tissue detection system based on MaskScoring R-CNN network

Similar Documents

Publication Publication Date Title
CN111325739B (en) Method and device for detecting lung focus and training method of image detection model
CN110599448B (en) Migratory learning lung lesion tissue detection system based on MaskScoring R-CNN network
US11468564B2 (en) Systems and methods for automatic detection and quantification of pathology using dynamic feature classification
EP3553742B1 (en) Method and device for identifying pathological picture
CN110428475B (en) Medical image classification method, model training method and server
Shen et al. An automated lung segmentation approach using bidirectional chain codes to improve nodule detection accuracy
US20200320685A1 (en) Automated classification and taxonomy of 3d teeth data using deep learning methods
US20220198230A1 (en) Auxiliary detection method and image recognition method for rib fractures based on deep learning
US9269139B2 (en) Rib suppression in radiographic images
CN111062947B (en) X-ray chest radiography focus positioning method and system based on deep learning
CN112365973B (en) Pulmonary nodule auxiliary diagnosis system based on countermeasure network and fast R-CNN
WO2021136368A1 (en) Method and apparatus for automatically detecting pectoralis major region in molybdenum target image
CN112381762A (en) CT rib fracture auxiliary diagnosis system based on deep learning algorithm
CN113256670A (en) Image processing method and device, and network model training method and device
EP2178047A2 (en) Ribcage segmentation
US9672600B2 (en) Clavicle suppression in radiographic images
CN111401102A (en) Deep learning model training method and device, electronic equipment and storage medium
CN111918611B (en) Method for controlling abnormal display of chest X-ray image, recording medium and apparatus
CN112862786B (en) CTA image data processing method, device and storage medium
CN112862785B (en) CTA image data identification method, device and storage medium
CN115482223A (en) Image processing method, image processing device, storage medium and electronic equipment
CN110570417B (en) Pulmonary nodule classification device and image processing equipment
CN112766332A (en) Medical image detection model training method, medical image detection method and device
CN112862787B (en) CTA image data processing method, device and storage medium
CN117392468B (en) Cancer pathology image classification system, medium and equipment based on multi-example learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room B401, floor 4, building 1, No. 12, Shangdi Information Road, Haidian District, Beijing 100085

Applicant after: Tuxiang Medical Technology Co., Ltd

Address before: Room B401, floor 4, building 1, No. 12, Shangdi Information Road, Haidian District, Beijing 100085

Applicant before: Beijing Tuoxiang Technology Co., Ltd.

GR01 Patent grant