CN112712508B - Pneumothorax determination method and pneumothorax determination device - Google Patents

Pneumothorax determination method and pneumothorax determination device

Info

Publication number: CN112712508B
Application number: CN202011634613.2A
Authority: CN (China)
Other versions: CN112712508A (Chinese)
Inventors: 石磊, 华铱炜, 余沛玥, 杨忠程
Applicant and assignee: Hangzhou Yitu Healthcare Technology Co., Ltd.
Priority: CN202011634613.2A
Legal status: Active (granted)

Classifications

    • G06T 7/0012: Image analysis; biomedical image inspection
    • G06N 3/045: Neural network architectures; combinations of networks
    • G06N 3/084: Learning methods; backpropagation, e.g. using gradient descent
    • G06T 7/11: Segmentation; region-based segmentation
    • G06T 7/136: Segmentation; edge detection involving thresholding
    • G16H 30/20: ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS
    • G06T 2207/10081: Image acquisition modality; computed x-ray tomography [CT]
    • G06T 2207/10088: Image acquisition modality; magnetic resonance imaging [MRI]
    • G06T 2207/20081: Special algorithmic details; training and learning
    • G06T 2207/20084: Special algorithmic details; artificial neural networks [ANN]
    • G06T 2207/30061: Subject of image; lung


Abstract

The invention discloses a method and device for determining pneumothorax. The method includes acquiring a medical image comprising multiple frames of cross-sectional images; segmenting the pneumothorax in each frame of cross-sectional image to obtain first regions; screening second regions from the first regions based on the position and area of each first region; determining saccular shadows in the multi-frame cross-sectional images; removing the saccular shadows from the second regions to obtain third regions; and, when a third region exists in a continuous predetermined number of frames of cross-sectional images and the sum of the areas of the third regions is greater than or equal to a second threshold, determining that the third regions in those frames are pneumothorax. The method and device improve the speed and accuracy of determining pneumothorax in chest images.

Description

Pneumothorax determination method and pneumothorax determination device
Technical Field
The present invention relates to the field of medical technology, and in particular, to a method and apparatus for determining pneumothorax, a computer device, and a computer readable storage medium.
Background
Pneumothorax refers to the accumulation of air in the pleural cavity. Currently, a chest CT scan is usually taken to determine whether there is a pneumothorax in the lungs of the subject; specifically, the doctor judges from the acquired cross-sectional images according to his own experience. However, this manual approach is inefficient and highly subjective, and misjudgment or missed diagnosis can occur, affecting subsequent diagnosis and treatment.
Therefore, how to quickly and accurately determine whether the lungs of a subject have pneumothorax has become one of the problems to be solved.
Disclosure of Invention
The invention provides a pneumothorax determination method and device, a computer device, and a computer-readable storage medium, to address the low efficiency and accuracy of manually determining whether a subject's lungs have pneumothorax in the prior art.
The invention provides a method for determining pneumothorax, comprising the following steps:
acquiring a medical image, wherein the medical image comprises multiple frames of cross-sectional images;
segmenting the pneumothorax in each frame of cross-sectional image to obtain a first region;
screening a second region from the first regions based on the position and area of the first region in each frame of cross-sectional image;
determining a saccular shadow in the multi-frame cross-sectional images;
removing the saccular shadow from the second region to obtain a third region;
and when the third region exists in a continuous predetermined number of frames of cross-sectional images and the sum of the areas of the third regions is greater than or equal to a second threshold, determining that the third regions in those cross-sectional images are pneumothorax.
Optionally, the acquiring of the medical image includes:
acquiring multiple frames of coronal images;
determining the Z coordinates of a starting point and an ending point in each frame of coronal image;
determining the Z coordinate Z_s of the starting point with the minimum Z coordinate among the multi-frame coronal images;
determining the Z coordinate Z_e of the ending point with the maximum Z coordinate among the multi-frame coronal images;
determining the images located between Z = Z_s and Z = Z_e as the medical image.
Optionally, screening the second region from the first regions based on the position and area of the first region in each frame of cross-sectional image includes: selecting, from the first regions, the regions that are located within the lung and whose area is greater than or equal to a first threshold as the second regions.
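This screening rule can be sketched in a few lines (a hypothetical representation in which each candidate region carries an in-lung flag and a pixel area; the threshold value is illustrative, not taken from the patent):

```python
def screen_second_regions(first_regions, area_threshold):
    """Keep only candidate regions that lie inside the lung and whose
    area is at least area_threshold (the claimed first threshold)."""
    return [r for r in first_regions
            if r["in_lung"] and r["area"] >= area_threshold]

regions = [
    {"in_lung": True,  "area": 120},   # kept
    {"in_lung": True,  "area": 15},    # too small, dropped
    {"in_lung": False, "area": 300},   # outside the lung, dropped
]
second = screen_second_regions(regions, area_threshold=50)
```

Both conditions must hold at once: a large shadow outside the lung and a tiny speck inside it are rejected alike.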
Optionally, when the third region exists in a continuous predetermined number of frames of cross-sectional images and the sum of the areas of the third regions is greater than or equal to the second threshold, determining that the third regions in those cross-sectional images are pneumothorax includes:
when the third region exists in the left lung or the right lung in a continuous predetermined number of frames of cross-sectional images and the sum of the areas of the third regions in that lung is greater than or equal to the second threshold, determining that the third regions in that lung in those cross-sectional images are pneumothorax.
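A minimal sketch of this decision rule (pure Python; the frame count and area threshold are illustrative, and in the per-lung variant each lung would be checked separately with its own area series):

```python
def confirm_pneumothorax(areas_per_frame, min_frames, area_sum_threshold):
    """areas_per_frame[i] is the third-region area in frame i (0 if absent).
    Returns True when some run of min_frames consecutive frames all contain
    a third region and the areas in that run sum to at least
    area_sum_threshold (the claimed second threshold)."""
    n = len(areas_per_frame)
    for start in range(n - min_frames + 1):
        window = areas_per_frame[start:start + min_frames]
        if all(a > 0 for a in window) and sum(window) >= area_sum_threshold:
            return True
    return False

# Frames 1-3 all contain a region and 40 + 55 + 60 >= 150: confirmed.
hit = confirm_pneumothorax([0, 40, 55, 60, 0], min_frames=3,
                           area_sum_threshold=150)
# A gap in frame 2 breaks the consecutive run: rejected.
miss = confirm_pneumothorax([0, 40, 0, 60, 0], min_frames=3,
                            area_sum_threshold=150)
```

Requiring both persistence across frames and a minimum total area suppresses single-slice false positives.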
Optionally, determining the saccular shadow in the multi-frame cross-sectional images includes:
segmenting the saccular shadow in the multi-frame cross-sectional images to obtain a fourth region;
acquiring a region of interest from the medical image based on a locating point in the fourth region, the region of interest including the saccular shadow, the locating point being associated with the center or center of gravity of the fourth region;
classifying the region of interest through a classification model to obtain a confidence that the region of interest is a saccular shadow;
determining the saccular shadow in the cross-sectional image based on the confidence that the region of interest is a saccular shadow.
Optionally, classifying the region of interest through the classification model to obtain the confidence that the region of interest is a saccular shadow includes:
inputting the region of interest and its associated region into the classification model, which outputs the confidence that the region of interest is a saccular shadow.
Optionally, the associated region is the same size as the region of interest and is a geometric body extended by a predetermined distance around the locating point in the fourth region corresponding to the associated region of interest.
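Extracting a fixed-size region of interest around the locating point can be sketched as follows (NumPy; the cubic shape and half-size are assumptions for illustration, and the crop is clamped at the volume border so it always stays in bounds):

```python
import numpy as np

def crop_roi(volume, point, half_size):
    """Crop a cube of side 2 * half_size around a locating point (z, y, x),
    shifting the window inward at the volume border when needed."""
    slices = []
    for coord, dim in zip(point, volume.shape):
        lo = max(0, min(coord - half_size, dim - 2 * half_size))
        slices.append(slice(lo, lo + 2 * half_size))
    return volume[tuple(slices)]

vol = np.arange(8 * 8 * 8).reshape(8, 8, 8)
# The point sits near two borders; the crop is clamped, not zero-padded.
roi = crop_roi(vol, point=(1, 4, 7), half_size=2)
```

Clamping (rather than padding) keeps every voxel of the crop real image data, which matters when the crop is fed to a classification model.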
The invention also provides a device for determining pneumothorax, comprising:
the acquisition unit, configured to acquire a medical image, wherein the medical image comprises multiple frames of cross-sectional images;
the segmentation unit, configured to segment the pneumothorax in each frame of cross-sectional image to obtain a first region;
the screening unit, configured to screen a second region from the first regions based on the position and area of the first region in each frame of cross-sectional image;
the first determining unit, configured to determine a saccular shadow in the multi-frame cross-sectional images;
the removing unit, configured to remove the saccular shadow from the second region to obtain a third region;
and the second determining unit, configured to determine, when the third region exists in a continuous predetermined number of frames of cross-sectional images and the sum of the areas of the third regions is greater than or equal to a second threshold, that the third regions in those cross-sectional images are pneumothorax.
The invention also provides a computer device comprising at least one processor and at least one memory, wherein the memory stores a computer program which, when executed by the processor, causes the processor to perform the above method of determining pneumothorax.
The invention also provides a computer-readable storage medium storing instructions which, when executed by a processor within a device, cause the device to perform the above method of determining pneumothorax.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
A medical image comprising multiple frames of cross-sectional images is acquired, and the pneumothorax in each frame of cross-sectional image is segmented to obtain first regions. Second regions are then screened from the first regions based on the position and area of each first region. The saccular shadows in the multi-frame cross-sectional images are determined and removed from the second regions to obtain third regions. Finally, when a third region exists in a continuous predetermined number of frames of cross-sectional images and the sum of the areas of the third regions is greater than or equal to a second threshold, the third regions in those frames are determined to be pneumothorax. In other words, the segmented first regions are not taken directly as pneumothorax: second regions are first selected from them according to position and area, the saccular shadows are removed to obtain third regions, and only third regions that persist over a predetermined number of consecutive frames with sufficient total area are confirmed as pneumothorax. This improves the accuracy of the finally determined pneumothorax to a large extent. In addition, because the subject's lungs no longer need to be assessed manually, misjudgments and missed diagnoses caused by a doctor's subjective judgment are avoided; the speed and accuracy of determining pneumothorax in chest images are improved, and the doctor's diagnostic efficiency and accuracy are improved to a certain extent.
The accuracy of subsequent diagnoses based on the determined pneumothorax is, to a certain extent, improved as well.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a schematic illustration of the basic anatomical planes and axes of the human body in standard anatomy according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of determining pneumothorax according to an embodiment of the present invention;
FIG. 3 is a schematic illustration of a coronal image in accordance with an embodiment of the present invention;
FIG. 4 is a schematic illustration of a cross-sectional image of an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As described in the background, at present a doctor reading films usually has to judge manually whether a pneumothorax exists in the lung images of the subject. Manual judgment is slow and prone to misjudgment and missed diagnosis, which lowers the doctor's film-reading and diagnostic efficiency.
To better explain the technical scheme of the invention, this embodiment first describes the basic anatomical planes and axes in standard anatomy. Referring to fig. 1, which is a schematic view of the basic planes and axes of the human body in standard anatomy: the basic planes include the coronal (frontal) plane, the sagittal (median) plane, and the transverse (horizontal) plane. The basic axes comprise the vertical axis (Z axis: running from head to tail, perpendicular to the ground plane; toward the head is positive, toward the tail negative), the sagittal axis (Y axis: running from front to back, at right angles to the vertical axis; front positive, back negative), and the coronal axis (X axis, also called the frontal axis: running left-right, parallel to the horizon and perpendicular to the other two axes; left negative, right positive). The three-dimensional orientations of the human body include anterior (toward the abdomen), posterior (toward the back), superior (toward the head), and inferior (toward the feet). All planes, axes, and orientations in this embodiment follow fig. 1.
Fig. 2 is a flow chart of a method of determining pneumothorax according to an embodiment of the present invention. As shown in fig. 2, the method for determining pneumothorax in this embodiment includes:
S11: A medical image is acquired, the medical image comprising multiple frames of cross-sectional images.
S12: The pneumothorax in each frame of cross-sectional image is segmented to obtain a first region.
S13: A second region is screened from the first regions based on the position and area of the first region in each frame of cross-sectional image.
S14: The saccular shadow in the multi-frame cross-sectional images is determined.
S15: The saccular shadow is removed from the second region to obtain a third region.
S16: When the third region exists in a continuous predetermined number of frames of cross-sectional images and the sum of the areas of the third regions is greater than or equal to a second threshold, the third regions in those cross-sectional images are determined to be pneumothorax.
In the present embodiment, to determine whether there is a pneumothorax in the lung region, a medical image, i.e. multiple frames of cross-sectional images containing the lung region, must first be acquired. In general, a chest CT acquisition includes other areas besides the lung region. To quickly locate the cross-sectional images of the lung region within the acquired chest image, this embodiment first locates the extent of the lung region along the Z axis. Specifically, the position of the lung region on the Z axis is obtained as follows:
acquire multiple frames of coronal images;
determine the Z coordinates of the starting point and ending point in each frame of coronal image;
determine the Z coordinate Z_s of the starting point with the minimum Z coordinate among the multi-frame coronal images;
determine the Z coordinate Z_e of the ending point with the maximum Z coordinate among the multi-frame coronal images;
determine the images located between Z = Z_s and Z = Z_e as the medical image.
The multi-frame coronal images may be acquired from chest CT images in this embodiment, or from chest MRI images in other embodiments. Referring to fig. 3, a schematic coronal image of the chest according to an embodiment of the present invention: as is typical of a CT acquisition, it includes other regions in addition to the lung region. To locate the lung region in the Z direction, the Z coordinates of the starting and ending points of the region containing the lungs must be determined in each coronal image. The starting and ending points may also be the starting and ending points of the lung region itself, chosen according to actual requirements. With continued reference to fig. 3, the starting-point coordinate Z_1 and ending-point coordinate Z_2 shown there are the Z coordinates of the start and end of the lung region.
In this embodiment, the Z coordinates of the starting and ending points in each coronal image may be obtained by a neural network, for example a 2D regression network. The 2D regression network comprises a feature extraction module and a fully connected regression module, where the output of the feature extraction module is the input of the regression module. The feature extraction module comprises N convolution modules alternating with N max-pooling layers, i.e. the output of each convolution module is connected to the input of a max-pooling layer. Each convolution module comprises several convolution blocks, and each convolution block comprises a convolution layer (Conv2d), a batch normalization layer (BN, Batch Normalization), and an activation layer; the activation function may be a rectified linear unit (ReLU). The fully connected regression module comprises M consecutive fully connected layers, optionally with dropout layers (rate 0.5) between them, and finally outputs the Z coordinates of the starting and ending points. These coordinates may be pixel coordinates, e.g. the pixel positions of the starting or ending point under a preset coordinate system.
In this embodiment, several (chest) coronal images may be used as training samples, with annotators marking the Z coordinates of the starting and ending points on each image; these may be the start and end of the lung region. Data enhancement is then applied to the training samples (e.g. random rotation by a small angle, random vertical and horizontal translation by 0-30 pixels, random scaling by a factor of 0.85-1.15, and slight jitter of image contrast and brightness), enlarging the data to about 10 times the original amount. Finally, the training samples are fed into the 2D regression network for training. During training, a loss function is computed from the annotated Z coordinates and those predicted by the network, and the network is trained by backpropagation; the optimizer may be SGD with momentum and step decay. Once trained, each coronal image frame can be input into the 2D regression network to obtain the Z coordinates of its starting and ending points.
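The enhancement step described above can be sketched without an image library (NumPy only; rotation is omitted for brevity, scaling uses nearest-neighbour resampling about the origin, and the jitter magnitudes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def translate(img, dy, dx):
    """Shift by (dy, dx) pixels, filling vacated pixels with zeros."""
    out = np.zeros_like(img)
    h, w = img.shape
    out[max(dy, 0):min(h + dy, h), max(dx, 0):min(w + dx, w)] = \
        img[max(-dy, 0):min(h - dy, h), max(-dx, 0):min(w - dx, w)]
    return out

def scale_nn(img, factor):
    """Nearest-neighbour zoom about the origin, keeping the output size."""
    h, w = img.shape
    ys = np.clip((np.arange(h) / factor).astype(int), 0, h - 1)
    xs = np.clip((np.arange(w) / factor).astype(int), 0, w - 1)
    return img[np.ix_(ys, xs)]

def augment(img):
    """One random sample: translate by up to 30 px, scale by 0.85-1.15,
    then apply a slight brightness jitter."""
    img = translate(img, int(rng.integers(-30, 31)), int(rng.integers(-30, 31)))
    img = scale_nn(img, rng.uniform(0.85, 1.15))
    return img + rng.uniform(-0.05, 0.05)
```

In practice the same geometric transform must also be applied to the annotated starting- and ending-point coordinates so labels stay aligned with the image.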
After the Z coordinates of the starting and ending points in each coronal image are obtained from the 2D regression network, Z_s is determined as the minimum among the starting-point coordinates of the multi-frame coronal images, and Z_e as the maximum among the ending-point coordinates. For example, given 5 coronal images whose starting-point coordinates are z_1, z_3, z_5, z_7, z_9 and whose ending-point coordinates are z_2, z_4, z_6, z_8, z_10, if z_5 is the smallest starting-point coordinate and z_6 the largest ending-point coordinate, then Z_s = z_5 and Z_e = z_6. Once Z_s and Z_e are determined, the images between Z = Z_s and Z = Z_e form the medical image to be acquired, which comprises multiple frames of cross-sectional images.
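The selection of Z_s and Z_e from the per-frame predictions reduces to a min/max over the predicted pairs (a sketch with made-up coordinate values; frames_by_z is an assumed mapping from each cross-sectional frame's Z coordinate to its image):

```python
def lung_z_range(start_end_per_frame):
    """start_end_per_frame: one (z_start, z_end) pair per coronal frame,
    as predicted by the 2D regression network.  Z_s is the smallest
    starting coordinate, Z_e the largest ending coordinate."""
    z_s = min(s for s, _ in start_end_per_frame)
    z_e = max(e for _, e in start_end_per_frame)
    return z_s, z_e

def select_cross_sections(frames_by_z, z_s, z_e):
    """Keep the cross-sectional frames whose Z coordinate lies in [Z_s, Z_e]."""
    return {z: f for z, f in frames_by_z.items() if z_s <= z <= z_e}

pairs = [(12, 80), (10, 78), (11, 83), (13, 79), (12, 81)]
z_s, z_e = lung_z_range(pairs)   # (10, 83)
```

Taking the min of the starts and the max of the ends yields the widest predicted extent, so no lung tissue is cut off by a single under-shooting frame.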
In this embodiment, to obtain the position of the lung region on the Z axis even faster, the coronal frame located in the middle of the body (commonly called the intermediate-frame coronal image) may be acquired first, followed by a preset number of adjacent frames, e.g. the 3 frames before and 3 frames after the intermediate frame, or the 5 frames before and after it; the preset number may be chosen according to actual requirements. The Z coordinates of the starting and ending points in the intermediate frame and its adjacent frames are then determined directly, again using the 2D regression network (not repeated here). From these, the minimum starting-point coordinate Z_s and the maximum ending-point coordinate Z_e are determined, and the images between Z = Z_s and Z = Z_e form the medical image comprising multiple frames of cross-sectional images.
Determining the position of the lung region on the Z axis from only the intermediate frame and a preset number of adjacent coronal frames reduces the number of coronal images that must be processed, so Z_s and Z_e, and hence the position of the lung region on the Z axis, can be determined quickly.
After the medical image comprising multiple frames of cross-sectional images is acquired in S11, S12 is performed to segment the pneumothorax in each frame of cross-sectional image to obtain the first region. In this embodiment, conventional image segmentation algorithms may be used, for example threshold-based segmentation, region-based segmentation, or the watershed algorithm. Alternatively, the pneumothorax in the cross-sectional image may be segmented by a pneumothorax segmentation model, for example a U-Net neural network model using VGG as a backbone, or a fully convolutional network (FCN, Fully Convolutional Network) model.
In this embodiment, the pneumothorax segmentation model may specifically include a feature extraction module, a downsampling module, and an upsampling module connected in sequence. The feature extraction module may include a first convolution unit and a second convolution unit, each of which may comprise a 2D convolution layer, a batch normalization (BN, Batch Normalization) layer, and an excitation function layer. The excitation function may be of various types, for example a rectified linear unit (ReLU).
In this embodiment, the number of upsampling and downsampling modules may be set by those skilled in the art according to practical experience: there may be one of each, or several (two or more). Each downsampling module may include a 2D downsampling layer, which may be of size 2 x 2, and a convolution feature extraction module. Correspondingly, each upsampling module may include a 2D deconvolution upsampling layer (also of size 2 x 2), a splicing layer, and a convolution feature extraction module. In the embodiment of the invention, the splicing layer of the upsampling module corresponds to the output of the matching downsampling layer, so the downsampled features can be spliced in to obtain the feature map. Finally, the feature map is convolved to obtain the segmentation result. When a cross-sectional image is input, the model outputs, for each pixel of the image, the confidence that the pixel is pneumothorax.
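The downsampling / upsampling / splicing structure can be illustrated shape-wise without a deep-learning framework (a minimal NumPy sketch: 2 x 2 max pooling, nearest-neighbour 2x upsampling, and channel-wise splicing; the channel and image sizes are illustrative):

```python
import numpy as np

def max_pool_2x2(x):
    """2 x 2 max pooling: (C, H, W) -> (C, H/2, W/2)."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def upsample_2x(x):
    """Nearest-neighbour 2x upsampling: (C, H, W) -> (C, 2H, 2W)."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def splice(skip, up):
    """Concatenate encoder features with upsampled decoder features
    along the channel axis, as the splicing layer does."""
    return np.concatenate([skip, up], axis=0)

x = np.random.rand(8, 64, 64)   # encoder feature map
down = max_pool_2x2(x)          # (8, 32, 32)
up = upsample_2x(down)          # (8, 64, 64)
fused = splice(x, up)           # (16, 64, 64): input to the next convolution
```

The splice doubles the channel count, which is why the convolution module after it is sized to consume both the skip features and the upsampled features.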
In this embodiment, when a pneumothorax in the cross-sectional images is segmented by using the pneumothorax segmentation model, in order to improve the accuracy of the segmented pneumothorax, the cross-sectional image to be segmented is not directly input into the pneumothorax segmentation model. Instead, it is taken as the intermediate frame cross-sectional image, at least one frame of cross-sectional image located before it and at least one frame located after it are acquired, and the image layer formed by the intermediate frame cross-sectional image and these neighboring frames is input into the pneumothorax segmentation model. Similarly, when the initial pneumothorax segmentation model is trained, groups of at least three frames of cross-sectional images, in which the pneumothorax is marked in the intermediate frame and in at least one frame before and after it, are used as training samples to train the initial pneumothorax segmentation model so as to obtain the pneumothorax segmentation model. For example, if the intermediate frame cross-sectional image is the 8th frame, at least one frame located before it may be the 7th frame, and at least one frame located after it may be the 9th frame; at least two frames located before it may be the 7th and 6th frames, and at least two frames located after it may be the 9th and 10th frames.
In this embodiment, the image layer of the 3-frame cross-sectional image composed of the intermediate frame cross-sectional image and the one frame located before and after it is input to the pneumothorax segmentation model to segment the pneumothorax in the intermediate frame, for example, inputting the 7th, 8th and 9th frame cross-sectional images into the pneumothorax segmentation model to segment the pneumothorax in the 8th frame. By inputting the image layer into the pneumothorax segmentation model, the segmentation of the pneumothorax in the intermediate frame combines the pneumothorax information of the previous frame and the next frame, which can improve the accuracy of pneumothorax segmentation in each frame of cross-sectional image.
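The 3-frame image layer described above can be assembled by stacking each slice with its neighbours. A minimal numpy sketch — the function name, array layout, and the edge-clamping behaviour at the first and last frame are assumptions, not the patent's specification:

```python
import numpy as np

def make_slab(volume, i):
    """Stack the i-th slice with its neighbours into a 3-frame input layer.

    `volume` is a (num_slices, H, W) array; edge slices are clamped so the
    first and last frames still yield a 3-frame slab.
    """
    lo = max(i - 1, 0)
    hi = min(i + 1, volume.shape[0] - 1)
    return np.stack([volume[lo], volume[i], volume[hi]], axis=0)
```

The resulting (3, H, W) slab would then be fed to the segmentation model in place of the single middle slice.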
In order to improve the accuracy of pneumothorax recognition, S13 is executed. In this embodiment, instead of directly determining a first region obtained by segmenting the pneumothorax in a cross-sectional image to be a pneumothorax, the first regions obtained by segmentation are screened to obtain the second region. In this embodiment, the second region may be selected from the plurality of first regions according to the position and the area of each first region. Specifically, it may first be determined whether the first region is located in the lung; if the first region is located in the lung and the area of the first region is greater than or equal to the first threshold, the first region is taken as the second region. In this embodiment, the coordinates of each point on the boundary of the first region are known, and the coordinates of each point on the boundary of the lung region can be obtained by segmenting the lung region.
In this embodiment, the lung region may be segmented from the cross-sectional image by an image segmentation algorithm, or by a neural network. Specifically, the cross-sectional image may first be binarized, for example with an adaptive histogram binarization algorithm, to obtain the foreground region in the cross-sectional image. The foreground region is then subjected to a dilation-erosion operation (also referred to as a closing operation) to obtain a first image. Next, a plurality of connected blocks are extracted from the first image by flood filling (flood fill), the number of pixels in each connected block is determined, the connected block with the largest number of pixels, that is, the largest area, is taken as a second image, and the other connected blocks are deleted.
A dilation operation is performed on the second image to obtain a third image, and finally the third image is segmented based on a predetermined threshold to obtain the lung region, that is, the lung region is segmented by a threshold method. In this embodiment, the predetermined threshold may be obtained from a CT value histogram of the third image, which is generated with the CT value as the abscissa and the number of pixels corresponding to each CT value as the ordinate. The CT value distribution of the foreground region in the cross-sectional image can be obtained from this histogram. As known to those skilled in the art, the CT value of the lung region is far lower than that of the regions other than the lung, so two peaks appear in the histogram: the peak with the higher CT value and its vicinity correspond to the regions other than the lung, and the peak with the lower CT value and its vicinity correspond to the regions in the lung. In this embodiment, the CT value corresponding to the valley between the two peaks is taken as the segmentation threshold, that is, as the predetermined threshold, and the third image is segmented with this predetermined threshold to obtain the lung region. Of course, in other embodiments, other algorithms such as a watershed algorithm may be used to segment the lung region in the cross-sectional image, thereby obtaining the boundary of the lung region.
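The valley-picking step on the CT value histogram can be sketched as follows. This is a minimal illustration assuming a clean two-peak histogram; the function name, the `guard` window used to keep the second peak away from the first, and the bin handling are all assumptions:

```python
import numpy as np

def valley_threshold(hist, centers, guard=10):
    """Return the CT value at the valley between the two highest peaks.

    `hist` holds the pixel counts per bin, `centers` the CT value of each
    bin. The first peak is the global maximum; a neighbourhood of `guard`
    bins around it is suppressed before searching for the second peak,
    and the minimum between the two peaks is the valley.
    """
    p1 = int(np.argmax(hist))
    masked = hist.copy()
    lo, hi = max(p1 - guard, 0), min(p1 + guard + 1, len(hist))
    masked[lo:hi] = -1          # suppress the first peak's neighbourhood
    p2 = int(np.argmax(masked))
    a, b = sorted((p1, p2))
    valley = a + int(np.argmin(hist[a:b + 1]))
    return centers[valley]
```

On lung CT the lower-CT peak corresponds to the lung and the higher-CT peak to the surrounding tissue, so thresholding the third image at the returned value separates the two.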
After the coordinates of each point on the boundary of the lung region are obtained, it is possible to determine whether the first region is located inside or outside the lung by the positional relationship between the coordinates of the point on the boundary of the first region and the coordinates of the point on the boundary of the lung region.
After determining that the first region is located in the lung, it is determined whether the area of the first region is greater than or equal to the first threshold. In this embodiment, the area of the first region may be measured by the number of pixels in the first region; if the number of pixels is 20, the area of the first region is 20 pixels. The first threshold may be determined according to practical experience, for example, any value in [20, 30]. When the first region is located in the lung and its area is greater than or equal to the first threshold, the first region is the screened second region.
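The screening rule — keep a first region only if it lies inside the lung and its pixel count reaches the first threshold — can be sketched as below. The function name, the list-of-pixel-coordinates representation of a region, and the boolean lung mask are assumptions for illustration:

```python
def screen_second_regions(first_regions, lung_mask, first_threshold=20):
    """Screen the second regions from the first regions.

    `first_regions` is a list of regions, each a list of (row, col) pixel
    coordinates from the segmentation; `lung_mask` is a 2D boolean grid
    that is True inside the lung. A region passes if every pixel lies in
    the lung and its pixel count (area) is >= first_threshold.
    """
    second = []
    for region in first_regions:
        in_lung = all(lung_mask[r][c] for r, c in region)
        if in_lung and len(region) >= first_threshold:
            second.append(region)
    return second
```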
The second regions are screened from the plurality of first regions through the above process, and false positives that may still exist in the second regions are removed next.
S14 is executed: determining the saccular shadows in the multi-frame cross-sectional images. Since a saccular shadow appears in the cross-sectional image as a dark low-density region similar to a pneumothorax, saccular shadows may exist in the second regions obtained by screening, and they need to be removed to further improve the accuracy of pneumothorax identification. In this embodiment, specifically, the saccular shadows in the multi-frame cross-sectional images are determined as follows.
First, the saccular shadows in the multi-frame cross-sectional images are segmented to obtain the fourth regions. In this embodiment, the segmentation of the saccular shadow may use a conventional image segmentation algorithm, such as a threshold-based image segmentation method, a region-based image segmentation method, a watershed algorithm, and the like, or the saccular shadow in the cross-sectional image may be segmented by a saccular shadow segmentation model, for example a U-NET neural network model using VGG as the backbone, or a fully convolutional network (FCN, Fully Convolutional Network) model, and the like. In this embodiment, the structure of the saccular shadow segmentation model is similar to that of the pneumothorax segmentation model and will not be described here again. When the saccular shadow segmentation model is used, in order to improve the accuracy of segmentation, the cross-sectional image to be segmented may be taken as the intermediate frame cross-sectional image, at least one frame of cross-sectional image located before and after it may be acquired, and the image layer formed by the intermediate frame cross-sectional image and the at least one frame located before and after it may be input into the saccular shadow segmentation model. In this embodiment, specifically, the image layer of the 3-frame cross-sectional image composed of the intermediate frame cross-sectional image and the one frame located before and after it may be input into the saccular shadow segmentation model to segment the saccular shadow in the intermediate frame.
For example, the 7th, 8th and 9th frame cross-sectional images are input into the saccular shadow segmentation model to segment the saccular shadow in the 8th frame. By inputting the image layer into the saccular shadow segmentation model, the segmentation of the saccular shadow in the intermediate frame combines the saccular shadow information of the previous frame and the next frame, which can improve the accuracy of saccular shadow segmentation in each frame of cross-sectional image.
A region of interest is then acquired from the medical image based on a locating point in the fourth region, the region of interest containing a saccular shadow, the locating point being associated with the center or the center of gravity of the fourth region. In this embodiment, specifically, the region of interest may be a geometric body obtained by extending a preset distance around the locating point of the fourth region; the geometric body may be a cube, a cuboid, or the like, which is not limited in this embodiment.
For each fourth region, there is a furthest distance between the points on the boundary of the fourth region and its locating point (center or center of gravity). For a plurality of fourth regions there are a plurality of such furthest distances, and the preset distance may be 1.2 to 1.6 times the largest of these furthest distances. If the region of interest is a cube, the cube is cut from the medical image with the locating point (center or center of gravity) of the fourth region as the center and with the preset distance as half of the side length of the cube.
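Cutting a cube region of interest around the locating point, with half side length equal to the preset distance (1.2 to 1.6 times the largest furthest distance), can be sketched as follows. The function name, the `scale` default of 1.4, and the clamping at the volume borders are assumptions:

```python
import numpy as np

def crop_roi(volume, point, max_dist, scale=1.4):
    """Cut a cube centred on `point` (z, y, x) from `volume`.

    The half side length is `scale * max_dist`, i.e. the preset distance;
    the cube is clamped at the volume borders so it stays inside the image.
    """
    half = int(round(scale * max_dist))
    slices = []
    for c, size in zip(point, volume.shape):
        lo = max(c - half, 0)
        hi = min(c + half + 1, size)
        slices.append(slice(lo, hi))
    return volume[tuple(slices)]
```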
Next, the region of interest is classified by a classification model to obtain the confidence that the region of interest is a saccular shadow. As described above, the region of interest is taken from the medical image and is thus three-dimensional, so it is necessary to classify it with a three-dimensional classification model. In this embodiment, the classification model may be a binary classification model whose outputs are: saccular shadow, non-saccular shadow. Specifically, the classification model in this embodiment may include a feature extraction module and a fully connected classification module. The feature extraction module may include a plurality of sequential convolution modules, each of which may include a 3 × 3 2D convolution layer, a batch normalization (BN, Batch Normalization) layer, an activation function layer, and a 2 × 2 max pooling (MP, Max Pooling) layer. In this embodiment, the activation function may be any of various types of activation functions, for example, a linear rectification function (ReLU, Rectified Linear Unit), which is not limited herein. The fully connected classification module may include a first fully connected layer, a second fully connected layer, and a third fully connected layer, where the output results of the plurality of sequential convolution modules are combined through the first and second fully connected layers and finally input to the third fully connected layer, which outputs the confidence of each category.
In this embodiment, in order to reduce the amount of data processed by the second and third fully connected layers, a dropout layer with a rate of 0.5 may be disposed between the first and second fully connected layers and between the second and third fully connected layers; that is, the output of the first fully connected layer is filtered by a dropout layer before being passed to the second fully connected layer, and the output of the second fully connected layer is filtered by a dropout layer before being passed to the third fully connected layer. Finally, the output of the third fully connected layer is normalized by softmax to output the confidence that the region of interest belongs to each of the two categories. When the confidence of a category is greater than or equal to the confidence threshold, that category is taken as the final output of the classification model; in this embodiment, the confidence threshold may be 0.5.
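The fully connected classification head — three layers with dropout between them and a softmax over the two categories — can be illustrated in plain numpy. This is a sketch for intuition only: the layer sizes, weight layout, and the `fc_head`/`softmax` names are assumptions, and a real model would be built in a deep-learning framework with dropout active only during training:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fc_head(features, weights, train=False, rng=None):
    """Three fully connected layers with dropout (rate 0.5) between them.

    `weights` is a list of three (W, b) pairs. ReLU and dropout are
    applied after the first two layers; the final output is normalized
    by softmax into the two category confidences.
    """
    x = features
    for i, (W, b) in enumerate(weights):
        x = W @ x + b
        if i < len(weights) - 1:
            x = np.maximum(x, 0)                   # ReLU activation
            if train:                              # dropout only when training
                mask = (rng.random(x.shape) < 0.5) / 0.5
                x = x * mask
    return softmax(x)                              # [P(saccular), P(non-saccular)]
```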
In this embodiment, in order to improve the classification accuracy of the classification model, not only the region of interest but also an attention region associated with the region of interest is input into the classification model.
The attention region is the same size as the region of interest and includes a geometric body that extends a predetermined distance around the locating point of the fourth region corresponding to the associated region of interest. In this embodiment, the attention region may include a geometric body that extends a predetermined distance around the center (or center of gravity) of that fourth region. For example, region of interest A is a cube cut from the medical image with the center (or center of gravity) of fourth region a as its center, the preset distance A being half the side length of the cube. The attention region A associated with region of interest A may include a geometric body extending a predetermined distance around the center (or center of gravity) of fourth region a; in this embodiment, the geometric body may be a sphere. The predetermined distance is set to a value such that the size of the geometric body included in attention region A approximates the size of the saccular shadow included in region of interest A. Specifically, the predetermined distance is associated with the maximum of the distances between the locating point of fourth region a and the points on the boundary of fourth region a. Taking the locating point as the center or center of gravity as an example, the predetermined distance may be the maximum distance between the center (or center of gravity) of fourth region a and a point on the boundary of fourth region a; it may also be slightly greater or slightly less than this maximum.
For example, if the distance between the center (or center of gravity) of fourth region a and point a on its boundary is the largest, the predetermined distance may be the distance between the center (or center of gravity) of fourth region a and point a. In other embodiments, the predetermined distance may also be slightly less than this maximum distance. In this embodiment, the gray value of at least a part of the attention region is 255; specifically, the attention region may include a geometric body, such as a sphere, whose gray value is 255, with the remainder having a gray value of 0.
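The sphere with gray value 255 that forms the region associated with the region of interest can be sketched as a binary mask. This is a minimal sketch; the function name and the use of numpy are assumptions:

```python
import numpy as np

def attention_region(shape, center, radius):
    """Sphere mask the same size as the ROI.

    Voxels within `radius` of `center` get gray value 255, the
    remainder 0, so the mask highlights where the saccular shadow sits.
    """
    grids = np.indices(shape)
    dist2 = sum((g - c) ** 2 for g, c in zip(grids, center))
    return np.where(dist2 <= radius ** 2, 255, 0).astype(np.uint8)
```

Feeding this mask alongside the cropped cube gives the classifier an explicit hint about the saccular shadow's location and extent.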
After the attention region associated with the region of interest is obtained in the above manner, the region of interest and its associated attention region are input into the classification model to output the confidence that the region of interest is a saccular shadow. By inputting both into the classification model, the classification model can be made to focus more on the saccular shadow during classification, and the accuracy of classifying the region of interest can be improved.
Finally, the saccular shadow in the cross-sectional image is determined based on the confidence that the region of interest is a saccular shadow. Specifically, if the confidence that the region of interest is a saccular shadow is greater than or equal to the confidence threshold, the region of interest is determined to be a saccular shadow, and the fourth region in the cross-sectional image corresponding to the region of interest is determined to be a saccular shadow. In this embodiment, the confidence threshold may be 0.5.
After the saccular shadows in each frame of cross-sectional image are determined, S15 is performed to remove the saccular shadows in the second region to obtain the third region; that is, from the second region located in a certain frame of cross-sectional image, the fourth regions in the second region that are saccular shadows are removed to obtain the third region of that frame (the second region from which the saccular shadows have been removed).
S16 is executed: when the third regions exist in the cross-sectional images of a predetermined number of consecutive frames and the sum of the areas of the third regions is greater than or equal to the second threshold, determining that the third regions in those cross-sectional images are a pneumothorax. Those skilled in the art know that when determining whether there is a pneumothorax in the lung, the left lung and the right lung are judged separately. Therefore, in this embodiment, after the second regions are screened from the first regions and the saccular shadows are removed from the second regions to obtain the third regions, it may be determined separately whether a pneumothorax exists in the left lung or in the right lung of the cross-sectional images. Specifically:
If a third region exists in the left lung of the cross-sectional images of a predetermined number of consecutive frames and the sum of the areas of the third regions in the left lung is greater than or equal to the second threshold, the third regions in the left lung of those cross-sectional images are determined to be a pneumothorax. In this embodiment, whether a third region is located in the left lung or the right lung may be determined from the position of the center of the third region (whose coordinates are known) relative to the sternum-vertebra line (whose equation is known): if the center of the third region is on the left side of the sternum-vertebra line, the third region is located in the left lung; if it is on the right side, the third region is located in the right lung.
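The left/right decision relative to the sternum-vertebra line can be sketched with the sign of a 2D cross product. Which sign corresponds to the patient's left depends on the image orientation, so the mapping below is an assumption, as are the function name and coordinate convention:

```python
def lung_side(center, sternum, vertebra):
    """Classify a third-region centre as 'left' or 'right' of the
    sternum-vertebra line.

    All points are (x, y) in image coordinates. The sign of the 2D cross
    product of (vertebra - sternum) with (center - sternum) tells which
    side of the line the centre lies on; the sign-to-side mapping assumed
    here must be checked against the actual image orientation.
    """
    sx, sy = sternum
    vx, vy = vertebra
    cx, cy = center
    cross = (vx - sx) * (cy - sy) - (vy - sy) * (cx - sx)
    return 'left' if cross > 0 else 'right'
```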
In this embodiment, the sternum-vertebra line can be obtained by detecting the sternum and the vertebra in the cross-sectional image with a detection model and then connecting the center of the sternum with the center of the vertebra. The detection model may include a feature extraction module and a detection frame acquisition module, where the detection frame acquisition module detects on the feature map output by the feature extraction module. In this embodiment, the feature extraction module may include L convolution units, M max pooling layers, N 2 × 2 2D deconvolution layers, and tensor superposition layers. Each convolution unit includes a convolution layer (Conv2d), a batch normalization layer (BN, Batch Normalization), and an activation layer; the activation function may be a linear rectification function (ReLU, Rectified Linear Unit). In this embodiment, the feature extraction module may be a feature pyramid network (FPN, Feature Pyramid Network), and the detection frame acquisition module may be an SSD (Single Shot MultiBox Detector) network.
After determining that third regions exist in the left lung of the cross-sectional images of a predetermined number of consecutive frames, if the sum of the areas of the third regions in the left lung of those frames is greater than or equal to the second threshold, the third regions in the left lung of those frames are determined to be a pneumothorax. In this embodiment, the area of a third region may be measured by the number of pixels in it; if the number of pixels is 100, the area of the third region is 100 pixels. The predetermined number of frames may take any value in [2, 4]; for example, it may be 2 frames, and when third regions exist in the left lung of 2 consecutive frames of cross-sectional images and the sum of their areas is greater than or equal to the second threshold, the third regions in the left lung of those 2 frames are determined to be a pneumothorax.
In this embodiment, the second threshold may be determined according to practical experience and may be any value in [800, 1200]; for example, the second threshold may be 1000. Still taking 2 frames as the predetermined number of frames, when the sum of the areas of the third regions in the left lung of 2 consecutive frames of cross-sectional images is greater than or equal to 1000, the third regions in the left lung of those 2 frames are determined to be a pneumothorax.
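The consecutive-frame rule — third regions present in every frame of the window and their areas summing to at least the second threshold — can be sketched per lung as follows. The function name and the per-frame area list representation are assumptions:

```python
def confirm_pneumothorax(areas_per_frame, predetermined_frames=2,
                         second_threshold=1000):
    """Apply the consecutive-frame confirmation rule for one lung.

    `areas_per_frame` holds the total third-region area (pixel count)
    per frame, with 0 meaning no third region in that frame. Returns the
    indices of the first run of `predetermined_frames` consecutive frames
    that all contain a third region and whose areas sum to at least
    `second_threshold`, or None if no such run exists.
    """
    n = predetermined_frames
    for i in range(len(areas_per_frame) - n + 1):
        window = areas_per_frame[i:i + n]
        if all(a > 0 for a in window) and sum(window) >= second_threshold:
            return list(range(i, i + n))
    return None
```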
Similarly, if third regions exist in the right lung of the cross-sectional images of a predetermined number of consecutive frames and the sum of the areas of the third regions in the right lung is greater than or equal to the second threshold, the third regions in the right lung of those frames are determined to be a pneumothorax. Taking the predetermined number of frames as 3 and the second threshold as 1000 as an example, when the sum of the areas of the third regions in the right lung of 3 consecutive frames of cross-sectional images is greater than or equal to 1000, the third regions in the right lung of those 3 frames are determined to be a pneumothorax. Fig. 4 is a schematic diagram of a cross-sectional image of an embodiment of the present invention, where the black area on the upper side of the right lung in Fig. 4 is a pneumothorax.
In this embodiment, the pneumothorax is segmented from the cross-sectional images to obtain the first regions; the second regions are then screened from the plurality of first regions based on their positions and areas; the saccular shadows in the second regions are removed; and finally, when third regions exist in a predetermined number of consecutive frames of cross-sectional images and the sum of the areas of the third regions is greater than or equal to the second threshold, the third regions in those frames are determined to be a pneumothorax. Since false positives that may exist among the plurality of first regions obtained by segmentation are removed, the accuracy of the finally determined pneumothorax is improved. In addition, compared with determining manually whether a pneumothorax exists in the lung images, both the reading accuracy and the reading efficiency are improved.
Based on the same technical concept, an embodiment of the present invention provides a device for determining pneumothorax, including:
And the acquisition unit is used for acquiring medical images, and the medical images comprise multi-frame cross-section images.
And the segmentation unit is used for segmenting the pneumothorax in each frame of cross-sectional image to obtain a first region.
And the screening unit is used for screening a second area from the first area based on the position and the area of the first area in each frame of cross-sectional image.
And the first determining unit is used for determining the saccular shadow in the multi-frame cross-section image.
And the removing unit is used for removing the saccular shadow in the second area to obtain a third area.
And the second determining unit is used for determining that the third area in the cross-sectional images with the preset frame number is pneumothorax when the third area exists in the cross-sectional images with the preset frame number continuously and the sum of the areas of the third areas is larger than or equal to a second threshold value.
The implementation of the apparatus for determining pneumothorax in this embodiment may refer to the implementation of the method for determining pneumothorax described above, and will not be described herein.
Based on the same technical idea, an embodiment of the present invention provides a computer device, including at least one processor, and at least one memory, wherein the memory stores a computer program, and when the program is executed by the processor, the processor is enabled to perform the method for determining pneumothorax described above.
Based on the same technical idea, embodiments of the present invention provide a computer-readable storage medium, which when executed by a processor within an apparatus, enables the apparatus to perform the above-described method of determining pneumothorax.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, or as a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (7)

1. A method of determining pneumothorax, comprising:
Acquiring a medical image, wherein the medical image comprises a plurality of frames of cross-section images;
segmenting the pneumothorax in each frame of cross-sectional image to obtain a first region;
Screening a second region from the first region based on the position and the area of the first region in each frame of cross-sectional image;
Determining a saccular shadow in the multi-frame cross-sectional image;
Removing the saccular shadow in the second region to obtain a third region;
When the third areas exist in the cross-sectional images with the continuous preset frames and the sum of the areas of the third areas is larger than or equal to a second threshold value, determining that the third areas in the cross-sectional images with the preset frames are pneumothorax;
Wherein:
determining a saccular shadow in the multi-frame cross-sectional image, comprising:
segmenting a saccular shadow in the multi-frame cross-sectional image to obtain a fourth region;
Acquiring a region of interest from the medical image based on a locating point in the fourth region, the region of interest including a saccular shadow, the locating point being associated with a center or center of gravity of the fourth region;
classifying the region of interest through a classification model to obtain confidence that the region of interest is a saccular shadow;
Determining a saccular shadow in the cross-sectional image based on a confidence that the region of interest is a saccular shadow;
classifying the region of interest by a classification model to obtain a confidence that the region of interest is a saccular shadow, comprising:
Inputting the region of interest and an attention region associated therewith into the classification model to output the confidence that the region of interest is a saccular shadow;
wherein the attention region is the same size as the region of interest and includes a geometric body that extends a predetermined distance around the locating point of the fourth region corresponding to the associated region of interest.
2. The method of claim 1, wherein the acquiring a medical image comprises:
acquiring a plurality of frames of coronal images;
determining the Z coordinates of a starting point and an ending point in each frame of coronal image;
determining the Z coordinate Z_s of the starting point with the smallest Z coordinate in the multi-frame coronal images;
determining the Z coordinate Z_e of the ending point with the largest Z coordinate in the multi-frame coronal images;
determining the images located between Z = Z_s and Z = Z_e as the medical image.
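Claim 2's Z-range construction reduces to a min/max over the per-frame endpoints. A minimal sketch, with plain lists of Z values standing in for the coronal images:

```python
def medical_image_z_range(start_zs, end_zs):
    """Claim 2 sketch: Z_s is the smallest starting-point Z over all coronal
    frames, Z_e the largest ending-point Z."""
    return min(start_zs), max(end_zs)

def select_slices(slice_zs, z_s, z_e):
    # the medical image is the stack of cross-sectional slices whose
    # Z coordinate lies between Z = Z_s and Z = Z_e
    return [z for z in slice_zs if z_s <= z <= z_e]
```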
3. The method of claim 1, wherein selecting a second region from the first region based on a location, an area, of the first region in each frame of the cross-sectional image comprises: and screening the first region from the first regions, wherein the region which is positioned in the lung and has the area larger than or equal to a first threshold value is the second region.
4. The method according to any one of claims 1 to 3, wherein, when the third region exists in a predetermined number of consecutive frames of cross-sectional images and the sum of the areas of the third regions is greater than or equal to the second threshold, determining that the third regions in the predetermined number of frames of cross-sectional images are a pneumothorax comprises:
when the third region exists in the left lung or the right lung of a predetermined number of consecutive frames of cross-sectional images and the sum of the areas of the third regions in that lung is greater than or equal to the second threshold, determining that the third regions in that lung of the predetermined number of frames of cross-sectional images are a pneumothorax.
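Claim 4 applies the consecutive-frame check separately to each lung; either lung alone can confirm a pneumothorax. A sketch, with one third-region area value per frame and per lung as inputs:

```python
def confirm_per_lung(left_areas, right_areas, n_frames, second_threshold):
    """Claim 4 sketch: run the consecutive-frame confirmation independently
    on the left-lung and right-lung third-region area sequences."""
    def confirmed(areas):
        for start in range(len(areas) - n_frames + 1):
            window = areas[start:start + n_frames]
            # every frame in the window must contain a third region, and the
            # window's total area must reach the second threshold
            if all(a > 0 for a in window) and sum(window) >= second_threshold:
                return True
        return False
    return confirmed(left_areas) or confirmed(right_areas)
```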
5. An apparatus for determining pneumothorax, comprising:
an acquisition unit, configured to acquire a medical image, wherein the medical image comprises a plurality of frames of cross-sectional images;
a segmentation unit, configured to segment a pneumothorax in each frame of cross-sectional image to obtain a first region;
a screening unit, configured to screen a second region from the first region based on the position and the area of the first region in each frame of cross-sectional image;
a first determining unit, configured to determine saccular shadows in the multi-frame cross-sectional images;
a removing unit, configured to remove the saccular shadows from the second region to obtain a third region;
a second determining unit, configured to determine, when the third region exists in a predetermined number of consecutive frames of cross-sectional images and the sum of the areas of the third regions is greater than or equal to a second threshold, that the third regions in the predetermined number of frames of cross-sectional images are a pneumothorax;
wherein:
the first determining unit determines the saccular shadows in the multi-frame cross-sectional images by:
segmenting the saccular shadows in the multi-frame cross-sectional images to obtain a fourth region;
acquiring a region of interest from the medical image based on a locating point in the fourth region, the region of interest including a saccular shadow, the locating point being associated with the center or the center of gravity of the fourth region;
classifying the region of interest by a classification model to obtain a confidence that the region of interest is a saccular shadow;
determining the saccular shadows in the cross-sectional images based on the confidence that the region of interest is a saccular shadow;
the first determining unit classifies the region of interest by the classification model to obtain the confidence that the region of interest is a saccular shadow by:
inputting the region of interest and an associated region thereof into the classification model, so as to output the confidence that the region of interest is a saccular shadow;
wherein the associated region is the same size as the region of interest and comprises a geometry that extends a predetermined distance around the locating point in the fourth region corresponding to the region of interest associated therewith.
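The locating-point and region-of-interest construction shared by claims 1 and 5 amounts to taking the center of gravity of the fourth region and cutting an equal-size patch around it. A sketch under two stated assumptions: "center of gravity" is read as the unweighted centroid of the segmented mask, and the square patch size is a hypothetical parameter the patent does not fix.

```python
import numpy as np

def locating_point_and_roi(image, fourth_mask, size=32):
    """Sketch of the ROI construction in claims 1 and 5: the locating point
    is the center of gravity of the fourth region (the segmented
    saccular-shadow mask); the region of interest is a square patch of side
    `size` around it. `size` is a hypothetical choice."""
    ys, xs = np.nonzero(fourth_mask)
    cy, cx = int(ys.mean()), int(xs.mean())  # center of gravity of the mask
    half = size // 2
    # clip to the image so patches near the border stay valid
    y0, x0 = max(cy - half, 0), max(cx - half, 0)
    roi = image[y0:y0 + size, x0:x0 + size]
    return (cy, cx), roi
```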
6. A computer device comprising at least one processor and at least one memory, wherein the memory stores a computer program that, when executed by the processor, enables the processor to perform the method of any one of claims 1 to 4.
7. A computer-readable storage medium storing a computer program which, when executed by a processor within a device, causes the device to perform the method of any one of claims 1 to 4.
CN202011634613.2A 2020-12-31 2020-12-31 Pneumothorax determination method and pneumothorax determination device Active CN112712508B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011634613.2A CN112712508B (en) 2020-12-31 2020-12-31 Pneumothorax determination method and pneumothorax determination device

Publications (2)

Publication Number Publication Date
CN112712508A CN112712508A (en) 2021-04-27
CN112712508B true CN112712508B (en) 2024-05-14

Family

ID=75547865

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011634613.2A Active CN112712508B (en) 2020-12-31 2020-12-31 Pneumothorax determination method and pneumothorax determination device

Country Status (1)

Country Link
CN (1) CN112712508B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10055564A1 (en) * 2000-11-09 2002-06-06 Siemens Ag Device for recognizing a pneumothorax with a thorax MR exposure has an evaluatory device to segment the thorax and its surrounding area with an MR signal matching a noise signal in air and to detect exhaled zones between lungs and thorax.
CN102240212A (en) * 2010-05-14 2011-11-16 Ge医疗系统环球技术有限公司 Method and apparatus for measuring pneumothorax
WO2015048767A1 (en) * 2013-09-30 2015-04-02 Grisell Ronald Automatic focused assessment with sonography for trauma exams
CN110782446A (en) * 2019-10-25 2020-02-11 杭州依图医疗技术有限公司 Method and device for determining volume of lung nodule
CN110895815A (en) * 2019-12-02 2020-03-20 西南科技大学 Chest X-ray pneumothorax segmentation method based on deep learning
CN111402260A (en) * 2020-02-17 2020-07-10 北京深睿博联科技有限责任公司 Medical image segmentation method, system, terminal and storage medium based on deep learning
WO2020164493A1 (en) * 2019-02-14 2020-08-20 腾讯科技(深圳)有限公司 Method and apparatus for filtering medical image area, and storage medium
CN111950544A (en) * 2020-06-30 2020-11-17 杭州依图医疗技术有限公司 Method and device for determining interest region in pathological image
CN112150406A (en) * 2019-06-28 2020-12-29 复旦大学 CT image-based pneumothorax lung collapse degree accurate calculation method

Also Published As

Publication number Publication date
CN112712508A (en) 2021-04-27

Similar Documents

Publication Publication Date Title
CN109685060B (en) Image processing method and device
US20180116620A1 (en) Deep Learning Based Bone Removal in Computed Tomography Angiography
CN106340021B (en) Blood vessel extraction method
CN106372629A (en) Living body detection method and device
US9888896B2 (en) Determining a three-dimensional model dataset of a blood vessel system with at least one vessel segment
CN111507965A (en) Novel coronavirus pneumonia focus detection method, system, device and storage medium
CN110992377A (en) Image segmentation method, device, computer-readable storage medium and equipment
CN111402254A (en) CT image pulmonary nodule high-performance automatic detection method and device
CN115136189A (en) Automated detection of tumors based on image processing
CN112381805B (en) Medical image processing method
CN111080556A (en) Method, system, equipment and medium for strengthening trachea wall of CT image
CN110992310A (en) Method and device for determining partition where mediastinal lymph node is located
CN112508858A (en) Medical image processing method and device and computer equipment
CN112712508B (en) Pneumothorax determination method and pneumothorax determination device
CN112712507B (en) Method and device for determining calcified region of coronary artery
CN116258671B (en) MR image-based intelligent sketching method, system, equipment and storage medium
CN112308820B (en) Rib positioning method and device, computer equipment and storage medium
CN114092470B (en) Deep learning-based automatic detection method and device for pulmonary fissure
CN112184664B (en) Vertebra detection method and computer equipment
CN115330732A (en) Method and device for determining pancreatic cancer
CN110533637B (en) Method and device for detecting object
KR101494975B1 (en) Nipple automatic detection system and the method in 3D automated breast ultrasound images
CN112862786A (en) CTA image data processing method, device and storage medium
Xiao et al. Segmentation of Cerebrovascular Anatomy from TOF‐MRA Using Length‐Strained Enhancement and Random Walker
CN112869758A (en) Method and device for determining pleural effusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant