CN111260627A - Pulmonary lobe-based emphysema region judgment method and device - Google Patents


Info

Publication number
CN111260627A
CN111260627A (application CN202010042872.XA)
Authority
CN
China
Prior art keywords
lung
lobe
image
feature
extracted
Prior art date
Legal status
Granted
Application number
CN202010042872.XA
Other languages
Chinese (zh)
Other versions
CN111260627B (en)
Inventor
杨英健
郭英委
应立平
郭嘉琦
高宇宁
孟繁聪
康雁
Current Assignee
Northeastern University China
Original Assignee
Northeastern University China
Priority date
Filing date
Publication date
Application filed by Northeastern University China
Priority to CN202010042872.XA
Publication of CN111260627A
Application granted
Publication of CN111260627B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T 7/0012 — Biomedical image inspection
    • G06T 7/11 — Region-based segmentation
    • G06T 7/90 — Determination of colour characteristics
    • G06V 10/462 — Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06T 2207/10081 — Computed x-ray tomography [CT]
    • G06T 2207/20081 — Training; Learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30061 — Lung


Abstract

The invention discloses a method and a device for determining emphysema regions based on lung lobes, in the field of biomedical engineering. The method comprises: extracting lung lobes together with their CT values; judging, according to the CT values on the extracted lung lobes and a set threshold, whether a region is an emphysema region; if so, coloring the emphysema region and displaying it; if not, leaving the region uncolored or coloring it in a color different from that of the emphysema region. The method addresses the problems that quantitative analysis of the whole lung during emphysema localization entails a huge data volume and slow computation, and that emphysema cannot be quantitatively analyzed and displayed using the CT values of a selected single lung lobe.

Description

Pulmonary lobe-based emphysema region judgment method and device
Technical Field
The invention relates to the field of biomedical engineering, in particular to a method and a device for judging an emphysema region based on lung lobes.
Background
In the field of medical image processing, it is often necessary to reconstruct raw data and then analyze it to obtain a desired region or region of interest. For small-airway lesions or emphysema in lung disease, often only the most severely affected lobes need to be analyzed quantitatively.
Consider chronic obstructive pulmonary disease (COPD). Emphysema is mainly characterized by incompletely reversible airflow limitation and tends to worsen progressively. According to the 2018 GOLD Global Strategy for the Diagnosis, Management and Prevention of COPD, patients are classified into 4 grades: GOLD grade 1, GOLD grade 2, GOLD grade 3 and GOLD grade 4. The spatial distribution of emphysema reflects how severely airflow is limited in the different lobes, and researchers at home and abroad have studied the correlation between emphysema in different lobes and lung function. Airflow limitation correlates most strongly with the lower lobes of both lungs, probably because, under gravity, the airways of the lower lobes close earlier during expiration, limiting airflow; at the same time, exhaled air comes mostly from the lower lobes, so the diffusion function is mainly attributable to the upper lobes. Furthermore, COPD first appears in the right upper lobe and, as lung function worsens, gradually progresses to the lower lobes of both lungs. This allows rapid localization and assessment of progressive obstructive pulmonary disease from the right upper lobe: if the right upper lobe shows no sign of emphysema, the lung can be preliminarily judged free of emphysema; if the right upper lobe does show emphysema, the two lower lobes are examined further.
At present, quantitative analysis requires the whole lung, which causes a huge data volume and slow computation, and the analysis cannot be carried out using the CT values of a selected single lobe. In conclusion, if extraction of a single lung lobe can be realized, the problems of huge data volume and slow computation caused by whole-lung quantitative analysis can be avoided, and rapid localization and assessment of chronic obstructive pulmonary disease can be achieved.
Disclosure of Invention
In view of this, the present invention provides a method and a device for determining emphysema regions based on lung lobes, so as to solve the problems that quantitative analysis of the whole lung during emphysema localization entails a huge data volume and slow computation, and that emphysema cannot be quantitatively analyzed and displayed using the CT values of a selected single lung lobe.
In a first aspect, the present invention provides a method for determining an emphysema region based on lung lobes, including:
extracting lung lobes with CT values;
judging, according to the CT values on the extracted lung lobes and a set threshold, whether a region on the lobes is an emphysema region;
if yes, coloring the emphysema region and displaying the emphysema region;
if not, leaving the region uncolored or coloring it in a color different from that of the emphysema region.
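As a minimal illustration of these steps (not the patent's implementation), the threshold judgment and coloring can be sketched as follows. The value -950 HU is an assumption: a commonly used low-attenuation cutoff for emphysema, whereas the claim only says "a set threshold".

```python
import numpy as np

EMPHYSEMA_THRESHOLD_HU = -950  # assumed cutoff; the patent only says "a set threshold"

def color_emphysema(lobe_hu, threshold=EMPHYSEMA_THRESHOLD_HU):
    """Color voxels of one extracted lobe: emphysema red, other lobe tissue gray.

    lobe_hu: 2-D array of CT values (HU); voxels outside the lobe are NaN.
    """
    inside = ~np.isnan(lobe_hu)
    emphysema = inside & (lobe_hu < threshold)
    overlay = np.zeros(lobe_hu.shape + (3,), dtype=np.uint8)
    overlay[inside] = (128, 128, 128)  # non-emphysema lobe tissue: gray
    overlay[emphysema] = (255, 0, 0)   # emphysema region: red
    return overlay, emphysema

# Toy 2x2 slice of a lobe: only the -980 HU voxel falls below the threshold
slice_hu = np.array([[-980.0, -900.0], [np.nan, -940.0]])
overlay, emphysema_mask = color_emphysema(slice_hu)
```

A full implementation would apply this slice by slice to the extracted 3-D lobe and blend the overlay with the CT image for display.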
Preferably, the method for extracting lung lobes with CT values includes:
acquiring a lung lobe segmentation image of a lung image;
determining lung lobes to be extracted;
marking the lung lobes to be extracted;
and obtaining the lung lobes to be extracted according to the marked lung lobes to be extracted and the lung images.
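A minimal sketch of these extraction steps, assuming the segmentation image labels the five lobes 1..5 (right upper, middle, lower; left upper, lower), as the detailed description later defines:

```python
import numpy as np

def extract_lobe(lung_image, lobe_segmentation, lobe_label):
    """Keep the CT values of the marked lobe and zero out everything else."""
    marked_mask = (lobe_segmentation == lobe_label).astype(lung_image.dtype)
    return lung_image * marked_mask

lung = np.array([[-980.0, -400.0], [-900.0, 50.0]])   # CT values (HU)
seg = np.array([[1, 1], [2, 0]])                      # 1 = right upper lobe
right_upper = extract_lobe(lung, seg, 1)
```

Multiplying the binary mask by the lung image is exactly the mask-and-multiply operation the preferred embodiments describe below.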
Preferably, before the acquiring the lung lobe segmentation image of the lung image, the method further includes:
acquiring the lung image;
carrying out lung lobe segmentation on the lung image to obtain a lung lobe segmentation image;
and/or
The method for segmenting the lung image into the lung lobe segmentation image comprises the following steps:
acquiring lung lobe fissure characteristics of a lung image in a sagittal plane, lung lobe fissure characteristics in a coronal plane and lung lobe fissure characteristics in a transverse plane;
correcting the third lung lobe fissure characteristic by using the lung lobe fissure characteristics of any two of the sagittal plane, the coronal plane and the transverse plane;
and segmenting the lung image by using the corrected lung lobe fissure characteristics.
Preferably, the specific method for marking the lung lobes to be extracted and obtaining the lung lobes to be extracted according to the marked lung lobes to be extracted and the lung image is as follows:
obtaining a mask image according to the lung lobe segmentation image, obtaining a marked mask image according to the mask image and the mark of the lung lobe to be extracted, and multiplying the marked mask image by the lung image to obtain the lung lobe to be extracted; and/or
The method for obtaining a mask image according to the lung lobe segmentation image and obtaining a marked mask image according to the mask image and the mark of the to-be-extracted lung lobe comprises the following steps:
masking the lung lobe segmentation image to obtain a mask image of each lung lobe, and obtaining a marked mask image according to a preset mask value of the mask image of each lung lobe and the mark of the to-be-extracted lung lobe; setting pixels in the marked mask image to be 1 and setting pixels in the region of the lung lobe segmentation image outside the marked mask image to be 0; and/or
The method for correcting the third lung lobe fissure characteristic by using the lung lobe fissure characteristics of any two of the sagittal plane, the coronal plane and the transverse plane comprises the following steps:
mapping the arbitrary two lung lobe fissure features to the view angle of the third lung lobe fissure feature;
and correcting the third lung lobe fissure characteristic by using the mapped lung lobe fissure characteristics of any two.
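The mapping of two views onto the third can be sketched as an axis permutation. The axis conventions below ((z, y, x) for the transverse view, (x, z, y) for sagittal slices, (y, z, x) for coronal slices) are an assumption for illustration, not taken from the patent:

```python
import numpy as np

def to_transverse(feature, view):
    """Permute a per-view feature stack into transverse (z, y, x) order."""
    if view == "sagittal":            # (x, z, y) -> (z, y, x)
        return np.transpose(feature, (1, 2, 0))
    if view == "coronal":             # (y, z, x) -> (z, y, x)
        return np.transpose(feature, (1, 0, 2))
    return feature                    # already in the transverse view

sagittal = np.zeros((6, 4, 5))        # 6 sagittal slices of a 4x5 plane
coronal = np.zeros((5, 4, 6))         # 5 coronal slices of a 4x6 plane
mapped_sag = to_transverse(sagittal, "sagittal")
mapped_cor = to_transverse(coronal, "coronal")
```

Once both mapped stacks share the third view's index order, they can be compared or fused voxel-wise with the third view's fissure features.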
Preferably, the specific method for obtaining the lung lobes to be extracted by obtaining a mask image according to the lung lobe segmentation image, obtaining a marked mask image according to the mask image and the mark of the lung lobes to be extracted, and multiplying the marked mask image by the lung image comprises:
and multiplying the marked mask images with the same number of layers by the lung image to obtain a layer of the lung lobes to be extracted, and performing three-dimensional reconstruction on the plurality of layers of the lung lobes to be extracted to obtain the three-dimensional lung lobes to be extracted.
Preferably, before a mask image is obtained from the lung lobe segmentation image, a marked mask image is obtained from the mask image and the mark of the to-be-extracted lung lobe, and the marked mask image is multiplied by the lung image to obtain the to-be-extracted lung lobe, the number of layers of the lung image and the number of layers of the marked mask image are respectively determined;
judging whether the number of layers of the lung image is equal to that of the marked mask image;
if the number of the marked mask images is equal to the number of the marked mask images, multiplying the marked mask images by the lung images to obtain a layer of lung lobes to be extracted, and performing three-dimensional reconstruction on the plurality of layers of lung lobes to be extracted to obtain three-dimensional lung lobes to be extracted;
if not, interpolating the marked mask images to obtain mask images with the same number of layers as the lung images, then multiplying the marked mask images with the same number of layers by the lung images in sequence to obtain a layer of lung lobes to be extracted, and performing three-dimensional reconstruction on a plurality of layers of lung lobes to be extracted to obtain three-dimensional lung lobes to be extracted.
Preferably, the method for correcting the third lung lobe fissure characteristic by using the mapped lung lobe fissure characteristics of any two of the lung lobe fissure characteristics comprises the following steps:
respectively carrying out space attention feature fusion by using the mapped two lung lobe fissure features and the mapped third lung lobe fissure feature to obtain a first fusion feature and a second fusion feature;
and obtaining the corrected third lung lobe fissure characteristic according to the first fusion characteristic and the second fusion characteristic.
Preferably, the method for performing spatial attention feature fusion by using the mapped two lung lobe fissure features and the mapped third lung lobe fissure feature respectively to obtain the first fusion feature and the second fusion feature comprises:
respectively connecting the arbitrary two lung lobe fissure characteristics with the third lung lobe fissure characteristic to obtain a first connecting characteristic and a second connecting characteristic;
performing a first convolution operation on the first connection feature to obtain a first convolution feature, and performing the first convolution operation on the second connection feature to obtain a second convolution feature;
performing a second convolution operation on the first convolution characteristic to obtain a first attention coefficient, and performing a second convolution operation on the second convolution characteristic to obtain a second attention coefficient;
the first fusion feature is obtained using the first convolution feature and the first attention coefficient, and the second fusion feature is obtained using the second convolution feature and the second attention coefficient.
Preferably, the method for obtaining the first fusion feature from the first convolution feature and the first attention coefficient, and the second fusion feature from the second convolution feature and the second attention coefficient, is:
adding the product of the first convolution feature and the first attention coefficient to the first convolution feature to obtain the first fusion feature, and adding the product of the second convolution feature and the second attention coefficient to the second convolution feature to obtain the second fusion feature; and/or
adding the product of the first convolution feature and the first attention coefficient to the first convolution feature and performing several convolution operations on the sum to obtain the first fusion feature, and adding the product of the second convolution feature and the second attention coefficient to the second convolution feature and performing several convolution operations on the sum to obtain the second fusion feature.
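The connect → convolve → attention → residual-add sequence can be sketched in NumPy. This is a hedged illustration: 1x1 convolutions (plain channel mixing) stand in for the unspecified convolution operations, a sigmoid is assumed for the attention coefficient, and the weights are random placeholders rather than trained values:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse(feat_other_view, feat_third_view, c_out=8):
    """Spatial-attention fusion of a mapped view feature with the third view's."""
    c_in = feat_other_view.shape[-1] + feat_third_view.shape[-1]
    w1 = rng.standard_normal((c_in, c_out)) * 0.1   # "first convolution" (1x1)
    w2 = rng.standard_normal((c_out, 1)) * 0.1      # "second convolution" (1x1)
    connected = np.concatenate([feat_other_view, feat_third_view], axis=-1)
    conv_feat = connected @ w1                      # convolution feature
    attn = sigmoid(conv_feat @ w2)                  # attention coefficient in (0, 1)
    # fusion: convolution feature plus the same feature weighted by attention
    return conv_feat + conv_feat * attn

sagittal_feat = rng.standard_normal((16, 16, 4))    # H x W x C feature maps
transverse_feat = rng.standard_normal((16, 16, 4))
fused = fuse(sagittal_feat, transverse_feat)
```

The residual form conv_feat + conv_feat * attn lets the attention coefficient modulate, rather than replace, the convolution feature, matching the "add the weighted feature to the convolution feature" wording above.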
In a second aspect, the present invention provides a pulmonary lobe-based emphysema region determination device including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method for determining a region of pulmonary emphysema based on lung lobes as described above.
The invention has at least the following beneficial effects:
the invention provides a method and a device for judging an emphysema region based on lung lobes, which aim to solve the problems that the data size is huge and the calculation speed is slow due to the fact that the whole lung is subjected to quantitative analysis in the emphysema region positioning process, and the emphysema can not be quantitatively analyzed and displayed by utilizing a determined or single lung lobe CT value.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following description of the embodiments of the present invention with reference to the accompanying drawings, in which:
fig. 1 is a schematic flowchart of a method for determining an emphysema region based on lung lobes according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a lung lobe extraction method with CT value according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a lung lobe extraction device with CT value according to an embodiment of the present invention;
FIG. 4 is a raw CT image extracted by a lung lobe extraction method and/or device with CT values according to an embodiment of the present invention;
FIG. 5 is a mask image extracted by a lung lobe extraction method and/or device with CT values according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the right superior lung lobe extracted by the lung lobe extraction method and/or device with CT value according to an embodiment of the present invention;
fig. 7 is a flow chart illustrating a lung lobe segmentation method based on multi-view according to an embodiment of the present invention;
fig. 8 is a schematic network structure diagram of a lung lobe segmentation method and/or device based on multiple viewing angles according to an embodiment of the present invention.
Detailed Description
The present invention will be described below based on examples, but it should be noted that the invention is not limited to these examples. Certain specific details are set forth in the following detailed description; for those parts not described in detail, the invention can still be fully understood by those skilled in the art.
Furthermore, those skilled in the art will appreciate that the drawings are provided solely for the purposes of illustrating the invention, features and advantages thereof, and are not necessarily drawn to scale.
Also, unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise", "comprising", and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is, the meaning of "includes but is not limited to".
Fig. 1 is a schematic flow chart illustrating a method for determining an emphysema region based on lung lobes according to an embodiment of the present invention. As shown in fig. 1, a method for determining an emphysema region based on lung lobes includes: step 1001: extracting lung lobes with CT values; step 1002: judging whether the CT value on the lung lobes is an emphysema region or not according to the lung lobes with the CT value and a set threshold; step 1003: if yes, coloring the emphysema region and displaying the emphysema region; step 1004: if not, the coloring is not carried out or the coloring color is different from the color of the emphysema region. The method solves the problems that the quantitative analysis of the whole lung in the emphysema region positioning process causes huge data volume and low calculation speed, and the quantitative analysis and display of the emphysema can not be performed by utilizing the determined or single lung lobe CT value.
Step 1001: extract lung lobes with CT values.
The purpose of step 1001 is to provide the segmented lung lobes to be extracted. The lung comprises a left lung and a right lung; the right lung comprises a right upper lobe, a right middle lobe and a right lower lobe, and the left lung comprises a left upper lobe and a left lower lobe. The lung lobes extracted with their CT values are one or more of the lobes to be extracted, namely the right upper, right middle, right lower, left upper and left lower lobes.
Fig. 2 is a flow chart of a lung lobe extraction method with CT values. As shown in fig. 2, a method for extracting lung lobes with CT values includes: step S101, acquiring a lung lobe segmentation image of a lung image; step S102, determining the lung lobes to be extracted; step S103, marking the lung lobes to be extracted; step S104, obtaining the lung lobes to be extracted according to the marked lung lobes to be extracted and the lung image. The method solves the current problems that quantitative analysis of the whole lung causes a huge data volume and slow computation and that quantitative analysis cannot be carried out using the CT values of a selected single lobe. Moreover, once a single lobe has been extracted, its three-dimensional reconstruction is faster, so that a doctor can further observe each lobe individually, without lobes occluding one another. It should be noted that the term "lung lobes to be extracted" in the present invention refers to the lobes selected for extraction.
Step S101 acquires a lung lobe segmentation image of a lung image.
In an embodiment of the present invention, a lung image is first acquired, where the lung image is an original lung image, i.e., thin-slice scan data obtained from an imaging device such as a CT machine.
Step S102 determines lung lobes to be extracted.
Specifically, the lung is divided into a right lung and a left lung and has 5 lobes: the right lung includes 3 lobes, namely the right upper lobe, the right middle lobe and the right lower lobe, and the left lung includes 2 lobes, namely the left upper lobe and the left lower lobe. The invention can extract any one or more of these 5 lung lobes.
Step S103 marks the lung lobes to be extracted.
The labeling of the lung lobes to be extracted is to identify the lung lobes to be extracted, and a plurality of lung lobes may be labeled or only one lung lobe may be labeled.
Step S104 obtains the lung lobes to be extracted according to the marked lung lobes to be extracted and the lung images.
Once the lung lobes to be extracted have been determined and marked, they can be extracted from the original (pre-segmentation) lung image, as described in detail below.
If the upper right lobe of the right lung needs to be extracted, the following operations are performed, firstly, a lung image is obtained, and lung lobe segmentation is performed on the lung image to obtain a lung lobe segmentation image. And then, acquiring a lung lobe segmentation image of the lung image, determining that the lung lobe to be extracted is the upper right lobe of the right lung, marking the lung lobe of the upper right lobe of the right lung, and obtaining the upper right lobe of the right lung to be extracted according to the marked lung lobe of the upper right lobe of the right lung and the lung image.
In an embodiment of the present invention, a specific method for obtaining the lung lobes to be extracted according to the labeled lung lobes to be extracted and the lung image in step S104 is as follows: and obtaining a mask image according to the lung lobe segmentation image, obtaining a marked mask image according to the mask image and the mark of the lung lobe to be extracted, and multiplying the marked mask image by the lung image to obtain the lung lobe to be extracted. The marked mask image is multiplied by the lung image to obtain the lung lobes to be extracted, i.e. the mask image, the marked mask image and the lung image are of the same size (size).
That is, after the lung lobe segmentation image of the lung image is obtained, a mask operation is performed on each lobe of the segmentation image. In computer science and digital logic, a mask is a string of binary digits that is combined with the target data by bitwise operations so as to select the required region.
The method for obtaining a mask image according to the lung lobe segmentation image and obtaining a marked mask image according to the mask image and the mark of the to-be-extracted lung lobe comprises the following steps: masking the lung lobe segmentation image to obtain a mask image of each lung lobe, and obtaining a marked mask image according to a preset mask value of the mask image of each lung lobe and the mark of the to-be-extracted lung lobe; and setting pixels within the marked mask image to 1 and pixels of a region of the lung lobe segmentation image outside the marked mask image to 0. And multiplying the marked mask image by the lung image to obtain the lung lobes to be extracted.
The specific operation is as follows: mask processing is performed on the completed lung lobe segmentation image to obtain a mask image of each lobe in the lung image; then the marked mask image is obtained from the preset mask value of each lobe's mask image and the mark of the lung lobe to be extracted, i.e., the lobe to be extracted is identified by matching its mark against the preset mask values, yielding the marked mask image.
Specifically, the lung lobes to be extracted are marked, the lung lobes to be extracted are determined according to preset mask values 1, 2, 3, 4 and 5 of upper right, middle right, lower right, upper left and lower left lobes to obtain a marked mask image, and the marking values for marking the lung lobes to be extracted can only take one or more of 1-5.
More specifically, a mask masking operation is performed on each lung lobe of the lung lobe segmentation image, which has completed segmentation, to distinguish 5 lung lobes of an upper right lobe, a middle right lobe, a lower right lobe, an upper left lobe, and a lower left lobe, and the lung lobes of the upper right lobe, the middle right lobe, the lower right lobe, the upper left lobe, and the lower left lobe may be defined as a preset mask value 1, a preset mask value 2, a preset mask value 3, a preset mask value 4, and a preset mask value 5, respectively. And marking the to-be-extracted lung lobes as selecting one or more of a preset mask value 1, a preset mask value 2, a preset mask value 3, a preset mask value 4 and a preset mask value 5. If the mark is 1, the same as the preset mask value 1 indicates that the mark of the lung lobe to be extracted is the upper right lobe, and the marked mask image is obtained.
It should be noted that, before the lung lobes to be extracted are obtained from the marked lobes and the lung image, it is further necessary to judge whether the mark lies within the range of preset mask values. If so, the lung lobes to be extracted are obtained from the marked lobes and the lung image; if not, a prompt such as an error message is given. For example, if the mark is 6, it is outside the range of preset mask values and an error is prompted.
Further, if the mark is within the range of preset mask values, it is judged whether the preset mask value is the same as the mark of the lung lobe to be extracted. If so, the pixels in the marked mask image need not be reset to 1, and the lung lobes to be extracted are obtained directly from the marked lobes and the lung image; if not, the pixels within the marked lobe (e.g. the right upper lobe) are first set to 1, and the lung lobes to be extracted are then obtained from the marked lobes and the lung image. Here the preset mask value is a pixel value.
For example, it is determined that the lung lobe to be extracted is the upper right lobe of the right lung, the upper right lobe of the right lung is a preset mask value 1, and the mark is 1, which indicates that the upper right lobe of the right lung with the preset mask value 1 is extracted, because the preset mask value 1 is the same as the pixel value 1, it is not necessary to set the internal pixel in the mask image of the upper right lobe of the right lung to 1 at this time, set the pixel of the region of the segmented image of the lung lobe other than the mark to 0 to obtain the mask image of the mark, and multiply the mask image of the mark by the lung image to obtain the lung lobe to be extracted.
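The marking scheme above can be sketched directly: build a binary marked mask from the preset mask values 1..5, and raise an error prompt for a mark outside that range (such as the mark 6 mentioned earlier).

```python
import numpy as np

PRESET_MASK_VALUES = {           # as defined in the description
    "right upper": 1, "right middle": 2, "right lower": 3,
    "left upper": 4, "left lower": 5,
}

def marked_mask(lobe_segmentation, mark):
    """Binary mask: 1 inside the marked lobe(s), 0 elsewhere.

    `mark` is one preset mask value or a collection of them; values outside
    the preset range 1..5 raise an error, mirroring the prompting step.
    """
    marks = {mark} if isinstance(mark, int) else set(mark)
    if not marks <= set(PRESET_MASK_VALUES.values()):
        raise ValueError("mark outside the preset mask value range 1-5")
    return np.isin(lobe_segmentation, list(marks)).astype(np.uint8)

seg = np.array([[1, 2], [5, 0]])        # 0 = background outside the lungs
right_upper_mask = marked_mask(seg, 1)  # mark 1: right upper lobe only
```

Passing several marks (e.g. [1, 5]) yields a mask covering several lobes at once, matching the statement that one or more lobes may be marked.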
In an embodiment of the present invention, a mask image is obtained according to the lung lobe segmentation image, a marked mask image is obtained according to the mask image and the mark of the to-be-extracted lung lobe, and the specific method for obtaining the to-be-extracted lung lobe by multiplying the marked mask image by the lung image includes: and multiplying the marked mask images with the same number of layers by the lung image to obtain a layer of the lung lobes to be extracted, and performing three-dimensional reconstruction on the plurality of layers of the lung lobes to be extracted to obtain the three-dimensional lung lobes to be extracted.
In the embodiment of the invention, a marked mask image is constructed, and before the mask image is multiplied by a lung image to obtain the lung lobes to be extracted, the number of layers of the lung image and the number of layers of the mask image are respectively determined; judging whether the number of layers of the lung image is equal to that of the mask image; if the number of the marked mask images is equal to the number of the marked mask images, multiplying the marked mask images by the lung images to obtain a layer of lung lobes to be extracted, and performing three-dimensional reconstruction on the plurality of layers of lung lobes to be extracted to obtain three-dimensional lung lobes to be extracted; if not, interpolating the mask images to obtain mask images with the same number of layers as the lung images, then multiplying the marked mask images with the same number of layers by the lung images in sequence to obtain a layer of lung lobes to be extracted, and performing three-dimensional reconstruction on the plurality of layers of lung lobes to be extracted to obtain the three-dimensional lung lobes to be extracted.
For example, the lung image is an original image acquired from an imaging device, the number of layers of the lung image is 400, the number of layers of the mask image is also 400, and the layers of the lung image and the mask image correspond one to one. The lung lobe segmentation image is masked to obtain a mask image of each lung lobe, and a marked mask image is obtained from the preset mask value of each lobe's mask image and the mark of the lung lobe to be extracted; the pixels within the marked mask image are set to 1, and the pixels of the region of the lung lobe segmentation image outside the marked mask image are set to 0. The first layer of the lung image is multiplied by the first layer of the marked mask image to obtain the first layer of data of the lung lobe to be extracted; this is repeated up to the 400th layer, and the 400 layers of data are then three-dimensionally reconstructed to obtain the three-dimensional lung lobe to be extracted. Methods of three-dimensional reconstruction are prior art and can be freely selected by the person skilled in the art as required.
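The layer-by-layer multiplication and stacking in this example can be illustrated with a short NumPy sketch; the 400×8×8 volume, the Hounsfield-unit value range, and the mask region are made up purely for the demonstration.

```python
import numpy as np

# Hypothetical 400-layer lung image and a matching binary marked mask.
rng = np.random.default_rng(0)
lung = rng.integers(-1000, 400, size=(400, 8, 8)).astype(np.int16)  # CT values (HU)
marked_mask = np.zeros_like(lung, dtype=np.uint8)
marked_mask[:, 2:6, 2:6] = 1  # region of the lobe to be extracted

# Multiply layer by layer, then stack the layers back into a volume;
# voxels outside the mask become 0, voxels inside keep their CT value.
layers = [lung[z] * marked_mask[z] for z in range(lung.shape[0])]
lobe_volume = np.stack(layers, axis=0)
```

Because the masked volume keeps the original CT values inside the lobe, quantitative analysis (e.g. of low-attenuation emphysema voxels) can be restricted to the extracted lobe alone.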
Before the acquiring of the lung lobe segmentation image of the lung image, the method further comprises: acquiring the lung image; and performing lung lobe segmentation on the lung image to obtain the lung lobe segmentation image. And/or there are various methods for performing lung lobe segmentation on the lung image to obtain the lung lobe segmentation image, such as traditional (non-learning) lobe segmentation methods and methods based on deep learning, for example segmenting the lobes with a U-net or V-net segmentation network, or with the PDV-net proposed in the paper "Automatic segmentation of pulmonary lobes using a progressive dense V-network".
As the method for segmenting the lung image to obtain the lung lobe segmentation image, a multi-view lung lobe segmentation method and device may be selected, which avoids the information loss and inaccurate lobe segmentation caused by not fully utilizing the information of other viewing angles.
Specifically, the multi-view lung lobe segmentation method and device includes: acquiring the lung lobe fissure feature of the lung image in the sagittal plane, the lung lobe fissure feature in the coronal plane, and the lung lobe fissure feature in the transverse plane; and correcting the third lung lobe fissure feature using the lung lobe fissure features of any two of the sagittal, coronal, and transverse planes. See in particular the detailed description of fig. 7 and fig. 8.
Meanwhile, the present invention further provides a lung lobe extraction device with CT values, as shown in fig. 3, including: an acquiring unit 201 configured to acquire a lung lobe segmentation image of a lung image; a determining unit 202 configured to determine the lung lobe to be extracted; a marking unit 203 configured to mark the lung lobe to be extracted; and an extracting unit 204 configured to obtain the lung lobe to be extracted from the marked lung lobe and the lung image. The acquiring unit 201 is connected with the determining unit 202 and the extracting unit 204, the determining unit 202 is further connected with the marking unit 203, and the marking unit 203 is further connected with the extracting unit 204. The device solves the problems that quantitative analysis of the whole lung involves a huge amount of data and slow computation, and that quantitative analysis cannot be performed on the CT values of a determined or single lung lobe. Moreover, once the lung lobe to be extracted has been extracted, three-dimensional reconstruction of the single lobe is faster, so that a physician can further observe each lung lobe separately without one lobe occluding another. It should be noted that the lung lobe to be extracted in the present invention refers to the target lobe selected for extraction. Reference may be made in particular to the description of the method for extracting lung lobes with CT values.
In fig. 3, the lung lobe extraction device with CT values provided by the present invention further comprises a segmentation unit. The segmentation unit is connected to the acquiring unit 201 and is configured to acquire a lung image and perform lung lobe segmentation on it to obtain a lung lobe segmentation image. And/or the segmentation unit performs the following operations: acquiring the lung lobe fissure feature of the lung image in the sagittal plane, the lung lobe fissure feature in the coronal plane, and the lung lobe fissure feature in the transverse plane; correcting the third lung lobe fissure feature using the lung lobe fissure features of any two of the sagittal, coronal, and transverse planes; and segmenting the lung image using the corrected lung lobe fissure feature.
That is, before the acquiring of the lung lobe segmentation image of the lung image, the method further includes: acquiring the lung image; and performing lung lobe segmentation on the lung image to obtain the lung lobe segmentation image. And/or there are various methods for performing lung lobe segmentation on the lung image to obtain the lung lobe segmentation image, such as traditional (non-learning) lobe segmentation methods and methods based on deep learning, for example segmenting the lobes with a U-net or V-net segmentation network, or with the PDV-net proposed in the paper "Automatic segmentation of pulmonary lobes using a progressive dense V-network".
As the method for segmenting the lung image to obtain the lung lobe segmentation image, a multi-view lung lobe segmentation method and device may be selected, which avoids the information loss and inaccurate lobe segmentation caused by not fully utilizing the information of other viewing angles.
Specifically, the multi-view lung lobe segmentation method and device includes: acquiring the lung lobe fissure feature of the lung image in the sagittal plane, the lung lobe fissure feature in the coronal plane, and the lung lobe fissure feature in the transverse plane; and correcting the third lung lobe fissure feature using the lung lobe fissure features of any two of the sagittal, coronal, and transverse planes. See in particular the detailed description of fig. 7 and fig. 8.
In fig. 3, the extraction unit 204 of the lung lobe extraction device with CT values proposed by the present invention includes a mask image construction unit and a pixel multiplication unit. The marked mask image construction unit is respectively connected with the acquiring unit 201, the marking unit 203, and the pixel multiplication unit, and is configured to obtain a mask image from the lung lobe segmentation image and to obtain a marked mask image from the mask image and the mark of the lung lobe to be extracted. The pixel multiplication unit is configured to multiply the lung image by the marked mask image to obtain the lung lobe to be extracted. Multiplying the marked mask image by the lung image to obtain the lung lobe to be extracted requires that the mask image, the marked mask image, and the lung image be of the same size.
In fig. 3, to obtain a mask image from the lung lobe segmentation image, obtain a marked mask image from the mask image and the mark of the lung lobe to be extracted, and multiply the lung image by the marked mask image to obtain the lung lobe to be extracted, the marked mask image construction unit performs the following operations: masking the lung lobe segmentation image to obtain a mask image of each lung lobe, and obtaining a marked mask image from the preset mask value of each lobe's mask image and the mark of the lung lobe to be extracted; setting the pixels within the marked mask image to 1 and the pixels of the region of the lung lobe segmentation image outside the marked mask image to 0; and multiplying the marked mask image by the lung image to obtain the lung lobe to be extracted.
In an embodiment of the present invention, the operation performed to obtain a mask image from the lung lobe segmentation image is: masking the lung lobe segmentation image of the acquired lung image to obtain a mask image of each lung lobe. That is, after the lung lobe segmentation image of the lung image is acquired, a masking operation is performed on each lobe of the completed lung lobe segmentation image. In computer science and digital logic, a mask is a string of binary digits; by a bitwise operation with a target value, the specified bits can be masked out as required.
In an embodiment of the present invention, specifically, the lung lobe segmentation image of the acquired lung image is masked to obtain the mask images of the 5 lung lobes in the lung image respectively, and the marked mask image is obtained by determining the mark of the lung lobe to be extracted according to the preset mask values of the mask images. Performing the masking operation on each lobe of the lung lobe segmentation image completes the region localization of the 5 lobes, distinguishing the right upper lobe, right middle lobe, right lower lobe, left upper lobe, and left lower lobe; their regions can be assigned the preset mask values 1, 2, 3, 4, and 5 respectively. Marking the lung lobe to be extracted then amounts to selecting one or more of the preset mask values 1 to 5.
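The region localization by preset mask values can be illustrated with a short NumPy sketch. The dictionary keys and the tiny 4×4 label image are assumptions for demonstration only; the preset values 1 to 5 follow the text.

```python
import numpy as np

# Preset mask values for the five lobes, as described in the text.
LOBE_VALUES = {"right_upper": 1, "right_middle": 2, "right_lower": 3,
               "left_upper": 4, "left_lower": 5}

def lobe_masks(lobe_segmentation):
    """Split a lobe-labelled segmentation (0 = background, 1..5 = lobes)
    into one binary mask image per lobe."""
    return {name: (lobe_segmentation == value).astype(np.uint8)
            for name, value in LOBE_VALUES.items()}

seg = np.zeros((4, 4), dtype=np.uint8)
seg[0, :] = 1   # region labelled as the right upper lobe
seg[3, :] = 5   # region labelled as the left lower lobe
masks = lobe_masks(seg)
```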
In an embodiment of the present invention, the marked mask image is obtained by determining the lung lobe to be extracted according to the preset mask values of its mask image. Specifically, the lung lobe to be extracted is marked to obtain its mark, and the marked mask image is determined from the preset mask values 1, 2, 3, 4, and 5 of the right upper, right middle, right lower, left upper, and left lower lobes together with the mark of the lung lobe to be extracted; the mark values used to mark the lung lobe to be extracted can only take one or several of the values 1 to 5.
And if the lung lobe to be extracted is the upper right lobe, the mark of the lung lobe to be extracted is 1.
It should be noted that, before the lung lobe to be extracted is obtained from the marked lung lobe and the lung image, it is further necessary to judge whether the mark is within the range of the preset mask values. If so, the lung lobe to be extracted is obtained from the marked lung lobe and the lung image; if not, a prompt is given. For example, if the mark is 6, it is not within the range of the preset mask values, and a prompt such as an error message is issued.
Further, if the mark is within the range of the preset mask values, it is further judged whether the preset mask value is the same as the mark of the lung lobe to be extracted. If they are the same, the pixels within the marked mask image need not be set to 1, and the lung lobe to be extracted is obtained directly from the marked lung lobe and the lung image; otherwise (if they are not the same), the pixels within the marked mask image are set to 1, and the lung lobe to be extracted is then obtained from the marked lung lobe and the lung image.
For example, suppose it is determined that the lung lobe to be extracted is the right upper lobe, whose preset mask value is 1, and that the mark is 1, indicating that the right upper lobe with preset mask value 1 is to be extracted. Because the preset mask value 1 is the same as the pixel value 1, it is not necessary to set the internal pixels of the mask image of the right upper lobe to 1; it suffices to set the pixels of the regions of the lung lobe segmentation image other than the marked region to 0 to obtain the marked mask image, and to multiply the marked mask image by the lung image to obtain the lung lobe to be extracted. In other words, the preset mask value is treated here as a pixel value.
If the lung lobe to be extracted is the right middle lobe, the right lower lobe, the left upper lobe, or the left lower lobe, the pixels within the marked mask image are set to 1 and the pixels of the region of the lung lobe segmentation image outside the marked mask image are set to 0; the marked mask image is then multiplied by the lung image to obtain the lung lobe to be extracted.
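Putting together the range check on the mark and the mask multiplication described above, a minimal NumPy sketch might look as follows; the helper name `extract_lobes` and the toy volumes are hypothetical, and raising `ValueError` stands in for the "prompt" mentioned in the text.

```python
import numpy as np

PRESET_MASK_VALUES = {1, 2, 3, 4, 5}

def extract_lobes(lung, seg, marks):
    """Keep the CT values of the marked lobes and zero everything else.
    marks is a set of preset mask values (1..5); a mark outside the
    preset range triggers an error, mirroring the prompt in the text."""
    if not set(marks) <= PRESET_MASK_VALUES:
        raise ValueError("mark outside the preset mask value range")
    # Binarize: 1 inside the marked lobes, 0 elsewhere.
    marked_mask = np.isin(seg, list(marks)).astype(lung.dtype)
    return lung * marked_mask

lung = np.full((2, 3, 3), -800, dtype=np.int16)   # uniform toy CT volume
seg = np.zeros_like(lung)
seg[:, 0] = 1   # right upper lobe region
seg[:, 1] = 4   # left upper lobe region
right_upper = extract_lobes(lung, seg, {1})
```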
In fig. 3, the extraction unit 204 of the lung lobe extraction device with CT values according to the present invention further includes a judging unit. The judging unit is respectively connected with the mask image construction unit and the pixel multiplication unit and is configured to judge whether the number of layers of the lung image is equal to that of the mask image. If they are equal, each layer of the marked mask image is multiplied by the corresponding layer of the lung image to obtain one layer of the lung lobe to be extracted, and the resulting layers are three-dimensionally reconstructed to obtain the three-dimensional lung lobe to be extracted. If they are not equal, the mask image is interpolated to obtain a mask image with the same number of layers as the lung image; each layer of the marked mask image is then multiplied in sequence by the corresponding layer of the lung image, and the resulting layers are three-dimensionally reconstructed to obtain the three-dimensional lung lobe to be extracted.
In addition, the present invention also proposes a storage medium, comprising: a memory, a processor, and a computer program stored in the memory and operable on the processor, wherein the computer program implements the above method for extracting lung lobes with CT values. When the processor executes the program, the following steps are implemented: acquiring a lung lobe segmentation image of a lung image; determining the lung lobe to be extracted; marking the lung lobe to be extracted; and obtaining the lung lobe to be extracted from the marked lung lobe and the lung image.
FIG. 4 is a raw CT image extracted by a lung lobe extraction method and/or device with CT values according to an embodiment of the present invention; FIG. 5 is a mask image extracted by a lung lobe extraction method and/or device with CT values according to an embodiment of the present invention; fig. 6 is a schematic diagram of the extraction of the right superior lung lobe by a method and/or a device for extracting lung lobes with CT values according to an embodiment of the present invention.
Fig. 7 is a flowchart illustrating a multi-view lung lobe segmentation method according to an embodiment of the present invention. Fig. 8 is a schematic network structure diagram of a multi-view lung lobe segmentation method and/or device according to an embodiment of the present invention. As shown in fig. 7 and fig. 8, the execution subject of the multi-view lung lobe segmentation method provided by the embodiments of the present disclosure may be any image processing apparatus; for example, the method may be executed by a terminal device or a server, where the terminal device may be a User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, or the like. The server may be a local server or a cloud server. In some possible implementations, the multi-view lung lobe segmentation method may be implemented by a processor calling computer-readable instructions stored in a memory.
As shown in fig. 7, the multi-view lung lobe segmentation method or segmentation unit of the embodiment of the present disclosure includes: Step 101: acquiring the lung lobe fissure feature of the lung image in the sagittal plane, the lung lobe fissure feature in the coronal plane, and the lung lobe fissure feature in the transverse plane. In some possible embodiments, the lung lobe fissure features of the lung image at the different viewing angles may be extracted by a feature extraction process. The lung lobe fissure feature is the feature used to perform segmentation of each lobe region in the lung image.
The embodiment of the disclosure can perform feature extraction processing on the lung images in the sagittal, coronal, and transverse planes respectively to obtain the fissure features of the lung image at the corresponding viewing angles; that is, the lung lobe fissure features in the sagittal plane, in the coronal plane, and in the transverse plane can be obtained respectively. In the embodiment of the present disclosure, the lung lobe fissure feature at each viewing angle may be represented in matrix or vector form, and may represent a feature value for each pixel of the lung image at the corresponding viewing angle.
In some possible implementations, the embodiments of the present disclosure may obtain lung images at different viewing angles by CT (computed tomography) scanning. Correspondingly, a plurality of tomographic images, i.e. lung images, is obtained at each viewing angle, and the plurality of lung images at the same viewing angle can be assembled into a three-dimensional lung image. For example, the lung images at the same viewing angle may be stacked, or linear fitting or surface fitting may be performed, to obtain the three-dimensional lung image.
In some possible implementations, the feature extraction process may be performed by a feature extraction neural network. For example, a neural network can be trained until it accurately extracts the lung lobe fissure features of the lung image and performs lung lobe segmentation from the obtained features. When the lobe segmentation accuracy exceeds an accuracy threshold, the accuracy of the fissure features produced by the neural network meets the requirement; at that point, the network layers performing the segmentation can be removed, and the retained network part can serve as the feature extraction neural network of the embodiment of the present disclosure. The feature extraction neural network may be a convolutional neural network, such as a residual network, a feature pyramid network, or a U network; these are only exemplary illustrations and are not specific limitations of the present disclosure.
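The idea of removing the segmentation layers and keeping the rest as a feature extractor can be sketched abstractly. The toy "layers" below are plain functions standing in for network layers; they are not the actual network of the disclosure, only an illustration of truncating a trained pipeline before its segmentation head.

```python
import numpy as np

# A toy "segmentation network" as a pipeline of layer functions; the last
# stage plays the role of the segmentation head. All layers are stand-ins.
def conv_block(x):   return np.maximum(x, 0.0)          # feature layer (ReLU-like)
def down_sample(x):  return x[:, ::2, ::2]              # feature layer
def seg_head(x):     return (x > 0.5).astype(np.uint8)  # segmentation layer

trained_network = [conv_block, down_sample, seg_head]

# Once segmentation accuracy exceeds the threshold, drop the segmentation
# head and keep the remaining layers as the feature-extraction network.
feature_extractor = trained_network[:-1]

def run(layers, x):
    for layer in layers:
        x = layer(x)
    return x

x = np.linspace(-1.0, 1.0, 64).reshape(1, 8, 8)
features = run(feature_extractor, x)
```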
Step 102: and correcting the third lung lobe fissure characteristic by using the lung lobe fissure characteristics of any two of the sagittal plane, the coronal plane and the transverse plane.
In some possible embodiments, in the case that the lobe fissure characteristics at three viewing angles are obtained, the lobe fissure characteristics at the third viewing angle may be corrected by using the lobe fissure characteristics at two viewing angles, so as to improve the accuracy of the lobe fissure characteristics at the third viewing angle.
In one example, embodiments of the present disclosure may correct the lobe fissure feature at the sagittal view using the lobe fissure features at the coronal and transverse views. In other embodiments, any two of the three viewing angles' lobe fissure features may likewise be used to correct the remaining one. For convenience of description, the following embodiments describe correcting the third lung lobe fissure feature with the first and second lung lobe fissure features; the first, second, and third lung lobe fissure features correspond respectively to the lobe fissure features at the three viewing angles of the embodiment of the present disclosure.
In some possible embodiments, the first and second lobe fissure features may be converted by mapping to the viewing angle of the third lobe fissure feature, and feature fusion may then be performed on the two mapped lobe fissure features and the third lobe fissure feature to obtain the corrected lobe fissure feature.
Step 103: and segmenting the lung image by using the corrected lung lobe fissure characteristics.
In some possible embodiments, lung lobe segmentation may be performed directly on the corrected lung lobe fissure features, yielding a segmentation result of the lung lobe fissures. Alternatively, in another embodiment, feature fusion processing may be performed on the corrected lung lobe fissure features and the third lung lobe fissure feature, and lung lobe segmentation performed on the fusion result to obtain the segmentation result. The segmentation result may include the position information of each identified partition in the lung image. For example, the lung image may include five lobe regions, namely the right upper, right middle, right lower, left upper, and left lower lobes, and the obtained segmentation result may include the position information of these five lobes in the lung image. The segmentation result may be expressed in mask form by a mask feature; that is, the embodiment of the present disclosure may assign unique corresponding mask values (set mask values), such as 1, 2, 3, 4, and 5, to the five lobe regions respectively, and the region formed by each mask value is the position region of the corresponding lobe. The mask values above are merely exemplary, and other mask values may be configured in other embodiments.
Based on the embodiment, the lung lobe fissure characteristics under three visual angles can be fully fused, the information content and accuracy of the corrected fissure characteristics are improved, and the accuracy of the lung lobe segmentation result is further improved.
In order to explain the embodiments of the present disclosure in detail, the respective processes of the embodiments of the present disclosure are explained below.
In an embodiment of the present disclosure, the method for acquiring the lobe slit feature of the lung image in the sagittal plane, the lobe slit feature of the lung image in the coronal plane, and the lobe slit feature of the lung image in the transverse plane includes:
obtaining a plurality of series of lung images in sagittal, coronal and transverse planes; and respectively extracting lung lobe fissure characteristics of the multi-sequence lung images in the sagittal plane, the coronal plane and the transverse plane to obtain lung lobe fissure characteristics in the sagittal plane, lung lobe fissure characteristics in the coronal plane and lung lobe fissure characteristics in the transverse plane.
The embodiment of the present disclosure may first acquire a multi-sequence lung image at three viewing angles, and as described in the above embodiment, a multi-layer lung image (multi-sequence image) of a lung image at different viewing angles may be acquired in a CT imaging manner, and a three-dimensional lung image may be obtained from the multi-layer lung image at each viewing angle.
When the multi-sequence lung images at the three viewing angles are obtained, feature extraction processing may be performed on each lung image, for example through the feature extraction neural network described above, to obtain the lobe fissure features of each image at the three viewing angles, namely the lobe fissure feature in the sagittal plane, in the coronal plane, and in the transverse plane. Because each viewing angle can include a plurality of lung images, the embodiment of the present disclosure can execute feature extraction on the plural lung images in parallel through a plurality of feature extraction neural networks, thereby improving feature extraction efficiency.
Fig. 8 is a schematic network structure diagram of a lung lobe segmentation method and/or device based on multiple viewing angles according to an embodiment of the present invention. As shown in fig. 8, the network for performing the feature extraction process according to the embodiment of the present disclosure may be a U network (U-net), or may be another convolutional neural network capable of performing feature extraction.
Having obtained the lobe fissure features of the lung image at each viewing angle, the third lobe fissure feature can be corrected using the lobe fissure features of any two of the sagittal, coronal, and transverse planes. This process may include: mapping the two lobe fissure features to the viewing angle of the third lobe fissure feature; and correcting the third lobe fissure feature using the two mapped lobe fissure features.
For convenience of description, the following description will be given taking an example in which the first and second lobe slit features correct the third lobe slit feature.
Since the lobe fissure features extracted at different viewing angles differ, the embodiment of the present disclosure maps the lobe fissure features of the three viewing angles to a single viewing angle. The method for mapping the two lobe fissure features to the viewing angle of the third is: mapping the lobe fissure features of the multi-sequence lung images in any two of the sagittal, coronal, and transverse planes to the viewing angle of the third lobe fissure feature; that is, the first and second lobe fissure features are mapped to the viewing angle of the third. Through this mapping conversion of viewing angle, the feature information of the viewing angles before mapping can be fused into the lobe fissure features obtained after mapping.
As described in the foregoing embodiments, the embodiments of the present disclosure may obtain a plurality of lung images at each viewing angle, where the plurality of lung images correspondingly have a plurality of lung lobe fissure features. And each characteristic value in the lung lobe fissure characteristic corresponds to each pixel point of the corresponding lung image one by one.
The embodiment of the disclosure may determine, from the three-dimensional lung image formed by the plurality of lung images at one viewing angle, the position mapping relationship of the pixels when converting from that viewing angle to another: if a certain pixel moves from a first position at the first viewing angle to a second position at the second viewing angle, the feature value corresponding to the first position is mapped to the second position. In this way, mapping conversion between the lobe fissure features of lung images at different viewing angles can be realized.
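For axis-aligned viewing angles, such a position mapping reduces to an index permutation of the three-dimensional volume. The following NumPy sketch (with an arbitrary axis permutation standing in for, say, an axial-to-coronal conversion; which axis plays "slice" is a labelling assumption) shows how a feature value follows its voxel under the mapping.

```python
import numpy as np

# A (D, H, W) feature volume; axis permutations move between viewing axes.
volume = np.arange(2 * 3 * 4, dtype=np.float32).reshape(2, 3, 4)

to_view2 = lambda v: v.transpose(1, 0, 2)   # hypothetical view conversion
back = lambda v: v.transpose(1, 0, 2)       # its inverse permutation

mapped = to_view2(volume)
# The feature value at a voxel follows the voxel to its new index:
d, h, w = 1, 2, 3
assert mapped[h, d, w] == volume[d, h, w]
round_trip = back(mapped)
```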
In some possible embodiments, in a case where the lobe slit features of three viewing angles are mapped to the same viewing angle, the mapped two lobe slit features may be used to perform correction processing on the third lobe slit feature, so as to improve the information content and accuracy of the third lobe slit feature.
In an embodiment of the present disclosure, the method for correcting the third lung lobe slit characteristic by using the any two mapped lung lobe slit characteristics includes:
respectively carrying out space attention feature fusion by using the mapped two lung lobe fissure features and the mapped third lung lobe fissure feature to obtain a first fusion feature and a second fusion feature; and obtaining the corrected third lung lobe fissure characteristic according to the first fusion characteristic and the second fusion characteristic.
The disclosed embodiments may refer to the feature obtained by mapping the first lung lobe fissure feature as the first mapped feature, and the feature obtained by mapping the second lung lobe fissure feature as the second mapped feature. Given the first and second mapped features, spatial attention feature fusion between the first mapped feature and the third lung lobe fissure feature may be performed to obtain the first fused feature, and between the second mapped feature and the third lung lobe fissure feature to obtain the second fused feature.
The method for performing spatial attention feature fusion on each of the two mapped lobe fissure features with the third lobe fissure feature, to obtain the first fused feature and the second fused feature, comprises the following steps:
respectively connecting the arbitrary two lung lobe fissure characteristics with the third lung lobe fissure characteristic to obtain a first connecting characteristic and a second connecting characteristic; performing a first convolution operation on the first connection feature to obtain a first convolution feature, and performing the first convolution operation on the second connection feature to obtain a second convolution feature; performing a second convolution operation on the first convolution characteristic to obtain a first attention coefficient, and performing a second convolution operation on the second convolution characteristic to obtain a second attention coefficient; the first fusion feature is obtained using the first convolution feature and the first attention coefficient, and the second fusion feature is obtained using the second convolution feature and the second attention coefficient.
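The four steps above can be sketched in NumPy. For brevity this illustration uses 1×1 convolutions in place of the 3×3 convolution described below and omits batch normalization, so it shows only the shape flow and the attention weighting, not the actual trained module; all weights are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
C, H, W = 4, 5, 5  # so each input feature has C/2 = 2 channels

def conv1x1(x, weight):
    """1x1 convolution over a (C_in, H, W) feature map; a simplified
    stand-in for the 3x3 convolutions described in the text."""
    return np.einsum('oc,chw->ohw', weight, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention_fuse(mapped_feat, third_feat, w1, w2):
    # 1) connect (concatenate) along the channel direction: 2x(C/2,H,W) -> (C,H,W)
    connected = np.concatenate([mapped_feat, third_feat], axis=0)
    # 2) first convolution: (C,H,W) -> (C/2,H,W), then ReLU (BN omitted)
    conv_feat = np.maximum(conv1x1(connected, w1), 0.0)
    # 3) second convolution + sigmoid -> per-position attention coefficient in [0,1]
    attention = sigmoid(conv1x1(conv_feat, w2))
    # 4) weight the convolved feature by the attention coefficient
    return conv_feat * attention

first_mapped = rng.normal(size=(C // 2, H, W))
third_feat = rng.normal(size=(C // 2, H, W))
w1 = rng.normal(size=(C // 2, C)) * 0.1      # first-convolution weights
w2 = rng.normal(size=(C // 2, C // 2)) * 0.1 # second-convolution weights
fused = spatial_attention_fuse(first_mapped, third_feat, w1, w2)
```

The second fused feature would be obtained the same way, passing the second mapped feature in place of the first.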
In some possible implementations, as shown in fig. 8, the spatial attention feature fusion described above may be performed by a network module with a spatial attention mechanism; the disclosed embodiments adopt the spatial attention mechanism because the lobe fissure features at different locations are of different importance. Convolution processing based on the attention mechanism can be realized through a spatial attention neural network, which further highlights important features in the obtained fused features. During training, the spatial attention neural network can adaptively learn the importance of each position of the spatial feature and form an attention coefficient for the feature at each position; the coefficient can be a value in the [0, 1] interval, and the larger the coefficient, the more important the feature at the corresponding position.
In performing the spatial attention fusion, a first connection feature may be obtained by connecting the first mapping feature with the third lung lobe fissure feature, and a second connection feature may be obtained by connecting the second mapping feature with the third lung lobe fissure feature, where the connection may be a concatenation in the channel direction. In the embodiments of the present disclosure, the scales of the first mapping feature, the second mapping feature and the third lung lobe fissure feature may all be represented as (C/2, H, W), where C denotes the number of channels of each feature, H the height, and W the width. Correspondingly, the scale of the first connection feature and the second connection feature obtained by the concatenation may be represented as (C, H, W).
Once the first connection feature and the second connection feature are obtained, a first convolution operation may be performed on each of them: for example, convolution layer A may apply a 3 × 3 convolution kernel, followed by batch normalization (BN) and an activation function (ReLU), to obtain a first convolution feature corresponding to the first connection feature and a second convolution feature corresponding to the second connection feature. The scales of the first and second convolution features can be expressed as (C/2, H, W); the first convolution operation reduces the number of parameters in the feature map and thus the cost of subsequent computation.
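The channel-direction concatenation and the first convolution can be sketched in NumPy as follows. This is a minimal illustration of the shape bookkeeping only: the channel count, spatial size, and random weights are hypothetical, and batch normalization is omitted for brevity.

```python
import numpy as np

def conv2d(x, w):
    # naive 3x3 convolution, zero padding, stride 1
    # x: (C_in, H, W); w: (C_out, C_in, 3, 3)
    c_out = w.shape[0]
    h, wd = x.shape[1], x.shape[2]
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((c_out, h, wd))
    for i in range(h):
        for j in range(wd):
            patch = xp[:, i:i+3, j:j+3]            # (C_in, 3, 3)
            out[:, i, j] = np.tensordot(w, patch, axes=3)
    return out

C, H, W = 8, 6, 6
f_mapped = np.random.rand(C // 2, H, W)            # mapped fissure feature, (C/2, H, W)
f_third = np.random.rand(C // 2, H, W)             # third-view fissure feature, (C/2, H, W)
connected = np.concatenate([f_mapped, f_third], axis=0)   # connection feature, (C, H, W)

weights_a = np.random.randn(C // 2, C, 3, 3) * 0.1        # "layer A": C -> C/2 channels
conv_feat = np.maximum(conv2d(connected, weights_a), 0)   # 3x3 conv + ReLU -> (C/2, H, W)
```

The shapes mirror the text: two (C/2, H, W) features concatenate to (C, H, W), and the first convolution brings the result back down to (C/2, H, W).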
In some possible embodiments, once the first convolution feature and the second convolution feature are obtained, a second convolution operation followed by a sigmoid function may be applied to each of them to obtain a corresponding first attention coefficient and second attention coefficient. The first attention coefficient represents the importance of the feature at each element of the first convolution feature, and the second attention coefficient represents the importance of the feature at each element of the second convolution feature.
As shown in fig. 8, for the first convolution feature or the second convolution feature, the second convolution operation may be performed by two convolution layers B and C: convolution layer B applies a 1 × 1 convolution kernel followed by batch normalization (BN) and an activation function (ReLU) to obtain a first intermediate feature, whose scale may be represented as (C/8, H, W); convolution layer C then applies a 1 × 1 convolution kernel to the first intermediate feature to obtain a second intermediate feature of scale (1, H, W). Finally, a sigmoid activation is applied to the second intermediate feature to obtain the attention coefficient corresponding to the first or second convolution feature, whose values lie in the range [0, 1].
The second convolution operation thus reduces the dimensionality of the first and second convolution features, yielding a single-channel attention coefficient for each.
In some possible embodiments, once the first attention coefficient corresponding to the first convolution feature and the second attention coefficient corresponding to the second convolution feature are obtained, the first convolution feature may be multiplied by the first attention coefficient and the product added to the first convolution feature to obtain the first fusion feature; likewise, the second convolution feature may be multiplied by the second attention coefficient and the product added to the second convolution feature to obtain the second fusion feature. The multiplication (mul) is element-wise multiplication and the addition (add) is element-wise addition. In this way, effective fusion of the features from the three viewing angles can be realized.
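The second convolution operation and the mul/add fusion can be sketched in NumPy as below. The layer sizes and random weights are illustrative stand-ins, not the trained network; 1 × 1 convolutions are implemented as channel-wise tensor contractions, and batch normalization is omitted.

```python
import numpy as np

def conv1x1(x, w):
    # pointwise (1x1) convolution: x (C_in, H, W), w (C_out, C_in)
    return np.tensordot(w, x, axes=([1], [0]))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

C, H, W = 8, 6, 6
conv_feat = np.random.rand(C // 2, H, W)             # first (or second) convolution feature

w_b = np.random.randn(max(C // 8, 1), C // 2) * 0.1  # "layer B": C/2 -> C/8 channels
w_c = np.random.randn(1, max(C // 8, 1)) * 0.1       # "layer C": C/8 -> 1 channel
mid = np.maximum(conv1x1(conv_feat, w_b), 0)         # 1x1 conv + ReLU
coeff = sigmoid(conv1x1(mid, w_c))                   # attention coefficient, (1, H, W), values in (0, 1)

# element-wise multiply by the coefficient, then add back the convolution feature
fused = conv_feat * coeff + conv_feat
```

Because the coefficient is single-channel, it broadcasts over all channels, so the fusion is equivalent to scaling each spatial position by (1 + coefficient).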
Alternatively, in other embodiments, the feature obtained by multiplying the first convolution feature by the first attention coefficient may be added to the first convolution feature, and several further convolution operations performed on the sum to obtain the first fusion feature; similarly, the feature obtained by multiplying the second convolution feature by the second attention coefficient may be added to the second convolution feature, and several further convolution operations performed on the sum to obtain the second fusion feature. This can further improve the accuracy of the fused features and the amount of fused information.
In the case of obtaining the first fused feature and the second fused feature, the corrected third lobe fissure feature may be obtained by using the first fused feature and the second fused feature.
In some possible embodiments, since the first fusion feature and the second fusion feature each contain feature information from the three viewing angles, the corrected third lung lobe fissure feature may be obtained directly by connecting the first fusion feature and the second fusion feature and performing a third convolution operation on the connected features. Alternatively, the first fusion feature, the second fusion feature, and the third lung lobe fissure feature may all be connected, and the third convolution operation performed on the connected features, to obtain the corrected third lung lobe fissure feature.
The third convolution operation may include grouped convolution processing, which further fuses the feature information within each feature. As shown in fig. 8, the third convolution operation of the embodiments of the present disclosure may include a depthwise convolution D (depthwise conv), where the grouped convolution can speed up the computation while improving the accuracy of the convolution features.
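A depthwise convolution is the extreme case of grouped convolution in which the number of groups equals the number of channels, so each channel gets its own kernel. The following NumPy sketch (with hypothetical shapes and random kernels) shows the operation and why it is cheaper than a standard convolution.

```python
import numpy as np

def depthwise_conv2d(x, w):
    # grouped convolution with groups == channels: one 3x3 kernel per channel,
    # zero padding, stride 1.  A standard 3x3 conv mapping C -> C channels needs
    # C*C*9 weights; the depthwise version needs only C*9.
    c, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for k in range(c):                       # each channel is filtered independently
        for i in range(h):
            for j in range(wd):
                out[k, i, j] = np.sum(xp[k, i:i+3, j:j+3] * w[k])
    return out

feat = np.random.rand(4, 5, 5)               # stand-in for a connected fusion feature
kernels = np.random.randn(4, 3, 3) * 0.1     # one kernel per channel
out = depthwise_conv2d(feat, kernels)        # same shape as the input, (4, 5, 5)
```

In practice a pointwise (1 × 1) convolution often follows the depthwise step to mix information across channels.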
Once the corrected third lung lobe fissure feature is obtained by the third convolution operation, the lung image may be segmented using the corrected lung lobe fissure feature. The embodiments of the present disclosure can obtain the segmentation result corresponding to the corrected lung lobe fissure feature by convolution. As shown in fig. 8, the corrected lung lobe fissure feature may be input to convolution layer E, and a standard convolution with a 1 × 1 kernel performed to obtain the segmentation result of the lung image. As described in the above embodiments, the segmentation result may indicate the position regions of the five lung lobes in the lung image; in fig. 8, each lung lobe region is distinguished by a fill color.
Based on the above configuration, the multi-view lung lobe segmentation method provided by the embodiments of the present disclosure can solve the technical problem that lung lobes cannot be accurately segmented because information from the other viewing angles is not fully utilized and is therefore lost.
As described in the above embodiments, the present disclosure may be implemented by a neural network; as shown in fig. 8, the neural network performing the multi-view lung lobe segmentation method may include a feature extraction neural network, a spatial attention neural network, and a segmentation network (including convolution layers D and E).
The disclosed embodiment may include three feature extraction neural networks, each for extracting lung lobe fissure features at different viewing angles. Among them, the three feature extraction networks may be referred to as a first branch network, a second branch network, and a third branch network. The three branch networks of the embodiment of the present disclosure have the same structure, and the input images of the branch networks are different from each other. For example, a lung image sample of a sagittal plane is input to the first branch network, a lung image sample of a coronal plane is input to the second branch network, and a lung image sample of a transverse plane is input to the third branch network, so that feature extraction processing of the lung image sample at each view angle is performed respectively.
Specifically, in the embodiment of the present disclosure, the process of training the feature extraction neural network includes:
acquiring training samples under a sagittal plane, a coronal plane and a cross section, wherein the training samples are lung image samples with marked lung lobe fissure characteristics; performing feature extraction on a lung image sample under a sagittal plane by using the first branch network to obtain a first predicted lung lobe fissure feature; performing feature extraction on the lung image sample under the coronal plane by using the second branch network to obtain a second predicted lung lobe fissure feature; performing feature extraction on the lung image sample under the cross section by using the third branch network to obtain a third predicted lung lobe fissure feature; respectively obtaining network losses of the first branch network, the second branch network and the third branch network by using the first predicted lung lobe fissure characteristic, the second predicted lung lobe fissure characteristic and the third predicted lung lobe fissure characteristic and the corresponding marked lung lobe fissure characteristic, and adjusting parameters of the first branch network, the second branch network and the third branch network by using the network losses.
As described in the foregoing embodiment, the first branch network, the second branch network, and the third branch network are respectively used to perform feature extraction processing on lung image samples in a sagittal plane, a coronal plane, and a transverse plane, so that predicted features, that is, a first predicted lobe fissure feature, a second predicted lobe fissure feature, and a third predicted lobe fissure feature, can be obtained correspondingly.
Under the condition that the predicted lung lobe fissure features are obtained, the network losses of the first branch network, the second branch network and the third branch network can be obtained by respectively using the first predicted lung lobe fissure feature, the second predicted lung lobe fissure feature and the third predicted lung lobe fissure feature and the corresponding marked lung lobe fissure features. For example, the loss function of the embodiment of the present disclosure may be a logarithmic loss function, the network loss of the first branch network may be obtained by the first predicted lung lobe fissure characteristic and the marked real lung lobe fissure characteristic, the network loss of the second branch network may be obtained by the second predicted lung lobe fissure characteristic and the marked real lung lobe fissure characteristic, and the network loss of the third branch network may be obtained by the third predicted lung lobe fissure characteristic and the marked real lung lobe fissure characteristic.
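The logarithmic loss mentioned above can be sketched as a per-voxel binary cross-entropy between a predicted fissure probability map and its annotation. The exact loss form used by the branch networks is not specified in detail, so the implementation below is an illustrative assumption.

```python
import numpy as np

def log_loss(y_true, y_pred, eps=1e-7):
    # binary cross-entropy averaged over all voxels:
    #   -mean( y*log(p) + (1-y)*log(1-p) )
    # eps clipping avoids log(0) for saturated predictions
    p = np.clip(y_pred, eps, 1 - eps)
    return float(-np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))

# hypothetical annotated fissure map and branch-network prediction
y = np.array([1.0, 0.0, 1.0, 0.0])
pred = np.array([0.9, 0.2, 0.8, 0.1])
branch_loss = log_loss(y, pred)
```

Each of the three branch networks would compute such a loss against the same annotation style at its own viewing angle, and the three losses drive the parameter updates described above.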
In the case of obtaining the network loss of each of the branch networks, parameters of the first branch network, the second branch network, and the third branch network may be adjusted according to the network loss of each of the networks until a termination condition is satisfied. In this embodiment, the network loss of any branch of the first branch network, the second branch network, and the third branch network may be utilized to simultaneously adjust the network parameters of the first branch network, the second branch network, and the third branch network, such as convolution parameters, respectively. Therefore, the network parameters at any visual angle are related to the characteristics at the other two visual angles, the correlation between the extracted lung lobe fissure characteristics and the lung lobe fissure characteristics at the other two visual angles can be improved, and the primary fusion of the lung lobe fissure characteristics at each visual angle can be realized.
In addition, the training termination condition of the feature extraction neural network is that the network loss of each branch network is smaller than the first loss threshold, which indicates that each branch network of the feature extraction neural network can accurately extract the lung lobe fissure features of the lung image at the corresponding view angle.
When this training is finished, the feature extraction neural network, the spatial attention neural network and the segmentation network can be trained jointly: the network loss of the whole neural network is determined from the segmentation result output by the segmentation network and the corresponding annotation in the marked lung lobe fissure features. This loss is then fed back to adjust the network parameters of the feature extraction neural network, the spatial attention neural network and the segmentation network until the network loss of the whole neural network is less than a second loss threshold. In the embodiments of the present disclosure, the first loss threshold is greater than or equal to the second loss threshold, which improves the accuracy of the network.
When the neural network of the embodiment of the disclosure is applied to perform lung lobe segmentation based on multiple viewing angles, lung images of the same lung under different viewing angles can be respectively and correspondingly input into the three branch networks, and finally, a final segmentation result of the lung image is obtained through the neural network.
In summary, the multi-view lung lobe segmentation method and device provided by the embodiments of the present disclosure can fuse multi-view feature information to perform lung lobe segmentation of a lung image, solving the problem that lung lobes cannot be accurately segmented when information from other viewing angles is not fully utilized.
In addition, the embodiment of the present disclosure further provides a lung lobe segmentation apparatus or a segmentation unit based on multiple viewing angles, which includes: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to call the instructions stored in the memory to execute the lung lobe segmentation method based on multi-view according to any one of the above embodiments.
In some embodiments, the present disclosure provides a multi-view lung lobe segmentation apparatus or segmentation unit whose functions or modules may be used to execute the method described in the foregoing multi-view lung lobe segmentation embodiments.
Step 1002: judging, according to the lung lobe with CT values and a set threshold, whether each position on the lung lobe belongs to an emphysema region.
In the embodiment of the present invention, the set threshold is a CT value. During deep inhalation, the CT value of an emphysema region changes little, while other normal (non-emphysema) regions take in air; the CT value of air is -1024 HU, and since an emphysema region is filled with air, its CT value is close to -1024 HU. In the medical field, the set threshold is generally chosen to be -950 HU: if the CT value in the lung lobe is less than -950 HU, that location is judged to be an emphysema region; if the CT value is greater than or equal to -950 HU, it is judged not to be an emphysema region. In other published papers or in some possible embodiments, the set threshold may vary; the present invention does not particularly limit the set threshold, and those skilled in the art can adjust it as appropriate.
In the embodiment of the present invention, the extracted lung lobe is the right upper lung lobe, which is typically the first lobe affected by COPD (chronic obstructive pulmonary disease), and is therefore used as the example here. Fig. 6 is a schematic diagram of the right upper lung lobe extracted by the method and/or apparatus for extracting lung lobes with CT values according to an embodiment of the present invention. The right upper lung lobe in fig. 6 is a lung lobe with CT values, and its CT values are compared with the set threshold: where the CT value of the right upper lung lobe is less than -950 HU, that position is an emphysema region; where the CT value is greater than or equal to -950 HU, that position is judged not to be an emphysema region.
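The thresholding step is a simple voxel-wise comparison, which can be sketched as below. The tiny 2 × 2 array of HU values and the lobe mask are made-up illustration data.

```python
import numpy as np

THRESHOLD_HU = -950  # commonly used emphysema threshold from the text

def emphysema_mask(lobe_hu, lobe_mask):
    # lobe_hu: CT values in HU for the extracted lobe
    # lobe_mask: boolean mask of voxels belonging to the lobe
    # a voxel is emphysema if it lies in the lobe and its CT value is below the threshold
    return (lobe_hu < THRESHOLD_HU) & lobe_mask

lobe_hu = np.array([[-980.0, -300.0],
                    [-960.0, -100.0]])
lobe_mask = np.array([[True, True],
                      [True, False]])   # bottom-right voxel is outside the lobe
mask = emphysema_mask(lobe_hu, lobe_mask)
```

Restricting the comparison to the lobe mask ensures that air outside the lung (also near -1024 HU) is not misreported as emphysema.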
Step 1003: and if so, coloring the emphysema region and displaying the emphysema region. Step 1004: if not, the coloring is not carried out or the coloring color is different from the color of the emphysema region.
In the embodiment of the invention, the coloring is pseudo-color, so that doctors can conveniently observe the emphysema region of the extracted lung lobe, or locate and display the extracted lung lobe. If the extracted lung lobe is the right upper lung lobe, then where its CT value is less than -950 HU the region is an emphysema region and is colored, for example red; where the CT value is greater than or equal to -950 HU the region is judged not to be an emphysema region and is either not colored or colored, for example green, to distinguish it from the emphysema region.
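The pseudo-coloring can be sketched as painting the emphysema mask onto a grayscale slice. The red-on-gray color scheme below is one illustrative choice following the text, not a prescribed palette.

```python
import numpy as np

def pseudo_color(gray, emph_mask):
    # gray: (H, W) grayscale CT slice scaled to [0, 1]
    # emph_mask: (H, W) boolean emphysema mask
    # returns an (H, W, 3) RGB image with emphysema voxels painted red
    rgb = np.stack([gray, gray, gray], axis=-1)  # grayscale -> RGB
    rgb[emph_mask] = [1.0, 0.0, 0.0]             # red overlay on emphysema voxels
    return rgb

gray = np.full((2, 2), 0.5)                      # made-up slice
emph = np.array([[True, False],
                 [False, False]])
rgb = pseudo_color(gray, emph)
```

Non-emphysema lobe voxels could instead be tinted green, as the text suggests, by assigning a second color where the lobe mask holds but the emphysema mask does not.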
The above-described embodiments merely express implementations of the invention; although their description is specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make various changes, equivalent substitutions and improvements without departing from the spirit of the invention, all of which fall within the protection scope of the invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A pulmonary lobe-based emphysema region judgment method is characterized by comprising the following steps:
extracting lung lobes with CT values;
judging whether the CT value on the lung lobes is an emphysema region or not according to the lung lobes with the CT value and a set threshold;
if yes, coloring the emphysema region and displaying the emphysema region;
if not, the coloring is not carried out or the coloring color is different from the color of the emphysema region.
2. The method according to claim 1, wherein the method for extracting lung lobes with CT values includes:
acquiring a lung lobe segmentation image of a lung image;
determining lung lobes to be extracted;
marking the lung lobes to be extracted;
and obtaining the lung lobes to be extracted according to the marked lung lobes to be extracted and the lung images.
3. The method of claim 2, wherein prior to said obtaining the segmented image of the lung lobes of the lung image, further comprising:
acquiring the lung image;
carrying out lung lobe segmentation on the lung image to obtain a lung lobe segmentation image;
and/or
The method for segmenting the lung image into the lung lobe segmentation image comprises the following steps:
acquiring lung lobe fissure characteristics of a lung image in a sagittal plane, lung lobe fissure characteristics in a coronal plane and lung lobe fissure characteristics in a transverse plane;
correcting the third lung lobe fissure characteristic by using the lung lobe fissure characteristics of any two of the sagittal plane, the coronal plane and the transverse plane;
and segmenting the lung image by using the corrected lung lobe fissure characteristics.
4. The judgment method according to claim 3, wherein:
the specific method for marking the lung lobes to be extracted and obtaining the lung lobes to be extracted according to the marked lung lobes to be extracted and the lung images comprises the following steps:
obtaining a mask image according to the lung lobe segmentation image, obtaining a marked mask image according to the mask image and the mark of the lung lobe to be extracted, and multiplying the marked mask image by the lung image to obtain the lung lobe to be extracted; and/or
The method for obtaining a mask image according to the lung lobe segmentation image and obtaining a marked mask image according to the mask image and the mark of the to-be-extracted lung lobe comprises the following steps:
masking the lung lobe segmentation image to obtain a mask image of each lung lobe, and obtaining a marked mask image according to a preset mask value of the mask image of each lung lobe and the mark of the to-be-extracted lung lobe; setting pixels in the marked mask image to be 1 and setting pixels in the region of the lung lobe segmentation image outside the marked mask image to be 0; and/or
The method for correcting the third lung lobe fissure characteristic by using the lung lobe fissure characteristics of any two of the sagittal plane, the coronal plane and the transverse plane comprises the following steps:
mapping the arbitrary two lung lobe fissure features to the view angle of the third lung lobe fissure feature;
and correcting the third lung lobe fissure characteristic by using the mapped lung lobe fissure characteristics of any two.
5. The judgment method according to claim 4, wherein:
wherein the obtaining of the lung lobe to be extracted by multiplying the marked mask image by the lung image comprises:
and multiplying the marked mask images with the same number of layers by the lung image to obtain a layer of the lung lobes to be extracted, and performing three-dimensional reconstruction on the plurality of layers of the lung lobes to be extracted to obtain the three-dimensional lung lobes to be extracted.
6. The judgment method according to claim 5, wherein:
wherein, before the marked mask image is multiplied by the lung image to obtain the lung lobe to be extracted, the number of layers of the lung image and the number of layers of the marked mask image are respectively determined;
judging whether the number of layers of the lung image is equal to that of the marked mask image;
if the number of the marked mask images is equal to the number of the marked mask images, multiplying the marked mask images by the lung images to obtain a layer of lung lobes to be extracted, and performing three-dimensional reconstruction on the plurality of layers of lung lobes to be extracted to obtain three-dimensional lung lobes to be extracted;
if not, interpolating the marked mask images to obtain mask images with the same number of layers as the lung images, then multiplying the marked mask images with the same number of layers by the lung images in sequence to obtain a layer of lung lobes to be extracted, and performing three-dimensional reconstruction on a plurality of layers of lung lobes to be extracted to obtain three-dimensional lung lobes to be extracted.
7. The judgment method according to claim 4, wherein:
the method for correcting the third lung lobe fissure characteristic by using the mapped any two lung lobe fissure characteristics comprises the following steps:
respectively carrying out space attention feature fusion by using the mapped two lung lobe fissure features and the mapped third lung lobe fissure feature to obtain a first fusion feature and a second fusion feature;
and obtaining the corrected third lung lobe fissure characteristic according to the first fusion characteristic and the second fusion characteristic.
8. The judgment method according to claim 7, wherein:
the method for performing spatial attention feature fusion by using the mapped two arbitrary lung lobe fissure features and the mapped third lung lobe fissure feature respectively to obtain the first fusion feature and the second fusion feature comprises the following steps:
respectively connecting the arbitrary two lung lobe fissure characteristics with the third lung lobe fissure characteristic to obtain a first connecting characteristic and a second connecting characteristic;
performing a first convolution operation on the first connection feature to obtain a first convolution feature, and performing the first convolution operation on the second connection feature to obtain a second convolution feature;
performing a second convolution operation on the first convolution characteristic to obtain a first attention coefficient, and performing a second convolution operation on the second convolution characteristic to obtain a second attention coefficient;
the first fusion feature is obtained using the first convolution feature and the first attention coefficient, and the second fusion feature is obtained using the second convolution feature and the second attention coefficient.
9. The judgment method according to claim 8, wherein:
the method for obtaining the first fusion feature by using the first convolution feature and the first attention coefficient and obtaining the second fusion feature by using the second convolution feature and the second attention coefficient comprises the following steps:
adding the feature multiplied by the first attention coefficient and the first convolution feature to obtain the first fusion feature; adding the feature multiplied by the second convolution feature and the second attention coefficient to the second convolution feature to obtain a second fusion feature; and/or
Adding the feature multiplied by the first convolution feature and the first attention coefficient to the first convolution feature, and performing a plurality of convolution operations on the added feature to obtain a first fusion feature; and adding the feature obtained by multiplying the second convolution feature by the second attention coefficient to the second convolution feature, and performing a plurality of convolution operations on the added feature to obtain the second fusion feature.
10. An emphysema region determination device based on lung lobes is characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to execute the method for determining a region of lung lobe-based emphysema of any one of claims 1 to 9.
CN202010042872.XA 2020-01-15 2020-01-15 Pulmonary lobe-based emphysema area judging method and device Active CN111260627B (en)

Publications (2)

CN111260627A (published 2020-06-09)
CN111260627B (granted 2023-04-28)
