CN111260627B - Pulmonary lobe-based emphysema area judging method and device - Google Patents

Publication number
CN111260627B
Authority
CN
China
Prior art keywords
lung
lobe
image
feature
extracted
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010042872.XA
Other languages
Chinese (zh)
Other versions
CN111260627A (en)
Inventor
杨英健
郭英委
应立平
郭嘉琦
高宇宁
孟繁聪
康雁
Current Assignee
Northeastern University China
Original Assignee
Northeastern University China
Priority date
Filing date
Publication date
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN202010042872.XA
Publication of CN111260627A
Application granted
Publication of CN111260627B
Status: Active

Classifications

    • G06T 7/0012 Biomedical image inspection
    • G06T 7/11 Region-based segmentation
    • G06T 7/90 Determination of colour characteristics
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30061 Lung

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses a lung lobe-based emphysema region determination method and device in the field of biomedical engineering. The method comprises the following steps: extracting a lung lobe with its CT values; determining, according to the lobe's CT values and a set threshold, whether the CT values on the lobe belong to an emphysema region; if so, coloring the emphysema region and displaying it; if not, performing no coloring, or coloring the region differently from the emphysema region. The method solves the problems that, during emphysema region localization, quantitative analysis of the whole lung entails a huge data volume and slow computation, and that quantitative analysis and display of emphysema cannot be performed using the CT values of a determined, independent single lobe.

Description

Pulmonary lobe-based emphysema area judging method and device
Technical Field
The invention relates to the field of biomedical engineering, in particular to a pulmonary lobe-based emphysema area judging method and device.
Background
In the field of medical image processing, it is often necessary to reconstruct raw data and then analyze it to obtain a region of interest. For small-airway lesions or emphysema among pulmonary diseases, it is often sufficient to quantitatively analyze only the most severely affected lobes.
Consider chronic obstructive pulmonary disease (COPD). COPD is chiefly characterized by airflow limitation that is not fully reversible and typically shows a progressive worsening trend. According to the 2018 GOLD (Global Initiative for Chronic Obstructive Lung Disease) report on the diagnosis, treatment and prevention of COPD, patients are classified into four grades: GOLD 1, GOLD 2, GOLD 3 and GOLD 4. The spatial distribution of emphysema reflects the severity of airflow limitation, and researchers at home and abroad have studied the correlation between emphysema in different lobes and lung function. The two lower lobes correlate most strongly with airflow limitation, probably because gravity causes the lower-lobe airways to close earlier on exhalation; meanwhile, since exhaled gas comes mostly from the lower lobes, the diffusion function is mainly determined by the upper lobes. Furthermore, COPD tends to appear first in the upper lobe of the right lung and, as lung function deteriorates, gradually progresses toward the two lower lobes. This enables rapid localization and assessment of COPD via the upper lobe of the right lung: if no emphysema is found there, the lung can be preliminarily judged free of emphysema; if signs of emphysema are found there, the two lower lobes are examined further.
At present, quantitative analysis must be performed on the whole lung, which entails a huge data volume and slow computation, and quantitative analysis cannot be performed using the CT values of a determined, independent single lobe. In conclusion, if extraction of a single lobe can be realized, the data-volume and speed problems of whole-lung analysis are avoided, and rapid localization and assessment of COPD becomes possible.
Disclosure of Invention
In view of the above, the invention provides a lung lobe-based emphysema region determination method and device, which solve the problems that, during emphysema region localization, quantitative analysis of the whole lung entails a huge data volume and slow computation, and that quantitative analysis and display of emphysema cannot be performed using the CT values of a determined, independent single lobe.
In a first aspect, the present invention provides a method for determining an emphysema area based on lung lobes, comprising:
extracting lung lobes with CT values;
determining, according to the lung lobe with CT values and a set threshold, whether the CT values on the lobe belong to an emphysema region;
if so, coloring the emphysema region and displaying it;
if not, performing no coloring, or coloring the region differently from the emphysema region.
Preferably, the method for extracting lung lobes with CT values comprises the following steps:
acquiring a lobe segmentation image of the lung image;
determining lung lobes to be extracted;
marking the lung lobes to be extracted;
and obtaining the lung lobes to be extracted according to the lung lobes to be extracted after marking and the lung images.
Preferably, before the acquiring of the lobe segmentation image of the lung image, the method further comprises:
acquiring the lung image;
performing lung lobe segmentation on the lung image to obtain the lung lobe segmented image;
and/or
The method for obtaining the lobe segmentation image by performing lobe segmentation on the lung image comprises:
acquiring the lobe fissure features of the lung in the sagittal plane, the coronal plane and the transverse plane;
correcting the third lobe fissure feature by using the lobe fissure features of any two of the sagittal, coronal and transverse planes;
segmenting the lung image using the corrected lobe fissure feature.
Preferably, the lung lobes to be extracted are marked, and the specific method for obtaining the lung lobes to be extracted according to the marked lung lobes to be extracted and the lung image is as follows:
Obtaining a mask image according to the lung lobe segmentation image, obtaining a marked mask image according to the mask image and the mark of the lung lobe to be extracted, and multiplying the lung image by the marked mask image to obtain the lung lobe to be extracted; and/or
The method for obtaining a mask image according to the lung lobe segmentation image and obtaining a marked mask image according to the mask image and the mark of the lung lobe to be extracted comprises the following steps:
carrying out mask processing on the lobe segmentation image to obtain a mask image of each lobe, and obtaining the marked mask image according to the preset mask value of each lobe's mask image and the mark of the lobe to be extracted; setting pixels within the marked mask image to 1 and pixels of the lobe segmentation image outside the marked mask image to 0; and/or
The method for correcting the third lobe fissure feature by using the lobe fissure features of any two of the sagittal, coronal and transverse planes comprises:
mapping the features of the any two to the view in which the third lobe fissure feature resides;
and correcting the third lobe fissure feature by using the mapped lobe fissure features of the any two.
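The mapping between views described above can be illustrated as an axis permutation of a feature volume; this is a minimal sketch under an assumed (z, y, x) axis convention, not the patent's actual implementation:

```python
import numpy as np

# Assumed axis order of a CT feature volume: (axial z, coronal y, sagittal x).
# Mapping a feature volume to the view in which another feature resides can
# then be expressed as an axis permutation.
def to_view(volume, view):
    perms = {"transverse": (0, 1, 2),   # slices taken along z
             "coronal":    (1, 0, 2),   # slices taken along y
             "sagittal":   (2, 0, 1)}   # slices taken along x
    return np.transpose(volume, perms[view])

vol = np.zeros((4, 5, 6))               # toy feature volume
print(to_view(vol, "sagittal").shape)   # -> (6, 4, 5)
```

With this convention, features extracted in any two views can be brought into the third view's axis order before the correction step.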
Preferably, a mask image is obtained according to the lung lobe segmentation image, a marked mask image is obtained according to the mask image and the mark of the lung lobe to be extracted, and the mask image of the mark is multiplied by the lung image to obtain the specific method of the lung lobe to be extracted comprises the following steps:
and multiplying the mask images of the marks with the same layer number by the lung images to obtain one layer of lung lobes to be extracted, and carrying out three-dimensional reconstruction on a plurality of layers of lung lobes to be extracted to obtain three-dimensional lung lobes to be extracted.
Preferably, a mask image is obtained according to the lung lobe segmentation image, a marked mask image is obtained according to the mask image and the mark of the lung lobe to be extracted, and before the lung image is obtained by multiplying the marked mask image by the lung image, the number of layers of the lung image and the number of layers of the marked mask image are respectively determined;
judging whether the number of layers of the lung image is equal to the number of layers of the mask image of the mark;
if the number of the lung lobes is equal, multiplying the lung images by mask images of the marks with the same number of layers to obtain a layer of lung lobes to be extracted, and carrying out three-dimensional reconstruction on a plurality of layers of lung lobes to be extracted to obtain three-dimensional lung lobes to be extracted;
If not, interpolating the marked mask images to obtain mask images with the same layer number as the lung images, multiplying the lung images by the marked mask images with the same layer number to obtain a layer of lung lobes to be extracted, and carrying out three-dimensional reconstruction on a plurality of layers of lung lobes to be extracted to obtain three-dimensional lung lobes to be extracted.
Preferably, the method for correcting the third lobe fissure feature by using the mapped lobe fissure features of any two is as follows:
performing spatial attention feature fusion using each of the two mapped lobe fissure features with the third lobe fissure feature, respectively, to obtain a first fusion feature and a second fusion feature;
and obtaining the corrected third lobe fissure feature according to the first fusion feature and the second fusion feature.
Preferably, the method for performing spatial attention feature fusion using each of the two mapped lobe fissure features with the third lobe fissure feature, respectively, to obtain the first fusion feature and the second fusion feature comprises:
connecting each of the two mapped lobe fissure features with the third lobe fissure feature, respectively, to obtain a first connection feature and a second connection feature;
Performing a first convolution operation on the first connection feature to obtain a first convolution feature, and performing a first convolution operation on the second connection feature to obtain a second convolution feature;
performing a second convolution operation on the first convolution feature to obtain a first attention coefficient, and performing a second convolution operation on the second convolution feature to obtain a second attention coefficient;
the first fused feature is obtained using a first convolution feature and a first attention coefficient, and the second fused feature is obtained using a second convolution feature and a second attention coefficient.
Preferably, the method for obtaining the first fusion feature using the first convolution feature and the first attention coefficient, and the second fusion feature using the second convolution feature and the second attention coefficient, is as follows:
adding the first convolution feature to the feature obtained by multiplying the first convolution feature by the first attention coefficient, to obtain the first fusion feature; and adding the second convolution feature to the feature obtained by multiplying the second convolution feature by the second attention coefficient, to obtain the second fusion feature; and/or
adding the first convolution feature to the feature obtained by multiplying the first convolution feature by the first attention coefficient, and performing several convolution operations on the summed feature to obtain the first fusion feature; and adding the second convolution feature to the feature obtained by multiplying the second convolution feature by the second attention coefficient, and performing several convolution operations on the summed feature to obtain the second fusion feature.
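The fusion steps above (connect, convolve, derive an attention coefficient, then add the attention-weighted feature back) can be sketched in simplified form. This is an illustrative numpy sketch, not the patent's network: the 1x1 convolution is modeled as a per-pixel linear map, and a sigmoid is assumed to produce the attention coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    """1x1 convolution as a per-pixel linear map; x has shape (H, W, C_in)."""
    return x @ w

def attention_fuse(feat_mapped, feat_third, w1, w2):
    """Spatial-attention fusion sketch: connect a mapped fissure feature
    with the third view's feature, apply the first convolution, derive an
    attention coefficient via the second convolution (sigmoid assumed),
    then add the attention-weighted feature back to the convolved one."""
    cat = np.concatenate([feat_mapped, feat_third], axis=-1)  # connection feature
    conv = conv1x1(cat, w1)                                   # first convolution
    coef = 1.0 / (1.0 + np.exp(-conv1x1(conv, w2)))           # attention coefficient in (0, 1)
    return conv + conv * coef                                 # fusion feature

H, W, C = 4, 4, 3
feat_a = rng.normal(size=(H, W, C))   # mapped fissure feature (toy)
feat_c = rng.normal(size=(H, W, C))   # third view's fissure feature (toy)
w1 = rng.normal(size=(2 * C, C))      # first-convolution weights
w2 = rng.normal(size=(C, C))          # second-convolution weights
fused = attention_fuse(feat_a, feat_c, w1, w2)
print(fused.shape)  # -> (4, 4, 3)
```

Running this once per mapped view yields the first and second fusion features, from which the corrected third fissure feature would be derived.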
In a second aspect, the present invention provides a pulmonary lobe-based emphysema area determination apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored by the memory to perform the lung lobe based emphysema area determination method as described above.
The invention has at least the following beneficial effects:
the invention provides a lung lobe-based emphysema region determination method and device, which solve the problems that, during emphysema region localization, quantitative analysis of the whole lung entails a huge data volume and slow computation, and that quantitative analysis and display of emphysema cannot be performed using the CT values of a determined, independent single lobe.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following description of embodiments of the present invention with reference to the accompanying drawings, in which:
FIG. 1 is a flow chart of a method for determining emphysema area based on lung lobes according to an embodiment of the invention;
FIG. 2 is a flow chart of a method for extracting lung lobes with CT values according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a lung lobe extraction device with CT values according to an embodiment of the present invention;
FIG. 4 is an original CT image of a lung lobe extraction method and/or apparatus with CT values according to an embodiment of the present invention;
FIG. 5 is a mask image extracted by a lung lobe extraction method and/or apparatus with CT values according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an embodiment of the present invention of an extraction method and/or apparatus for extracting upper right lung lobes with CT values;
FIG. 7 is a flow chart diagram of a method for lobe segmentation based on multiple perspectives in accordance with an embodiment of the present invention;
fig. 8 is a schematic diagram of a network structure of a lung lobe segmentation method and/or apparatus based on multiple views according to an embodiment of the present invention.
Detailed Description
The present invention is described below based on examples, but it should be noted that the invention is not limited to these examples. In the following detailed description, certain specific details are set forth; however, the parts not described in detail will still be fully understood by those skilled in the art.
Furthermore, those of ordinary skill in the art will appreciate that the drawings are provided solely for the purposes of illustrating the objects, features, and advantages of the invention and that the drawings are not necessarily drawn to scale.
Meanwhile, unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise", "comprising", and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is, it is the meaning of "including but not limited to".
Fig. 1 is a flow chart of a lung lobe-based emphysema area determination method according to an embodiment of the invention. As shown in fig. 1, a method for determining an emphysema area based on lung lobes includes: step 1001: extracting lung lobes with CT values; step 1002: judging whether the CT value on the lung lobes is an emphysema area according to the lung lobes with the CT values and a set threshold value; step 1003: if so, coloring the emphysema area, and displaying the emphysema area; step 1004: if not, the coloring is not performed or the color of the coloring is different from that of the emphysema area. The method solves the problems that in the positioning process of the emphysema area, the whole lung is quantitatively analyzed to cause huge data volume and slower calculation speed, and the quantitative analysis and display of the emphysema can not be performed by using the determined or independent CT value of the single lung lobe.
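Steps 1001 to 1004 can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: the -950 HU cutoff is a commonly used emphysema threshold adopted here only as an example (the patent specifies only "a set threshold"), and the red-on-gray coloring scheme is likewise an assumption.

```python
import numpy as np

def emphysema_overlay(lobe_hu, threshold=-950.0):
    """Color voxels of an extracted lobe whose CT value falls below the
    set threshold (step 1002); emphysema voxels are painted red (step 1003),
    the rest of the lobe is left gray (step 1004).
    `lobe_hu` is an array in Hounsfield units; background voxels are NaN."""
    inside = ~np.isnan(lobe_hu)
    emphysema = inside & (lobe_hu < threshold)            # step 1002: threshold test
    # grayscale base image: map roughly [-1000, 0] HU to [0, 1]
    gray = np.clip((np.nan_to_num(lobe_hu, nan=-1000.0) + 1000.0) / 1000.0, 0, 1)
    rgb = np.stack([gray, gray, gray], axis=-1)
    rgb[emphysema] = [1.0, 0.0, 0.0]                      # step 1003: color red
    return rgb, emphysema

# toy 2-D slice: one emphysematous voxel at (0, 0), background at (1, 0)
slice_hu = np.array([[-970.0, -700.0], [np.nan, -800.0]])
rgb, mask = emphysema_overlay(slice_hu)
print(int(mask.sum()))  # -> 1
```

The same per-voxel test applied to only one extracted lobe, rather than the whole lung, is what keeps the data volume small.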
Step 1001: lung lobes with CT values are extracted.
Step 1001: extracting lung lobes with CT values is intended to extract the target lobe(s) after segmentation. The lung comprises a left lung and a right lung; the right lung comprises the upper right lobe, the middle right lobe and the lower right lobe, and the left lung comprises the upper left lobe and the lower left lobe. The lobe extracted with CT values is one or more of these five lobes.
Fig. 2 is a flow chart of a method for extracting lung lobes with CT values. As shown in fig. 2, the method comprises: step S101, acquiring a lobe segmentation image of a lung image; step S102, determining the lobes to be extracted; step S103, marking the lobes to be extracted; step S104, obtaining the lobes to be extracted according to the marked lobes and the lung image. The method solves the problems that quantitative analysis of the whole lung entails a huge data volume and slow computation, and that quantitative analysis cannot be performed using the CT values of a determined, independent single lobe. Meanwhile, because only the target lobe is extracted, subsequent three-dimensional reconstruction of a single lobe is faster, which helps the doctor observe each lobe independently and avoids lobes occluding one another. It is worth noting that the "lung lobes to be extracted" in the present invention are the target lobes.
Step S101 acquires a lobe segmented image of the lung image.
In an embodiment of the invention, a lung image is first acquired; the lung image is the original lung image, i.e., thin-slice scan data obtained from an imaging device such as a CT machine.
Step S102 determines lung lobes to be extracted.
Specifically, the lung parts are right lung and left lung, and there are 5 lobes in total, the right lung includes 3 lobes, respectively the upper right lobe, the middle right lobe, and the lower right lobe. The left lung includes 2 lobes, the upper left lobe and the lower left lobe, respectively. The invention may enable the extraction of any one or more of the 5 lung lobes.
Step S103 marks the lung lobes to be extracted.
Marking the lobes to be extracted determines which lobes will be extracted; several lobes may be marked, or only one.
Step S104 obtains the lung lobes to be extracted according to the lung lobes to be extracted after marking and the lung image.
If the lobes to be extracted have been determined and marked, lobe extraction can be performed from the lung image acquired before segmentation, as described in more detail below.
If the upper right lobe of the right lung needs to be extracted, the following operation is performed, a lung image is first acquired, and lung lobe segmentation is performed on the lung image to obtain a lung lobe segmented image. And then executing a lung lobe segmentation image for acquiring a lung image, determining that the lung lobe to be extracted is the upper right lobe of the right lung, marking the lung lobe of the upper right lobe of the right lung, and obtaining the upper right lobe of the right lung to be extracted according to the marked upper right lobe of the right lung and the lung image.
In the embodiment of the present invention, the specific method for obtaining the lung lobes to be extracted according to the lung lobes to be extracted after marking and the lung image in step S104 is as follows: and obtaining a mask image according to the lung lobe segmentation image, obtaining a marked mask image according to the mask image and the mark of the lung lobe to be extracted, and multiplying the lung image by the marked mask image to obtain the lung lobe to be extracted. The mask image of the mark is multiplied by the lung image to obtain the lung lobe to be extracted, that is, the mask image of the mark, and the lung image are the same in size (size).
That is, after the lobe segmentation image of the lung image is acquired, a mask operation is performed on each segmented lobe. In computing and digital logic, a mask is a string of binary digits that is combined with the target data by bitwise operations so as to select the required bits or pixels.
The method for obtaining a mask image according to the lung lobe segmentation image and obtaining a marked mask image according to the mask image and the mark of the lung lobe to be extracted comprises the following steps: carrying out mask processing on the lung lobe segmentation images to obtain mask images of each lung lobe, and obtaining mask images of the marks according to preset mask values of the mask images of each lung lobe and the marks of the lung lobes to be extracted; and sets 1 for pixels within the marked mask image and sets 0 for pixels of areas of the lung lobe segmentation image other than the marked mask image. Multiplying the mask image of the marker by the lung image to obtain the lung lobes to be extracted.
The mask processing is performed on the lung lobe segmentation image to obtain a mask image of each lung lobe, that is, mask processing is performed on the lung lobe segmentation image of the obtained lung image to obtain a mask image of each lung lobe in the lung image, and then the specific operation of obtaining the mask image of the mark according to the preset mask value of the mask image of each lung lobe and the mark of the lung lobe to be extracted is as follows: determining the lung lobes to be extracted according to the preset mask value of the mask image of the lung lobes to be extracted and the mark to obtain the mask image of the mark.
Specifically, when mask processing is performed on each lobe of the completed lobe segmentation image, the five lobes are distinguished by assigning the upper right lobe, middle right lobe, lower right lobe, upper left lobe and lower left lobe the preset mask values 1, 2, 3, 4 and 5, respectively. The lobes to be extracted are then marked with one or more of these preset mask values; the mark can therefore only take one or more of the values 1 to 5. For instance, if the mark is 1, it matches preset mask value 1 and indicates that the lobe to be extracted is the upper right lobe, from which the marked mask image is obtained.
It should be noted that, before obtaining the lobe to be extracted according to the marked lobe and the lung image, it is further necessary to judge whether the mark is within the range of preset mask values. If it is, the lobe to be extracted is obtained according to the marked lobe and the lung image; if it is not, a prompt is given. For example, if the mark is 6, it lies outside the range of preset mask values, and a prompt such as an error report is issued.
Further, if the mark is within the range of preset mask values, it is further judged whether the preset mask value is the same as the mark of the lobe to be extracted. If so, the pixels in the marked mask image need not be set to 1, and the lobe to be extracted is obtained directly from the marked lobe and the lung image; otherwise (if not the same), the pixels inside the marked lobe (e.g., the upper right lobe of the right lung) are set to 1, and the lobe to be extracted is then obtained from the marked lobe and the lung image. The preset mask value here is a pixel value.
For example, suppose the lobe to be extracted is determined to be the upper right lobe of the right lung, whose preset mask value is 1, and mark 1 denotes extraction of this lobe. Since preset mask value 1 already equals pixel value 1, the pixels of the upper-right-lobe mask need not be reset to 1; the pixels of the lobe segmentation image outside the marked region are set to 0 to obtain the marked mask image, which is multiplied by the lung image to obtain the lobe to be extracted.
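The masking steps above can be sketched as follows. This is a minimal illustration with hypothetical helper names; the mask values 1 to 5 follow the preset convention described in the text, and the binarize-then-multiply step assumes the mask and lung image are the same size.

```python
import numpy as np

# Preset mask values for the five lobes, as described in the text.
LOBE_LABELS = {1: "upper right", 2: "middle right", 3: "lower right",
               4: "upper left", 5: "lower left"}

def extract_lobe(lung_image, lobe_seg, mark):
    """Binarize the marked lobe in the segmentation map (1 inside the
    marked mask, 0 elsewhere) and multiply element-wise with the lung
    image of the same size to obtain the lobe to be extracted."""
    if mark not in LOBE_LABELS:  # mark must lie within the preset range
        raise ValueError(f"mark {mark} outside preset mask values 1-5")
    marked_mask = (lobe_seg == mark).astype(lung_image.dtype)
    return lung_image * marked_mask

lung = np.array([[100, 200], [300, 400]], dtype=np.int32)  # toy CT values
seg  = np.array([[1, 1], [2, 5]], dtype=np.int32)          # toy lobe labels
print(extract_lobe(lung, seg, 1).tolist())  # -> [[100, 200], [0, 0]]
```

Passing a mark such as 6 raises an error, mirroring the range check and prompt described above.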
In the embodiment of the invention, a mask image is obtained according to the lung lobe segmentation image, a marked mask image is obtained according to the mask image and the mark of the lung lobe to be extracted, and the mask image of the mark is multiplied by the lung image to obtain the lung lobe to be extracted by the specific method that: and multiplying the mask images of the marks with the same layer number by the lung images to obtain one layer of lung lobes to be extracted, and carrying out three-dimensional reconstruction on a plurality of layers of lung lobes to be extracted to obtain three-dimensional lung lobes to be extracted.
In the embodiment of the invention, a marked mask image is constructed, and the number of layers of the lung image and the number of layers of the mask image are respectively determined before the mask image is multiplied by the lung image to obtain the lung lobes to be extracted; judging whether the number of layers of the lung image is equal to the number of layers of the mask image; if the number of the lung lobes is equal, multiplying the lung images by mask images of the marks with the same number of layers to obtain a layer of lung lobes to be extracted, and carrying out three-dimensional reconstruction on a plurality of layers of lung lobes to be extracted to obtain three-dimensional lung lobes to be extracted; if not, interpolating the mask images to obtain mask images with the same number of layers as the lung images, multiplying the lung images by the mask images with the same number of layers to obtain a layer of lung lobes to be extracted, and carrying out three-dimensional reconstruction on a plurality of layers of lung lobes to be extracted to obtain three-dimensional lung lobes to be extracted.
For example, the lung image is an original image acquired from the imaging device with 400 layers, the mask image also has 400 layers, and the layers of the two correspond one to one. Mask processing is performed on the lung lobe segmentation image to obtain a mask image of each lung lobe, the marked mask image is obtained from the preset mask values of these mask images and the mark of the lung lobe to be extracted, pixels inside the marked mask image are set to 1, and pixels of the lung lobe segmentation image outside the marked mask image are set to 0. The first layer of the lung image is multiplied by the first layer of the marked mask image to obtain the first layer of the lung lobe to be extracted, and so on up to the 400th layer; the 400 layers of the lung lobe to be extracted are then reconstructed three-dimensionally to obtain the three-dimensional lung lobe to be extracted. Methods of three-dimensional reconstruction (3D reconstruction) are known in the art and can be freely selected by those skilled in the art as desired.
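When the layer counts differ, the interpolation step might look like the following sketch (nearest-neighbour resampling along the layer axis is an assumption; the patent does not fix a particular interpolation scheme):

```python
import numpy as np

def match_mask_layers(lung_volume, mask_volume):
    """Return a mask volume with as many layers as the lung volume,
    resampling along the layer axis by nearest-neighbour interpolation
    when the counts differ."""
    n_lung, n_mask = lung_volume.shape[0], mask_volume.shape[0]
    if n_lung == n_mask:
        return mask_volume
    # map each lung layer index onto the nearest mask layer index
    idx = np.round(np.linspace(0, n_mask - 1, n_lung)).astype(int)
    return mask_volume[idx]

lung = np.ones((4, 2, 2))                             # 4 layers
mask = np.stack([np.zeros((2, 2)), np.ones((2, 2))])  # only 2 layers
mask_resampled = match_mask_layers(lung, mask)
lobe = lung * mask_resampled                          # layer-wise product
```

Nearest-neighbour resampling keeps the mask binary, which a linear scheme would not; a smoother interpolation would need a subsequent threshold.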
Before the acquiring of the lung lobe segmentation image of the lung image, the method further comprises: acquiring the lung image; and performing lung lobe segmentation on the lung image to obtain the lung lobe segmentation image. The lung lobe segmentation may be based on a traditional method or on deep learning, for example performing lobe segmentation with a U-net or V-net segmentation network, or with the PDV network proposed in the paper "Automatic segmentation of pulmonary lobes using a progressive dense V-network".
As the method for obtaining the lung lobe segmentation image, the multi-view lung lobe segmentation method and device may be selected, which avoids the loss of information, and hence the inaccurate lobe segmentation, that results when the information of the other views is not fully exploited during segmentation.
Specifically, the multi-view lung lobe segmentation method and device comprise the following steps: acquiring the lung lobe fissure features of the lung image under the sagittal plane, the coronal plane and the transverse plane; and correcting the third lung lobe fissure feature by using the fissure features of any two of the sagittal plane, the coronal plane and the transverse plane. See in particular the detailed description of figs. 7 and 8.
Meanwhile, the invention also provides a lung lobe extraction device with CT values, as shown in fig. 3, comprising: an acquisition unit 201 for acquiring a lung lobe segmentation image of a lung image; a determination unit 202 for determining the lung lobe to be extracted; a marking unit 203 for marking the lung lobe to be extracted; and an extraction unit 204 for obtaining the lung lobe to be extracted from the marked lung lobe to be extracted and the lung image. The acquisition unit 201 is connected to the determination unit 202 and the extraction unit 204, respectively; the determination unit 202 is further connected to the marking unit 203; and the marking unit 203 is further connected to the extraction unit 204. The device solves the problems that quantitative analysis currently has to be performed on the whole lung, so that the data volume is huge and the calculation speed is low, and that quantitative analysis cannot be performed on the CT values of a determined, individual lung lobe. Meanwhile, because the invention extracts only the lung lobe to be extracted, the subsequent three-dimensional reconstruction of a single lung lobe is faster, which helps the doctor observe each lung lobe individually and avoids lobes occluding one another. It is worth noting that the lung lobe to be extracted in the present invention means the lung lobe that is to be extracted. Reference is made in particular to the description of the lung lobe extraction method with CT values.
In fig. 3, the lung lobe extraction device with CT values according to the present invention further includes a segmentation unit. The segmentation unit is connected to the acquisition unit 201 and is configured to acquire the lung image and perform lung lobe segmentation on it to obtain the lung lobe segmentation image. And/or the segmentation unit performs the following operations: acquiring the lung lobe fissure features of the lung image under the sagittal plane, the coronal plane and the transverse plane; correcting the third lung lobe fissure feature by using the fissure features of any two of the three planes; and segmenting the lung image using the corrected lung lobe fissure feature.
That is, before the acquiring of the lung lobe segmentation image of the lung image, the method further comprises: acquiring the lung image; and performing lung lobe segmentation on the lung image to obtain the lung lobe segmentation image. The lung lobe segmentation may be based on a traditional method or on deep learning, for example performing lobe segmentation with a U-net or V-net segmentation network, or with the PDV network proposed in the paper "Automatic segmentation of pulmonary lobes using a progressive dense V-network".
As the method for obtaining the lung lobe segmentation image, the multi-view lung lobe segmentation method and device may be selected, which avoids the loss of information, and hence the inaccurate lobe segmentation, that results when the information of the other views is not fully exploited during segmentation.
Specifically, the multi-view lung lobe segmentation method and device comprise the following steps: acquiring the lung lobe fissure features of the lung image under the sagittal plane, the coronal plane and the transverse plane; and correcting the third lung lobe fissure feature by using the fissure features of any two of the sagittal plane, the coronal plane and the transverse plane. See in particular the detailed description of figs. 7 and 8.
In fig. 3, the extraction unit 204 of the lung lobe extraction device with CT values according to the present invention comprises a marked-mask-image construction unit and a pixel multiplication unit. The marked-mask-image construction unit is connected to the acquisition unit 201, the marking unit 203 and the pixel multiplication unit, respectively, and is used for obtaining a mask image from the lung lobe segmentation image and obtaining the marked mask image from the mask image and the mark of the lung lobe to be extracted. The pixel multiplication unit is used for multiplying the lung image by the marked mask image to obtain the lung lobe to be extracted. For the marked mask image to be multiplied by the lung image, the two must be of the same size.
In fig. 3, to obtain a mask image from the lung lobe segmentation image, obtain the marked mask image from the mask image and the mark of the lung lobe to be extracted, and multiply the lung image by the marked mask image to obtain the lung lobe to be extracted, the marked-mask-image construction unit performs the following operations: mask processing is performed on the lung lobe segmentation image to obtain a mask image of each lung lobe, and the marked mask image is obtained from the preset mask values of these mask images and the mark of the lung lobe to be extracted; pixels inside the marked mask image are set to 1 and pixels of the lung lobe segmentation image outside the marked mask image are set to 0; the marked mask image is then multiplied by the lung image to obtain the lung lobe to be extracted.
In an embodiment of the present invention, the operation performed to obtain the mask image from the lung lobe segmentation image is: mask processing is performed on the lung lobe segmentation image of the acquired lung image to obtain a mask image of each lung lobe. That is, after the lung lobe segmentation image of the lung image is acquired, a masking operation is performed on each segmented lung lobe. In computer science and digital logic, a mask is a string of binary digits that, combined with a target value by bitwise operations, screens out specified bits.
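The bitwise masking referred to here can be shown in a few lines of plain Python (the values are arbitrary):

```python
value = 0b10110110
mask = 0b00001111        # a string of binary digits selecting the low 4 bits
low_bits = value & mask  # bitwise AND keeps only the masked bits
cleared = value & ~mask  # the complement of the mask clears those bits instead
```

In the image setting, the same idea is applied per pixel: a binary mask image keeps or zeroes each pixel of the target image.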
In the embodiment of the invention, specifically, mask processing is performed on the lung lobe segmentation image of the acquired lung image to obtain a mask image of each of the 5 lung lobes in the lung image, and the marked mask image is obtained by matching the mark of the lung lobe to be extracted against the preset mask values of these mask images. Performing the masking operation on each lung lobe of the lung lobe segmentation image completes the region localization of the 5 lung lobes, distinguishing the upper right lobe, the middle right lobe, the lower right lobe, the upper left lobe and the lower left lobe, whose regions may be assigned the preset mask values 1, 2, 3, 4 and 5, respectively. The mark of the lung lobe to be extracted is then one or more values selected from the preset mask values 1 to 5.
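Assuming the segmentation image is an integer label map carrying the preset mask values 1 to 5 described above, building the marked mask image can be sketched as follows (names and toy data are illustrative):

```python
import numpy as np

# toy 2-D slice of a lobe segmentation image; 0 = background,
# 1-5 = upper right, middle right, lower right, upper left, lower left
segmentation = np.array([[1, 1, 2],
                         [3, 4, 4],
                         [5, 5, 0]])

def marked_mask(segmentation, marks):
    """Binary marked mask image: 1 inside the lobes whose preset mask
    value is listed in `marks` (one or several lobes), 0 elsewhere."""
    return np.isin(segmentation, sorted(marks)).astype(np.uint8)

upper_right = marked_mask(segmentation, {1})   # mark 1 only
both_left = marked_mask(segmentation, {4, 5})  # several lobes at once
```

Passing several marks at once covers the case where the lung lobe to be extracted is a selection of more than one of the five lobes.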
In the embodiment of the invention, the marked mask image is determined from the preset mask values of the mask images and the mark of the lung lobe to be extracted. Specifically, the mark of the lung lobe to be extracted is obtained by marking that lobe, and the marked mask image is determined from the preset mask values 1, 2, 3, 4 and 5 of the upper right, middle right, lower right, upper left and lower left lobes together with the mark of the lung lobe to be extracted; the mark can only take one or more of the values 1 to 5.
For example, if the lung lobe to be extracted is the upper right lobe, the mark of the lung lobe to be extracted is 1.
It should be noted that, before obtaining the lung lobe to be extracted from the marked lung lobe and the lung image, it is further necessary to judge whether the mark lies within the range of preset mask values. If it does, the lung lobe to be extracted is obtained from the marked lung lobe and the lung image; if it does not, a prompt is given. For example, if the mark is 6, it is outside the range of preset mask values and a prompt, such as an error report, is issued.
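The range check described above can be sketched as follows (the error prompt is reduced to raising an exception; names are illustrative):

```python
PRESET_MASK_VALUES = range(1, 6)  # the five lobe labels, 1 to 5

def check_mark(mark):
    """Return True when the mark lies within the preset mask value range;
    otherwise raise an error, standing in for the prompt in the text."""
    if mark in PRESET_MASK_VALUES:
        return True
    raise ValueError(f"mark {mark} is outside the preset mask value range 1-5")
```

For instance, `check_mark(3)` passes, while `check_mark(6)` triggers the error report mentioned above.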
Further, if the mark is within the range of preset mask values, it is additionally judged whether the preset mask value of the marked lobe is already the pixel value 1. If so, there is no need to set the pixels inside the marked mask image to 1, and the lung lobe to be extracted is obtained directly from the marked lung lobe and the lung image; otherwise (if not), the pixels inside the marked lobe region, for example the upper right lobe of the right lung, are first set to 1, and the lung lobe to be extracted is then obtained from the marked lung lobe and the lung image.
For example, suppose the lung lobe to be extracted is determined to be the upper right lobe of the right lung, whose preset mask value is 1, so that the mark 1 designates the upper right lobe for extraction. Since the preset mask value 1 already equals the pixel value 1, there is no need to set the pixels inside the mask image of the upper right lobe to 1; it suffices to set the pixels of the lung lobe segmentation image outside the marked region to 0 to obtain the marked mask image, and to multiply the marked mask image by the lung image to obtain the lung lobe to be extracted. Here the preset mask value plays the role of a pixel value.
If the lung lobe to be extracted is the middle right lobe, the lower right lobe, the upper left lobe or the lower left lobe, the pixels inside the marked mask image are set to 1, the pixels of the lung lobe segmentation image outside the marked mask image are set to 0, and the lung image is multiplied by the marked mask image to obtain the lung lobe to be extracted.
In fig. 3, the extraction unit 204 of the lung lobe extraction device with CT values according to the present invention further comprises a judging unit. The judging unit is connected to the marked-mask-image construction unit and the pixel multiplication unit, respectively, and is used for judging whether the number of layers of the lung image equals the number of layers of the mask image. If they are equal, the lung image of each layer is multiplied by the marked mask image of the same layer to obtain one layer of the lung lobe to be extracted, and the layers are reconstructed three-dimensionally to obtain the three-dimensional lung lobe to be extracted. If they are not equal, the mask images are interpolated to obtain mask images with the same number of layers as the lung image, after which the layer-by-layer multiplication and three-dimensional reconstruction proceed as above.
In addition, the present invention also proposes a storage medium storing a computer program of the lung lobe extraction method with CT values; when a processor executes the program, the following steps are realized: acquiring a lung lobe segmentation image of the lung image; determining the lung lobe to be extracted; marking the lung lobe to be extracted; and obtaining the lung lobe to be extracted from the marked lung lobe and the lung image.
Fig. 4 shows an original CT image used by the lung lobe extraction method and/or device with CT values according to an embodiment of the present invention; fig. 5 shows a mask image extracted by the lung lobe extraction method and/or device with CT values according to an embodiment of the present invention; fig. 6 is a schematic diagram of the extraction of the upper right lung lobe with CT values according to an embodiment of the present invention.
Fig. 7 is a flowchart illustrating the multi-view lung lobe segmentation method according to an embodiment of the present invention, and fig. 8 is a schematic diagram of the network structure of the multi-view lung lobe segmentation method and/or device according to an embodiment of the present invention. As shown in figs. 7 and 8, the execution subject of the multi-view lung lobe segmentation method provided in the embodiments of the present disclosure may be any image processing apparatus; for example, the method may be executed by a terminal device or a server, where the terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device or the like. The server may be a local server or a cloud server. In some possible implementations, the multi-view lung lobe segmentation method may be implemented by a processor invoking computer-readable instructions stored in a memory.
As shown in fig. 7, the multi-view lung lobe segmentation method, or the segmentation unit, of an embodiment of the present disclosure includes: Step 101: acquiring the lung lobe fissure features of the lung image under the sagittal plane, the coronal plane and the transverse plane. In some possible embodiments, the lung lobe fissure features of the lung images at the different views may be extracted by a feature extraction process. A lung lobe fissure feature is a feature used for segmenting each lung lobe region in a lung image.
The embodiment of the disclosure may perform feature extraction on the lung images under the sagittal, coronal and transverse views respectively to obtain the fissure features of the lung images under the corresponding views, namely the lung lobe fissure features under the sagittal plane, under the coronal plane and under the transverse plane. In embodiments of the present disclosure, the lung lobe fissure feature at each view may be represented in the form of a matrix or vector whose feature values correspond to the pixels of the lung image at that view.
In some possible implementations, the disclosed embodiments may obtain the lung images at the different views by CT (computed tomography) imaging. Correspondingly, a plurality of tomographic images, i.e. lung images, is obtained at each view, and the plurality of lung images at the same view can be assembled into a three-dimensional lung image. For example, the lung images at the same view may be stacked, or linear fitting or surface fitting may be performed, to obtain the three-dimensional lung image.
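The stacking variant can be written directly (synthetic slice contents; NumPy assumed):

```python
import numpy as np

# five synthetic tomographic layers of one view, each a 3x3 image
slices = [np.full((3, 3), layer_index) for layer_index in range(5)]

# stacking the per-layer images along a new first axis yields the
# three-dimensional lung image for that view
volume = np.stack(slices, axis=0)  # shape (5, 3, 3)
```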
In some possible implementations, the feature extraction may be performed by a feature extraction neural network. For example, a neural network may be trained so that it accurately extracts the lung lobe fissure features of a lung image and performs lobe segmentation from the resulting features. When the lung lobe segmentation accuracy exceeds an accuracy threshold, the fissure features produced by the neural network meet the accuracy requirement; at that point the network layers that perform segmentation can be removed, and the retained part of the network serves as the feature extraction neural network of the embodiment of the disclosure. The feature extraction neural network may be a convolutional neural network such as a residual network, a feature pyramid network or a U-network; these are merely examples and are not specifically limited in the present disclosure.
Step 102: correcting the third lung lobe fissure feature by using the lung lobe fissure features of any two of the sagittal plane, the coronal plane and the transverse plane.
In some possible embodiments, once the lung lobe fissure features at the three views are obtained, the fissure features at any two of the views may be used to correct the fissure feature at the remaining, third view, improving the accuracy of that third fissure feature.
In one example, embodiments of the present disclosure may use the lung lobe fissure features at the coronal and transverse views to correct the fissure feature at the sagittal view. In other embodiments, any two of the three views may likewise be used to correct the remaining fissure feature. For ease of description, the following examples describe correcting the third lung lobe fissure feature by means of a first lung lobe fissure feature and a second lung lobe fissure feature, where the first, second and third fissure features correspond respectively to the fissure features at the three views of an embodiment of the present disclosure.
In some possible embodiments, the first and second lung lobe fissure features may be mapped into the view of the third lung lobe fissure feature, and feature fusion may be performed between the two mapped fissure features and the third fissure feature to obtain the corrected fissure feature.
Step 103: the lung image is segmented using the corrected lung lobe slit features.
In some possible embodiments, lung lobe segmentation may be performed directly from the corrected lung lobe fissure feature to obtain the segmentation result. Alternatively, in other embodiments, feature fusion may first be performed on the corrected fissure feature and the third fissure feature, and lung lobe segmentation performed on the fusion result. The segmentation result may include the position information of each identified partition in the lung image. For example, the lung image may include five lung lobe regions, namely the upper right lobe, the middle right lobe, the lower right lobe, the upper left lobe and the lower left lobe, and the segmentation result may include the position information of each of the five lobes in the lung image. The embodiment of the present disclosure may represent the segmentation result by means of a mask feature, that is, the segmentation result may be a feature expressed as a mask. For example, unique mask values (set mask values), such as 1, 2, 3, 4 and 5, may be assigned to the five lung lobe regions respectively, and the region formed by each mask value is the position region of the corresponding lobe. These mask values are merely exemplary, and other mask values may be configured in other embodiments.
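Reading position information out of such a mask-valued segmentation result might look like this sketch (toy 2-D data; names are illustrative):

```python
import numpy as np

# a segmentation result encoded with mask values: 1-5 mark the five
# lobe regions, 0 is background (toy 2-D example)
result = np.array([[1, 1, 0],
                   [2, 3, 3],
                   [0, 3, 5]])

def lobe_positions(result, mask_value):
    """Row/column coordinates of the region carrying one mask value,
    i.e. the position information of the corresponding lobe."""
    return np.argwhere(result == mask_value)
```

For instance, `lobe_positions(result, 3)` returns the three coordinates of the toy lower-right-lobe region.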
Based on the above embodiment, the lung lobe fissure features under the three views can be fully fused, which increases the information content and accuracy of the corrected fissure feature and thereby further improves the accuracy of the lung lobe segmentation result.
In order to describe the embodiments of the present disclosure in detail, each process of the embodiments of the present disclosure is described below.
In an embodiment of the disclosure, the lung lobe fissure features of the lung image under the sagittal plane, the coronal plane and the transverse plane are acquired as follows:

obtaining multi-sequence lung images under the sagittal, coronal and transverse planes; and extracting the lung lobe fissure features of the multi-sequence lung images under each of the three planes respectively, obtaining the fissure features under the sagittal plane, under the coronal plane and under the transverse plane.
The embodiment of the disclosure may first acquire multi-sequence lung images under the three views. As described in the above embodiment, the multi-layer lung images (multi-sequence images) at the different views may be acquired by CT imaging, and a three-dimensional lung image may be obtained from the multi-layer images at each view.
Once the multi-sequence lung images at the three views are obtained, feature extraction may be performed on each lung image, for example through the feature extraction neural network described above, yielding the lung lobe fissure features of the images at the three views, i.e. the fissure features under the sagittal plane, under the coronal plane and under the transverse plane. Since each view may comprise a plurality of lung images, the embodiment of the disclosure may run the feature extraction of the several lung images in parallel through several feature extraction neural networks, improving feature extraction efficiency.
As shown in fig. 8, the network performing the feature extraction in the embodiment of the present disclosure may be a U-network (U-net), or another convolutional neural network capable of feature extraction.
Once the lung lobe fissure features of the lung image at each view are obtained, the third fissure feature may be corrected using the fissure features of any two of the sagittal, coronal and transverse planes. The process may include: mapping the fissure features of the any two views into the view of the third fissure feature; and correcting the third fissure feature using the two mapped fissure features.
For ease of description, the correction of the third lung lobe fissure feature will be described below taking the first and second lung lobe fissure features as examples.
Because the extracted lung lobe fissure features differ across views, embodiments of the present disclosure may map the fissure features at the three views into one view. The mapping of the fissure features of any two views into the view of the third fissure feature includes: mapping the fissure features of the multi-sequence lung images of any two of the sagittal, coronal and transverse planes into the view of the third fissure feature. That is, the first and second fissure features may be mapped into the view of the third fissure feature. Through this view conversion, the mapped fissure features fuse the feature information of the views before mapping.
As described in the embodiments above, embodiments of the present disclosure may obtain a plurality of lung images at each view, corresponding to a plurality of lung lobe fissure features, and each feature value in a fissure feature corresponds one-to-one to a pixel of the corresponding lung image.
According to the embodiment of the disclosure, the position mapping of each pixel of a lung image from one view into another view can be determined from the three-dimensional lung image formed by the plurality of lung images at one view. For example, if a certain pixel moves from a first position at a first view to a second position at a second view, the feature value at the first position under the first view is mapped to the second position. Through this embodiment, the mapping between the lung lobe fissure features of the lung images at the different views can be realized.
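One concrete way to realize this per-pixel position mapping is an axis permutation of the feature volume; the axis convention below is an assumption for illustration, not fixed by the patent:

```python
import numpy as np

# With the volume axes ordered (transverse layer, row, column),
# transverse slices index axis 0 and sagittal slices index axis 2,
# so mapping features between the two views is a transpose.
features_transverse = np.arange(24).reshape(2, 3, 4)        # (layers, H, W)
features_sagittal = features_transverse.transpose(2, 0, 1)  # sagittal layers first

# the feature value at (layer z, row y, column x) in the transverse view
# reappears at the permuted index in the sagittal view
z, y, x = 1, 2, 3
```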
In some possible embodiments, once the fissure features of the three views are mapped into the same view, the two mapped fissure features may be used to correct the third fissure feature, increasing the information content and accuracy of the third fissure feature.
In an embodiment of the disclosure, the correction of the third lung lobe fissure feature using the two mapped fissure features proceeds as follows:

spatial attention feature fusion is performed between each of the two mapped fissure features and the third fissure feature, yielding a first fused feature and a second fused feature; and the corrected third fissure feature is obtained from the first fused feature and the second fused feature.
Embodiments of the present disclosure may refer to the mapped first fissure feature as the first mapping feature and to the mapped second fissure feature as the second mapping feature. Once the first and second mapping features are obtained, spatial attention feature fusion between the first mapping feature and the third fissure feature yields the first fused feature, and spatial attention feature fusion between the second mapping feature and the third fissure feature yields the second fused feature.
The spatial attention feature fusion that produces the first and second fused features from the two mapped fissure features and the third fissure feature comprises the following steps:

connecting each of the two mapped fissure features with the third fissure feature to obtain a first connection feature and a second connection feature; performing a first convolution operation on the first connection feature to obtain a first convolution feature, and on the second connection feature to obtain a second convolution feature; performing a second convolution operation on the first convolution feature to obtain a first attention coefficient, and on the second convolution feature to obtain a second attention coefficient; and obtaining the first fused feature from the first convolution feature and the first attention coefficient, and the second fused feature from the second convolution feature and the second attention coefficient.
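The four steps above can be sketched with 1x1 convolutions standing in for the 3x3 convolution, batch normalization and ReLU of the full network (random weights, NumPy only; a simplified illustration, not the trained module):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spatial_attention_fuse(mapped_feat, third_feat, w1, w2):
    """Connect along channels, convolve (here 1x1, i.e. a per-pixel channel
    mix plus ReLU), derive per-position attention coefficients in [0, 1]
    with a second convolution plus sigmoid, and weight the convolution
    feature by the coefficients."""
    connected = np.concatenate([mapped_feat, third_feat], axis=0)    # (C, H, W)
    conv = np.maximum(0.0, np.einsum('oc,chw->ohw', w1, connected))  # (C/2, H, W)
    attention = sigmoid(np.einsum('oc,chw->ohw', w2, conv))          # coefficients
    return attention * conv                                          # fused feature

C, H, W = 4, 5, 5
mapped_feat = rng.normal(size=(C // 2, H, W))  # e.g. the first mapping feature
third_feat = rng.normal(size=(C // 2, H, W))   # the third fissure feature
w1 = rng.normal(size=(C // 2, C))              # first 1x1 convolution, C -> C/2
w2 = rng.normal(size=(C // 2, C // 2))         # second 1x1 convolution
fused = spatial_attention_fuse(mapped_feat, third_feat, w1, w2)
```

The second fused feature is obtained identically with the second mapping feature in place of the first.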
In some possible implementations, as shown in fig. 8, considering that lung lobe fissure features at different positions differ in importance, the disclosed embodiments may adopt a spatial attention mechanism and perform the above fusion through a spatial attention network module. The attention-based convolution can be realized by a spatial attention neural network, which further highlights the important features in the resulting fused feature. During training, the spatial attention neural network adaptively learns the importance of each spatial position and forms an attention coefficient for the feature at each position; the coefficient may take a value in the interval [0, 1], and the larger the coefficient, the more important the feature at the corresponding position.
In performing the spatial attention fusion, a connection process may be performed on the first mapping feature and the third lung lobe slit feature to obtain a first connection feature, and on the second mapping feature and the third lung lobe slit feature to obtain a second connection feature, where the connection process may be concatenation in the channel direction. In an embodiment of the present disclosure, the dimensions of the first mapping feature, the second mapping feature, and the third lung lobe slit feature may all be expressed as (C/2, H, W), where C represents the number of channels of each feature, H the height, and W the width. Correspondingly, the dimensions of the first connection feature and the second connection feature obtained by the connection process may be expressed as (C, H, W).
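The channel-direction connection described above can be sketched as follows; this is a minimal illustration with hypothetical sizes (C = 8, H = W = 4), not the disclosed network's actual configuration:

```python
import numpy as np

# Hypothetical scales: C = 8 channels in total, H = W = 4.
C, H, W = 8, 4, 4
first_mapping = np.zeros((C // 2, H, W))   # a mapped lung lobe slit feature, (C/2, H, W)
third_feature = np.zeros((C // 2, H, W))   # the third lung lobe slit feature, (C/2, H, W)

# Connection (concatenation) in the channel direction yields (C, H, W).
first_connection = np.concatenate([first_mapping, third_feature], axis=0)
print(first_connection.shape)  # (8, 4, 4)
```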
When the first connection feature and the second connection feature are obtained, a first convolution operation may be performed on each of them, for example through a 3×3 convolution kernel in convolution layer A, followed by batch normalization (BN) and an activation function (ReLU), to obtain a first convolution feature corresponding to the first connection feature and a second convolution feature corresponding to the second connection feature. The scale of the first and second convolution features may be expressed as (C/2, H, W); the first convolution operation reduces the number of parameters in the feature map and thus the subsequent computation cost.
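As an illustration, the first convolution operation (convolution layer A) can be sketched in PyTorch; the channel count and spatial size here are assumptions for demonstration only:

```python
import torch
import torch.nn as nn

C = 16  # assumed channel count of a connection feature
# First convolution operation: a 3x3 convolution halving the channels,
# followed by batch normalization and ReLU (convolution layer A in fig. 8).
first_conv = nn.Sequential(
    nn.Conv2d(C, C // 2, kernel_size=3, padding=1),
    nn.BatchNorm2d(C // 2),
    nn.ReLU(inplace=True),
)
connection_feature = torch.randn(2, C, 16, 16)   # (N, C, H, W)
conv_feature = first_conv(connection_feature)    # -> (N, C/2, H, W)
assert conv_feature.shape == (2, C // 2, 16, 16)
```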
In some possible embodiments, once the first convolution feature and the second convolution feature are obtained, a second convolution operation and sigmoid function processing may be performed on each, to obtain the corresponding first attention coefficient and second attention coefficient. The first attention coefficient represents the importance of each element of the first convolution feature, and the second attention coefficient represents the importance of each element of the second convolution feature.
As shown in fig. 8, the second convolution operation may be performed on either the first convolution feature or the second convolution feature using two convolution layers B and C: convolution layer B applies a 1×1 convolution kernel followed by batch normalization (BN) and an activation function (ReLU) to obtain a first intermediate feature map, whose scale may be denoted (C/8, H, W); convolution layer C then applies a 1×1 convolution to the first intermediate feature map to obtain a second intermediate feature map of scale (1, H, W). Further, a sigmoid activation may be applied to the second intermediate feature map to obtain the attention coefficient corresponding to the first or second convolution feature, where the coefficient values lie in the range [0,1].
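The attention coefficient head (convolution layers B and C plus the sigmoid) can be sketched as follows; the concrete channel counts are assumptions chosen so that C/8 is an integer:

```python
import torch
import torch.nn as nn

C = 16  # assumed total channel count, so the convolution features have C/2 channels
attention_head = nn.Sequential(
    nn.Conv2d(C // 2, C // 8, kernel_size=1),  # convolution layer B: 1x1, C/2 -> C/8
    nn.BatchNorm2d(C // 8),
    nn.ReLU(inplace=True),
    nn.Conv2d(C // 8, 1, kernel_size=1),       # convolution layer C: 1x1, C/8 -> 1
    nn.Sigmoid(),                              # squashes the coefficient map into [0, 1]
)
conv_feature = torch.randn(2, C // 2, 16, 16)
attention_coefficient = attention_head(conv_feature)  # single-channel map, (N, 1, H, W)
assert attention_coefficient.shape == (2, 1, 16, 16)
assert 0.0 <= float(attention_coefficient.min()) and float(attention_coefficient.max()) <= 1.0
```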
In this way, the second convolution operation performs dimension reduction on the first convolution feature and the second convolution feature to obtain a single-channel attention coefficient.
In some possible embodiments, once the first attention coefficient corresponding to the first convolution feature and the second attention coefficient corresponding to the second convolution feature are obtained, the first convolution feature may be multiplied by the first attention coefficient and the product added to the first convolution feature to obtain the first fusion feature. Likewise, the second convolution feature may be multiplied by the second attention coefficient and the product added to the second convolution feature to obtain the second fusion feature. Here the product process (mul) multiplies corresponding elements and the feature addition (add) adds corresponding elements. In this way, the features under the three view angles can be fused effectively.
Alternatively, in other embodiments, the feature obtained by multiplying the first convolution feature by the first attention coefficient may be added to the first convolution feature, and several convolution operations then performed on the summed feature to obtain the first fusion feature; likewise, the feature obtained by multiplying the second convolution feature by the second attention coefficient may be added to the second convolution feature, and several convolution operations performed on the summed feature to obtain the second fusion feature. In this way, the accuracy of the fusion features and the amount of fused information can be further improved.
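The element-wise fusion arithmetic described above (multiply, then add as a residual connection) can be sketched with hypothetical arrays; the sizes and random values are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
conv_feature = rng.standard_normal((8, 4, 4))                      # (C/2, H, W)
attention = 1.0 / (1.0 + np.exp(-rng.standard_normal((1, 4, 4))))  # single-channel coefficient in (0, 1)

# Element-wise product (mul) followed by element-wise addition (add):
# a residual connection that re-weights every spatial position by its coefficient.
fused = conv_feature + conv_feature * attention
assert fused.shape == conv_feature.shape
```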
In the case of obtaining the first fusion feature and the second fusion feature, a corrected third lung lobe slit feature may be obtained using the first fusion feature and the second fusion feature.
In some possible embodiments, since the first fusion feature and the second fusion feature each contain feature information from the three views, they may be connected directly and a third convolution operation performed on the connected feature to obtain the corrected third lung lobe slit feature. Alternatively, the first fusion feature, the second fusion feature, and the third lung lobe slit feature may be connected, and the third convolution operation performed on the connected feature to obtain the corrected third lung lobe slit feature.
The third convolution operation may include a grouped convolution process, which further fuses the feature information in each feature. As shown in fig. 8, the third convolution operation of embodiments of the present disclosure may include a grouped convolution D (depthwise conv); the grouped convolution can increase the convolution speed while improving the accuracy of the convolution features.
When the corrected third lung lobe slit feature is obtained through the third convolution operation, the lung image may be segmented using the corrected lung lobe slit feature. The embodiments of the disclosure may obtain the segmentation result corresponding to the corrected feature by convolution. As shown in fig. 8, the corrected lung lobe slit feature may be input into convolution layer E and a standard 1×1 convolution performed to obtain the segmentation result of the lung image. As described in the above embodiments, the segmentation result may represent the respective location areas of the five lung lobes in the lung image; in fig. 8 the lung lobe areas are distinguished by filled colors.
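The correction and segmentation head (grouped convolution D followed by the 1×1 convolution E) can be sketched as follows; the channel count and the number of output classes are assumptions (five lobes plus background), not values stated in the disclosure:

```python
import torch
import torch.nn as nn

C = 16           # assumed channel count of the connected fusion features
NUM_CLASSES = 6  # e.g. five lung lobes plus background (an assumption)
head = nn.Sequential(
    # Convolution layer D: grouped (depthwise) convolution, groups == channels.
    nn.Conv2d(C, C, kernel_size=3, padding=1, groups=C),
    # Convolution layer E: standard 1x1 convolution producing per-lobe scores.
    nn.Conv2d(C, NUM_CLASSES, kernel_size=1),
)
fused = torch.randn(1, C, 16, 16)
segmentation_logits = head(fused)
assert segmentation_logits.shape == (1, NUM_CLASSES, 16, 16)
```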
With this configuration, the multi-view lung lobe segmentation method provided by the embodiments of the disclosure addresses the technical problem that segmenting lung lobes without fully utilizing information from other view angles loses information and cannot segment the lobes accurately.
As described in the above embodiments, the embodiments of the present disclosure may be implemented by a neural network, and as shown in fig. 8, the neural network performing the lobe segmentation method under multiple views of the embodiments of the present disclosure may include a feature extraction neural network, a spatial attention neural network, and a segmentation network (including convolution layers D and E).
Embodiments of the present disclosure may include three feature extraction neural networks for extracting the lung lobe slit features at the different view angles, referred to as a first branch network, a second branch network, and a third branch network. The three branch networks have identical structures but different input images: for example, a sagittal-plane lung image sample is input to the first branch network, a coronal-plane lung image sample to the second branch network, and a transverse-plane lung image sample to the third branch network, each performing feature extraction on the lung image samples at its respective view angle.
Specifically, in an embodiment of the present disclosure, the process of training the feature extraction neural network includes:
acquiring training samples under the sagittal plane, the coronal plane, and the transverse plane, where the training samples are lung image samples with labeled lung lobe slit features; performing feature extraction on the sagittal-plane lung image sample with the first branch network to obtain a first predicted lung lobe slit feature; performing feature extraction on the coronal-plane lung image sample with the second branch network to obtain a second predicted lung lobe slit feature; performing feature extraction on the transverse-plane lung image sample with the third branch network to obtain a third predicted lung lobe slit feature; and obtaining the network losses of the first, second, and third branch networks from the first, second, and third predicted lung lobe slit features and the corresponding labeled lung lobe slit features, and adjusting the parameters of the three branch networks using these network losses.
As described above, the feature extraction of the lung image samples at the sagittal, coronal, and transverse view angles is performed by the first, second, and third branch networks respectively, yielding the predicted features: the first, second, and third predicted lung lobe slit features.
When each predicted lung lobe slit feature is obtained, the network losses of the first, second, and third branch networks may be computed from the corresponding predicted and labeled lung lobe slit features. For example, the loss function of embodiments of the present disclosure may be a logarithmic loss function: the network loss of the first branch network is obtained from the first predicted feature and the labeled real lung lobe slit feature, and the losses of the second and third branch networks are obtained likewise.
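The per-branch logarithmic loss can be sketched as follows; the tensor shapes, random data, and binary cross-entropy form are assumptions for illustration, not the disclosed training data:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
views = ("sagittal", "coronal", "transverse")
# Hypothetical per-branch fissure predictions (probabilities) and binary labels.
preds = {v: torch.sigmoid(torch.randn(1, 1, 16, 16)) for v in views}
labels = {v: (torch.rand(1, 1, 16, 16) > 0.5).float() for v in views}

# Logarithmic (binary cross-entropy) loss per branch; each branch loss can then
# be used to adjust the parameters of all three branch networks jointly.
branch_losses = {v: F.binary_cross_entropy(preds[v], labels[v]) for v in views}
total_loss = sum(branch_losses.values())
assert all(loss.item() > 0.0 for loss in branch_losses.values())
```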
When the network loss of each branch network is obtained, the parameters of the first, second, and third branch networks may be adjusted according to those losses until a termination condition is satisfied. The network loss of any one of the three branch networks may be used to adjust the network parameters (such as convolution parameters) of all three branch networks simultaneously. In this way the network parameters at any view angle are related to the features at the other two view angles, which improves the correlation between the extracted lung lobe slit features and those of the other two view angles, and achieves a preliminary fusion of the lung lobe slit features across view angles.
In addition, the training termination condition of the feature extraction neural network is that the network loss of each branch network is smaller than a first loss threshold, indicating that each branch network can accurately extract the lung lobe slit feature of the lung image at the corresponding view angle.
Once the feature extraction neural network is trained, the feature extraction neural network, the spatial attention neural network, and the segmentation network may be trained jointly: the network loss of the whole neural network is determined from the segmentation result output by the segmentation network and the corresponding labels of the lung lobe slit features, and this loss is fed back to adjust the network parameters of all three networks until it is smaller than a second loss threshold. In the embodiments of the disclosure, the first loss threshold is greater than or equal to the second loss threshold, which improves the accuracy of the network.
When the neural network of the embodiments of the disclosure is applied to multi-view lobe segmentation, lung images of the same lung at the different view angles are input to the three branch networks respectively, and the final segmentation result of the lung image is obtained through the neural network.
In summary, the multi-view lung lobe segmentation method and device provided by the embodiments of the present disclosure fuse the feature information of multiple views to perform lung lobe segmentation of a lung image, solving the problem that segmenting lung lobes without fully utilizing information from other views loses information and segments the lobes inaccurately.
In addition, the embodiment of the disclosure further provides a lobe segmentation device or a segmentation unit based on multi-view, which comprises: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to perform the multi-view based lung lobe segmentation method according to any of the above embodiments.
In some embodiments, the functions or modules included in the multi-view lung lobe segmentation device or segmentation unit provided by the embodiments of the present disclosure may be used to perform the methods described in the foregoing method embodiments; for specific implementations, reference may be made to the descriptions in those embodiments, which are not repeated here for brevity.
Step 1002: judging, from the lung lobe with CT values and a set threshold, whether regions of the lung lobe are emphysema areas.
In the embodiment of the invention, the set threshold is a set CT value. The CT value of an emphysema area is basically unchanged during deep inhalation, while air enters the other normal (non-emphysema) areas; the CT value of air is -1024 HU, and since an emphysema area is full of air, its CT value is close to -1024 HU. In the medical field, the set threshold is generally chosen as -950 HU: if the CT value of a lung lobe region is smaller than -950 HU, the region is judged to be an emphysema area, and if it is greater than or equal to -950 HU, the region is judged not to be an emphysema area. In other published papers or in some possible embodiments the set threshold may vary; the present invention does not limit it in detail, and one skilled in the art may adjust the set threshold appropriately.
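The thresholding step can be sketched as a simple comparison; the voxel values here are hypothetical HU samples, not data from the disclosure:

```python
import numpy as np

THRESHOLD_HU = -950  # the commonly used threshold; may be adjusted as noted above

# Hypothetical CT values (HU) of four voxels inside the extracted lobe.
lobe_ct = np.array([[-980.0, -940.0],
                    [-960.0, -900.0]])
emphysema_mask = lobe_ct < THRESHOLD_HU  # True where a voxel is judged emphysematous
print(emphysema_mask.tolist())  # [[True, False], [True, False]]
```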
In the embodiment of the present invention, the extracted lobe is the upper right lobe, the lobe in which COPD (chronic obstructive pulmonary disease) typically first appears, so the description uses the upper right lobe as an example. Fig. 6 is a schematic diagram of the upper right lobe with CT values extracted by a method and/or apparatus of the embodiment of the invention. The upper right lung lobe in fig. 6 is a lobe with CT values, which are compared with the set threshold: where the CT value of the upper right lobe is smaller than -950 HU, the region is an emphysema area; where it is greater than or equal to -950 HU, the region is judged not to be an emphysema area.
Step 1003: if so, the emphysema area is colored, and the emphysema area is displayed.
Step 1004: if not, no coloring is performed, or a color different from that of the emphysema area is used.
In the embodiment of the invention, the coloring is pseudo-color, so that doctors can conveniently observe the emphysema area of the extracted lung lobe or display its location. If the extracted lobe is the upper right lobe: when the CT value of a region of the upper right lobe is smaller than -950 HU, the region is an emphysema area and is colored, for example red; if the CT value is greater than or equal to -950 HU, the region is judged not to be an emphysema area and is left uncolored, or colored, for example green, to distinguish it from the color of the emphysema area.
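The pseudo-color display can be sketched as an RGB overlay; the slice size, mask values, and red/green choice are assumptions matching the example above:

```python
import numpy as np

# Hypothetical 2x2 emphysema mask from the -950 HU comparison.
mask = np.array([[True, False],
                 [False, True]])

# Render an RGB overlay: emphysema voxels red, the rest green, so a doctor can
# localize the emphysema region of the extracted lobe at a glance.
rgb = np.zeros(mask.shape + (3,), dtype=np.uint8)
rgb[..., 1] = 255            # default: green for non-emphysema regions
rgb[mask] = (255, 0, 0)      # overwrite emphysema voxels with red
print(rgb[0, 0].tolist(), rgb[0, 1].tolist())  # [255, 0, 0] [0, 255, 0]
```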
The above examples are merely illustrative embodiments of the present invention, described in detail, and are not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make modifications, equivalent substitutions, improvements, and the like without departing from the spirit of the present invention, and these all fall within the scope of the present invention. Accordingly, the scope of protection of the present invention is determined by the appended claims.

Claims (8)

1. A pulmonary lobe-based emphysema area determination method, comprising:
extracting lung lobes with CT values;
judging whether the CT value on the lung lobes is an emphysema area according to the lung lobes with the CT values and a set threshold value;
if so, coloring the emphysema area, and displaying the emphysema area;
if not, the coloring is not performed or the color of the coloring is different from that of the emphysema area;
further comprises: acquiring a lung image; performing lung lobe segmentation on the lung image to obtain the lung lobe segmented image;
the method for obtaining the lung lobe segmentation image comprises the following steps of:
acquiring lung lobe slit features of the lung image under the sagittal plane, the coronal plane, and the transverse plane;
correcting a third lung lobe slit feature by utilizing the lung lobe slit features of any two of the sagittal plane, the coronal plane and the transverse plane;
segmenting the lung image using the corrected lung lobe slit features;
wherein the method for correcting the third lung lobe slit feature by utilizing the lung lobe slit feature of any two of the sagittal plane, the coronal plane and the transverse plane comprises the following steps:
mapping the lung lobe slit features of any two to the view angle at which the third lung lobe slit feature resides;
correcting the third lung lobe slit feature by using the mapped lung lobe slit features of any two;
wherein the method for correcting the third lung lobe slit feature using the mapped lung lobe slit features of any two comprises:
respectively carrying out spatial attention feature fusion by using the mapped lung lobe slit features of any two and the third lung lobe slit features to obtain a first fusion feature and a second fusion feature;
obtaining the corrected third lung lobe slit feature according to the first fusion feature and the second fusion feature;
the method for obtaining the first fusion feature and the second fusion feature by respectively utilizing the mapped lung lobe slit features of any two and the third lung lobe slit feature to perform spatial attention feature fusion comprises the following steps:
respectively connecting the lung lobe slit features of any two with the third lung lobe slit feature to obtain a first connection feature and a second connection feature;
performing a first convolution operation on the first connection feature to obtain a first convolution feature, and performing a first convolution operation on the second connection feature to obtain a second convolution feature;
performing a second convolution operation on the first convolution feature to obtain a first attention coefficient, and performing a second convolution operation on the second convolution feature to obtain a second attention coefficient;
obtaining the first fusion feature using a first convolution feature and a first attention coefficient, and obtaining the second fusion feature using a second convolution feature and a second attention coefficient;
wherein the method of obtaining the first fusion feature using a first convolution feature and a first attention coefficient and obtaining the second fusion feature using a second convolution feature and a second attention coefficient comprises:
adding the feature obtained by multiplying the first convolution feature by the first attention coefficient to the first convolution feature to obtain the first fusion feature, and adding the feature obtained by multiplying the second convolution feature by the second attention coefficient to the second convolution feature to obtain the second fusion feature; or,
adding the feature obtained by multiplying the first convolution feature by the first attention coefficient to the first convolution feature and performing a plurality of convolution operations on the summed feature to obtain the first fusion feature, and adding the feature obtained by multiplying the second convolution feature by the second attention coefficient to the second convolution feature and performing a plurality of convolution operations on the summed feature to obtain the second fusion feature.
2. The method of claim 1, wherein the method of extracting lung lobes with CT values comprises:
acquiring a lobe segmentation image of the lung image;
determining lung lobes to be extracted;
marking the lung lobes to be extracted;
and obtaining the lung lobes to be extracted according to the lung lobes to be extracted after marking and the lung images.
3. The method according to claim 2, wherein the labeling of the lung lobes to be extracted, and the specific method for obtaining the lung lobes to be extracted from the labeled lung lobes to be extracted and the lung image, comprises:
and obtaining a mask image according to the lung lobe segmentation image, obtaining a marked mask image according to the mask image and the mark of the lung lobe to be extracted, and multiplying the lung image by the marked mask image to obtain the lung lobe to be extracted.
4. The determination method according to claim 3, wherein the method of obtaining a mask image from the lobe segmentation image and obtaining a marked mask image from the mask image and the mark of the lung lobe to be extracted comprises:
carrying out mask processing on the lung lobe segmentation image to obtain a mask image of each lung lobe, and obtaining the marked mask image according to preset mask values of the mask images of each lung lobe and the mark of the lung lobe to be extracted; and setting pixels within the marked mask image to 1, and pixels of areas of the lung lobe segmentation image other than the marked mask image to 0.
5. The method according to any one of claims 3-4, wherein the specific method for obtaining the lung lobes to be extracted by multiplying the mask image of the marker by the lung image comprises:
and multiplying the mask images of the marks with the same layer number by the lung images to obtain one layer of lung lobes to be extracted, and carrying out three-dimensional reconstruction on a plurality of layers of lung lobes to be extracted to obtain three-dimensional lung lobes to be extracted.
6. The method according to any one of claims 3 to 4, wherein a mask image is obtained from the lobe segmentation image, and a marked mask image is obtained from the mask image and a mark of the lobe to be extracted, wherein before the lung image is multiplied by the marked mask image to obtain the lobe to be extracted, the number of layers of the lung image and the number of layers of the marked mask image are respectively determined;
judging whether the number of layers of the lung image is equal to the number of layers of the mask image of the mark;
if the number of the lung lobes is equal, multiplying the lung images by mask images of the marks with the same number of layers to obtain a layer of lung lobes to be extracted, and carrying out three-dimensional reconstruction on a plurality of layers of lung lobes to be extracted to obtain three-dimensional lung lobes to be extracted;
if not, interpolating the marked mask images to obtain mask images with the same layer number as the lung images, multiplying the lung images by the marked mask images with the same layer number to obtain a layer of lung lobes to be extracted, and carrying out three-dimensional reconstruction on a plurality of layers of lung lobes to be extracted to obtain three-dimensional lung lobes to be extracted.
7. The determination method according to claim 5, wherein a mask image is obtained from the lobe segmentation image, and a marked mask image is obtained from the mask image and the mark of the lobe to be extracted, wherein the number of layers of the lung image and the number of layers of the marked mask image are respectively determined before the lobe to be extracted is obtained by multiplying the lung image by the marked mask image;
judging whether the number of layers of the lung image is equal to the number of layers of the mask image of the mark;
if the number of the lung lobes is equal, multiplying the lung images by mask images of the marks with the same number of layers to obtain a layer of lung lobes to be extracted, and carrying out three-dimensional reconstruction on a plurality of layers of lung lobes to be extracted to obtain three-dimensional lung lobes to be extracted;
if not, interpolating the marked mask images to obtain mask images with the same layer number as the lung images, multiplying the lung images by the marked mask images with the same layer number to obtain a layer of lung lobes to be extracted, and carrying out three-dimensional reconstruction on a plurality of layers of lung lobes to be extracted to obtain three-dimensional lung lobes to be extracted.
8. A pulmonary lobe-based emphysema area determination apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored by the memory to perform the lung lobe based emphysema area determination method according to any of claims 1 to 7.
CN202010042872.XA 2020-01-15 2020-01-15 Pulmonary lobe-based emphysema area judging method and device Active CN111260627B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010042872.XA CN111260627B (en) 2020-01-15 2020-01-15 Pulmonary lobe-based emphysema area judging method and device

Publications (2)

Publication Number Publication Date
CN111260627A CN111260627A (en) 2020-06-09
CN111260627B (en) 2023-04-28

Family

ID=70946974


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110399799A (en) * 2019-06-26 2019-11-01 北京迈格威科技有限公司 Image recognition and the training method of neural network model, device and system
CN110688951A (en) * 2019-09-26 2020-01-14 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5352303B2 (en) * 2009-03-25 2013-11-27 富士フイルム株式会社 Image processing apparatus and method, and program
CN102429679A (en) * 2011-09-09 2012-05-02 华南理工大学 Computer-assisted emphysema analysis system based on chest CT (Computerized Tomography) image
EP2916738B1 (en) * 2012-09-13 2018-07-11 The Regents of the University of California Lung, lobe, and fissure imaging systems and methods
CN110473207B (en) * 2019-07-30 2022-05-10 赛诺威盛科技(北京)股份有限公司 Method for interactively segmenting lung lobes


Also Published As

Publication number Publication date
CN111260627A (en) 2020-06-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant