CN113096139B - Image segmentation processing method for lung parenchyma - Google Patents

Image segmentation processing method for lung parenchyma

Info

Publication number
CN113096139B
CN113096139B (application CN202110404136.9A)
Authority
CN
China
Prior art keywords
lung
image
mask
images
parenchyma
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110404136.9A
Other languages
Chinese (zh)
Other versions
CN113096139A (en)
Inventor
俞晔
方圆圆
袁凤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Hanhu Intelligent Technology Co., Ltd.
Original Assignee
Shanghai First People's Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai First People's Hospital
Priority to CN202110404136.9A
Publication of CN113096139A
Application granted
Publication of CN113096139B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • G06T7/0014Biomedical image inspection using an image reference approach
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30061Lung

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

An image segmentation processing method for lung parenchyma comprises: step S1, processing the same collected lung data separately to obtain the position data of the torso mask in a plurality of lung images, and comparing these positions to screen out lung images whose torso mask positions are inconsistent; step S2, selecting one lung image retained by the screening in step S1 to generate a torso mask image; step S3, performing hole elimination processing on the remaining retained lung images and superimposing and subtracting them with the torso mask image to generate a lung parenchyma mask image; and step S4, removing the trachea from the lung parenchyma mask image and superimposing and multiplying it with a retained lung image from step S1 to generate a lung parenchyma image. Because several torso mask positions are extracted, parts such as the air outside the lung parenchyma and the examining table need not be processed, which improves the processing speed and the accuracy of the lung parenchyma edges; the segmented regions of the lung parenchyma mask images are compared to confirm feature parts, which prevents inaccurate cutting of the lung parenchyma.

Description

Image segmentation processing method for lung parenchyma
Technical Field
The invention relates to the technical field of lung parenchyma image segmentation, in particular to a lung parenchyma image segmentation processing method.
Background
The morbidity and mortality of lung cancer in China rank first among all malignant tumors, and early detection and early treatment of lung cancer can effectively improve patients' survival rate and post-treatment condition. CT imaging is the best imaging means for examining lung diseases among medical imaging technologies. To reduce the workload of doctors and examine lung diseases more quickly and accurately, medical image processing technology is applied to the computer-aided diagnosis of lung diseases, which is of great significance. In computer-aided diagnosis of lung diseases, extracting the lung parenchyma from a lung CT image assists clinicians in diagnosing and evaluating the disease. Generally, global threshold segmentation and region-growing methods are used to segment the lung parenchyma image; however, because a lung CT image contains not only the lung parenchyma but also parts such as the air outside the lung parenchyma and the examining table, and because of strong noise and complex tissue structures, these methods may yield inaccurate lung parenchyma edges and mishandle the internal tissues of the lung parenchyma. Therefore, it is desirable to develop a method and system for improving the accuracy of cutting the lung parenchyma.
Disclosure of Invention
The present invention is directed to overcoming the deficiencies of the prior art and providing an image segmentation processing method for lung parenchyma.
The technical problem to be solved by the invention is realized by adopting the following technical scheme:
an image segmentation processing method of lung parenchyma comprises
Step S1, collecting lung data, processing the same collected lung data separately to obtain the position data of the outer torso mask in a plurality of lung images, and comparing these positions to screen out lung images whose torso mask positions are inconsistent;
step S2, selecting one lung image retained by the screening in step S1, generating a torso mask image from it, and storing the torso mask image;
and step S3, performing hole elimination processing on the remaining lung images retained by the screening in step S1, and superimposing and subtracting them with the torso mask image of step S2 to generate a lung parenchyma mask image.
Preferably, the method further includes a step S4 of removing the trachea from the lung parenchyma mask image processed in step S3, and superimposing and multiplying it with a lung image retained by the screening in step S1 to generate a lung parenchyma image.
Preferably, the step S3 includes:
Step S3-1, performing hole elimination processing on the remaining lung images retained by the screening in step S1 to generate preprocessed lung images;
step S3-2, selecting two preprocessed lung images from step S3-1 and superimposing and subtracting each with the torso mask image of step S2 to generate two lung parenchyma mask images;
step S3-3, performing region segmentation on the two lung parenchyma mask images of step S3-2 and comparing them with a comparison method; if the comparison is consistent, generating the lung parenchyma mask image; if the comparison is inconsistent, deleting the two lung parenchyma mask images and reselecting two new lung images to perform step S3-1.
Preferably, the step S1 includes:
s1-1, collecting lung data, and respectively carrying out position correction and processing on the collected same lung data to generate a plurality of lung images;
step S1-2, performing noise point removing processing on the images of the step S1-1 respectively;
and step S1-3, obtaining the position data of the torso mask outside the lungs in each image by extracting the largest connected region, and comparing these positions to screen out lung images whose torso mask positions are inconsistent.
Preferably, the step S1-3 includes:
s1-3-1, arbitrarily selecting one lung image as a comparison lung image;
step S1-3-2, comparing the torso mask position data of the other lung images with that of the comparison lung image of step S1-3-1 one by one, and selecting the comparison lung image to perform step S2 if the ratio of consistent comparisons to the total number of comparisons is greater than or equal to a set ratio;
and step S1-3-3, comparing the torso mask position data of the other lung images with that of the comparison lung image of step S1-3-1 one by one, and if the ratio of consistent comparisons to the total number of comparisons is smaller than the set ratio, selecting another of the remaining lung images as the comparison lung image and performing step S1-3-2 again.
Preferably, the step S1 further includes:
and step S1-4, storing the lung images whose torso mask positions are consistent after the screening.
Preferably, the alignment method in step S3-3 includes:
step S3-3-1, taking the two lung parenchyma mask images of step S3-2 as a first lung parenchyma mask image and a second lung parenchyma mask image, and performing region segmentation processing on both;
step S3-3-2, extracting the feature parts in the segmented regions of the first lung parenchyma mask image, and searching for the corresponding feature parts in the segmented regions of the second lung parenchyma mask image;
and step S3-3-3, performing weighting processing on the feature parts matched in step S3-3-2, storing their categories and feature data, and marking the segmented regions for which no corresponding feature part is found.
Preferably, the alignment method in step S3-3 further includes:
and step S3-3-4, repeating step S3-2 to generate two new lung parenchyma mask images and repeating step S3-3, comparing only the segmented regions for which no corresponding feature part was found, until the lung parenchyma mask image data of all segmented regions are consistent.
Preferably, step S4 includes:
a step S4-1 of removing the trachea from the lung parenchymal mask image processed in the step S3;
step S4-2, the lung parenchymal mask image of the step S4-1 from which the trachea is removed is superimposed and multiplied by the lung image of the screening process in the step S1 to generate a lung parenchymal image.
Preferably, the method further includes step S5, processing and storing torso mask data, lung parenchymal mask image data, trachea data and feature data of the lung image.
The invention has the advantages and positive effects that:
1. In step S1 the method processes the same collected lung data separately to obtain the position data of the outer torso mask in a plurality of lung images and compares these positions to screen out lung images whose torso mask positions are inconsistent; in step S2 it selects one retained lung image to generate and store a torso mask image; in step S3 it performs hole elimination processing on the remaining retained lung images and superimposes and subtracts them with the torso mask image of step S2 to generate a lung parenchyma mask image; and in step S4 it removes the trachea from the lung parenchyma mask image and superimposes and multiplies it with a retained lung image from step S1 to generate the lung parenchyma image. Because several torso mask positions are extracted, parts such as the air outside the lung parenchyma and the examining table need not be processed, which improves the processing speed and the accuracy of the lung parenchyma edges.
2. In step S3-1 the method performs hole elimination on the remaining retained lung images to generate preprocessed lung images; in step S3-2 it selects two preprocessed lung images and superimposes and subtracts each with the torso mask image of step S2 to generate two lung parenchyma mask images; in step S3-3 it performs region segmentation on the two lung parenchyma mask images and compares them with a comparison method: if the comparison is consistent, the lung parenchyma mask image is generated; if it is inconsistent, the two lung parenchyma mask images are deleted and step S3-2 is repeated. By comparing the segmented regions of the lung parenchyma mask images, the feature parts and the internal tissue data of the lung parenchyma are confirmed, preventing inaccurate cutting of the lung parenchyma that would increase the doctor's workload.
Drawings
FIG. 1 is a schematic representation of the steps of the present invention;
FIG. 2 is a schematic representation of a lung image of the present invention;
FIG. 3 is a schematic representation of a torso mask image of the present invention;
FIG. 4 is a schematic representation of a pre-processed lung image of the present invention;
FIG. 5 is a schematic representation of lung parenchymal mask image data of the present invention;
FIG. 6 is a schematic view of a mask image of the lung parenchyma with trachea removed according to the present invention;
fig. 7 is a schematic diagram of a lung parenchyma image according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When a component is referred to as being "connected" to another component, it can be directly connected to the other component or intervening components may also be present. When a component is referred to as being "disposed on" another component, it can be directly on the other component or intervening components may also be present. The terms "vertical," "horizontal," "left," "right," and the like as used herein are for illustrative purposes only.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or several of the associated listed items.
As shown in fig. 2, the outermost white area of the lung image is the torso 2, with the bed plate 1 below it; the lung parenchyma 5 comprises the left lung and the right lung, which need to be separated from the torso 2; the lung parenchyma 5 contains the trachea 4 and holes 3; the trachea 4 comprises the main airway and the airways extending into the lung parenchyma; the holes 3 include blood vessels, nodules and the like; it is necessary to separate the lung parenchyma 5 from the other structures in the thoracic cavity, obtain the connected left and right lungs, remove the trachea 4, fill the holes 3 in the lung parenchyma 5, and obtain the complete lung parenchyma 5.
As shown in FIG. 1, the image segmentation processing method for lung parenchyma according to the present invention includes
Step S1, collecting lung data, processing the same collected lung data separately to obtain the position data of the outer torso mask in a plurality of lung images, and comparing these positions to screen out lung images whose torso mask positions are inconsistent;
step S2, selecting one lung image retained by the screening in step S1, generating a torso mask image from it, and storing the torso mask image;
step S3, performing hole 3 elimination processing on the remaining lung images retained by the screening in step S1, and superimposing and subtracting them with the torso mask image of step S2 to generate a lung parenchyma mask image;
step S4, removing the trachea 4 from the lung parenchyma mask image processed in step S3, and superimposing and multiplying it with a lung image retained by the screening in step S1 to generate a lung parenchyma image;
step S5, processing and storing the torso mask data, the lung parenchyma mask image data, the trachea 4 data and the feature part data of the lung image.
Specifically, lung data are collected, and the same collected lung data need to be processed by the global threshold method to generate a lung image; in such an image the lung region is a large low-gray connected region while the other, unrelated structures have high gray values. Because the collected lung data may contain diseased parts, the gray-level selection of the global threshold method is demanding, and a single processing pass may produce an inaccurate lung image due to an inaccurately judged gray value; the same collected lung data are therefore processed separately by the global threshold method several times. The positions of the resulting lung images may be offset, so each lung image is position-corrected according to its outer contour to facilitate subsequent processing and checking of the cut lung parenchyma; denoising is then performed on each of the plurality of lung images to improve their accuracy.
A plurality of lung images are generated by the processing of steps S1-1 and S1-2. At this point the bed plate 1, the torso 2 and the lung parenchyma 5 lie in the larger connected regions of the lung images; because the bed plate 1 and the torso 2 are close to the lung parenchyma 5, they affect its cutting, so the bed plate 1 must be separated from the lung parenchyma 5 and the torso 2. The connected region of the torso 2 is the largest, so extracting the largest connected region yields the torso 2 and separates out the bed plate 1, and subsequently processing only the torso 2 and lung parenchyma 5 parts improves efficiency. The number of pixels in each connected region is calculated (in Matlab in this embodiment) to determine the largest connected region and obtain the position data of the torso mask outside the lungs, and one lung image is selected at random as the comparison lung image. Its torso mask position data serve as the comparison data; the torso mask position data of the other lung images are superimposed and compared with the comparison data one by one. If the ratio of consistent comparisons to the total number of comparisons is greater than or equal to a set ratio, the comparison data, i.e. the torso mask position data, are judged correct, the lung image is stored and processed in step S2 to generate the torso mask image, the lung images inconsistent with the comparison data are deleted, and the lung images consistent with the comparison data are stored. If the ratio of consistent comparisons to the total number of comparisons is smaller than the set ratio, lung images are reselected and new comparison data are collected until the ratio reaches the set ratio, so that the position of the torso 2 is determined and the bed plate 1 is prevented from being partly or wholly misjudged as the torso 2.
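The description above performs the per-region pixel count in Matlab; purely as an illustration, the same largest-connected-region extraction can be sketched in Python with scipy.ndimage (the function and variable names below are assumptions made for the example and do not appear in the patent):

```python
import numpy as np
from scipy import ndimage

def extract_torso_mask(binary_image):
    """Return the largest connected region of a thresholded lung image.

    `binary_image` is a boolean array in which True marks high-gray pixels
    (torso 2, bed plate 1) and False marks low-gray pixels (air, lungs).
    The largest connected region is assumed to be the torso 2, so the bed
    plate 1 is separated out automatically.
    """
    labels, num_regions = ndimage.label(binary_image)   # label connected regions
    if num_regions == 0:
        return np.zeros_like(binary_image, dtype=bool)
    sizes = np.bincount(labels.ravel())                  # pixel count per region
    sizes[0] = 0                                         # index 0 is the background
    return labels == np.argmax(sizes)                    # torso mask
```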
The lung parenchyma 5 contains holes 3, which increase the processing difficulty when the lung parenchyma 5 is separated from the torso 2, so the lung images stored in step S1-4 must undergo hole 3 elimination to generate preprocessed lung images; step S2 generates the torso mask image from the selected lung image; a lung parenchyma mask image is generated by superimposing and subtracting a preprocessed lung image with the torso mask image of step S2. However, the lung images may contain various lesion tissues, and because of the global threshold method of step S1 and the hole 3 elimination of step S3-1, the gray values may be judged inaccurately when the preprocessed images are processed; when the lung parenchyma 5 and the torso 2 are segmented, the torso 2 may be partly or wholly misjudged as the lung parenchyma 5, or the lung parenchyma 5 partly or wholly misjudged as the torso 2, making the cut lung parenchyma 5 inaccurate. Therefore, two preprocessed lung images of step S3-1 are each superimposed and subtracted with the torso mask image of step S2 to generate two lung parenchyma mask images; region segmentation is performed on the two lung parenchyma mask images, they are compared region by region, and the lung parenchyma mask image data of the consistent segmented regions are stored; the inconsistent segmented regions are marked and the two compared lung parenchyma mask images are deleted. Step S3-2 is then repeated: two new preprocessed lung images are selected and superimposed and subtracted with the torso mask image of step S2 to generate two lung parenchyma mask images, only the data of the marked segmented regions are compared, avoiding the extra workload of comparing all regions again, and the data of the consistent regions are stored; the inconsistent regions are marked and the two lung parenchyma mask images are deleted, until the lung parenchyma mask image data of all segmented regions are consistent and one final lung parenchyma mask image is generated.
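As a non-authoritative illustration of the hole 3 elimination and the "superimpose and subtract" operation described above, the sketch below fills the holes with scipy.ndimage and then takes a set difference with the torso mask; reading the subtraction as a logical set difference is an assumption, since the patent does not fix the exact arithmetic:

```python
import numpy as np
from scipy import ndimage

def lung_parenchyma_mask(lung_binary, torso_mask):
    """Generate a candidate lung parenchyma mask (trachea 4 still included).

    `lung_binary` is the thresholded lung image and `torso_mask` the mask
    from step S2. Holes 3 (vessels, nodules) are filled first, then the
    torso mask is subtracted, leaving the enclosed low-gray lung regions.
    """
    preprocessed = ndimage.binary_fill_holes(lung_binary)        # hole 3 elimination
    # "Superimpose and subtract": keep filled pixels not covered by the torso mask.
    return np.logical_and(preprocessed, np.logical_not(torso_mask))
```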
The lung parenchyma mask image still contains the trachea 4, which must be distinguished from the lung parenchyma 5; the trachea 4 is a connected region within the lung parenchyma mask image. The number of pixels in each connected region is calculated (in Matlab in this embodiment) to determine the connected regions of the lung parenchyma mask image, which are checked against the connected-region data of the preprocessed lung image; the connected regions whose area is smaller than 1000 pixels are removed to eliminate the trachea 4, and the resulting image is superimposed and multiplied with a lung image retained by the screening in step S1 to generate the lung parenchyma image. The torso mask data, the lung parenchyma mask image data, the trachea 4 data and the feature part data of the lung image are then processed and stored.
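The trachea removal and the final "superimpose and multiply" step can likewise be sketched as follows; the 1000-pixel area limit is taken from the description, while the dtype handling and function name are assumptions made only for the example:

```python
import numpy as np
from scipy import ndimage

def remove_trachea_and_extract(parenchyma_mask, lung_image, min_area=1000):
    """Drop connected regions smaller than `min_area` pixels (the trachea 4),
    then multiply the cleaned mask with the screened lung image to obtain
    the lung parenchyma image."""
    labels, num_regions = ndimage.label(parenchyma_mask)
    sizes = np.bincount(labels.ravel())
    keep = np.zeros_like(parenchyma_mask, dtype=bool)
    for region in range(1, num_regions + 1):
        if sizes[region] >= min_area:                   # keep the left and right lungs
            keep |= labels == region
    return keep.astype(lung_image.dtype) * lung_image   # superimpose and multiply
```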
As shown in fig. 1 to 7, further, the step S1 includes:
s1-1, collecting lung data, and respectively carrying out position correction and processing on the collected same lung data to generate a plurality of lung images;
step S1-2, performing noise point removing processing on the images in the step S1-1 respectively;
Specifically, lung data are collected and the same collected lung data need to be processed by the global threshold method to generate a lung image; the lung region is a large low-gray connected region while the other, unrelated structures have high gray values. Because the collected lung data may contain diseased parts, the gray-level selection of the global threshold method is demanding, and a single processing pass may produce errors in the subsequently segmented lung image due to an inaccurately judged gray value; the same collected lung data are therefore processed separately by the global threshold method. The specific method is as follows: an initial value T is set for the global threshold, with the maximum gray value in the image being Tmax and the minimum gray value Tmin, so that T = (Tmax + Tmin)/2; using T as the threshold, the image is divided into a foreground (gray value greater than or equal to T) and a background (gray value less than T); the average gray value TF of the foreground and the average gray value TB of the background are calculated, the threshold is updated as T = (TF + TB)/2, and this is repeated until T no longer changes, giving the lung image. In particular, the position of the obtained lung image may be offset, so the lung image is corrected according to its outer contour to facilitate subsequent processing and checking of the cut lung parenchyma; the same collected lung data are processed separately to generate a plurality of lung images, and denoising is performed on each of them to improve their accuracy.
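A minimal sketch of the iterative global threshold described above, written in Python/NumPy; the convergence tolerance `eps` is an assumed parameter, since the patent only requires that T no longer change:

```python
import numpy as np

def global_threshold(image, eps=0.5):
    """Iterative global thresholding: T = (Tmax + Tmin)/2, then repeatedly
    T = (TF + TB)/2 until T stops changing; returns the binary foreground."""
    t = (float(image.max()) + float(image.min())) / 2.0
    while True:
        foreground = image[image >= t]          # gray value >= T
        background = image[image < t]           # gray value <  T
        if foreground.size == 0 or background.size == 0:
            break
        t_new = (foreground.mean() + background.mean()) / 2.0
        if abs(t_new - t) < eps:                # T no longer changes
            t = t_new
            break
        t = t_new
    return image >= t
```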
Further, the step S1 further includes:
step S1-3, obtaining the position data of the torso mask outside the lungs in each image by extracting the largest connected region, and comparing these positions to screen out lung images whose torso mask positions are inconsistent;
and step S1-4, storing the lung images whose torso mask positions are consistent after the screening.
Specifically, a plurality of lung images are generated by the processing of steps S1-1 and S1-2. The bed plate 1, the torso 2 and the lung parenchyma 5 lie in the larger connected regions of these images; because the bed plate 1 and the torso 2 are close to the lung parenchyma 5 and affect its cutting, the bed plate 1 must be separated from the lung parenchyma 5 and the torso 2. The connected region of the torso 2 is the largest, so extracting the largest connected region yields the torso 2 and separates out the bed plate 1, and subsequently processing only the torso 2 and lung parenchyma 5 improves efficiency. The number of pixels in each connected region is calculated (in Matlab) to determine the largest connected region and obtain the position data of the torso mask outside the lungs, and one lung image is selected at random as the comparison lung image; its torso mask position data serve as the comparison data, and the torso mask position data of the other lung images are superimposed and compared with the comparison data one by one. If the ratio of consistent comparisons to the total number of comparisons is greater than or equal to the set ratio, the comparison data, i.e. the torso mask position data, are judged correct, the lung image is stored and processed in step S2 to generate the torso mask image, the lung images inconsistent with the comparison data are deleted, and the consistent ones are stored; if the ratio is smaller than the set ratio, lung images are reselected and new comparison data are collected until the ratio reaches the set ratio, so that the position of the torso 2 is determined and the bed plate 1 is prevented from being partly or wholly misjudged as the torso 2.
further, the step S1-3 includes:
s1-3-1, arbitrarily selecting one lung image as a comparison lung image;
step S1-3-2, comparing the torso mask position data of the other lung images with that of the comparison lung image of step S1-3-1 one by one, and selecting the comparison lung image to perform step S2 if the ratio of consistent comparisons to the total number of comparisons is greater than or equal to a set ratio;
and step S1-3-3, comparing the torso mask position data of the other lung images with that of the comparison lung image of step S1-3-1 one by one, and if the ratio of consistent comparisons to the total number of comparisons is smaller than the set ratio, selecting another of the remaining lung images as the comparison lung image to perform step S1-3-2.
Specifically, a plurality of lung images are generated by the processing of steps S1-1 and S1-2; the bed plate 1, the torso 2 and the lung parenchyma 5 lie in the larger connected regions of these images, and because the bed plate 1 and the torso 2 are close to the lung parenchyma 5 and affect its cutting, the bed plate 1 must be separated from the lung parenchyma 5 and the torso 2. Since the connected region of the torso 2 is the largest, extracting the largest connected region yields the torso 2, separates out the bed plate 1, and allows only the torso 2 and lung parenchyma 5 to be processed afterwards, improving efficiency. The number of pixels in each connected region is calculated (in Matlab) to determine the largest connected region and obtain the position data of the torso mask outside the lungs.
One lung image is selected at random as the comparison lung image; its torso mask position data serve as the comparison data, and the torso mask position data of the other lung images are superimposed and compared with the comparison data one by one. If the ratio of consistent comparisons to the total number of comparisons is greater than or equal to the set ratio, the comparison data, i.e. the torso mask position data, are judged correct and stored, step S2 is performed to generate the torso mask image, and the lung images inconsistent with the comparison data are deleted; if the ratio is smaller than the set ratio, lung images are reselected and new comparison data are collected until the ratio reaches the set ratio, so that the position of the torso 2 is determined with sufficient confidence and the bed plate 1 is prevented from being partly or wholly misjudged as the torso 2.
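The consistency screening of the torso mask positions can be illustrated by the following sketch; measuring "position consistency" as a high mask overlap (IoU) and using 0.8 as the set ratio are assumptions made only for the example, not values fixed by the patent:

```python
import numpy as np

def select_consistent_torso_mask(torso_masks, set_ratio=0.8, iou_threshold=0.98):
    """Return the index of a torso mask that agrees with at least `set_ratio`
    of the other masks, or None if new lung data must be collected."""
    def consistent(a, b):
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return union > 0 and inter / union >= iou_threshold

    for i, candidate in enumerate(torso_masks):              # comparison lung image
        others = [m for j, m in enumerate(torso_masks) if j != i]
        if not others:
            continue
        hits = sum(consistent(candidate, m) for m in others)
        if hits / len(others) >= set_ratio:                   # enough agreement
            return i
    return None                                               # reselect lung images
```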
Further, the step S3 includes:
step S3-1, performing hole 3 elimination processing on the remaining lung images retained by the screening in step S1 to generate preprocessed lung images;
step S3-2, selecting two preprocessed lung images from step S3-1 and superimposing and subtracting each with the torso mask image of step S2 to generate two lung parenchyma mask images;
step S3-3, performing region segmentation on the two lung parenchyma mask images of step S3-2 and comparing them with a comparison method; if the comparison is consistent, generating the lung parenchyma mask image; if the comparison is inconsistent, deleting the two lung parenchyma mask images and reselecting two new lung images to perform step S3-1.
Specifically, the lung parenchyma 5 contains holes 3, which increase the processing difficulty when the lung parenchyma 5 is separated from the torso 2, so the lung images stored in step S1-4 must undergo hole 3 elimination to generate preprocessed lung images; step S2 generates the torso mask image from the selected lung image; a lung parenchyma mask image is generated by superimposing and subtracting a preprocessed lung image with the torso mask image of step S2. However, the lung images may contain various lesion tissues, and because of the global threshold method of step S1 and the hole 3 elimination of step S3-1, the gray values may be judged inaccurately when the preprocessed images are processed; when the lung parenchyma 5 and the torso 2 are segmented, the torso 2 may be partly or wholly misjudged as the lung parenchyma 5, or the lung parenchyma 5 partly or wholly misjudged as the torso 2, making the cut lung parenchyma 5 inaccurate. Therefore, two preprocessed lung images of step S3-1 are each superimposed and subtracted with the torso mask image of step S2 to generate two lung parenchyma mask images; region segmentation is performed on the two lung parenchyma mask images, they are compared region by region, and the lung parenchyma mask image data of the consistent segmented regions are stored; the inconsistent segmented regions are marked and the two compared lung parenchyma mask images are deleted. Step S3-2 is then repeated: two new preprocessed lung images are selected and superimposed and subtracted with the torso mask image of step S2 to generate two lung parenchyma mask images, only the data of the marked segmented regions are compared, avoiding the extra workload of comparing all regions again, and the data of the consistent regions are stored; the inconsistent regions are marked and the two lung parenchyma mask images are deleted, until the lung parenchyma mask image data of all segmented regions are consistent and one final lung parenchyma mask image is generated.
Further, the alignment method in step S3-3 includes:
a step S3-3-1 of performing a region segmentation process on the first lung parenchymal mask image and the second lung parenchymal mask image, wherein the two lung parenchymal mask images of the step S3-2 are the first lung parenchymal mask image and the second lung parenchymal mask image, respectively;
step S3-3-2, extracting the characteristic part in the segmentation area of the first lung parenchyma mask image, and searching the corresponding characteristic part in the segmentation area of the second lung parenchyma mask image;
and step S3-3-3, performing weighting processing on the feature parts matched in step S3-3-2, storing their categories and feature data, and marking the segmented regions for which no corresponding feature part is found.
Step S3-3-4, repeating step S3-2 to generate two new lung parenchyma mask images and repeating step S3-3, comparing only the segmented regions for which no corresponding feature part was found, until the lung parenchyma mask image data of all segmented regions are consistent.
Specifically, the two lung parenchyma mask images of step S3-2 are taken as a first lung parenchyma mask image and a second lung parenchyma mask image, and region segmentation processing is performed on both; the first and second lung parenchyma mask images are then compared region by region. The lung parenchyma mask image data of the segmented regions whose comparison is consistent are stored; the regions whose comparison is inconsistent are marked and the two lung parenchyma mask images are deleted. Step S3-2 is repeated: two preprocessed lung images are selected and superimposed and subtracted with the torso mask image of step S2 to generate two lung parenchyma mask images, only the data of the marked segmented regions are compared, avoiding the extra workload of comparing all regions again, and the data of the consistent regions are stored; the inconsistent regions are marked and the two lung parenchyma mask images are deleted, until the lung parenchyma mask image data of all segmented regions are consistent and stored. For example, the first segmented region of the first lung parenchyma mask image is compared with the first segmented region of the second lung parenchyma mask image, and if they are consistent the data of the first segmented region are stored; the second segmented region of the first image is compared with the second segmented region of the second image, and if they are inconsistent the second segmented regions are marked and stored and the two old lung parenchyma mask images are deleted; two new preprocessed lung images are then selected, superimposed and subtracted with the torso mask image of step S2 to generate two lung parenchyma mask images, only the data of the marked second segmented regions are compared, and the data of the consistent regions are stored. Weighting processing is performed on the matched feature parts to store their categories and feature data.
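The region-by-region bookkeeping described in this paragraph can be sketched as below; the regular grid used for region segmentation and the pixel-disagreement tolerance are assumptions, since the patent does not specify how the segmented regions are defined:

```python
import numpy as np

def compare_mask_regions(mask_a, mask_b, grid=4, tolerance=0.01):
    """Compare two lung parenchyma mask images region by region.

    Returns two lists of (row, col) tiles: regions whose data are consistent
    (to be stored) and regions that must be marked and recompared with the
    next pair of masks."""
    h, w = mask_a.shape
    consistent, marked = [], []
    for r in range(grid):
        for c in range(grid):
            ys = slice(r * h // grid, (r + 1) * h // grid)
            xs = slice(c * w // grid, (c + 1) * w // grid)
            disagreement = np.logical_xor(mask_a[ys, xs], mask_b[ys, xs]).mean()
            if disagreement <= tolerance:
                consistent.append((r, c))    # store this region's mask data
            else:
                marked.append((r, c))        # recompare only these regions
    return consistent, marked
```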
Further, the step S4 includes:
a step S4-1 of removing the trachea 4 from the lung parenchymal mask image processed in the step S3;
in step S4-2, the lung parenchymal mask image of the step S4-1 from which the trachea 4 is removed is superimposed and multiplied by the lung image of the screening process in step S1 to generate a lung parenchymal image.
Specifically, the lung parenchyma mask image still contains the trachea 4, which must be distinguished from the lung parenchyma 5; the trachea 4 is a connected region within the lung parenchyma mask image. The number of pixels in each connected region is calculated (in Matlab) to determine the connected regions of the lung parenchyma mask image, which are checked against the connected-region data of the preprocessed lung image; the connected regions whose area is smaller than 1000 pixels are removed to eliminate the trachea 4, and the resulting image is superimposed and multiplied with a lung image retained by the screening in step S1 to generate the lung parenchyma image. The torso mask data, the lung parenchyma mask image data, the trachea 4 data and the feature part data of the lung image are then processed and stored.
In the method, step S1 processes the same collected lung data separately to obtain the position data of the outer torso masks in a plurality of lung images and compares these positions to screen out lung images whose torso mask positions are inconsistent; step S2 selects one lung image retained by the screening in step S1 to generate and store a torso mask image; step S3 performs hole 3 elimination processing on the remaining retained lung images and superimposes and subtracts them with the torso mask image of step S2 to generate a lung parenchyma mask image; and step S4 removes the trachea 4 from the lung parenchyma mask image and superimposes and multiplies it with a retained lung image from step S1 to generate the lung parenchyma image. Because several torso mask positions are extracted, parts such as the air outside the lung parenchyma and the examining table need not be processed, improving the processing speed and the accuracy of the lung parenchyma edges; by comparing the segmented regions of the lung parenchyma mask images, the feature parts and the internal tissue data of the lung parenchyma are confirmed, preventing inaccurate cutting of the lung parenchyma that would increase the doctor's workload.
The above description is for the purpose of illustrating the preferred embodiments of the present invention, but the present invention is not limited thereto, and all changes and modifications that can be made within the spirit of the present invention should be included in the scope of the present invention.

Claims (7)

1. An image segmentation processing method of a lung parenchyma, characterized by: comprises that
Step S1, collecting lung data, respectively processing the same collected lung data to obtain position data of an external trunk mask in a plurality of lung images, and comparing and screening lung images with inconsistent trunk mask positions;
step S2, selecting a lung image which is screened out in the step S1 to generate a torso mask image and storing the torso mask image;
step S3, performing hole (3) elimination processing on the remaining lung images subjected to the screening processing in step S1, and subtracting the torso mask image in step S2 to generate a lung parenchyma mask image;
the step S1 includes:
s1-1, collecting lung data, and respectively carrying out position correction and processing on the collected same lung data to generate a plurality of lung images;
step S1-2, performing noise point removing processing on the images in the step S1-1 respectively;
step S1-3, acquiring the position data of the torso mask outside the lungs by extracting the largest connected region, and comparing these positions to screen out lung images whose torso mask positions are inconsistent;
the step S1-3 includes:
s1-3-1, arbitrarily selecting one lung image as a comparison lung image;
s1-3-2, comparing the position data of the torso masks of the rest lung images with the position data of the torso masks of the compared lung images of the step S1-3-1 one by one, and selecting the compared lung images to perform the step S2 if the ratio of the times of comparison consistency to the total comparison times is greater than or equal to a set ratio;
s1-3-3, comparing the position data of the torso masks of the other lung images with the position data of the torso masks of the compared lung images in the S1-3-1 one by one, and if the ratio of the number of times of comparison to the total number of times of comparison is smaller than a set ratio, selecting the other lung images as the compared lung images to perform the step S1-3-2;
the method also comprises a step S4 of removing the trachea (4) in the lung parenchymal mask image processed in the step S3 and multiplying the lung image screened and processed in the step S1 to generate a lung parenchymal image.
2. The image segmentation processing method of lung parenchyma as claimed in claim 1, wherein: the step S3 includes:
step S3-1, performing hole elimination (3) processing on the rest lung images screened and processed in the step S1 to generate a preprocessed lung image;
step S3-2, respectively selecting the two preprocessed lung images in the step S3-1 and the torso mask image in the step S2 to subtract to generate two lung parenchyma mask images;
step S3-3, performing region segmentation on the two lung parenchyma mask images of step S3-2 and comparing them with a comparison method; if the comparison is consistent, generating the lung parenchyma mask image; if the comparison is inconsistent, deleting the two lung parenchyma mask images and reselecting two new lung images to perform step S3-1.
3. The image segmentation processing method of lung parenchyma as claimed in claim 1, wherein: the step S1 further includes:
and step S1-4, storing the lung image with the screened torso mask position consistent.
4. The image segmentation processing method of lung parenchyma according to claim 3, wherein: the comparison method in the step S3-3 comprises the following steps:
a step S3-3-1 of performing a region segmentation process on the first lung parenchymal mask image and the second lung parenchymal mask image, wherein the two lung parenchymal mask images of the step S3-2 are the first lung parenchymal mask image and the second lung parenchymal mask image, respectively;
step S3-3-2, extracting the characteristic part in the segmentation area of the first lung parenchyma mask image, and searching the corresponding characteristic part in the segmentation area of the second lung parenchyma mask image;
and step S3-3-3, performing weighting processing on the corresponding characteristic part in the step S3-3-2 to store the category and the characteristic data, and marking the division area of the corresponding characteristic part which is not found.
5. The image segmentation processing method of lung parenchyma according to claim 3, wherein: the alignment method in step S3-3 further includes:
and S3-3-4, repeating the step S3-2 to generate two lung parenchymal mask images, repeating the step S3-3, and only comparing the segmentation areas which cannot find the corresponding characteristic parts until the lung parenchymal mask image data of all the segmentation areas are consistent.
6. The method of claim 2, wherein: step S4 includes:
a step S4-1 of removing the trachea (4) in the lung parenchymal mask image processed in the step S3;
and a step S4-2 of multiplying the lung parenchymal mask image of the step S4-1 with the trachea (4) removed by the lung image of the step S1 to generate a lung parenchymal image.
7. The image segmentation processing method of lung parenchyma as claimed in claim 1, wherein: the method also comprises a step S5 of processing and storing torso mask data, lung parenchyma mask image data, trachea (4) data and characteristic part data of the lung image.
CN202110404136.9A 2021-04-14 2021-04-14 Image segmentation processing method for lung parenchyma Active CN113096139B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110404136.9A CN113096139B (en) 2021-04-14 2021-04-14 Image segmentation processing method for lung parenchyma

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110404136.9A CN113096139B (en) 2021-04-14 2021-04-14 Image segmentation processing method for lung parenchyma

Publications (2)

Publication Number Publication Date
CN113096139A CN113096139A (en) 2021-07-09
CN113096139B true CN113096139B (en) 2022-09-06

Family

ID=76677544

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110404136.9A Active CN113096139B (en) 2021-04-14 2021-04-14 Image segmentation processing method for lung parenchyma

Country Status (1)

Country Link
CN (1) CN113096139B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7206462B1 (en) * 2000-03-17 2007-04-17 The General Hospital Corporation Method and system for the detection, comparison and volumetric quantification of pulmonary nodules on medical computed tomography scans
CN102429679A (en) * 2011-09-09 2012-05-02 华南理工大学 Computer-assisted emphysema analysis system based on chest CT (Computerized Tomography) image

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104751437B (en) * 2013-12-30 2017-08-25 蓝网科技股份有限公司 Lung's extraction method based on chest CT image
US10902619B2 (en) * 2016-10-26 2021-01-26 Duke University Systems and methods for determining quality metrics of an image or images based on an edge gradient profile and characterizing regions of interest in an image or images
CN110766713A (en) * 2019-10-30 2020-02-07 上海微创医疗器械(集团)有限公司 Lung image segmentation method and device and lung lesion region identification equipment
CN112648935A (en) * 2020-12-14 2021-04-13 杭州思锐迪科技有限公司 Image processing method and device and three-dimensional scanning system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7206462B1 (en) * 2000-03-17 2007-04-17 The General Hospital Corporation Method and system for the detection, comparison and volumetric quantification of pulmonary nodules on medical computed tomography scans
CN102429679A (en) * 2011-09-09 2012-05-02 华南理工大学 Computer-assisted emphysema analysis system based on chest CT (Computerized Tomography) image

Also Published As

Publication number Publication date
CN113096139A (en) 2021-07-09

Similar Documents

Publication Publication Date Title
CN109685809B (en) Liver infusorian focus segmentation method and system based on neural network
CN109830289B (en) Rib image display device
US7295870B2 (en) Method for the detection and automatic characterization of nodules in a tomographic image and a system of medical imaging by tomodensimetry
EP3594830A1 (en) Similar case image search program, similar case image search device, and similar case image search method
CN102855618A (en) Method for image generation and image evaluation
JP2001137230A (en) Computer aided diagnostic system
CN115661149B (en) Lung image processing system based on lung tissue data
US20150297164A1 (en) Automatic identification of a potential pleural effusion
CN111462201A (en) Follow-up analysis system and method based on novel coronavirus pneumonia CT image
CN105678758A (en) Image feature automatic identifying and extracting method
US11935234B2 (en) Method for detecting abnormality, non-transitory computer-readable recording medium storing program for detecting abnormality, abnormality detection apparatus, server apparatus, and method for processing information
CN113096139B (en) Image segmentation processing method for lung parenchyma
JP7304437B2 (en) Methods, apparatus, media and electronic devices for segmentation of pneumonia symptoms
US11475568B2 (en) Method for controlling display of abnormality in chest x-ray image, storage medium, abnormality display control apparatus, and server apparatus
CN113034522A (en) CT image segmentation method based on artificial neural network
CN109559317B (en) Lung nodule segmentation method based on CT image
JP2020032043A (en) Image processing device, method, and program
CN112967254A (en) Lung disease identification and detection method based on chest CT image
CN114764809A (en) Self-adaptive threshold segmentation method and device for lung CT (computed tomography) density increase shadow
CN116029972A (en) Fracture region nondestructive segmentation and reconstruction method based on morphology
CN114343693A (en) Aortic dissection diagnosis method and device
CN113822872A (en) Image feature information extraction method for hepatoma imaging omics
Baima et al. Dense Swin Transformer for Classification of Thyroid Nodules
CN112767332A (en) Blood vessel region judgment method and system based on CTA image
Novak et al. System for automatic detection of lung nodules exhibiting growth

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230629

Address after: Room 219, Building 8, No. 41 Jinlang Road, Langxia Town, Jinshan District, Shanghai, 200000

Patentee after: Shanghai Hanhu Intelligent Technology Co.,Ltd.

Address before: No.85 Wujin Road, Hongkou District, Shanghai

Patentee before: SHANGHAI FIRST PEOPLE'S Hospital