CN116934885A - Lung segmentation method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116934885A
CN116934885A · CN202210334311.6A · CN202210334311A
Authority
CN
China
Prior art keywords
image
lung
segmentation
trachea
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210334311.6A
Other languages
Chinese (zh)
Inventor
谢卫国
叶宗州
张旭
高金兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Weide Precision Medical Technology Co ltd
Original Assignee
Shenzhen Weide Precision Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Weide Precision Medical Technology Co ltd filed Critical Shenzhen Weide Precision Medical Technology Co ltd
Priority to CN202210334311.6A priority Critical patent/CN116934885A/en
Publication of CN116934885A publication Critical patent/CN116934885A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/003Reconstruction from projections, e.g. tomography
    • G06T11/005Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20036Morphological image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30061Lung

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The embodiment of the application provides a lung segmentation method, a device, electronic equipment and a storage medium. The lung segmentation method comprises the following steps: acquiring a computed tomography (CT) image, and preprocessing the CT image to obtain a preprocessed CT image; performing initial lung segmentation on the preprocessed CT image to obtain a lung primary segmentation image; performing trachea removal processing on the lung primary segmentation image to obtain a trachea-removed image; and, when adhesion exists between the left and right lungs in the trachea-removed image, performing left-right lung separation processing on the trachea-removed image to obtain a lung image in which the left and right lungs are separated. The embodiment of the application can accurately segment the lung region while preserving the details of the lung's original morphology to the maximum extent.

Description

Lung segmentation method, device, electronic equipment and storage medium
Technical Field
The application relates to the field of medical image segmentation, in particular to a lung segmentation method, a lung segmentation device, electronic equipment and a storage medium.
Background
In recent years, with continuous scientific progress, the level of medical care has steadily improved, and many medical devices and techniques are constantly being innovated and developed. Among them, medical imaging is the most widely used. Medical imaging uses computed tomography to generate high-resolution images that assist physicians in diagnosis. High-precision computed tomography (CT) generally shows human tissue structure and detail more clearly, but requires a higher resolution. To obtain high-resolution CT, the slice thickness is generally reduced so that more CT slices are generated over the same scan region; reading such a large number of CT images can fatigue the physician and may even lead to misdiagnosis or missed diagnosis.
To improve the efficiency with which physicians diagnose lung disease, existing approaches coarsely segment lung CT images by combining multi-threshold segmentation with a marker-controlled watershed. This suppresses over-segmentation of the image but loses certain lung details, so that details of the lung's original morphology are lost.
Disclosure of Invention
The embodiment of the application provides a lung segmentation method, a device, electronic equipment and a storage medium, which can accurately segment the lung region while preserving the details of the lung's original morphology to the maximum extent.
A first aspect of an embodiment of the present application provides a method of pulmonary segmentation, the method comprising:
acquiring a computed tomography (CT) image, and preprocessing the CT image to obtain a preprocessed CT image;
performing lung primary segmentation on the preprocessed CT image to obtain a lung primary segmentation image;
performing trachea removal processing on the lung primary segmentation image to obtain a trachea-removed image;
and, when adhesion exists between the left and right lungs in the trachea-removed image, performing left-right lung separation processing on the trachea-removed image to obtain a lung image in which the left and right lungs are separated.
Optionally, the preprocessing the CT image to obtain a preprocessed CT image includes:
performing noise reduction and binarization on the CT image to obtain the preprocessed CT image.
Optionally, the performing initial lung segmentation on the preprocessed CT image to obtain a lung primary segmentation image includes:
performing image morphological operations and largest-connected-component retention on the preprocessed CT image to obtain a body contour image;
and performing an image subtraction operation between the body contour image and the preprocessed CT image to obtain the lung primary segmentation image.
Optionally, the performing trachea removal processing on the lung primary segmentation image to obtain a trachea-removed image includes:
locating seed points of the lung trachea in the lung primary segmentation image, obtaining a binarized image of the lung trachea by a three-dimensional region-growing method, and performing an image subtraction operation between the lung primary segmentation image and the binarized image of the lung trachea to obtain the trachea-removed image.
Optionally, the performing left-right lung separation processing on the trachea-removed image to obtain a lung image with the left and right lungs separated includes:
locating the adhesion region between the left and right lungs by a projection-integration method, and separating the two lungs by eroding the adhesion region to obtain a lung image in which the left and right lungs are separated.
Optionally, after the left-right lung separation processing is performed on the trachea-removed image to obtain a lung image with the left and right lungs separated, the method further includes:
performing an edge repair operation on the left lung region and the right lung region in the lung image, respectively, to obtain a repaired lung image.
Optionally, after the edge repair operation is performed on the left and right lung regions in the lung image to obtain the repaired lung image, the method further includes:
performing three-dimensional reconstruction on the repaired lung image to obtain a three-dimensional model containing the lung region.
A second aspect of an embodiment of the present application provides a lung segmentation apparatus, the apparatus being applied to an electronic device; the device comprises:
an acquisition unit for acquiring an electronic computed tomography CT image;
the segmentation processing unit is used for preprocessing the CT image to obtain a preprocessed CT image;
the segmentation processing unit is further used for performing initial lung segmentation on the preprocessed CT image to obtain a lung primary segmentation image;
the segmentation processing unit is further used for performing trachea removal processing on the lung primary segmentation image to obtain a trachea-removed image;
and the segmentation processing unit is further used for, when adhesion exists between the left and right lungs in the trachea-removed image, performing left-right lung separation processing on the trachea-removed image to obtain a lung image in which the left and right lungs are separated.
A third aspect of an embodiment of the application provides an electronic device comprising a processor and a memory for storing a computer program comprising program instructions, the processor being configured to invoke the program instructions to execute the step instructions as in the first aspect of the embodiment of the application.
A fourth aspect of the embodiments of the present application provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program comprising program instructions which, when executed by a processor, cause the processor to perform part or all of the steps as described in the first aspect of the embodiments of the present application.
A fifth aspect of embodiments of the present application provides a computer program product, wherein the computer program product comprises a computer program comprising program instructions which, when executed by a processor, cause the processor to perform part or all of the steps as described in the first aspect of embodiments of the present application. The computer program product may be a software installation package.
In the embodiment of the application, a computed tomography (CT) image is acquired and preprocessed to obtain a preprocessed CT image; initial lung segmentation is performed on the preprocessed CT image to obtain a lung primary segmentation image; trachea removal processing is performed on the lung primary segmentation image to obtain a trachea-removed image; and, when adhesion exists between the left and right lungs in the trachea-removed image, left-right lung separation processing is performed on the trachea-removed image to obtain a lung image in which the left and right lungs are separated. Because the initial lung segmentation preserves the details of the lung's original morphology to the maximum extent, and the trachea removal and left-right lung separation are performed after it, an accurate segmentation of the lung region is obtained while those details are retained, which makes it convenient for a physician to examine lung lesions accurately.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the application, and that other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of a method for lung segmentation according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a CT image with window width and level adjustment according to an embodiment of the present application;
FIG. 3 is a schematic diagram comparing binarization effects with and without processing of the region outside the CT field of view according to an embodiment of the present application;
FIG. 4 illustrates a CT image effect before and after smoothing using curvature flow according to an embodiment of the present application;
FIG. 5 is a diagram of the binarization effect of the Otsu method according to an embodiment of the present application;
FIG. 6a is a schematic diagram of an expansion operation according to an embodiment of the present application;
FIG. 6b is a schematic diagram of a corrosion operation provided by an embodiment of the present application;
FIG. 7 is a schematic diagram showing image effect contrast by adopting corrosion operation treatment according to an embodiment of the present application;
FIG. 8 is a schematic view of an image after performing a corrosion operation according to an embodiment of the present application;
FIG. 9 is a schematic image of a method for labeling connected domains according to an embodiment of the present application;
FIG. 10 is a schematic diagram of an image processed by a method for marking connected domains according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a lung primary segmentation image obtained by performing an image subtraction operation on a body contour image and a processed CT image according to an embodiment of the present application;
FIG. 12 is a schematic diagram of a bounding box for calculating each connected domain according to a method for automatically locating seed points according to an embodiment of the present application;
FIG. 13 is a schematic diagram of an image obtained after a trachea elimination process by performing an image subtraction operation on a lung primary segmentation image and a lung trachea binarized image according to an embodiment of the present application;
FIG. 14 is a schematic view of locating the adhesion points of the left and right lungs by a projection integration method according to an embodiment of the present application;
FIG. 15 is a schematic diagram of locating a region of interest by projection integration according to an embodiment of the present application;
FIG. 16 is a schematic diagram of etching pixels in a region of interest by morphological operators according to an embodiment of the present application;
FIG. 17 is a flow chart of another method for lung segmentation according to an embodiment of the present application;
FIG. 18 is a schematic illustration of an edge restoration operation by a ball method according to an embodiment of the present application;
FIG. 19 is a flow chart of another method for lung segmentation according to an embodiment of the present application;
FIG. 20 is a schematic illustration of a three-dimensional model including lung regions provided by an embodiment of the present application;
FIG. 21 is a schematic view of a lung segmentation apparatus according to an embodiment of the present application;
fig. 22 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms first, second and the like in the description and in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the described embodiments of the application may be combined with other embodiments.
The lung segmentation method, device, electronic equipment and storage medium of the application are described below with reference to the accompanying drawings. The initial lung segmentation preserves the details of the lung's original morphology to the maximum extent; trachea removal and left-right lung separation are then performed after the initial segmentation, so that an accurate segmentation of the lung region is obtained while those details are retained, which makes it convenient for physicians to examine lung lesions accurately.
Referring to fig. 1, fig. 1 is a flowchart of a lung segmentation method according to an embodiment of the present application. As shown in fig. 1, the lung segmentation method may include the following steps.
101. The electronic device acquires a computed tomography (CT) image, and preprocesses the CT image to obtain a preprocessed CT image.
In an embodiment of the present application, the electronic device may acquire the computed tomography (CT) image in any one of the following ways: 1. the electronic device is communicatively connected to the CT device, and the CT device either actively transmits the captured CT image to the electronic device, or the electronic device sends a CT image acquisition request to the CT device and the CT device, in response, returns the CT image corresponding to the CT image number carried in the request; 2. the CT device uploads the captured CT image to a server, and the electronic device obtains the CT image from the server; 3. the CT image is read from an external storage device (e.g., a portable hard disk).
The CT image of the embodiment of the present application may be a CT image taken for human tissue (e.g., lung) of a human body. The CT image may be a three-dimensional image composed of a plurality of CT slice images.
It should be noted that when a CT scan is taken, the patient needs to hold his or her breath, i.e., briefly stop breathing. The patient's breathing state at the time of the scan can be recorded.
The preprocessing of the CT image by the electronic device may comprise the steps of:
(11) Window width and window level adjustment of the CT image (selecting a target window width and a target window level corresponding to the lung to obtain a CT image at that window width and level);
(12) processing of the region outside the CT field of view (correcting the pixel values outside the CT field of view of the image adjusted in step (11) to zero);
(13) noise reduction (performed on the CT image processed in step (12));
(14) binarization.
In the step (11), a target window width and a target window level corresponding to the lung can be selected, the window width of the CT image is adjusted to be in the target window width, and the window level of the CT image is adjusted to be in the target window level.
Window width and window level adjustment of a CT image may also be referred to as image contrast adjustment. Each human tissue differs in density and structure and therefore has a different CT value. Different window levels and window widths are typically chosen to view normal or diseased tissue of different densities. When a certain tissue or region in the CT image needs to be emphasized, an appropriate window level and width are selected to obtain the best display of lung detail. For example, in chest CT, the window width of the lung window typically ranges from 1500 to 2000 HU, and the window level from -450 to -600 HU.
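As a rough illustration of such a lung-window adjustment, the sketch below maps Hounsfield units to an 8-bit display range. The helper name and the concrete width/level of 1600 HU / -500 HU are assumptions chosen from within the ranges quoted above, not values fixed by the embodiment.

```python
import numpy as np

def apply_lung_window(hu, width=1600.0, level=-500.0):
    """Clip HU values to [level - width/2, level + width/2], rescale to 0..255."""
    lo = level - width / 2.0   # -1300 HU with the assumed parameters
    hi = level + width / 2.0   #   300 HU
    clipped = np.clip(np.asarray(hu, dtype=float), lo, hi)
    return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)
```

Values below the window floor map to black and values above the ceiling to white, which is what makes the low-density lung parenchyma visible against denser tissue.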
Referring to fig. 2, fig. 2 is a schematic diagram of a CT image with window width and window level adjustment according to an embodiment of the present application.
In step (12), the region outside the CT field of view of the CT image is processed, and the pixel value of the region outside the CT field of view of the CT image may be corrected to a zero value. The processing of step (12) can enhance the effect of the binarization processing of step (14).
To obtain an ideal binarized chest CT image, the pixel values outside the circular field of view of the CT machine are corrected to zero. If they are left unprocessed, the pixel data outside the circular field of view would participate in the image's various algorithmic operations and seriously distort the processing results. The region outside the circle is a non-CT-field-of-view region; if retained, it would affect the binarization of the CT image (e.g., adaptive thresholding).
Referring to fig. 3, fig. 3 compares the binarization results with and without step (12). The left diagram of fig. 3 shows the effect when step (14) is performed without step (12): the dominant boundary becomes the one between the region inside and the region outside the CT field of view, so the entire area inside the field of view receives one binarized value (1, the white region in the left diagram) and the area outside it receives the other (0, the black region). The lung region inside the field of view then cannot be distinguished, and an incorrectly binarized image is obtained. The right diagram of fig. 3 shows the result when step (12) is performed before step (14): the lung region inside the CT field of view is much clearer, and a CT image with a better binarization effect is obtained.
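Step (12) can be sketched as follows for a single slice, assuming the field of view is a circle centered in the image; the function name and the default radius are illustrative only, not part of the embodiment.

```python
import numpy as np

def zero_outside_fov(ct_slice, fov_radius=None):
    """Set every pixel outside the circular CT field of view to zero."""
    h, w = ct_slice.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = fov_radius if fov_radius is not None else min(h, w) / 2.0
    yy, xx = np.ogrid[:h, :w]
    inside = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
    out = ct_slice.copy()
    out[~inside] = 0   # corrected to zero so it cannot bias later steps
    return out
```

Zeroing rather than ignoring the exterior keeps the array shape intact, so later operations such as adaptive thresholding run unchanged but no longer see the out-of-field values.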
In step (13), the noise reduction may be curvature flow filtering. Curvature flow filtering preserves more image detail and does not consume excessive time during noise reduction.
As can be seen from fig. 3, CT images are often accompanied by considerable noise and speckle. A noise reduction algorithm based on a variational model, such as an anisotropic diffusion algorithm, must compute higher-order derivatives, so it suffers from high algorithmic complexity and long smoothing times, making it poorly suited to medical image segmentation. The embodiment of the application can instead remove image noise by curvature flow filtering. Curvature flow filtering preserves image edges and detail features while removing noise, and the noise reduction is fast. It can exploit the discrete nature of the image to optimize curvature implicitly, i.e., reduce curvature without explicitly computing it. This avoids second-order differentiation of the result and therefore better protects image edges.
Referring to fig. 4, fig. 4 shows a CT image effect before and after using curvature flow smoothing according to an embodiment of the present application. As shown in fig. 4, the left graph in fig. 4 is a CT image before curvature flow smoothing, and the right graph in fig. 4 is a CT image after curvature flow smoothing. As can be seen from fig. 4, the CT image smoothed with the curvature flow is less noisy and retains more detailed features of the image than the CT image before the curvature flow smoothing.
The smoothing effect of the curvature flow can be adjusted through the number of iterations and the time step of the smoothing. These can be tuned flexibly (for example, to parameter values found to work well in testing). For example, the number of iterations may be set to three, i.e., three consecutive smoothing passes, with the time step set to 1.25.
In step (14), the binarization process may be adaptive threshold binarization process.
Because the gray-level distribution differs from one CT image to another, directly fixing a threshold for binarization produces inconsistent results. The embodiment of the application therefore uses an adaptive threshold method to obtain a well-binarized chest CT image. The adaptive threshold binarization may use the maximum between-class variance method (OTSU), a thresholding method that automatically determines the threshold, proposed by the Japanese scholar Otsu and therefore also called Otsu's method.
Pre-segmenting the CT image with the Otsu method separates the body structure in the CT image from the outside air well.
The Otsu method determines an optimal segmentation threshold by computing the distribution of pixel values, building a gray-level histogram, and dividing the image into a foreground part and a background part. Because the uniformity of the gray distribution is reflected in the variance, the between-class variance of the two parts is computed from the gray distributions of the chosen foreground and background, and its value indicates how different the two parts are. If the classifying gray value is fixed directly, the foreground and background of some images may be classified incorrectly; the variances of the two parts may then be similar, meaning the foreground resembles the background, and an ideal binarization result cannot be obtained. Therefore the threshold that maximizes the between-class variance is the best threshold for binarized segmentation, closest to the ideal segmentation effect.
The basic principle of the Otsu method is as follows:
(1) Assume that the image is binarized at a threshold T, dividing its pixels into a foreground F and a background B.
(2) Let the total number of pixels in the image region be N, the number of foreground pixels be N_f, and the number of background pixels be N_b. Let the image have L gray levels, with N_i pixels at gray level i. The foreground and background gray values then satisfy the probability distributions:
P_f = N_f / N #(4-1)
P_b = N_b / N #(4-2)
where P_f is the gray-value probability of the foreground and P_b is the gray-value probability of the background.
(3) The mean gray values of the foreground and background pixels can be expressed as:
V_f = (1 / N_f) Σ_{i∈F} i · N_i #(4-3)
V_b = (1 / N_b) Σ_{i∈B} i · N_i #(4-4)
where V_f is the mean gray value of the foreground pixels and V_b is the mean gray value of the background pixels.
(4) The mean gray value of all pixels of the image is:
V = P_f × V_f + P_b × V_b #(4-5)
where V is the mean gray value of all pixels.
(5) From the expressions above, the between-class variance of foreground and background is:
σ² = P_f × (V_f − V)² + P_b × (V_b − V)² #(4-6)
where σ² is the between-class variance between foreground and background.
(6) By traversing all pixel values as candidate thresholds (or by other computational methods), the threshold T that maximizes the between-class variance σ² is the optimal threshold for binarizing the image.
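The derivation above translates almost line for line into code. The brute-force sketch below assumes 8-bit gray levels and simply re-scans the pixels for every candidate threshold; a practical implementation would work from the histogram instead, but the logic per equation is the same.

```python
import numpy as np

def otsu_threshold(image):
    """Return the threshold T that maximizes the between-class variance (4-6)."""
    pixels = np.asarray(image).ravel()
    n = pixels.size
    best_t, best_var = 0, -1.0
    for t in range(256):                      # step (6): traverse all gray values
        fg = pixels[pixels > t]
        bg = pixels[pixels <= t]
        if fg.size == 0 or bg.size == 0:      # skip degenerate splits
            continue
        p_f, p_b = fg.size / n, bg.size / n   # (4-1), (4-2)
        v_f, v_b = fg.mean(), bg.mean()       # (4-3), (4-4)
        v = p_f * v_f + p_b * v_b             # (4-5)
        var = p_f * (v_f - v) ** 2 + p_b * (v_b - v) ** 2   # (4-6)
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a bimodal image the returned threshold falls between the two modes, which is exactly the "closest to ideal" split the text describes.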
Referring to fig. 5, fig. 5 shows the binarization effect of the Otsu method according to an embodiment of the present application. As shown in fig. 5, the left diagram of fig. 5 is the image before binarization, and the right diagram of fig. 5 is the image binarized by the Otsu method.
Optionally, the electronic device performs preprocessing on the CT image to obtain a preprocessed CT image, which specifically includes the following steps:
and the electronic equipment performs noise reduction and binarization on the CT image to obtain a preprocessed CT image.
In the embodiment of the application, the noise reduction processing may include a variational-model noise reduction algorithm (such as anisotropic diffusion) or a curvature flow filtering noise reduction algorithm. Curvature flow filtering preserves more image detail and takes less time. The binarization processing may employ adaptive threshold binarization. For example, the adaptive thresholding may include the maximum inter-class variance method (Otsu), which selects the threshold that maximizes the inter-class variance and thus yields a segmentation closest to the ideal.
102, the electronic equipment performs lung primary segmentation on the preprocessed CT image to obtain a lung primary segmentation image.
In the embodiment of the application, in the lung primary segmentation, the CT support (i.e., the CT bed) in the preprocessed CT image can be removed through image morphological operations, and the preprocessed CT image with the CT support removed is processed by retaining the maximum connected domain, so as to obtain a body contour image. Extracting the lung primary segmentation image by retaining the maximum connected domain preserves the original morphology of the lung to the greatest extent, which facilitates accurate examination of lung lesions by the doctor.
The connected domain in the embodiment of the application can be a three-dimensional connected domain.
Alternatively, step 102 may include the steps of:
(21) The electronic equipment performs image morphological operation processing and maximum connected domain retaining processing on the preprocessed CT image to obtain a body contour image;
(22) The electronic device performs an image subtraction operation on the body contour image and the processed CT image to obtain the lung primary segmentation image.
In the embodiment of the application, performing image morphological operations on the preprocessed CT image can eliminate the influence of the CT support (i.e., the CT bed) on the subsequent connected-domain analysis. Image morphological operations introduce mathematical morphology theory into image processing, and can be used to eliminate irrelevant structures or tissues, or to connect similar structures adhered to the region of interest, so as to obtain an ideal image representation. The image morphological operations may include any one of: erosion, dilation, opening, and closing.
In the image subtraction operation of the two binarized images, the following rule may be followed: 1-0=1, 1-1=0, 0-0=0. Wherein the white area in the binarized image represents 1 and the black area represents 0.
The image morphological operation will be described by taking the expansion operation and the erosion operation as examples.
The dilation operation uses a structuring element to continually merge background pixels touching the current region into that region; the result expands the region boundary outward, and it is generally used to connect adjacent regions. Its mathematical expression is:
A ⊕ S = { x | (Ŝ)_x ∩ A ≠ ∅ } = ∪_{s∈S} A_s
wherein A is the original pixel set, S is the dilation factor (also a pixel set), Ŝ is the reflection of S (the structure direction reversed), A_s is A translated by s, and ⊕ denotes the dilation operation, which continually merges pixel values.
Referring to fig. 6a, fig. 6a is a schematic diagram illustrating a dilation operation according to an embodiment of the application. As shown in fig. 6a, the structural-element origin in the dilation factor S marks the origin of this factor's structure, which may be circular, rectangular, or another shape. The dilation factor S in fig. 6a has one non-zero pixel to the right of and one below the structural-element origin. The dilation operation therefore takes the union of three pixel sets: the original pixel set A, the set obtained by translating A one unit to the right (the direction of the non-zero pixel to the right of the origin), and the set obtained by translating A one unit downward (the direction of the non-zero pixel below the origin). This union is the dilated pixel set A ⊕ S. It can be seen that the number of non-zero pixels in the dilated set increases (in fig. 6a, from 7 to 13).
The erosion operation uses a structuring element to continually remove the elements where the current region touches the background; the result contracts the boundary inward, and it is generally used to eliminate small adhered connected domains in an image. Its mathematical expression is:
A ⊖ S = { x | S_x ⊆ A }
wherein A is the original pixel set, S is the erosion factor (also a pixel set), S_x is S translated by x, and ⊖ denotes the erosion operation, which continually removes pixel values.
Referring to fig. 6b, fig. 6b is a schematic diagram illustrating an erosion operation according to an embodiment of the application. As shown in fig. 6b, the structural-element origin in the erosion factor S marks the origin of this factor's structure, which may be circular, rectangular, or another shape. The erosion factor S in fig. 6b has one non-zero pixel to the right of and one below the structural-element origin. The erosion operation therefore takes the intersection of three pixel sets: the original pixel set A, the set obtained by translating A one unit to the left (opposite the non-zero pixel to the right of the origin of S), and the set obtained by translating A one unit upward (opposite the non-zero pixel below the origin of S). This intersection is the eroded pixel set A ⊖ S. It can be seen that the number of non-zero pixels in the eroded set decreases (in fig. 6b, from 7 to 2).
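The shift-union and shift-intersection readings of fig. 6a and fig. 6b can be checked with a small sketch (pure NumPy; the pixel set A below is illustrative and is not the exact shape drawn in the figures):

```python
import numpy as np

def dilate(A, offsets):
    """Dilation as the union of A translated by each structuring-element
    offset: every foreground pixel stamps the offsets around itself."""
    rows, cols = A.shape
    out = np.zeros_like(A)
    for r in range(rows):
        for c in range(cols):
            if A[r, c]:
                for dr, dc in offsets:
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        out[rr, cc] = 1
    return out

def erode(A, offsets):
    """Erosion as the intersection: a pixel survives only if the whole
    translated structuring element lies inside the foreground."""
    rows, cols = A.shape
    out = np.zeros_like(A)
    for r in range(rows):
        for c in range(cols):
            if all(0 <= r + dr < rows and 0 <= c + dc < cols and A[r + dr, c + dc]
                   for dr, dc in offsets):
                out[r, c] = 1
    return out

# structuring element of figs. 6a/6b: origin plus one pixel right and one below
OFFSETS = [(0, 0), (0, 1), (1, 0)]
```

With a 7-pixel L-shaped A, dilation grows the set while erosion shrinks it, mirroring the pixel-count changes described for the figures.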
The image morphological operation in step (21) can be described by taking the erosion operation as an example.
Referring to fig. 7, fig. 7 is a schematic diagram showing the image effect contrast of the erosion operation according to an embodiment of the present application. As shown in fig. 7, the left side of fig. 7 is the image before the erosion operation (i.e., the binarized image on the right side of fig. 5), and the right side of fig. 7 is the image after the erosion operation. As can be seen from fig. 7, in the eroded image, the CT support is no longer connected to the body.
The maximum-connected-domain retention process can separate the body from the air, and may include the following steps. First, invert the binary pixel values of the image on the right side of fig. 7 (first inversion) to obtain the inverted eroded image shown in fig. 8; the body contour and the outside air can then be segmented by labeling the non-zero pixels (white areas) in fig. 8 and retaining the maximum connected domain. Next, apply the connected-domain labeling method to the image shown in fig. 8, obtaining the labeled image shown in fig. 9, in which the black area is the body contour and the white area is air. Then, invert the binary pixel values of the image shown in fig. 9 (second inversion) to obtain the image shown in fig. 10, i.e., the body contour image. Finally, perform an image subtraction operation on the body contour image (the image shown in fig. 10) and the processed CT image to obtain the lung primary segmentation image, as shown in fig. 11; fig. 11 is a schematic diagram of the lung primary segmentation image obtained by this subtraction according to an embodiment of the present application. The lung primary segmentation image shown on the right side of fig. 11 is one image of the CT slice sequence. A CT slice may or may not contain the main trachea; fig. 11 is illustrated with a slice that does not contain the main trachea.
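The "retain the maximum connected domain" step can be sketched with SciPy's component labeling (an illustrative helper rather than the patent's exact implementation):

```python
import numpy as np
from scipy import ndimage

def keep_largest_component(mask):
    """Label the connected domains of a binary mask and keep only the
    largest one, as in the maximum-connected-domain retention step."""
    labeled, num = ndimage.label(mask)
    if num == 0:
        return np.zeros_like(mask)
    sizes = np.bincount(labeled.ravel())
    sizes[0] = 0                          # index 0 is background, ignore it
    return (labeled == sizes.argmax()).astype(mask.dtype)
```

`ndimage.label` also works on 3D arrays, matching the note that the connected domains here may be three-dimensional.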
There are various methods for labeling connected domains; the embodiment of the application may adopt the two-pass scanning (Two-Pass) method. As the name implies, the connected domains in the image are found and labeled by traversing the image twice. During the traversal, the number of pixels in each connected domain and its bounding box (Bounding Box) are calculated and recorded.
The two-pass scanning method is implemented as follows: in the first traversal, a label value L (Label) is recorded for each pixel position. The pixel set of one connected region may be given one or more label values L, and pixels with different label values that belong to the same connected region are merged into the same class. In the second traversal of the image, pixels with equivalent labels are assigned to the same connected region, and the label of the pixel set is updated to the same L value.
The process of the two-pass scanning specifically comprises the following steps:
input: image B, pixel range (x, y);
and (3) outputting: a collection of tagged connected domains (label Set).
The step of scanning the image for the first time comprises:
1. Access the current pixel B(x, y); proceed if the pixel value B(x, y) = 1;
2. If there is no non-zero pixel in the neighborhood of B(x, y), give B(x, y) a new label, namely: label += 1 (label value plus 1), and B(x, y) = label;
3. If the neighborhood of B(x, y) contains non-zero pixels (Neighbors), assign the minimum value among Neighbors to B(x, y), namely: B(x, y) = min{Neighbors};
4. Record the equivalence relations among the label values in Neighbors, i.e., that these label values belong to the same connected region:
labelSet[i] = {Lm, ..., Ln}, where all L in labelSet[i] belong to the same connected region.
The step of scanning the image a second time includes:
1. Traverse the image; for each pixel, find all label values equivalent to label = B(x, y), take the minimum label_min among them, and let B(x, y) = label_min;
2. In the traversal process, pixels with equal label values in the image are connected into the same region according to the label values.
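The two passes above can be sketched as follows (4-connectivity, with a union-find table holding the label equivalences; illustrative code, not the patent's implementation):

```python
import numpy as np

def two_pass_label(img):
    """Two-pass connected-component labeling (4-connectivity). The first
    pass assigns provisional labels and records equivalences in a
    union-find structure; the second pass rewrites each label to the
    minimum of its equivalence class."""
    rows, cols = img.shape
    labels = np.zeros((rows, cols), dtype=int)
    parent = {}

    def find(x):                      # root of x's equivalence class
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):                  # merge two classes, keep the smaller root
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    next_label = 0
    # first pass: provisional labels from the already-visited neighbors
    for r in range(rows):
        for c in range(cols):
            if not img[r, c]:
                continue
            neighbors = []
            if r > 0 and labels[r - 1, c]:
                neighbors.append(labels[r - 1, c])
            if c > 0 and labels[r, c - 1]:
                neighbors.append(labels[r, c - 1])
            if not neighbors:
                next_label += 1       # label += 1, B(x, y) = label
                parent[next_label] = next_label
                labels[r, c] = next_label
            else:
                m = min(neighbors)    # B(x, y) = min{Neighbors}
                labels[r, c] = m
                for n in neighbors:
                    union(m, n)
    # second pass: B(x, y) = label_min of the equivalence class
    for r in range(rows):
        for c in range(cols):
            if labels[r, c]:
                labels[r, c] = find(labels[r, c])
    return labels
```

A U-shaped region that receives two provisional labels in the first pass ends up with a single label after the second.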
The embodiment of the application extracts the lung primary segmentation image by retaining the maximum connected domain, which preserves the original morphology of the lung to the greatest extent and facilitates accurate examination of lung lesions by the doctor.
As can be seen from fig. 11, the initial lung segmentation may be accompanied by adhesion of the lung to surrounding tissue. Therefore, the other tissues need to be removed through connected-domain analysis, comparing the sizes of the connected regions to determine the finally retained regions. The lung connectivity in fig. 11 can be discussed in two categories:
The first category: the two lungs are adhered. In this case, the largest connected domain is the lung region, which can be obtained directly by labeling the largest connected domain. When the two lungs are adhered, the lung region obtained this way may still contain the trachea.
The second category: the two lungs are separated. In this case, the left and right lungs cannot both be retained by directly keeping the largest connected domain, so the number of labeled connected domains may be set to 3. The number of pixels in each connected domain is calculated to determine the left and right lung regions and remove other surrounding tissue: following the principle that the two lungs are similar in volume, regions with fewer than half the pixels of the maximum connected domain are removed, which eliminates the tissue around the lungs and yields the left and right lung regions. In the case of two separated lungs, the trachea may still need to be eliminated.
103, the electronic equipment performs trachea elimination processing on the lung initial segmentation image to obtain a trachea elimination processed image.
In the embodiment of the application, the trachea removal processing obtains a trachea mask by a region-growing method and then subtracts the trachea mask from the lung initial segmentation image, so as to obtain the trachea-removed image. Removing the trachea with the region-growing method allows it to be removed accurately.
Optionally, step 103 may include the steps of:
(31) The electronic device locates seed points of the lung trachea in the lung primary segmentation image, and obtains a binarized image of the lung trachea through a three-dimensional region-growing method;
(32) The electronic device performs an image subtraction operation on the lung primary segmentation image and the binarized image of the lung trachea to obtain the trachea-removed image.
In the embodiment of the application, the seed points of the lung trachea are located by performing connected-domain analysis on the lung region of interest. The lung region of interest may be the upper lung region of the lung primary segmentation image (a three-dimensional image). If there is only one connected region in the region of interest, that region must be the main trachea; if there are two or more connected regions, a bounding box (Bounding Box) must be calculated for each. The connected domains are then compared to see which is closer to the middle of the image: the one closer to the middle is considered the connected domain where the trachea is located, and the other is the lung. The seed point coordinates of the lung trachea are selected from the connected domain where the trachea is located, a region is grown from the selected seed points by the three-dimensional region-growing method, and the binarized image of the lung trachea is obtained. An image subtraction operation is then performed on the lung primary segmentation image and the binarized image of the lung trachea to obtain the trachea-removed image.
In the image subtraction operation of the two binarized images, the following rule may be followed: 1-0=1, 1-1=0, 0-0=0. Wherein the white area in the binarized image represents 1 and the black area represents 0.
A two-dimensional slice of the lung region of interest may be processed: slicing the three-dimensional image of the region of interest into a two-dimensional sequence for analysis allows the tracheal region to be located more accurately. To reduce the running time of the algorithm, the embodiment of the application slices only the region of interest. Considering the positional and structural characteristics of the main trachea, the slice region of interest is fixed in the upper lung region, and the slice direction is the head-foot direction of the patient. The upper lung region may be the region occupied by the upper half of the lung primary segmentation image (a three-dimensional image).
The connected-domain analysis can be performed by a method of automatically locating seed points. Specifically, connected-domain analysis is performed on the lung region of interest (the upper lung region). The analysis distinguishes cases by the number of connected domains: if there is only one connected domain, it is necessarily the main trachea; if there are two or more, a bounding box (Bounding Box) must be calculated for each connected domain. From the extent of each Bounding Box on the X axis, the connected domains can be compared to find which is closest to the middle of the image; that one is considered the connected domain where the trachea is located, and the other is the connected domain where the lung is located, because according to anatomical knowledge the centrally located connected domain is the cross-section of the trachea. Once the Bounding Box of the trachea is confirmed, the seed point coordinates can be determined by calculating the centroid of the connected domain or the midpoint of its bounding box. For example, at least two (e.g., 3 to 5) slices are selected from the upper slice region of the lung, and the seed point coordinates obtained from these slices are used as the seed points for region growing. Referring to fig. 12, fig. 12 is a schematic diagram of calculating the bounding box of each connected domain in the method of automatically locating seed points according to an embodiment of the present application. As shown in fig. 12, the left diagram of fig. 12 has only 1 connected domain, which is the connected domain where the trachea is located; the right diagram of fig. 12 has 3 connected domains, and the connected domain near the middle of the 3 (the smallest one in the figure) is the connected domain where the trachea is located.
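The automatic seed localization on one 2D slice, choosing the component whose bounding-box centre lies closest to the image centre on the X axis and returning its centroid as the seed, might be sketched as (SciPy helpers; the function name is illustrative):

```python
import numpy as np
from scipy import ndimage

def locate_trachea_seed(slice_img):
    """Pick the connected component whose bounding-box centre on the X axis
    is closest to the image centre (the trachea, per the anatomical
    argument above) and return its centroid as the seed point."""
    labeled, num = ndimage.label(slice_img)
    if num == 0:
        return None
    cx = slice_img.shape[1] / 2.0
    best, best_dist = None, None
    for i, sl in enumerate(ndimage.find_objects(labeled), start=1):
        box_cx = (sl[1].start + sl[1].stop) / 2.0   # bounding-box centre on X
        d = abs(box_cx - cx)
        if best is None or d < best_dist:
            best, best_dist = i, d
    return ndimage.center_of_mass(slice_img, labeled, best)
```

Running this over 3 to 5 upper-lung slices, as the text suggests, yields the seed coordinates for the region growing.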
The region generation may be three-dimensional region growing. The basic idea of three-dimensional region growing is to first set a seed region or seed point, and then merge surrounding small regions with the same or similar properties into the current region, which grows larger through continual merging. The implementation steps are: first determine the seed point positions (determined as above); one or more seed points may be selected. The image is then traversed starting from the seed points; a growth rule and a stop condition must be specified for the traversal. Three-dimensional region growing generally uses a 6-neighborhood or 26-neighborhood: when a voxel around the seed points meets the growth condition, it is merged into the region and becomes a new seed. The region grows with each traversal until the stop condition is met and the region no longer expands. The stop condition may include: growth stops when the voxels within the 6-neighborhood or 26-neighborhood no longer meet the growth threshold.
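A minimal 6-neighborhood 3D region-growing sketch; the growth condition is assumed here to be an intensity interval [low, high] (e.g. air-like HU values for the trachea):

```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seeds, low, high):
    """3D region growing from seed voxels with 6-connectivity: a voxel
    joins the region when its intensity lies within [low, high]; growth
    stops when no neighbor meets the threshold any more."""
    mask = np.zeros(volume.shape, dtype=np.uint8)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    q = deque(seeds)
    for s in seeds:
        mask[s] = 1
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                    and low <= volume[nz, ny, nx] <= high):
                mask[nz, ny, nx] = 1          # merge the voxel into the region
                q.append((nz, ny, nx))
    return mask
```

Subtracting the returned mask from the lung primary segmentation image then implements step (32).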
The aim of removing the trachea can be achieved by subtracting the coarsely segmented trachea (binarized image of the lung trachea) from the lung parenchymal image with the trachea (the lung primary segmentation image). Referring to fig. 13, fig. 13 is a schematic diagram of an image obtained by performing an image subtraction operation on a lung primary segmentation image and a lung tracheal binarization image according to an embodiment of the present application. The lung primary segmentation image shown on the left side of fig. 13 is an image of one of the CT slice sequences. The image of the CT slice sequence may be a slice sequence including the main trachea or a slice sequence not including the main trachea. The image of the CT slice sequence of fig. 13 is illustrated by way of example with a slice sequence that includes the main trachea (white approximately circular region in the lung primary segmentation image as shown on the left side of fig. 13).
104, under the condition that left and right lung adhesion exists in the image after the trachea removing treatment, the electronic equipment performs left and right lung separation treatment on the image after the trachea removing treatment to obtain a lung image after left and right lung separation.
Optionally, in step 104, the electronic device performs a left-right lung separation process on the image after the trachea removing process to obtain a lung image after the left lung and the right lung are separated, and may include the following steps:
the electronic equipment positions the adhesion areas of the left and right lungs through a projection integration method, and performs left and right lung separation treatment through corroding the adhesion areas of the left and right lungs to obtain a lung image after the left and right lungs are separated.
In the embodiment of the application, for CT sequences in which left and right lung adhesion occurs, the left and right lung regions need to be separated. Since processing the whole CT sequence has a high time complexity, the method in the embodiment of the application is: first determine, for each CT image in the trachea-removed images, the region of interest to be processed; then locate the region where the two lungs adhere by the projection integration method; locate the adhesion point by the curvature change of the lung contour within that region; and then separate the left and right lungs from the adhesion point.
In the embodiment of the present application, when left and right lung adhesion exists in an image after the trachea removing process, the left and right lung separation process for the image after the trachea removing process may specifically include the following steps:
(41) Projection integration is performed on the positive half of the Y axis; the integration accumulates the number of lung-region pixels along the positive Y half-axis. The projection integral on the x axis can be calculated by the following formula:
θ(x) = ∫_S f(x, y) dy;
The column with the minimum integral value θ(x) is determined, and the coordinate point P(x0, y0) with the maximum y value in that column is obtained; point P is the adhesion point of the left and right lungs. Referring to fig. 14, fig. 14 is a schematic diagram illustrating locating the adhesion point of the two lungs by projection integration according to an embodiment of the present application. As shown in fig. 14, P(x0, y0) is the point with the largest y value in the column where the integral value θ(x) is minimal.
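Step (41) can be sketched on a binary lung mask, with image rows playing the role of y and columns of x (illustrative code):

```python
import numpy as np

def locate_adhesion_point(lung_mask):
    """theta(x) = integral of f(x, y) over y: count lung pixels per column,
    take the column with the minimal non-zero integral (the narrowest part
    of the lung region), and return the point with the largest y in that
    column, i.e. the adhesion point P(x0, y0)."""
    theta = lung_mask.sum(axis=0)                 # projection onto the x axis
    cols = np.where(theta > 0)[0]
    x0 = cols[np.argmin(theta[cols])]             # narrowest lung column
    y0 = np.where(lung_mask[:, x0] > 0)[0].max()  # largest y in that column
    return x0, y0
```

On a mask of two blocks joined by a thin bridge, the minimum of θ(x) falls on the bridge column.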
(42) With point P as the center, a square bounding box with side length of L pixels is set, defined as B(P, L), and the region corresponding to the square B is taken as the region of interest. The pixel range of the lung on the Z axis can be determined from the lung contour edge, and L can be set to Zmax − Zmin. Referring to fig. 15, fig. 15 is a schematic diagram illustrating locating the region of interest by the projection integration method according to an embodiment of the application. As shown in fig. 15, the rectangular frame in fig. 15 is the region of interest.
(43) After the region of interest is obtained, the pixels within it are eroded with a morphological operator to separate the left and right lungs. The radius of the morphological erosion operator is difficult to determine, and a wrong size introduces errors into the final segmentation. The embodiment of the application provides a method for determining the morphological operator, with the following specific steps:
(431) Set an empirical value r0 for the morphological erosion operator; r0 is determined by counting the non-zero pixels on the positive half of the y axis at point P, and half of that count can be taken as r0.
(432) Generate a small sphere structure with r0 as the radius.
(433) Erode the non-zero pixels in the region of interest with the small sphere structure.
(434) Perform connected-domain analysis on the image in the region after the erosion. If only one connected domain exists, the value of r0 is too small to separate the left and right lungs; in this case, increase r0 by a step of 1 and return to step (432).
(435) If two connected domains exist after the erosion, the left and right lungs have been separated successfully.
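Steps (431) to (435) can be sketched as an iterative search over the erosion radius (SciPy-based and illustrative; here r0 simply starts at 1 instead of being derived from the pixel count at point P):

```python
import numpy as np
from scipy import ndimage

def separate_lungs(roi, r0=1, max_r=10):
    """Erode the ROI with a disk/ball of growing radius until the mask
    splits into two connected domains (steps (431)-(435)); return the
    eroded mask and the radius used, or (None, None) if max_r is exceeded."""
    r = r0
    while r <= max_r:
        yy, xx = np.ogrid[-r:r + 1, -r:r + 1]
        disk = (xx * xx + yy * yy) <= r * r       # ball-shaped structuring element
        eroded = ndimage.binary_erosion(roi, structure=disk)
        _, num = ndimage.label(eroded)
        if num >= 2:                              # two lungs separated
            return eroded, r
        r += 1                                    # r0 too small: enlarge by 1
    return None, None
```

On a 3D ROI the same loop applies with a ball built from a 3D `np.ogrid`.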
Referring to fig. 16, fig. 16 is a schematic diagram illustrating erosion of the pixels in the region of interest by the morphological operator according to an embodiment of the present application. As shown in fig. 16, the rectangular frame is the region of interest. During execution of steps (431) to (435): the left image of fig. 16 is the region of interest before erosion (1 connected domain exists); the middle image is the region of interest during erosion (still 1 connected domain, but the eroded area within it, i.e., the black area, grows); and the right image is the region of interest after erosion (2 connected domains exist).
According to the embodiment of the application, the left and right lungs can be separated by the projection integration method; separating them allows the volume of each lung to be calculated, the adhesion region of the left and right lungs can be located rapidly, and because only a small region is eroded, the morphology of the rest of the lungs is not damaged.
In the embodiment of the application, the details of the original form of the lung can be reserved to the maximum extent through the initial lung segmentation, the trachea rejection treatment and the left and right lung separation treatment are carried out after the initial lung segmentation, and the accurate segmentation of the lung region is obtained under the condition of reserving the details of the original form of the lung to the maximum extent, so that a doctor can conveniently and accurately conduct the examination of the focus of the lung.
Referring to fig. 17, fig. 17 is a flowchart of another lung segmentation method according to an embodiment of the present application. As shown in fig. 17, the lung segmentation method may include the following steps.
1701, the electronic device acquires an electronic computed tomography CT image, and performs preprocessing on the CT image to obtain a preprocessed CT image.
1702, the electronic device performs lung primary segmentation on the preprocessed CT image to obtain a lung primary segmentation image.
1703, the electronic equipment performs trachea elimination processing on the lung initial segmentation image to obtain a trachea elimination processed image.
1704, under the condition that left and right lung adhesion exists in the image after the trachea removing treatment, the electronic equipment performs left and right lung separation treatment on the image after the trachea removing treatment, and a lung image after left and right lung separation is obtained.
The specific implementation of steps 1701 to 1704 may refer to steps 101 to 104, which are not described herein.
1705, the electronic device performs edge repair operation on the left lung area and the right lung area in the lung image respectively, so as to obtain a repaired lung image. The left lung area and the right lung area in the lung image are subjected to edge repair operation respectively, so that the corroded edges in the separation process of the two lungs can be smoothed, the lung repair effect is improved, and the repaired lung image is closer to the real lung.
In the embodiment of the present application, after the morphological processing in step 1704, the lung contour may be uneven and needs to be repaired to obtain a smooth contour. The lung edge repair of the embodiment of the application may use a rolling ball method. In the rolling ball method, a spherical mask rolls along the lung edge in a given direction; when an edge defect or a large concavity/convexity is encountered, the ball is merged with the lung edge to fill the defect, and when the ball has rolled one full circle around the lung contour, the repair is finished. Repairing the lung edge by the rolling ball method fills the concave parts of the eroded lung contour, improves the repair effect, and yields a repaired lung image closer to the real lung.
The specific steps of the rolling ball method can be described as:
Input: the set of lung contour points (n1, n2, ..., nn) and the Radius of the rolling ball;
Output: the repaired set of lung contour points (n1, n2, ..., nm).
Start:
1. Calculate the normal vector at n1, and calculate the center coordinates of the rolling ball B from Radius;
2. If there are multiple intersection points (p1, p2, ...) between the ball B and the lung contour, proceed to the next step for repair; otherwise, return to the previous step and process the next contour point ni+1;
3. Select the point pmin among (p1, p2, ..., pn) with the shortest distance to ni, and delete the points between ni+1 and pmin from the contour point set, connecting ni and pmin with a straight line;
4. When ni = nn, the lung contour repair is complete; otherwise, return to step 1 and iterate on the next contour point.
Referring to fig. 18, fig. 18 is a schematic diagram of the edge repair operation by the rolling ball method according to an embodiment of the present application. As shown in fig. 18, the white circle is the rolling ball; it rolls along the lung contour edge in a given direction, and when it encounters an edge defect or a large concavity/convexity, it merges with the lung edge to fill the defect. When the ball has rolled one circle around the lung contour, the repair is finished.
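The gap-filling effect of the rolling ball is closely related to morphological closing with a ball- or disk-shaped structuring element; as a rough, illustrative approximation of the repair (not the patent's exact rolling-ball procedure):

```python
import numpy as np
from scipy import ndimage

def repair_contour(mask, size=3):
    """Approximate the rolling-ball edge repair by a morphological closing:
    dilation followed by erosion fills concavities narrower than the
    structuring element while leaving the rest of the shape unchanged."""
    structure = np.ones((size, size), dtype=bool)   # stand-in for the ball mask
    return ndimage.binary_closing(mask, structure=structure).astype(mask.dtype)
```

A larger `size` fills deeper notches, playing the role of the ball Radius in the pseudocode above.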
Referring to fig. 19, fig. 19 is a flowchart of another lung segmentation method according to an embodiment of the present application. As shown in fig. 19, the lung segmentation method may include the following steps.
1901, the electronic device acquires an electronic computed tomography CT image, and performs preprocessing on the CT image to obtain a preprocessed CT image.
1902, the electronic device performs lung primary segmentation on the preprocessed CT image to obtain a lung primary segmentation image.
1903, the electronic device performs trachea elimination processing on the lung primary segmentation image to obtain a trachea-eliminated image.
1904, in the case that left and right lung adhesion exists in the trachea-eliminated image, the electronic device performs left and right lung separation processing on the image to obtain a lung image in which the left and right lungs are separated.
1905, the electronic device performs edge repair operation on the left lung area and the right lung area in the lung image respectively, so as to obtain a repaired lung image.
1906, the electronic device performs three-dimensional reconstruction on the repaired lung image to obtain a three-dimensional model containing the lung region.
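The projection-integration idea used in step 1904 can be illustrated with a minimal 2D sketch: summing a binary lung mask column by column yields a profile whose minimum near the image center marks the thin junction where the left and right lungs adhere. The central-third search window and the single-minimum rule below are illustrative assumptions, not the exact procedure of the embodiment.

```python
def find_adhesion_column(mask):
    # mask: 2D binary lung mask as nested lists of 0/1 (rows x cols).
    # Column-wise projection integral of the mask.
    cols = len(mask[0])
    profile = [sum(row[c] for row in mask) for c in range(cols)]
    # The left/right junction is taken as the minimum of the profile
    # within the central third of the image (assumed search window).
    lo, hi = cols // 3, 2 * cols // 3
    return min(range(lo, hi), key=profile.__getitem__)
```

The returned column index is where the adhesion region would subsequently be eroded to separate the two lungs.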
In the embodiment of the application, the three-dimensional reconstruction may include surface rendering reconstruction, through which a three-dimensional model with an accurate image contour can be obtained. The process of three-dimensional reconstruction may include the following steps: read a two-dimensional medical image sequence (such as a CT slice sequence) into memory to obtain a layered three-dimensional data field; scan the data field layer by layer, taking the cube formed by eight vertices on two adjacent layers as the current voxel; compare the scalar value at each of the eight corner points of the current voxel with the isosurface value to build the voxel's index, discarding voxels whose index is all 0s or all 1s (voxels not intersected by the isosurface); compute the coordinates of each vertex of the triangular patches by linear interpolation; compute the normal vector at each corner of the voxel by the central difference method, and then obtain the normal at each triangle vertex again by linear interpolation; and send the coordinates of each triangle vertex, together with the corresponding normal vector, to the surface rendering stage.
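The index-and-interpolate steps described above follow the classic Marching Cubes pattern. As a compact sketch, the 2D analogue (one square cell with four corners instead of a cube's eight) shows the two core operations: building the corner index that is discarded when all 0s or all 1s, and linearly interpolating the isosurface crossing along an edge. The function names are illustrative.

```python
def cell_index(corners, iso):
    # 4-bit index: bit k is set if corner k lies on or above the
    # isosurface value. Index 0 (all 0s) or 15 (all 1s) means the
    # cell is not intersected by the isosurface and is discarded.
    idx = 0
    for k, v in enumerate(corners):
        if v >= iso:
            idx |= 1 << k
    return idx


def edge_crossing(p1, p2, v1, v2, iso):
    # Linear interpolation of the isosurface crossing on the edge
    # from p1 (scalar value v1) to p2 (scalar value v2).
    t = (iso - v1) / (v2 - v1)
    return (p1[0] + t * (p2[0] - p1[0]), p1[1] + t * (p2[1] - p1[1]))
```

In the 3D case, the 8-bit voxel index selects a triangle configuration from a lookup table, and the same interpolation gives each triangle vertex.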
In the embodiment of the application, after lung segmentation is completed, a three-dimensional model containing the lung region can be obtained through three-dimensional reconstruction. The doctor can then observe the patient's lung region from a three-dimensional view without paging back and forth through the two-dimensional CT sequence, which can improve diagnostic efficiency. Referring to fig. 20, fig. 20 is a schematic diagram of a three-dimensional model including a lung region according to an embodiment of the present application.
The implementation of steps 1901 to 1904 may refer to steps 101 to 104, and the implementation of step 1905 may refer to step 1705; details are not repeated here.
A specific flow diagram of a lung segmentation method is provided below.
1) First, CT images of a patient are acquired, as shown in FIG. 2;
2) Remove the region outside the field of view of the CT image, then perform noise reduction on the CT image, as shown in FIG. 4;
3) Adaptively binarize the noise-reduced CT image, as shown in FIG. 5;
4) Remove the CT bed and extract the body contour using connected domains, as shown in FIG. 10;
5) Obtain a binarized lung image from the difference between the body contour image and the binarized CT image, as shown in FIG. 11;
6) Locate the seed point of the lung trachea using the lung contour image, obtain a binarized image of the lung trachea through three-dimensional region growing, as shown in FIG. 12, and then subtract the trachea binarized image from the lung binarized image to remove the trachea, as shown in FIG. 13;
7) After removing the trachea, locate the adhesion region between the two lungs by the projection integration method, as shown in FIG. 14, and erode this region to separate the two lungs, as shown in FIGS. 15 and 16;
8) The lung contour may be uneven after the two lungs are separated, so repair the contour by the rolling ball method, as shown in FIG. 18;
9) Perform three-dimensional reconstruction on the segmented image to obtain the final three-dimensional model, as shown in FIG. 20.
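Step 6) above relies on three-dimensional region growing from a trachea seed point. A minimal pure-Python sketch of 6-connected region growing over a z-y-x intensity volume might look as follows; the HU-style [low, high] acceptance interval is an illustrative assumption (air in the trachea is strongly negative in Hounsfield units).

```python
from collections import deque


def region_grow_3d(vol, seed, low, high):
    # vol: z-y-x nested lists of CT intensities; grow a 6-connected
    # region from `seed` over voxels whose value lies in [low, high].
    nz, ny, nx = len(vol), len(vol[0]), len(vol[0][0])
    region = set()
    q = deque([seed])
    while q:
        z, y, x = q.popleft()
        if (z, y, x) in region:
            continue
        if not (0 <= z < nz and 0 <= y < ny and 0 <= x < nx):
            continue
        if not (low <= vol[z][y][x] <= high):
            continue
        region.add((z, y, x))
        # Enqueue the six face-adjacent neighbours.
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            q.append((z + dz, y + dy, x + dx))
    return region
```

The resulting voxel set can be rasterized into a trachea binarized image and subtracted from the lung binarized image, as in step 6).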
The above description of the solution of the embodiment of the present application is presented in terms of the implementation of the procedure from the method side. It will be appreciated that the electronic device, in order to achieve the above-described functions, includes corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the application can divide the functional units of the electronic device according to the method example, for example, each functional unit can be divided corresponding to each function, and two or more functions can be integrated in one processing unit. The integrated units may be implemented in hardware or in software functional units. It should be noted that, in the embodiment of the present application, the division of the units is schematic, which is merely a logic function division, and other division manners may be implemented in actual practice.
Referring to fig. 21, fig. 21 is a schematic structural diagram of a lung segmentation apparatus 2100 according to an embodiment of the present application, where the lung segmentation apparatus 2100 includes:
an acquisition unit 2101 for acquiring an electronic computed tomography CT image;
a segmentation processing unit 2102, configured to perform preprocessing on the CT image, to obtain a preprocessed CT image;
the segmentation processing unit 2102 is further configured to perform lung primary segmentation on the preprocessed CT image to obtain a lung primary segmentation image;
the segmentation processing unit 2102 is further configured to perform a tracheal rejection process on the lung primary segmentation image, so as to obtain a tracheal rejection processed image;
The segmentation processing unit 2102 is further configured to, in the case that left and right lung adhesion exists in the image after the trachea elimination processing, perform left and right lung separation processing on the image to obtain a lung image in which the left lung and the right lung are separated.
Optionally, the segmentation processing unit 2102 performs preprocessing on the CT image to obtain a preprocessed CT image, including: and carrying out noise reduction and binarization on the CT image to obtain a preprocessed CT image.
Optionally, the segmentation processing unit 2102 performs lung primary segmentation on the preprocessed CT image to obtain a lung primary segmented image, including: performing image morphology operation processing and maximum connected domain retaining processing on the preprocessed CT image to obtain a body contour image; and performing image subtraction operation on the body contour image and the preprocessed CT image to obtain a lung primary segmentation image.
Optionally, the segmentation processing unit 2102 performs a tracheal culling process on the lung primary segmentation image to obtain a tracheal culling processed image, including: and positioning seed points of a lung trachea of the lung initial segmentation image, acquiring a binarization image of the lung trachea by a three-dimensional region growing method, and performing image subtraction operation on the binarization image of the lung trachea and the processed CT image to obtain an image after trachea rejection processing.
Optionally, the segmentation processing unit 2102 performs left and right lung separation processing on the image after the trachea elimination processing to obtain a lung image in which the left and right lungs are separated, including: locating the adhesion region of the left and right lungs by a projection integration method, and performing left and right lung separation processing by eroding the adhesion region, so as to obtain a lung image in which the left and right lungs are separated.
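The separation step, eroding the located adhesion region so that the thin junction between the two lungs disappears, can be sketched with one pass of binary erosion using a 4-neighbour (cross-shaped) structuring element. This is a minimal illustration; the embodiment's actual structuring element and number of erosion iterations are not specified here.

```python
def erode(mask):
    # One pass of binary erosion with a cross (4-neighbour) structuring
    # element: a pixel survives only if it and its four neighbours are set.
    rows, cols = len(mask), len(mask[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if (mask[r][c] and mask[r - 1][c] and mask[r + 1][c]
                    and mask[r][c - 1] and mask[r][c + 1]):
                out[r][c] = 1
    return out
```

A one-pixel-wide bridge between two lung blobs cannot satisfy the neighbour condition, so it vanishes while the interiors of both lungs survive.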
Optionally, the pulmonary segmentation apparatus 2100 may further include a repair unit 2103;
the repairing unit 2103 is configured to perform edge repairing operations on the left lung region and the right lung region in the lung image, respectively, to obtain a repaired lung image.
Optionally, the pulmonary segmentation apparatus 2100 may further include a three-dimensional reconstruction unit 2104;
the three-dimensional reconstruction unit 2104 is configured to perform three-dimensional reconstruction on the repaired lung image, so as to obtain a three-dimensional model including a lung region.
Among them, the segmentation processing unit 2102, the repair unit 2103, and the three-dimensional reconstruction unit 2104 in the embodiment of the present application may be processors in an electronic apparatus. The acquisition unit 2101 may be a communication module in an electronic device.
In the embodiment of the application, the details of the original lung morphology can be preserved to the maximum extent through the lung primary segmentation; trachea elimination processing and left and right lung separation processing are then performed, so that an accurate segmentation of the lung region is obtained while the details of the original lung morphology are preserved, which makes it convenient for a doctor to examine lung lesions accurately.
Referring to fig. 22, fig. 22 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 22, the electronic device 2200 includes a processor 2201 and a memory 2202, where the processor 2201 and the memory 2202 may be connected to each other through a communication bus 2203. The communication bus 2203 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 22, but this does not mean that there is only one bus or one type of bus. The memory 2202 is used to store a computer program comprising program instructions, and the processor 2201 is configured to invoke the program instructions; the program includes instructions for performing part or all of the steps of the methods of figs. 1-19.
The processor 2201 may be a general purpose Central Processing Unit (CPU), microprocessor, application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the above programs.
The memory 2202 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device capable of storing static information and instructions, a random access memory (RAM) or other type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be standalone and coupled to the processor via the bus, or may be integrated with the processor.
The electronic device 2200 may also include a communication module and a display. The communication module may be a wireless communication module (e.g., a WiFi module, a bluetooth module, etc.) or a wired communication module.
The electronic device 2200 may further include general-purpose components such as a communication interface (e.g., a USB interface, a microphone interface, etc.), an antenna, etc., which are not described in detail herein.
In the embodiment of the application, the details of the original lung morphology can be preserved to the maximum extent through the lung primary segmentation; trachea elimination processing and left and right lung separation processing are then performed, so that an accurate segmentation of the lung region is obtained while the details of the original lung morphology are preserved, which makes it convenient for a doctor to examine lung lesions accurately.
The embodiment of the present application also provides a computer-readable storage medium storing a computer program for electronic data exchange, the computer program causing a computer to execute part or all of the steps of any one of the lung segmentation methods described in the above method embodiments.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, such as the division of the units, merely a logical function division, and there may be additional manners of dividing the actual implementation, such as multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, or may be in electrical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units described above may be implemented either in hardware or in software program modules.
The integrated units, if implemented in the form of software program modules, may be stored in a computer-readable memory for sale or use as a stand-alone product. Based on this understanding, the technical solution of the present application may be embodied essentially or partly in the form of a software product, or all or part of the technical solution, which is stored in a memory, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned memory includes: a U-disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the various methods of the above embodiments may be implemented by a program that instructs associated hardware, and the program may be stored in a computer readable memory, which may include: flash disk, read-only memory, random access memory, magnetic or optical disk, etc.
The foregoing describes the embodiments of the present application in detail. Specific examples are used herein to explain the principles and implementations of the application, and the above description of the embodiments is only intended to help understand the method and core idea of the application. Meanwhile, those skilled in the art may make changes to the specific implementations and the application scope according to the ideas of the application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A method of lung segmentation, the method being applied to an electronic device, the method comprising:
acquiring an electronic computed tomography CT image, and preprocessing the CT image to obtain a preprocessed CT image;
performing lung primary segmentation on the preprocessed CT image to obtain a lung primary segmentation image;
Performing trachea elimination treatment on the lung primary segmentation image to obtain a trachea elimination treated image;
and under the condition that left and right lung adhesion exists in the image after the trachea removing treatment, carrying out left and right lung separation treatment on the image after the trachea removing treatment to obtain a lung image after left and right lung separation.
2. The method of claim 1, wherein preprocessing the CT image to obtain a preprocessed CT image comprises:
and carrying out noise reduction and binarization on the CT image to obtain a preprocessed CT image.
3. The method of claim 1, wherein the performing the preliminary lung segmentation on the preprocessed CT image to obtain a preliminary lung segmentation image comprises:
performing image morphology operation processing and maximum connected domain retaining processing on the preprocessed CT image to obtain a body contour image;
and performing image subtraction operation on the body contour image and the preprocessed CT image to obtain a lung primary segmentation image.
4. The method according to claim 1, wherein performing a tracheal culling process on the lung primary segmentation image to obtain a tracheal culled image comprises:
And positioning seed points of a lung trachea of the lung initial segmentation image, acquiring a binarization image of the lung trachea by a three-dimensional region growing method, and performing image subtraction operation on the binarization image of the lung trachea and the processed CT image to obtain an image after trachea rejection processing.
5. The method according to claim 1, wherein the performing left-right lung separation processing on the image after the tracheal culling processing to obtain a lung image after left lung and right lung separation includes:
and positioning the adhesion region of the left and right lungs by a projection integration method, and performing left and right lung separation processing by eroding the adhesion region of the left and right lungs, so as to obtain a lung image after the left and right lungs are separated.
6. The method according to any one of claims 1 to 5, wherein after the left and right lung separation processing is performed on the image after the tracheal culling processing to obtain a lung image after the left and right lung separation processing, the method further comprises:
and performing edge repair operation on the left lung area and the right lung area in the lung image respectively to obtain a repaired lung image.
7. The method of claim 6, wherein the performing an edge repair operation on the left lung region and the right lung region in the lung image, respectively, results in a repaired lung image, and further comprising:
And carrying out three-dimensional reconstruction on the repaired lung image to obtain a three-dimensional model containing the lung region.
8. A lung segmentation apparatus, characterized in that the apparatus is applied to an electronic device; the device comprises:
an acquisition unit for acquiring an electronic computed tomography CT image;
the segmentation processing unit is used for preprocessing the CT image to obtain a preprocessed CT image;
the segmentation processing unit is also used for carrying out lung primary segmentation on the preprocessed CT image to obtain a lung primary segmentation image;
the segmentation processing unit is also used for carrying out trachea elimination processing on the lung primary segmentation image to obtain a trachea elimination processed image;
and the segmentation processing unit is also used for carrying out left and right lung separation processing on the image after the trachea removing processing under the condition that left and right lung adhesion exists in the image after the trachea removing processing, so as to obtain a lung image after left and right lung separation.
9. An electronic device comprising a processor and a memory, the memory for storing a computer program comprising program instructions, the processor being configured to invoke the program instructions to perform the method of any of claims 1-7.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of any of claims 1-7.
CN202210334311.6A 2022-03-31 2022-03-31 Lung segmentation method, device, electronic equipment and storage medium Pending CN116934885A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210334311.6A CN116934885A (en) 2022-03-31 2022-03-31 Lung segmentation method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116934885A true CN116934885A (en) 2023-10-24

Family

ID=88383013

Country Status (1)

Country Link
CN (1) CN116934885A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117576126A (en) * 2024-01-16 2024-02-20 广东欧谱曼迪科技股份有限公司 Optimization method and device for lung lobe segmentation, electronic equipment and storage medium
CN117576126B (en) * 2024-01-16 2024-04-09 广东欧谱曼迪科技股份有限公司 Optimization method and device for lung lobe segmentation, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination