CN111724360B - Lung lobe segmentation method, device and storage medium

Lung lobe segmentation method, device and storage medium

Info

Publication number
CN111724360B
CN111724360B (application CN202010534722.0A)
Authority
CN
China
Prior art keywords
lung
image
images
segmented
optical flow
Prior art date
Legal status
Active
Application number
CN202010534722.0A
Other languages
Chinese (zh)
Other versions
CN111724360A
Inventor
李强
杨英健
刘洋
郭英委
曾吴涛
康雁
Current Assignee
Shenzhen Technology University
Original Assignee
Shenzhen Technology University
Priority date
Filing date
Publication date
Application filed by Shenzhen Technology University
Priority to CN202010534722.0A
Publication of CN111724360A
Application granted
Publication of CN111724360B

Classifications

    • G06T7/00 Image analysis
    • G06T7/0012 Biomedical image inspection
    • G06T7/11 Region-based segmentation
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20221 Image fusion; Image merging
    • G06T2207/30061 Lung

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a lung lobe segmentation method, a device and a storage medium, relating to the field of medical image processing. The lung lobe segmentation method comprises the following steps: acquiring lung images at multiple moments in the respiratory process; determining a lung image to be segmented among the multi-moment lung images, wherein the lung images other than the image to be segmented serve as first lung images; fusing the lung image to be segmented with at least one first lung image to obtain a fused lung image, wherein the at least one first lung image comprises at least one lung image at a moment before the image to be segmented and/or at least one lung image at a moment after the image to be segmented; and segmenting the fused lung image with a preset lung lobe segmentation model to obtain a lung lobe image of the lung image to be segmented. This addresses the poor segmentation results caused by insufficient feature information of the lung (lobe).

Description

Lung lobe segmentation method, device and storage medium
Technical Field
The invention relates to the field of medical image processing, in particular to a lung lobe segmentation method, a lung lobe segmentation device and a storage medium.
Background
The upper end of the lung is rounded and is called the apex of the lung; it protrudes upward through the superior thoracic aperture into the root of the neck. The base of the lung sits above the diaphragm. The surface facing the ribs and intercostal spaces is called the costal surface, and the surface facing the mediastinum is called the medial surface; at the center of the medial surface, the bronchus, blood vessels, lymphatic vessels and nerves enter and exit, forming the hilum of the lung, and these hilar structures, wrapped in connective tissue, form the root of the lung. The left lung is divided by the oblique fissure into an upper lobe and a lower lobe, while the right lung is divided by the horizontal fissure, in addition to the oblique fissure, into an upper lobe, a middle lobe and a lower lobe.
At present, whether lung lobes are segmented with traditional machine learning or with deep learning, classification is performed according to the features of the lung (lobe), so those features are particularly important: if the feature information of a lung (lobe) is sufficient, the classifier can learn and classify better, and thus complete lung lobe segmentation better.
Disclosure of Invention
In view of the above, the present invention provides a lung lobe segmentation method, device and storage medium to solve the current problem of poor segmentation results caused by insufficient feature information of the lung (lobe).
In a first aspect, the present invention provides a lung lobe segmentation method comprising:
acquiring lung images at multiple moments in the respiratory process;
determining a lung image to be segmented among the multi-moment lung images, wherein the lung images other than the image to be segmented serve as first lung images;
fusing the lung image to be segmented with at least one first lung image to obtain a fused lung image, wherein the at least one first lung image comprises at least one lung image at a moment before the image to be segmented and/or at least one lung image at a moment after the image to be segmented;
and segmenting the fused lung image by using a preset lung lobe segmentation model to obtain a lung lobe image of the lung image to be segmented.
Preferably, the method for fusing the lung image to be segmented by using the at least one first lung image to obtain a fused lung image includes:
performing registration operation from the at least one first lung image to the lung image to be segmented to obtain a lung image to be fused;
fusing the lung image to be fused and the lung image to be segmented to obtain a fused lung image;
and/or,
the method for determining the lung image to be segmented at a certain moment in the multi-moment lung image comprises the following steps:
and calculating the lung volume in the lung images at multiple moments, and determining the lung image with the largest lung volume as the lung image to be segmented.
Preferably, the method for fusing the lung image to be fused and the lung image to be segmented to obtain a fused lung image includes:
determining a weight value of the lung image to be fused;
obtaining a weighted lung image according to the weight value and the lung image to be fused;
fusing the weighted lung image and the lung image to be segmented to obtain the fused lung image;
and/or,
the method for determining the lung image to be segmented at a certain moment in the multi-moment lung image further comprises the following steps:
before calculating the lung volumes in the multi-moment lung images, respectively extracting the left lung and the right lung of each multi-moment lung image, respectively calculating a first volume of the left lung and a second volume of the right lung in each multi-moment lung image, and calculating the lung volume in each multi-moment lung image according to the first volume and the second volume.
Preferably, the method for determining the weight value of the lung image to be fused comprises the following steps: determining registration points of the lung image to be fused, and setting the weight value of the registration points to be larger than that of non-registration points, wherein feature points other than the registration points are the non-registration points;
and/or,
the method for fusing the weighted lung image and the lung image to be segmented to obtain the fused lung image comprises the following steps: performing addition processing on the weighted lung image and the lung image to be segmented to obtain the fused lung image;
and/or,
the method for fusing the weighted lung image and the lung image to be segmented to obtain the fused lung image comprises the following steps: and fusing the lung image to be fused and the lung image to be segmented by using a first preset neural network to obtain a fused lung image.
Preferably, the method for performing the registration operation of the at least one first lung image to the lung image to be segmented to obtain the lung image to be fused comprises the following steps:
extracting images from the same positions in the at least one first lung image and the image to be segmented to obtain lung motion sequence images formed by the images extracted from the same positions;
respectively calculating lung displacement of adjacent images in the lung motion sequence images, and executing registration operation from the at least one first lung image to the lung image to be segmented according to the lung displacement;
and/or,
the lung lobe segmentation method further comprises the following steps: the number of preset lung lobe segmentation models is at least 2; the features of the lung lobe segmentation images obtained by each preset lung lobe segmentation model are fused to obtain fusion features, and the fusion features are classified to obtain a final lung lobe image.
Preferably, the method for extracting images from the same positions in the at least one first lung image and the image to be segmented to obtain lung motion sequence images formed by the images extracted from the same positions comprises the following steps:
determining the layer number of the multi-moment lung image;
determining lung images of the at least one first lung image and the image to be segmented at the same position according to the layer number;
obtaining the lung motion sequence image according to the lung images at the same position at multiple moments;
and/or,
the method for respectively calculating the lung displacement of adjacent images in the lung motion sequence image comprises the following steps:
determining first forward optical flows of adjacent images in the lung motion sequence images respectively;
determining lung displacements of the adjacent images from the first forward optical flow, respectively;
and/or,
the method for fusing the features of the lung lobe segmentation images obtained by the preset lung lobe segmentation model to obtain fused features comprises the following steps:
and splicing the lung lobe segmentation images obtained by each preset lung lobe segmentation model to obtain spliced features, and inputting the spliced features into a second preset neural network for a convolution operation to obtain the fusion features.
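As an illustrative sketch of this splicing-and-convolution fusion (not the patent's own implementation; the channel count of 6 for background plus five lobes and the argmax classifier are assumptions):

```python
import torch
import torch.nn as nn

def fuse_model_outputs(seg_outputs: list[torch.Tensor]) -> torch.Tensor:
    """seg_outputs: lung lobe segmentation features from each preset model,
    each of shape (B, C, H, W). Splice in depth, then convolve to fuse."""
    spliced = torch.cat(seg_outputs, dim=1)                    # spliced features
    second_net = nn.Conv2d(spliced.shape[1], 6, kernel_size=3, padding=1)
    fusion_features = second_net(spliced)                      # fusion features
    # Classify per pixel (background + 5 lobes) to get the final lobe image.
    return fusion_features.argmax(dim=1)
```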
Preferably, the method for calculating lung displacement of adjacent images in the lung motion sequence image respectively further comprises:
determining a first reverse optical flow corresponding to the first forward optical flow respectively;
determining lung displacements of the adjacent images from the first forward optical flow and the first reverse optical flow, respectively;
and/or,
the method for respectively calculating the lung displacement of adjacent images in the lung motion sequence image further comprises the following steps: performing optical flow optimization processing on the first forward optical flow and the first reverse optical flow respectively to obtain second forward optical flows corresponding to the first forward optical flows and second reverse optical flows corresponding to the first reverse optical flows; and determining the lung displacement of the adjacent images from the second forward optical flow and the second reverse optical flow, respectively.
Preferably, the method of determining the lung displacement of the adjacent image from the second forward optical flow and the second reverse optical flow, respectively, comprises:
respectively operating on the second forward optical flow and the second reverse optical flow to obtain a corrected optical flow;
and determining the lung displacement of the adjacent images according to the corrected optical flow respectively.
Preferably, the method for performing optical flow optimization processing on the first forward optical flows and the first reverse optical flows to obtain a second forward optical flow corresponding to each first forward optical flow and a second reverse optical flow corresponding to each first reverse optical flow includes:
connecting each first forward optical flow to obtain a first connection optical flow, and connecting each first reverse optical flow to obtain a second connection optical flow;
performing optical flow optimization processing on the first connection optical flow and the second connection optical flow N times respectively, to obtain a first optimized optical flow corresponding to the first connection optical flow and a second optimized optical flow corresponding to the second connection optical flow;
obtaining the second forward optical flow corresponding to each first forward optical flow from the first optimized optical flow, and the second reverse optical flow corresponding to each first reverse optical flow from the second optimized optical flow;
Wherein N is a positive integer greater than or equal to 1.
Preferably, the performing optical flow optimization processing on the first connection optical flow and the second connection optical flow N times includes:
performing a first optical flow optimization process on the first connection optical flow and the second connection optical flow to obtain a first optimized sub-optical flow corresponding to the first connection optical flow and a first optimized sub-optical flow corresponding to the second connection optical flow; and
performing the (i+1)-th optical flow optimization processing on the i-th optimized sub-optical flows of the first connection optical flow and the second connection optical flow respectively, to obtain the (i+1)-th optimized sub-optical flow corresponding to the first connection optical flow and the (i+1)-th optimized sub-optical flow corresponding to the second connection optical flow;
wherein i is a positive integer greater than 1 and less than N; through the N-th optimization processing, the obtained N-th optimized sub-optical flow of the first connection optical flow is determined as the first optimized optical flow, and the obtained N-th optimized sub-optical flow of the second connection optical flow is determined as the second optimized optical flow; each optical flow optimization processing includes a residual processing and an upsampling processing.
Preferably, the first forward optical flow of adjacent images in the lung motion sequence image is determined from the forward temporal order of the multi-temporal lung images, and the first reverse optical flow of adjacent images in the lung motion sequence image is determined from the reverse temporal order of the multi-temporal lung images.
In a second aspect, the present invention provides a lobe segmentation device, comprising:
the acquisition unit is used for acquiring lung images at multiple moments in the breathing process;
a determining unit, configured to determine a lung image to be segmented in the lung images at multiple times, where a lung image other than the lung image to be segmented is used as a first lung image;
a fusion unit, configured to fuse the lung image to be segmented with at least one first lung image to obtain a fused lung image, where the at least one first lung image includes at least one lung image at a moment before the image to be segmented and/or at least one lung image at a moment after the image to be segmented;
the segmentation unit is used for segmenting the fused lung image by using a preset lung lobe segmentation model to obtain a lung lobe image of the lung image to be segmented.
In a third aspect, the present invention provides a storage medium storing a program which, when executed by a processor, implements the above method, comprising:
acquiring lung images at multiple moments in the respiratory process;
determining a lung image to be segmented among the multi-moment lung images, wherein the lung images other than the image to be segmented serve as first lung images;
fusing the lung image to be segmented with at least one first lung image to obtain a fused lung image, wherein the at least one first lung image comprises at least one lung image at a moment before the image to be segmented and/or at least one lung image at a moment after the image to be segmented;
and segmenting the fused lung image by using a preset lung lobe segmentation model to obtain a lung lobe image of the lung image to be segmented.
The invention has at least the following beneficial effects:
the invention provides a lung lobe segmentation method, a device and a storage medium, which are used for solving the problem of poor segmentation effect caused by insufficient characteristic information of the lung (lobe) at present.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following description of embodiments of the present invention with reference to the accompanying drawings, in which:
fig. 1 is a flow chart of a lobe segmentation method according to an embodiment of the present invention.
Detailed Description
The present invention is described below based on examples, but it should be noted that the present invention is not limited to these examples. In the following detailed description of the present invention, certain specific details are set forth. However, those skilled in the art can fully understand the parts that are not described in detail.
Furthermore, those of ordinary skill in the art will appreciate that the drawings are provided solely for the purposes of illustrating the objects, features, and advantages of the invention and that the drawings are not necessarily drawn to scale.
Meanwhile, unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise", "comprising" and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is, in the sense of "including but not limited to".
The main execution body of the lung lobe segmentation method provided by the embodiments of the present disclosure may be any image processing apparatus, for example, the lung lobe segmentation method may be executed by a terminal device or a server, where the terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, or the like. The server may be a local server or a cloud server. In some possible implementations, the lung lobe segmentation method may be implemented by way of a processor invoking computer readable instructions stored in a memory.
Fig. 1 is a flow chart of a lobe segmentation method according to an embodiment of the present invention. As shown in fig. 1, the lobe segmentation method includes: step 101: acquiring lung images at multiple moments in the respiratory process; step 102: determining a lung image to be segmented among the multi-moment lung images, wherein the lung images other than the image to be segmented serve as first lung images; step 103: fusing the lung image to be segmented with at least one first lung image to obtain a fused lung image, wherein the at least one first lung image comprises at least one lung image at a moment before the image to be segmented and/or at least one lung image at a moment after the image to be segmented; step 104: segmenting the fused lung image with a preset lung lobe segmentation model to obtain a lung lobe image of the lung image to be segmented. This addresses the poor segmentation results caused by insufficient feature information of the lung (lobe).
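For orientation only, the four steps can be sketched as follows; this is a minimal illustration under simplifying assumptions (registration stubbed out, equal fusion weights, segmentation model omitted), not the patent's implementation:

```python
import numpy as np

def choose_target(lung_masks: list[np.ndarray]) -> int:
    # Step 102: the image with the largest lung volume is the one to segment.
    return int(np.argmax([mask.sum() for mask in lung_masks]))

def lobe_segmentation_pipeline(images: list[np.ndarray],
                               lung_masks: list[np.ndarray]) -> np.ndarray:
    idx = choose_target(lung_masks)                    # step 102
    target = images[idx].astype(np.float32)
    first_images = [im for i, im in enumerate(images) if i != idx]
    # Step 103: each first lung image would first be registered to the target
    # (stubbed here as identity), then fused with equal weights 1/k.
    k = len(first_images)
    fused = target + sum((1.0 / k) * im.astype(np.float32) for im in first_images)
    # Step 104: a preset lobe segmentation model (omitted here) would map
    # `fused` to a lobe label image.
    return fused
```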
Step 101: lung images are acquired at multiple moments during respiration.
In the embodiment of the disclosure, the lung images acquired at multiple moments in the breathing process may be lung images at multiple moments during inhalation, lung images at multiple moments during exhalation, or lung images at multiple moments across both inhalation and exhalation; the multi-moment lung images are obtained from the same patient at several moments during exhalation and/or inhalation. A moment in the embodiments of the present disclosure may denote a period of time, i.e., the time information at which one set of lung images is acquired. The specific acquisition process can follow the guidance of an imaging physician; for example, at least one set of lung images may be acquired at deep inhalation, at least one set at deep exhalation, and at least one set in a calm state, where the calm state means the set is acquired after a normal exhalation. As another example, over a full inhale-exhale cycle, the patient may hold their breath at different times during the inspiration or expiration phase so that multi-moment lung images can be acquired.
The segmentation result may include location information corresponding to each partition (lobe) identified in the lung image. For example, the lung image may include five lung lobe regions, namely the upper right lobe, middle right lobe, lower right lobe, upper left lobe and lower left lobe, and the obtained segmentation result may include the position information of each of these five lobes in the lung image. The embodiment of the present disclosure may represent the segmentation result by means of a mask feature, that is, the segmentation result may be a feature expressed as a mask; for example, unique mask values such as 1, 2, 3, 4 and 5 may be assigned to the five lung lobe regions respectively, where the region formed by each mask value is the location region of the corresponding lung lobe. These mask values are merely exemplary, and other mask values may be configured in other embodiments.
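As a small illustration of this mask representation (the label values follow the example above; the array contents are made up):

```python
import numpy as np

# 0 = background, 1 = upper right lobe, 2 = middle right lobe,
# 3 = lower right lobe, 4 = upper left lobe, 5 = lower left lobe.
lobe_mask = np.zeros((4, 4), dtype=np.uint8)
lobe_mask[0, :2] = 1        # a few pixels belonging to the upper right lobe
lobe_mask[2:, 2:] = 5       # a few pixels belonging to the lower left lobe

# The location region of a lobe is simply where its mask value occurs.
upper_right_coords = np.argwhere(lobe_mask == 1)
```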
In some possible implementations, the disclosed embodiments may obtain lung images at multiple moments by taking CT (Computed Tomography) images. The specific method comprises: determining the number of scanning layers, the layer thickness and the interlayer distance of the multi-moment lung images; and acquiring the lung images at multiple moments according to the number of scanning layers, the layer thickness and the interlayer distance. The lung image obtained by the embodiment of the disclosure is thus composed of multiple layers of images and can be viewed as a three-dimensional image volume.
In some possible embodiments, multiple sets of lung images may be requested from another electronic device or a server, each set corresponding to one moment, the multiple sets constituting the multi-moment lung images. In addition, in the embodiment of the present disclosure, in order to reduce the influence of other features, once a lung image is obtained, lung parenchyma segmentation may be performed on it to determine the location of the lung region, and the image of that location region is used as the lung image in subsequent processing. Lung parenchyma segmentation may be performed with existing methods, e.g., a deep learning neural network or a lung parenchyma segmentation algorithm, which is not specifically limited in this disclosure.
Step 102: and determining a lung image to be segmented in the multi-moment lung images, wherein the lung image outside the image to be segmented is used as a first lung image.
In some possible embodiments, a lung image at any one time in the lung images at multiple times may be determined as a lung image to be segmented, or input time information may be received, and a lung image corresponding to the time information may be determined as a lung image to be segmented.
Alternatively, in an embodiment of the present disclosure, the method for determining a lung image to be segmented among the lung images at multiple moments, where the lung images other than the lung image to be segmented are used as the first lung images, may also include: respectively calculating the lung volumes of all the lung images among the multi-moment lung images, and determining the lung image with the largest lung volume as the lung image to be segmented.
That is, in the embodiment of the present disclosure, the lung image with the largest lung volume may be determined as the lung image to be segmented, so that the lung lobe feature may be more fully reflected, and the lung lobe segmentation accuracy may be improved.
In an embodiment of the present disclosure, the method for determining a lung image to be segmented among the lung images at multiple moments further includes: before calculating the lung volume in each multi-moment lung image, respectively extracting the left lung and the right lung of the multi-moment lung image, respectively calculating a first volume of the left lung and a second volume of the right lung, and calculating the lung volume in the multi-moment lung image from the first volume and the second volume. Specifically, the lung volume in a multi-moment lung image is the sum of the first volume of the left lung and the second volume of the right lung. The extraction of the left and right lungs may be performed by a lung parenchyma extraction algorithm or a neural network for lung parenchyma segmentation, thereby obtaining the left and right lung regions. The left and right lung volumes may be obtained from the sum of the areas of the left and right lung regions extracted in each layer of the lung image, respectively. Other means may be used by those skilled in the art, and this disclosure is not particularly limited.
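A minimal sketch of this volume computation, assuming binary left/right lung masks per layer are already available; the per-layer area summation follows the text, while scaling by pixel area and layer spacing is an added assumption needed for physical units:

```python
import numpy as np

def lung_volume(left_mask: np.ndarray, right_mask: np.ndarray,
                pixel_area_mm2: float, layer_spacing_mm: float) -> float:
    """left_mask / right_mask: (layers, H, W) binary masks of the lung regions."""
    # First volume: sum of the left-lung areas over all layers.
    v_left = left_mask.sum() * pixel_area_mm2 * layer_spacing_mm
    # Second volume: sum of the right-lung areas over all layers.
    v_right = right_mask.sum() * pixel_area_mm2 * layer_spacing_mm
    # Lung volume = first volume + second volume.
    return float(v_left + v_right)
```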
Step 103: fusing the lung image to be segmented with at least one first lung image to obtain a fused lung image, wherein the at least one first lung image comprises at least one lung image at a moment before the image to be segmented and/or at least one lung image at a moment after the image to be segmented.
In some possible embodiments, at least one group of lung images before the lung image to be segmented and/or at least one group after it can be used to complementarily correct and fuse the features of the image to be segmented, yielding a fused lung image. Alternatively, all lung images other than the lung image to be segmented may be used as first lung images, so that all the feature information of the lung images extracted during the respiratory process is retained.
In an embodiment of the present disclosure, the method for fusing the lung images to be segmented by using at least one first lung image to obtain a fused lung image includes:
performing registration operation from the first lung image to the lung image to be segmented to obtain a lung image to be fused; and fusing the lung image to be fused and the lung image to be segmented to obtain a fused lung image.
In some embodiments of the present invention, the registration operation finds, in the lung images at moments before and/or after the moment of the lung image to be segmented, the points corresponding to the lung image to be segmented, thereby completing the matching between the lung image to be segmented and the first lung images; the image features of the first lung image at each moment can then be fused on the basis of this registration. Registration between each first lung image and the lung image to be segmented can be realized with a registration algorithm that registers the first lung image to the lung image to be segmented. The registration algorithm may be an elastic registration algorithm, or registration with a VGG network (VGG-net) in deep learning, as in the paper Deformable image registration using convolutional neural networks, or with a U-network (U-net), as in the paper Pulmonary CT Registration through Supervised Learning with Convolutional Neural Networks. The invention is not limited to a specific registration algorithm.
In other embodiments of the present disclosure, the method for performing a registration operation of the first lung image to the lung image to be segmented to obtain a lung image to be fused includes: extracting images from the same positions in the at least one first lung image and the image to be segmented to obtain lung motion sequence images formed by the images extracted from the same positions; and respectively calculating the lung displacements of adjacent images in the lung motion sequence images, and performing the registration operation of the at least one first lung image to the lung image to be segmented according to the lung displacements.
In the embodiment of the disclosure, the same position may be expressed as the same number of layers, and as described in the above embodiment, each group of lung images may include a plurality of layers of images, and the images of the same number of layers are selected from each group of lung images in the first lung image and the lung image to be segmented, to form a group of lung motion sequence images. That is, the embodiment of the present disclosure may obtain the same number of sets of lung motion sequence images as the number of layers, that is, the lung motion sequence image of each position.
In an embodiment of the disclosure, the method for extracting images from the same positions in the at least one first lung image and the image to be segmented to obtain a lung motion sequence image formed by the images extracted from the same positions includes: determining the number of layers of the lung images at multiple moments; determining lung images of the at least one first lung image and the image to be segmented at the same position according to the layer number; and obtaining the lung motion sequence image from the lung images at the same position.
In a specific embodiment of the invention, when acquiring lung images at multiple moments in the breathing process, the number of scanning layers, the layer thickness and the interlayer distance of the multi-moment lung images are determined. The lung images at the same position across the multiple moments can therefore be identified by layer number, and the lung images at the same position are selected from the multi-moment lung images to obtain a lung motion sequence image. For example, the position corresponding to the N-th layer of the lung image at the first moment is the same as the position corresponding to the N-th layer of the lung images at the second through M-th moments; they are all the same lung plane, and the same lung planes at all moments are combined to form one lung motion sequence image, where M is an integer greater than 1 denoting the number of moments (groups) and N denotes any layer number.
In the case of obtaining multiple lung motion sequence images, the image corresponding to the image to be segmented in each lung motion sequence can be determined, with the remainder being the images corresponding to the first lung images. The images in a lung motion sequence image are arranged in order of time. Since the image to be segmented has been determined in the foregoing embodiment, its corresponding moment can also be obtained, and the image corresponding to the image to be segmented and the images corresponding to the first lung images can be determined in the lung motion sequence image according to that moment. A lung motion sequence image contains one layer of each lung image; for convenience, that layer is still described below as the image to be segmented or the first lung image, but it should be noted that the images in the lung motion sequence image are only the single-layer images of the lung image to be segmented and the first lung images.
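A sketch of assembling the lung motion sequence images, assuming the M time-ordered lung images are (layers, H, W) arrays with equal layer counts:

```python
import numpy as np

def motion_sequence(volumes: list[np.ndarray], layer_n: int) -> np.ndarray:
    """Stack the layer_n-th slice of each of the M time-ordered lung images,
    giving the (M, H, W) lung motion sequence image for that position."""
    return np.stack([vol[layer_n] for vol in volumes], axis=0)

def all_sequences(volumes: list[np.ndarray]) -> list[np.ndarray]:
    # One lung motion sequence image per layer position, i.e. as many
    # sequences as there are layers.
    return [motion_sequence(volumes, n) for n in range(volumes[0].shape[0])]
```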
In the embodiment of the present disclosure, once the lung motion sequence images are obtained, the motion between the first lung images and the lung image to be segmented can be estimated. That is, the lung displacements of adjacent images in the lung motion sequence images can be calculated respectively, and the registration operation of the at least one first lung image to the lung image to be segmented performed according to the lung displacements. By determining the lung displacement between adjacent images, the lung displacement between the first lung image and the lung image to be segmented may be determined, after which the registration operation of the first lung image to the lung image to be segmented is performed. The lung displacement may represent the displacement of the lung feature points between the first lung image and the image to be segmented.
In an embodiment of the present disclosure, the method for calculating lung displacement of adjacent images in the lung motion sequence image respectively includes: determining first forward optical flows of adjacent images in the lung motion sequence images respectively; lung displacement of the adjacent images is determined from the first forward optical flow, respectively.
In a specific embodiment of the present invention, optical flow may be used to represent the variation between moving images; it refers to the velocity of pattern motion in a time-varying image. When the lung is in motion, its brightness pattern at the corresponding points on the image is also in motion, so optical flow can be used to represent the change between images: it contains information on the motion of the lung and can therefore be used to determine that motion. In the embodiment of the disclosure, optical flow estimation is performed on each pair of adjacent images in a lung motion sequence image, giving the optical flow information between adjacent images. Suppose the moments corresponding to the multi-moment lung images are t1, t2, …, tM, where M denotes the number of groups. The N-th lung motion sequence image may then include the N-th layer images F1N, F2N, …, FMN of the M groups of lung images, i.e., the N-th layer image within each of the 1st to M-th groups.
When performing optical flow estimation, the first forward optical flows of two adjacent images in each lung motion sequence image are obtained in the forward order of groups 1 to M; for example, the first forward optical flow from F1N to F2N, the first forward optical flow from F2N to F3N, and so on up to the first forward optical flow from F(M-1)N to FMN. The first forward optical flow represents the motion velocity information of each feature point between adjacent lung images arranged in the forward order of time. In particular, the lung motion sequence image may be input into an optical flow estimation model, such as FlowNet2.0 or another optical flow estimation model, to obtain the first forward optical flow between adjacent images, which is not particularly limited in this disclosure. Alternatively, an optical flow estimation algorithm such as a sparse or dense optical flow estimation algorithm may be used on the adjacent images, which is likewise not limited in this disclosure.
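As one concrete (assumed) choice of estimator, a dense optical flow such as OpenCV's Farneback method can supply the first forward optical flows between adjacent images of a sequence:

```python
import cv2
import numpy as np

def first_forward_flows(seq: np.ndarray) -> list[np.ndarray]:
    """seq: (M, H, W) lung motion sequence image in forward time order.
    Returns the M-1 first forward optical flows, flows[k] from seq[k] to seq[k+1]."""
    flows = []
    for k in range(len(seq) - 1):
        prev = cv2.normalize(seq[k], None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        nxt = cv2.normalize(seq[k + 1], None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        # (H, W, 2) per-pixel motion field between the adjacent images.
        flows.append(cv2.calcOpticalFlowFarneback(
            prev, nxt, None, 0.5, 3, 15, 3, 5, 1.2, 0))
    return flows
```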
In a specific embodiment of the invention, a method for determining the lung displacement of the adjacent images from the first forward optical flow comprises: obtaining the lung displacement of adjacent images in the lung motion sequence image from the velocity information of the first forward optical flow and the time information of the adjacent images. The time information of adjacent images in the lung motion sequence image can be approximated by dividing the scanning time by the number of layers, both of which are available in the DICOM file of the CT-acquired lung image.
In the embodiment of the disclosure, each layer of image in the acquired lung image may have corresponding acquisition time information, and the product of the time difference between the acquisition times of two adjacent images in the lung motion sequence image and the first forward optical flow may be used to obtain lung displacement of the two adjacent images within the time difference range.
In addition, since the time interval between adjacent images in the lung motion sequence image is small, in the embodiment of the disclosure the velocity information corresponding to the optical flow may be taken as approximately equal to the lung displacement.
Correspondingly, the first forward optical flow between the first lung image and the image to be segmented in the lung motion sequence image, and the time information between them, are determined in sequence; the lung displacement between the first lung image and the image to be segmented can then be obtained as the product of the first forward optical flow and the time information.
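A sketch of the displacement computation just described, using the document's approximation that the time between adjacent images is the scanning time divided by the number of layers:

```python
import numpy as np

def adjacent_lung_displacements(flows: list[np.ndarray],
                                scan_time_s: float, num_layers: int):
    """flows: (H, W, 2) first forward optical flows (velocity fields) between
    adjacent images. Displacement = optical flow velocity * time difference."""
    dt = scan_time_s / num_layers   # approximate time between adjacent images
    return [flow * dt for flow in flows]
```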
In an embodiment of the present disclosure, the method for respectively calculating the lung displacements of adjacent images in the lung motion sequence image further includes: determining a first reverse optical flow corresponding to each first forward optical flow; and determining the lung displacement of the adjacent images from the first forward optical flow and/or the first reverse optical flow, respectively.
In an embodiment of the disclosure, the first forward optical flow of adjacent images in the lung motion sequence image is determined according to the forward temporal order of the multi-temporal lung images, and the first reverse optical flow of adjacent images in the lung motion sequence image may be determined according to the reverse temporal order of the multi-temporal lung images.
Correspondingly, when performing optical flow estimation, the first reverse optical flows of two adjacent images in each lung motion sequence image are obtained in the reverse order of groups M to 1; for example, the first reverse optical flow from FMN to F(M-1)N, the first reverse optical flow from F(M-1)N to F(M-2)N, and so on down to the first reverse optical flow from F2N to F1N. The first reverse optical flow represents the motion velocity information of each feature point between adjacent lung images arranged in the reverse order of time. Similarly, the lung motion sequence image may be input into an optical flow estimation model to obtain the first reverse optical flow between adjacent images, or an optical flow estimation algorithm such as a sparse or dense optical flow estimation algorithm may be used, which is not specifically limited in this disclosure.
In a specific embodiment of the disclosure, a method of determining the lung displacement of the adjacent images from the first reverse optical flow comprises: obtaining the lung displacement of adjacent images in the lung motion sequence image from the velocity information of the first reverse optical flow and the time information of the adjacent images. As above, the time information of adjacent images in the lung motion sequence image can be approximated by dividing the scanning time by the number of layers, both available in the DICOM file of the CT-acquired lung image.
In the embodiment of the disclosure, each layer of the acquired lung images may have corresponding acquisition time information, and the lung displacement of two adjacent images within the time difference range may be obtained as the product of the time difference of their acquisition times in the lung motion sequence image and the first reverse optical flow. In addition, since the time interval between adjacent images in the lung motion sequence image is small, the velocity information corresponding to the optical flow may be taken as approximately equal to the lung displacement.
Correspondingly, the first reverse optical flow between the first lung image and the image to be segmented in the lung motion sequence image, and the time information between them, are determined in sequence; the lung displacement from the first lung image to the image to be segmented can then be obtained as the product of the first reverse optical flow and the time information.
In an embodiment of the present disclosure, the method for respectively calculating the lung displacements of adjacent images in the lung motion sequence image further includes: performing optical flow optimization processing on the first forward optical flows and the first reverse optical flows respectively to obtain second forward optical flows corresponding to the first forward optical flows and second reverse optical flows corresponding to the first reverse optical flows; and determining the lung displacement of the adjacent images from the second forward optical flow and/or the second reverse optical flow, respectively.
In a specific embodiment of the invention, the method for determining the lung displacement of the adjacent images from the second forward optical flow and the second reverse optical flow, respectively, comprises: respectively operating on the second forward optical flow and the second reverse optical flow to obtain a corrected optical flow; and determining the lung displacement of the adjacent images from the corrected optical flow.
In a specific embodiment of the present invention, the method for operating on the second forward optical flow and the second reverse optical flow to obtain the corrected optical flow includes: performing an addition operation on the second forward optical flow and the second reverse optical flow to obtain a bidirectional optical flow sum, and then taking the mean of the bidirectional optical flow sum to obtain the corrected optical flow. That is, the corrected optical flow is the mean of the second forward optical flow and the second reverse optical flow: corrected optical flow = (second forward optical flow + second reverse optical flow) / 2.
In a specific embodiment of the present invention, the method for performing optical flow optimization processing on the first forward optical flows and the first reverse optical flows to obtain second forward optical flows corresponding to the first forward optical flows and second reverse optical flows corresponding to the first reverse optical flows includes: connecting the first forward optical flows to obtain a first connection optical flow, and connecting the first reverse optical flows to obtain a second connection optical flow; performing optical flow optimization processing on the first connection optical flow and the second connection optical flow N times respectively, to obtain a first optimized optical flow corresponding to the first connection optical flow and a second optimized optical flow corresponding to the second connection optical flow; and obtaining the second forward optical flow corresponding to each first forward optical flow from the first optimized optical flow, and the second reverse optical flow corresponding to each first reverse optical flow from the second optimized optical flow; wherein N is a positive integer greater than or equal to 1.
Connecting each first forward optical flow to obtain the first connection optical flow and connecting each first reverse optical flow to obtain the second connection optical flow includes: sequentially connecting the first forward optical flows between adjacent images in a lung motion sequence image to obtain the first connection optical flow corresponding to that group of lung motion sequence images, and sequentially connecting the first reverse optical flows between adjacent images to obtain the second connection optical flow corresponding to that group. The connection here is a splice in the depth direction.
After the first connection optical flow and the second connection optical flow are obtained, optical flow optimization processing may be performed on each of them; embodiments of the present disclosure may perform at least one optical flow optimization pass. For example, each optical flow optimization pass may be performed by an optical flow optimization module, which may be composed of a neural network or may perform the optimization through a corresponding optimization algorithm. Correspondingly, when the optical flow optimization processing is executed N times, N optical flow optimization network modules may be connected in sequence, where the input of each subsequent module is the output of the preceding module, and the output of the last module is the optimization result for the first connection optical flow and the second connection optical flow.
Specifically, when only one optical flow optimization network module is included, it may be used to optimize the first connection optical flow to obtain the first optimized optical flow, and to optimize the second connection optical flow to obtain the second optimized optical flow. The optical flow optimization processing may include a residual processing and an upsampling processing. That is, the optical flow optimization network module may include a residual unit and an upsampling unit, where the residual unit may include several convolution layers whose convolution kernels are not specifically limited; the scale of the first connection optical flow is reduced after the residual processing, for example to a quarter of the scale of the input connection optical flow, which is not specifically limited in the present disclosure and may be set as needed. After the residual processing, upsampling may be performed on the residual-processed first or second connection optical flow; through upsampling, the scale of the output first optimized sub-optical flow is adjusted back to the scale of the first connection optical flow, and the scale of the output second optimized sub-optical flow back to the scale of the second connection optical flow. This optical flow optimization fuses the features of each optical flow while improving the optical flow precision.
In other embodiments, the optical flow optimization module may also include a plurality of optical flow optimization network modules, such as N optical flow optimization network modules. The first optical flow optimization network module may receive the first connection optical flow and the second connection optical flow, and perform a first optical flow optimization process on the first connection optical flow and the second connection optical flow, where the first optical flow optimization process includes a residual error process and an upsampling process, and specific processes are the same as the above embodiments and are not repeated herein. The first optimization sub-optical flow of the first connection optical flow and the first optimization sub-optical flow of the second connection optical flow can be obtained through the first optical flow optimization processing.
Similarly, each optical flow optimization network module may be used to perform an optical flow optimization process once, that is, the i+1th optical flow optimization process may be performed on the i-th optimized sub-optical flows of the first connection optical flow and the second connection optical flow by using the i+1th optical flow optimization network module, so as to obtain the i+1th optimized sub-optical flow corresponding to the first connection optical flow and the i+1th optimized sub-optical flow corresponding to the second connection optical flow, where i is a positive integer greater than 1 and less than N. Finally, an nth optimization process executed by an nth optical flow optimization network module can be used for obtaining an nth optimization sub-optical flow of a first connection optical flow and an nth optimization sub-optical flow of a second connection optical flow, the obtained nth optimization sub-optical flow of the first connection optical flow can be determined to be the first optimization optical flow, and the obtained nth optimization sub-optical flow of the second connection optical flow can be determined to be the second optimization optical flow. In the embodiment of the disclosure, the optical flow optimization processing procedure executed by each optical flow optimization network module may be residual processing and upsampling processing. I.e. each optical flow optimization network module may be the same optical flow optimization module.
In the case of obtaining a first optimized optical flow and a second optimized optical flow for each lung motion sequence image, a second forward optical flow corresponding to each first forward optical flow may be obtained from the first optimized optical flow, and a second reverse optical flow corresponding to each first reverse optical flow from the second optimized optical flow.
After the N optical flow optimization passes, the scale of the first optimized optical flow is the same as the scale of the first connection optical flow; the first optimized optical flow can be split in the depth direction into M-1 second forward optical flows, which are respectively the optimization results of the first forward optical flows. Similarly, the scale of the second optimized optical flow is the same as the scale of the second connection optical flow, and it can be split in the depth direction into M-1 second reverse optical flows, which are respectively the optimization results of the first reverse optical flows.
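A PyTorch sketch of one plausible realization of this pipeline (depth-wise connection, N modules each doing residual processing and upsampling, then a depth-wise split); the layer sizes and the residual connection are illustrative assumptions, not details fixed by the patent:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowOptimizationModule(nn.Module):
    """One optimization pass: residual processing at reduced scale,
    then upsampling back to the input scale."""
    def __init__(self, channels: int):
        super().__init__()
        self.residual = nn.Sequential(
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )  # two stride-2 convolutions reduce the scale to a quarter

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.residual(x)
        y = F.interpolate(y, size=x.shape[-2:], mode="bilinear",
                          align_corners=False)   # restore the input scale
        return x + y

def optimize_connected_flow(first_flows: list[torch.Tensor],
                            n_modules: int = 2) -> list[torch.Tensor]:
    """first_flows: M-1 flows of shape (1, 2, H, W). Connect in depth,
    run N optimization modules, then split back into M-1 second flows."""
    connected = torch.cat(first_flows, dim=1)        # splice in depth direction
    modules = nn.Sequential(*[FlowOptimizationModule(connected.shape[1])
                              for _ in range(n_modules)])
    optimized = modules(connected)
    return list(torch.split(optimized, 2, dim=1))    # M-1 optimized flows
```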
Through the above embodiments, the optimized second forward optical flow for the first forward optical flow between each pair of adjacent images of the lung motion sequence image can be obtained, as can the optimized second reverse optical flow for the first reverse optical flow between each pair of adjacent images.
In the case of obtaining the second forward optical flow and/or the second reverse optical flow, these may be used to determine the lung motion displacement corresponding to adjacent images, and then the lung displacement between the lung image to be segmented and the first lung image; the displacement is determined in the same way as described above for the first forward and/or first reverse optical flow, which is not repeated here.
Based on the above, the embodiment of the disclosure may obtain the motion displacement (lobe displacement) of each layer of images in the lung image in each time range, and in the case of performing the key point detection on each layer of images in the lung image, the motion track of the matched key point in each time range may be obtained, so that the motion state and the motion track of the whole lung in each time range may be obtained.
By the above-described embodiments, the lung displacement between the first lung image and the lung image to be segmented can be obtained, and then the registration operation of the first lung image to the lung image to be segmented can be performed using the lung displacement. The lung displacement in the embodiment of the present disclosure may include a displacement value between any pixel point in the first lung image and the lung image to be segmented, and by adding the first lung image and the lung displacement, a registration result corresponding to the first lung image, that is, the lung image to be fused may be obtained. Then, the embodiment of the disclosure can obtain the lung image to be fused corresponding to the registration operation of each first lung image and the lung image to be segmented.
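The text obtains the lung image to be fused by adding the lung displacement to the first lung image; per pixel, one way to realize this (an assumption, using scipy) is to resample the first lung image at the displaced coordinates:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def register_by_displacement(first_img: np.ndarray,
                             displacement: np.ndarray) -> np.ndarray:
    """first_img: (H, W) layer of a first lung image; displacement: (H, W, 2)
    lung displacement in (dy, dx) order toward the image to be segmented."""
    h, w = first_img.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([ys + displacement[..., 0], xs + displacement[..., 1]])
    # Sample the first lung image at the displaced positions, aligning it
    # with the lung image to be segmented (the lung image to be fused).
    return map_coordinates(first_img, coords, order=1, mode="nearest")
```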
Under the condition that the lung images to be fused are obtained, the fused lung image can be obtained directly by adding the lung image to be fused and the lung image to be segmented; alternatively, different weights can be set for the lung images to be fused, and the fused lung image obtained using the set weights.
In an embodiment of the present disclosure, the method for fusing the lung image to be fused and the lung image to be segmented to obtain a fused lung image includes: determining a weight value of the lung image to be fused; obtaining a weighted lung image according to the weight value and the lung image to be fused; and fusing the weighted lung image and the lung image to be segmented to obtain the fused lung image.
In some embodiments of the present disclosure, the lung images to be fused may be preconfigured with corresponding weight values. The weight values of the lung images to be fused may be identical or different; for example, each weight value may be 1/k, where k is the number of lung images to be fused. Alternatively, the configured weight value may be determined from the image quality of each lung image to be fused: for example, an image quality score may be determined by single-stimulus continuous quality evaluation (Single Stimulus Continuous Quality Evaluation, SSCQE) and normalized to the range [0,1] to obtain the weight value of each lung image to be fused; or each input lung image to be fused may be evaluated by the image quality assessment model NIMA (Neural Image Assessment) to obtain the corresponding weight value.
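As a sketch of the equal-weight option, the snippet below fuses k registered images with weight 1/k each and adds the weighted sum to the image to be segmented; SSCQE- or NIMA-derived scores would simply replace the 1/k constants (function name is illustrative):

```python
import numpy as np

def fuse_equal_weights(images_to_fuse, image_to_segment):
    """Fuse k registered lung images with weight 1/k each, then add the
    weighted sum to the lung image to be segmented."""
    k = len(images_to_fuse)
    weighted = sum(np.asarray(img, dtype=np.float32) / k for img in images_to_fuse)
    return weighted + image_to_segment
```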
Alternatively, in other embodiments of the present disclosure, the method for determining the weight value of the lung image to be fused includes: determining the registration points of the lung image to be fused, where points other than the registration points are non-registration points, and the weight value of a registration point is greater than that of a non-registration point. That is, in the embodiments of the present disclosure, the weight value of each pixel point in the lung image to be fused may differ, with registration points weighted more heavily than non-registration points so as to highlight their feature information; a registration point is a feature point that highlights lung features. In a specific embodiment of the present invention, the registration points of the lung image to be fused may be determined by detecting the key points (registration points) of the lung image to be fused and the fused lung image through SIFT (Scale-Invariant Feature Transform). The weight of a key point may then be a value a greater than 0.5, and the weight of a non-registration point may be 1-a, or any other positive value smaller than a.
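A minimal sketch of such a per-pixel weight map built from OpenCV's SIFT key points follows; it is a simplified variant that detects key points on the image to be fused alone, and the value a = 0.7 and the function name are illustrative assumptions:

```python
import cv2
import numpy as np

def sift_weight_map(img_to_fuse, a=0.7):
    """Per-pixel weight map for a lung image to be fused: weight `a`
    (> 0.5) at SIFT key points (registration points), `1 - a` elsewhere
    (non-registration points)."""
    sift = cv2.SIFT_create()
    keypoints, _ = sift.detectAndCompute(img_to_fuse.astype(np.uint8), None)
    h, w = img_to_fuse.shape[:2]
    weights = np.full((h, w), 1.0 - a, dtype=np.float32)
    for kp in keypoints:
        x = min(int(round(kp.pt[0])), w - 1)  # kp.pt is (x, y)
        y = min(int(round(kp.pt[1])), h - 1)
        weights[y, x] = a  # registration point gets the larger weight
    return weights
```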
Alternatively, the setting of the weight values may be realized by a neural network with an attention mechanism. Such a network may include at least one convolution layer and an attention module connected to it: the convolution layer performs convolution on each image to be fused to obtain convolution features, which are input to the attention module to obtain an attention feature map for that image. The attention feature map contains an attention value for each pixel point of the image to be fused; the attention value may be used as the weight value of the corresponding pixel point, and pixel points whose attention value is greater than 0.5 are taken as registration points. A person skilled in the art may select a suitable way of obtaining the weight values of the lung images to be fused as required, and the present disclosure is not specifically limited in this respect.
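The sketch below shows one plausible form of such an attention-based weighting network in PyTorch, assuming single-channel input, one convolution layer, and a sigmoid attention head; the module name, channel sizes, and layer choices are illustrative assumptions:

```python
import torch
import torch.nn as nn

class AttentionWeight(nn.Module):
    """Conv layer + attention module producing a per-pixel attention
    feature map in [0, 1], usable as the weight value of each pixel."""
    def __init__(self, in_ch=1, mid_ch=16):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, mid_ch, kernel_size=3, padding=1)
        self.attention = nn.Sequential(
            nn.Conv2d(mid_ch, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):  # x: (B, 1, H, W) image to be fused
        feat = torch.relu(self.conv(x))   # convolution features
        attn = self.attention(feat)       # attention feature map
        registration = attn > 0.5         # pixels taken as registration points
        return attn, registration
```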
In the case of obtaining the weight of the lung image to be fused, the weighted lung image may be obtained by using the product of the weight and the lung image to be fused.
In a specific embodiment of the present invention, the method for fusing the weighted lung image and the lung image to be segmented to obtain the fused lung image includes: and adding the lung image to be segmented to the weighted lung image to obtain the fused lung image.
In a specific embodiment of the present invention, the fusion of the weighted lung image and the lung image to be segmented into the fused lung image may also be implemented as follows: the lung image to be fused and the lung image to be segmented are fused using a first preset neural network, which includes: connecting the weighted lung image with the lung image to be segmented to obtain a connected lung image; and performing at least one layer of convolution processing on the connected lung image to obtain fusion features of the connected lung image, where the image corresponding to the fusion features is the fused lung image. The first preset neural network is a network trained in advance to extract and fuse lung feature information; it may be, for example, a residual network, a Unet, or a feature pyramid network, which is not specifically limited in the present disclosure.
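A minimal sketch of such a first preset neural network follows, assuming two single-channel 2D inputs and two convolution layers; the class name and channel sizes are illustrative assumptions rather than the patent's architecture:

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """First preset neural network: connect (concatenate) the weighted
    lung image with the image to be segmented, then convolve; the output
    feature map is taken as the fused lung image."""
    def __init__(self):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2, 8, kernel_size=3, padding=1),   # connected image in
            nn.ReLU(inplace=True),
            nn.Conv2d(8, 1, kernel_size=3, padding=1),   # fused lung image out
        )

    def forward(self, weighted_img, img_to_segment):
        connected = torch.cat([weighted_img, img_to_segment], dim=1)
        return self.fuse(connected)
```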
Step 104: and segmenting the fused lung image by using a preset lung lobe segmentation model to obtain a lung lobe image of the lung image to be segmented.
In the embodiments of the present disclosure, the preset lobe segmentation model may be a traditional machine-learning lobe segmentation model, or a deep-learning model such as the progressive dense V-network (PDV-Net) lobe segmentation model proposed by Voxel Technology in 2018. In the present invention, the fused lung image carries the information of the lung images before and/or after the moment of the lung image to be segmented together with all the information of the lung image to be segmented, which ensures a sufficient amount of information and therefore better lobe segmentation.
Alternatively, in the embodiments of the present disclosure, the preset lobe segmentation model may be implemented as a neural network and may include at least one of the residual network Resnet, Unet, and Vnet, which is not specifically limited in the present disclosure. The preset lobe segmentation model in the embodiments of the present disclosure can be used to implement segmentation detection of at least one lung lobe, and the obtained segmentation result includes the position information of the detected lung lobes; for example, the position area of a detected lobe in the lung image may be represented by a preset mask.
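As an illustration of how such a preset model might be applied, the sketch below assumes a network emitting per-class logits for six classes (five lobes plus background) and derives the preset mask by a per-pixel argmax; the class count and function name are assumptions:

```python
import torch

def segment_lobes(model, fused_lung_image):
    """Apply a preset lobe segmentation network (e.g. a Unet) to the
    fused lung image and derive the preset mask of detected lobes.

    fused_lung_image: (1, 1, H, W) tensor; the model is assumed to
    return (1, 6, H, W) logits for five lobes plus background."""
    model.eval()
    with torch.no_grad():
        logits = model(fused_lung_image)
    return logits.argmax(dim=1)  # (1, H, W) per-pixel lobe label
```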
In an embodiment of the present disclosure, the lobe segmentation method further includes: where the number of preset lobe segmentation models is at least 2, fusing the features of the lobe segmentation images obtained by the preset lobe segmentation models to obtain fusion features, and classifying the fusion features to obtain the final lobe image. In an embodiment of the present disclosure, the method for fusing the features of the lobe segmentation images obtained by the preset lobe segmentation models to obtain the fusion features includes:
and respectively splicing the lung lobe segmentation images obtained by the preset lung lobe segmentation model to obtain splicing features, and inputting the splicing features into a second preset neural network to perform convolution operation to obtain the fusion features.
The two preset segmentation models may be different segmentation models. For example, the first preset segmentation model may be a Resnet and the second a Unet; this is not a specific limitation of the present disclosure, and any two different neural networks usable for lobe segmentation may serve as the preset lobe segmentation models. The fused lung image is input into the first preset segmentation model and the second preset segmentation model to obtain a first segmentation result and a second segmentation result, respectively, each of which may include the position information of the detected lung lobe regions. Since segmentation results obtained by different preset segmentation models may differ, the embodiments of the present disclosure may combine the two segmentation results to further improve segmentation accuracy, for example by averaging the position information of the first and second segmentation results to obtain the final lobe segmentation result.
Alternatively, in some embodiments, the first feature map output by the convolution layer preceding the first segmentation result in the first preset segmentation model and the second feature map output by the convolution layer preceding the second segmentation result in the second preset segmentation model may be fused to obtain the fusion feature. The first and second preset segmentation models may each include a feature extraction module and a classification module, where the classification module produces the final first or second segmentation result, and the feature extraction module may include a plurality of convolution layers whose last-layer feature map is input to the classification module. The embodiments of the present disclosure may take the first feature map output by the last convolution layer of the feature extraction module in the first preset segmentation model and the second feature map output by the last convolution layer of the feature extraction module in the second preset segmentation model, fuse the first feature map and the second feature map to obtain the fusion feature, and classify the fusion feature to obtain the final lobe image. Specifically, the first feature map and the second feature map may be spliced to obtain a splicing feature, the splicing feature is input into at least one convolution layer to obtain the fusion feature, and the fusion feature is then classified by a classification network to obtain the classification (segmentation) result of the lung lobes, that is, the lobe segmentation result corresponding to the lung image to be detected.
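A minimal sketch of this feature-level fusion of two models follows: it splices the two last-layer feature maps, convolves them (the "second preset neural network"), and classifies the fusion feature. The class name, channel sizes, and six-class output are illustrative assumptions:

```python
import torch
import torch.nn as nn

class TwoModelFusion(nn.Module):
    """Splice the last-layer feature maps of two preset segmentation
    models, convolve the splicing feature, and classify the fusion
    feature into per-pixel lobe labels."""
    def __init__(self, ch1, ch2, n_classes=6):
        super().__init__()
        self.fuse = nn.Conv2d(ch1 + ch2, 32, kernel_size=3, padding=1)
        self.classify = nn.Conv2d(32, n_classes, kernel_size=1)

    def forward(self, feat1, feat2):  # feature maps from models 1 and 2
        spliced = torch.cat([feat1, feat2], dim=1)  # splicing feature
        fused = torch.relu(self.fuse(spliced))      # fusion feature
        return self.classify(fused).argmax(dim=1)   # final lobe image
```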
In clinical practice, static lung data is generally used for the analysis of lung respiration, and the analysis of lung feature data tends to be confined to individual images without considering the motion information of the lung. If the correlation between lung features at different time periods can be exploited, the accuracy of lung motion data detection will be improved. The embodiments of the present disclosure can thereby address the problem of poor segmentation caused by insufficient feature information in a single lung (lobe) image.
The invention also provides a lung lobe segmentation device, comprising: an acquisition unit for acquiring lung images at multiple moments in the breathing process; a determining unit for determining a lung image to be segmented at a certain moment among the multi-moment lung images; a fusion unit for fusing the lung image to be segmented with the lung images before and/or after that certain moment to obtain a fused lung image; and a segmentation unit for segmenting the fused lung image with a preset lobe segmentation model to obtain a lobe image of the lung image to be segmented. For its specific implementation, reference may be made to the detailed description of the lobe segmentation method.
In addition, the present invention provides a storage medium storing computer program instructions which, when executed by a processor, implement the above method, including: acquiring lung images at multiple moments in the respiratory process; determining a lung image to be segmented among the multi-moment lung images, where the lung images other than the image to be segmented serve as first lung images; fusing the lung image to be segmented with at least one first lung image to obtain a fused lung image, where the at least one first lung image includes at least one lung image at a moment before and/or at least one lung image at a moment after the image to be segmented; and segmenting the fused lung image with a preset lobe segmentation model to obtain a lobe image of the lung image to be segmented.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile disc (DVD), memory stick, floppy disk, mechanically encoded device such as a punch card or raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., a light pulse through a fiber optic cable), or an electrical signal transmitted through a wire.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device, or to an external computer or external storage device over a network such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards them for storage in a computer readable storage medium within the respective computing/processing device.
Computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the internet using an internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), with state information of the computer readable program instructions, such that the electronic circuitry can execute the computer readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
The above examples are merely specific embodiments of the present invention, described in some detail, and are not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make modifications, equivalent substitutions, improvements, and the like without departing from the spirit of the present invention, and these all fall within the protection scope of the present invention. Accordingly, the protection scope of the present invention shall be determined by the appended claims.

Claims (27)

1. A method of lobe segmentation, comprising:
acquiring lung images at multiple moments in the respiratory process;
Determining a lung image to be segmented in the multi-moment lung images, wherein the lung image except the lung image to be segmented is used as a first lung image;
fusing the lung images to be segmented by using at least one or at least two first lung images to obtain a fused lung image; wherein the at least one or the at least two first lung images comprise at least one lung image at a time before the lung image to be segmented and/or at least one lung image at a time after the lung image to be segmented;
and segmenting the fused lung image by using a preset lung lobe segmentation model to obtain a lung lobe image of the lung image to be segmented.
2. The method according to claim 1, wherein the fusing of the lung images to be segmented with the at least one or at least two first lung images results in a fused lung image, comprising:
performing registration operation from the at least one or at least two first lung images to the lung images to be segmented to obtain lung images to be fused;
and fusing the lung image to be fused and the lung image to be segmented to obtain a fused lung image.
3. The method according to any of claims 1-2, wherein the method of determining a lung image to be segmented in the multi-temporal lung image comprises:
And respectively calculating the lung volumes of all the lung images in the multi-time lung images, and determining the lung image with the largest lung volume as the lung image to be segmented.
4. The method according to claim 2, wherein the method for fusing the lung image to be fused and the lung image to be segmented to obtain a fused lung image comprises:
determining a weight value of the lung image to be fused;
obtaining a weighted lung image according to the weight value and the lung image to be fused;
and fusing the weighted lung image and the lung image to be segmented to obtain the fused lung image.
5. A method according to claim 3, wherein the method of determining the lung image to be segmented in the multi-temporal lung image further comprises:
extracting the left lung and the right lung of the multi-moment lung image respectively before calculating the lung volume in the multi-moment lung image respectively; respectively calculating a first volume of the left lung and a second volume of the right lung in the multi-moment lung image; the lung volume in the multi-temporal lung image is calculated from the first volume and the second volume, respectively.
6. The method of claim 4, wherein the method of determining the weight value of the lung image to be fused comprises: and determining the registration points of the lung images to be fused, and determining that the weight value of the registration points is larger than that of the non-registration points, wherein the characteristic points outside the registration points are the non-registration points.
7. The method according to claim 4 or 6, wherein the fusing of the weighted lung image and the lung image to be segmented results in the fused lung image, comprising: performing addition processing on the weighted lung image and the lung image to be segmented to obtain the fused lung image; or,
the method for fusing the weighted lung image and the lung image to be segmented to obtain the fused lung image comprises the following steps: and fusing the lung image to be fused and the lung image to be segmented by using a first preset neural network to obtain a fused lung image.
8. The method according to any of claims 2 or 4 or 6, wherein the performing of the registration of the at least one or at least two first lung images to the lung image to be segmented results in a method of obtaining a lung image to be fused, comprising:
extracting images from the same positions in the at least one or at least two first lung images and the lung images to be segmented to obtain lung motion sequence images formed by the images extracted from the same positions;
and respectively calculating lung displacement of adjacent images in the lung motion sequence images, and executing registration operation from the at least one or at least two first lung images to the lung images to be segmented according to the lung displacement.
9. The method according to claim 7, wherein the performing a registration operation of the at least one or the at least two first lung images to the lung image to be segmented results in a method of obtaining a lung image to be fused, comprising:
extracting images from the same positions in the at least one or at least two first lung images and the lung images to be segmented to obtain lung motion sequence images formed by the images extracted from the same positions;
and respectively calculating lung displacement of adjacent images in the lung motion sequence images, and executing registration operation from the at least one or at least two first lung images to the lung images to be segmented according to the lung displacement.
10. The method of any one of claims 2, 4-6, 9, wherein the lobe segmentation method further comprises: and the number of the preset lung lobe segmentation models is at least 2, the features of the lung lobe segmentation images obtained by the preset lung lobe segmentation models are fused to obtain fusion features, and the fusion features are classified to obtain a final lung lobe image.
11. The method of claim 3, wherein the lobe segmentation method further comprises: and the number of the preset lung lobe segmentation models is at least 2, the features of the lung lobe segmentation images obtained by the preset lung lobe segmentation models are fused to obtain fusion features, and the fusion features are classified to obtain a final lung lobe image.
12. The method of claim 7, wherein the lobe segmentation method further comprises: and the number of the preset lung lobe segmentation models is at least 2, the features of the lung lobe segmentation images obtained by the preset lung lobe segmentation models are fused to obtain fusion features, and the fusion features are classified to obtain a final lung lobe image.
13. The method of claim 8, wherein the lobe segmentation method further comprises: and the number of the preset lung lobe segmentation models is at least 2, the features of the lung lobe segmentation images obtained by the preset lung lobe segmentation models are fused to obtain fusion features, and the fusion features are classified to obtain a final lung lobe image.
14. The method according to claim 8, wherein the method of extracting images from the same location in the at least one or at least two first lung images and the lung image to be segmented, resulting in an image-formed lung motion sequence image of the same location extraction, comprises:
determining the layer number of the multi-moment lung image;
determining the lung images of the at least one or at least two first lung images and the lung images to be segmented at the same position according to the layer number;
And obtaining the lung motion sequence image according to the lung image at the same position.
15. The method according to claim 9, wherein the method of extracting images from the same location in the at least one or at least two first lung images and the lung image to be segmented, resulting in an image-formed lung motion sequence image of the same location extraction, comprises:
determining the layer number of the multi-moment lung image;
determining the lung images of the at least one or at least two first lung images and the lung images to be segmented at the same position according to the layer number;
and obtaining the lung motion sequence image according to the lung image at the same position.
16. The method according to claim 8, wherein the method of calculating lung displacements of adjacent images in the lung motion sequence image, respectively, comprises:
determining first forward optical flows of adjacent images in the lung motion sequence images respectively;
lung displacement of the adjacent images is determined from the first forward optical flow, respectively.
17. The method according to claim 9 or 14, wherein the method of calculating lung displacements of adjacent images in the lung motion sequence images, respectively, comprises:
Determining first forward optical flows of adjacent images in the lung motion sequence images respectively;
lung displacement of the adjacent images is determined from the first forward optical flow, respectively.
18. The method of claim 15, wherein the method of separately calculating lung displacements of adjacent images in the lung motion sequence image comprises:
determining first forward optical flows of adjacent images in the lung motion sequence images respectively;
lung displacement of the adjacent images is determined from the first forward optical flow, respectively.
19. The method according to claim 10, wherein the method for fusing features of the lung lobe segmentation image obtained by the preset lung lobe segmentation model to obtain fused features comprises:
and respectively splicing the lung lobe segmentation images obtained by the preset lung lobe segmentation model to obtain splicing features, and inputting the splicing features into a second preset neural network to perform convolution operation to obtain the fusion features.
20. The method according to any one of claims 11-13, wherein the method for fusing features of the lung lobe segmentation images obtained by the preset lung lobe segmentation model to obtain fused features comprises:
And respectively splicing the lung lobe segmentation images obtained by the preset lung lobe segmentation model to obtain splicing features, and inputting the splicing features into a second preset neural network to perform convolution operation to obtain the fusion features.
21. The method according to claim 16 or 18, wherein the method of calculating lung displacements of adjacent images in the lung motion sequence images, respectively, further comprises:
determining a first reverse optical flow corresponding to the first forward optical flow respectively;
the lung displacement of the adjacent image is determined from the first forward optical flow and the first reverse optical flow, respectively.
22. The method of claim 17, wherein the method of separately calculating lung displacements of adjacent images in the lung motion sequence image further comprises:
determining a first reverse optical flow corresponding to the first forward optical flow respectively;
the lung displacement of the adjacent image is determined from the first forward optical flow and the first reverse optical flow, respectively.
23. The method of claim 21, wherein the method of separately calculating lung displacements of adjacent images in the lung motion sequence image further comprises: performing optical flow optimization processing on the first forward optical flow and the first backward optical flow respectively to obtain second forward optical flows corresponding to the first forward optical flows and second backward optical flows corresponding to the first backward optical flows; the lung displacement of the adjacent image is determined from the second forward optical flow and the second reverse optical flow, respectively.
24. The method of claim 22, wherein the method of separately calculating lung displacements of adjacent images in the lung motion sequence image further comprises: performing optical flow optimization processing on the first forward optical flow and the first backward optical flow respectively to obtain second forward optical flows corresponding to the first forward optical flows and second backward optical flows corresponding to the first backward optical flows; the lung displacement of the adjacent image is determined from the second forward optical flow and the second reverse optical flow, respectively.
25. The method of claim 23 or 24, wherein the method of determining the lung displacement of the adjacent image from the second forward optical flow and the second reverse optical flow, respectively, comprises:
respectively calculating the second forward optical flow and the second backward optical flow to obtain corrected optical flow;
and determining the lung displacement of the adjacent images according to the corrected optical flow respectively.
26. A lobe segmentation device, comprising:
the acquisition unit is used for acquiring lung images at multiple moments in the breathing process;
a determining unit, configured to determine a lung image to be segmented in the lung images at multiple times, where a lung image other than the lung image to be segmented is used as a first lung image;
The fusion unit is used for fusing the lung images to be segmented by using at least one or at least two first lung images to obtain a fused lung image; wherein the at least one or the at least two first lung images comprise at least one lung image at a time before the lung image to be segmented and/or at least one lung image at a time after the lung image to be segmented;
the segmentation unit is used for segmenting the fused lung image by using a preset lung lobe segmentation model to obtain a lung lobe image of the lung image to be segmented.
27. A storage medium storing computer program instructions which, when executed by a processor, implement the method of any one of claims 1 to 25, comprising:
acquiring lung images at multiple moments in the respiratory process;
determining a lung image to be segmented in the multi-moment lung images, wherein the lung image except the lung image to be segmented is used as a first lung image;
fusing the lung images to be segmented by using at least one or at least two first lung images to obtain a fused lung image; wherein the at least one or the at least two first lung images comprise at least one lung image at a time before the lung image to be segmented and/or at least one lung image at a time after the lung image to be segmented;
And segmenting the fused lung image by using a preset lung lobe segmentation model to obtain a lung lobe image of the lung image to be segmented.
CN202010534722.0A 2020-06-12 2020-06-12 Lung lobe segmentation method, device and storage medium Active CN111724360B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010534722.0A CN111724360B (en) 2020-06-12 2020-06-12 Lung lobe segmentation method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010534722.0A CN111724360B (en) 2020-06-12 2020-06-12 Lung lobe segmentation method, device and storage medium

Publications (2)

Publication Number Publication Date
CN111724360A CN111724360A (en) 2020-09-29
CN111724360B true CN111724360B (en) 2023-06-02

Family

ID=72568049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010534722.0A Active CN111724360B (en) 2020-06-12 2020-06-12 Lung lobe segmentation method, device and storage medium

Country Status (1)

Country Link
CN (1) CN111724360B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113808082B (en) * 2021-08-19 2023-10-03 东北大学 Lung image processing method and device, electronic equipment and storage medium
CN114913196A (en) * 2021-12-28 2022-08-16 天翼数字生活科技有限公司 Attention-based dense optical flow calculation method

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1418353A (en) * 2000-01-18 2003-05-14 芝加哥大学 Automated method and system for segmentation of lung regions in computed tomography scans
EP1447772A1 (en) * 2003-02-11 2004-08-18 MeVis GmbH A method of lung lobe segmentation and computer system
CN108985345A (en) * 2018-06-25 2018-12-11 重庆知遨科技有限公司 A kind of detection device based on the classification of lung's Medical image fusion
CN109598727A (en) * 2018-11-28 2019-04-09 北京工业大学 A kind of CT image pulmonary parenchyma three-dimensional semantic segmentation method based on deep neural network
CN109658425A (en) * 2018-12-12 2019-04-19 上海联影医疗科技有限公司 A kind of lobe of the lung dividing method, device, computer equipment and storage medium
CN109727251A (en) * 2018-12-29 2019-05-07 上海联影智能医疗科技有限公司 The system that lung conditions are divided a kind of quantitatively, method and apparatus
CN110033005A (en) * 2019-04-08 2019-07-19 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1694208A2 (en) * 2003-11-26 2006-08-30 Viatronix Incorporated Systems and methods for automated segmentation, visualization and analysis of medical images

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1418353A (en) * 2000-01-18 2003-05-14 芝加哥大学 Automated method and system for segmentation of lung regions in computed tomography scans
EP1447772A1 (en) * 2003-02-11 2004-08-18 MeVis GmbH A method of lung lobe segmentation and computer system
CN108985345A (en) * 2018-06-25 2018-12-11 重庆知遨科技有限公司 A kind of detection device based on the classification of lung's Medical image fusion
CN109598727A (en) * 2018-11-28 2019-04-09 北京工业大学 A kind of CT image pulmonary parenchyma three-dimensional semantic segmentation method based on deep neural network
CN109658425A (en) * 2018-12-12 2019-04-19 上海联影医疗科技有限公司 A kind of lobe of the lung dividing method, device, computer equipment and storage medium
CN109727251A (en) * 2018-12-29 2019-05-07 上海联影智能医疗科技有限公司 The system that lung conditions are divided a kind of quantitatively, method and apparatus
CN110033005A (en) * 2019-04-08 2019-07-19 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Lung image segmentation and registration for quantitative image analysis;H. Haneishi 等;《2001 IEEE Nuclear Science Symposium Conference Record》;1390-1393 *
PRF-RW: a progressive random forest-based random walk approach for interactive semi-automated pulmonary lobes segmentation;Qiang Li等;《International Journal of Machine Learning and Cybernetics》;第11卷;2221–2235 *
Research on lung parenchyma segmentation and pulmonary nodule detection methods based on CT images; Shi Xin; China Master's Theses Full-text Database, Medicine and Health Sciences (No. 02); E072-256 *
Computer-aided pulmonary function evaluation system based on medical image segmentation technology; Xie Defang; China Master's Theses Full-text Database, Information Science and Technology (No. 12); I138-1474 *

Also Published As

Publication number Publication date
CN111724360A (en) 2020-09-29

Similar Documents

Publication Publication Date Title
US11497267B2 (en) Systems and methods for full body measurements extraction
US11861829B2 (en) Deep learning based medical image detection method and related device
US20210232924A1 (en) Method for training smpl parameter prediction model, computer device, and storage medium
Harrison et al. Progressive and multi-path holistically nested neural networks for pathological lung segmentation from CT images
CN111429421B (en) Model generation method, medical image segmentation method, device, equipment and medium
CN112767329B (en) Image processing method and device and electronic equipment
US11810301B2 (en) System and method for image segmentation using a joint deep learning model
JPWO2019167883A1 (en) Machine learning equipment and methods
CN114565763B (en) Image segmentation method, device, apparatus, medium and program product
CN111724360B (en) Lung lobe segmentation method, device and storage medium
CN112330684B (en) Object segmentation method and device, computer equipment and storage medium
CN112396605B (en) Network training method and device, image recognition method and electronic equipment
WO2023071154A1 (en) Image segmentation method, training method and apparatus for related model, and device
WO2023005634A1 (en) Method and apparatus for diagnosing benign and malignant pulmonary nodules based on ct images
CN114758360B (en) Multi-modal image classification model training method and device and electronic equipment
CN111091010A (en) Similarity determination method, similarity determination device, network training device, network searching device and storage medium
CN115187819B (en) Training method and device for image classification model, electronic equipment and storage medium
CN111724364B (en) Method and device based on lung lobes and trachea trees, electronic equipment and storage medium
CN107616796B (en) Lung nodule benign and malignant detection method and device based on deep neural network
CN113822792A (en) Image registration method, device, equipment and storage medium
US20230110263A1 (en) Computer-implemented systems and methods for analyzing examination quality for an endoscopic procedure
CN111724359B (en) Method, device and storage medium for determining motion trail of lung lobes
JP6920477B2 (en) Image processing equipment, image processing methods, and programs
CN117493601A (en) Image retrieval method, model training method, device and storage medium
CN116883329A (en) Data analysis method and device for medical CT image and related products

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant