CN111091564A - Pulmonary nodule size detection system based on 3DUnet - Google Patents


Info

Publication number
CN111091564A
CN111091564A
Authority
CN
China
Prior art keywords
image
intermediate image
lung
unit
result
Prior art date
Legal status
Granted
Application number
CN201911352399.9A
Other languages
Chinese (zh)
Other versions
CN111091564B (en)
Inventor
王军
舒锦尔
包勇
Current Assignee
Jinhua central hospital
Changzhou Industrial Technology Research Institute of Zhejiang University
Original Assignee
Jinhua central hospital
Changzhou Industrial Technology Research Institute of Zhejiang University
Priority date
Filing date
Publication date
Application filed by Jinhua central hospital, Changzhou Industrial Technology Research Institute of Zhejiang University filed Critical Jinhua central hospital
Priority to CN201911352399.9A
Publication of CN111091564A
Application granted
Publication of CN111091564B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 7/0012 — Biomedical image inspection
    • G06N 3/045 — Neural networks; combinations of networks
    • G06N 3/084 — Learning methods; backpropagation, e.g. using gradient descent
    • G06T 2207/10081 — Computed X-ray tomography [CT]
    • G06T 2207/20076 — Probabilistic image processing
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30064 — Lung nodule

Abstract

The invention provides a pulmonary nodule size detection system based on 3DUnet, comprising a lung lobe segmentation module and a lesion detection module. The lung lobe segmentation module is configured to read a lung CT image, remove irrelevant regions from it, and output a lung segmentation image. The lesion detection module is configured to read the lung segmentation image, locate lesions, and output a feature map and a positioning result. Through the feature map and the positioning result, the system can accurately give the lung nodule probability, the lung nodule center point, and the lung nodule size.

Description

Pulmonary nodule size detection system based on 3DUnet
Technical Field
The invention relates to the technical field of pulmonary nodule detection, and in particular to a pulmonary nodule size detection system based on 3DUnet.
Background
As a cancer with high incidence, lung cancer has long posed a great threat to human life. Lung nodules are one of its early manifestations, so accurate and timely nodule detection is critical for patients. In the traditional workflow, the patient's lungs are scanned by CT and a doctor labels the nodules manually, which is unstable and inaccurate. Deep-learning-based detection systems now exist, but they suffer from large detection errors, and their structurally complex detection models are difficult to train. A pulmonary nodule size detection system based on 3DUnet is therefore highly desirable.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to address the large detection errors and the high training difficulty of the detection models of existing pulmonary nodule detection systems, the invention provides a pulmonary nodule size detection system based on 3DUnet.
The technical scheme adopted by the invention to solve this problem is as follows: a pulmonary nodule size detection system based on 3DUnet comprises a lung lobe segmentation module and a lesion detection module;
the lung lobe segmentation module is configured to read a lung CT image, remove irrelevant regions from it, and output a lung segmentation image; the lung lobe segmentation module comprises ten segmentation convolution blocks, four maximum pooling layers and four upsampling layers;
the lesion detection module is configured to read the lung segmentation image, locate lesions, output a feature map, and output a positioning result; the lesion detection module comprises an input residual block, a first residual block, a second residual block, a first deconvolution unit, a third residual block, a second deconvolution unit and an output residual block, arranged in sequence;
the input residual block comprises two first convolution units, a first pooling unit and a first normalization unit which are arranged in sequence;
the first residual block comprises three second convolution units, a second pooling unit and a second normalization unit;
the second residual block comprises three third convolution units, a third pooling unit and a third normalization unit;
the third residual block comprises a fourth convolution unit, a fifth convolution unit and a fourth normalization unit;
the output residual block includes two sixth convolution units and a fifth normalization unit.
Preferably, the lung lobe segmentation module is specifically configured to:
a first segmentation convolution block is configured to process the lung CT image into a first result image;
the first maximum pooling layer and the second segmentation convolution block are configured to process the first result image into a second result image;
the second maximum pooling layer and the third segmentation convolution block are configured to process the second result image into a third result image;
the third maximum pooling layer and the fourth segmentation convolution block are configured to process the third result image into a fourth result image;
the fourth maximum pooling layer and the fifth segmentation convolution block are configured to process the fourth result image into a fifth result image;
the first upsampling layer is configured to process the fifth result image into a first intermediate result image;
the sixth segmentation convolution block is configured to process the stitching result of the first intermediate result image and the fourth result image into a sixth result image;
the second upsampling layer is configured to process the sixth result image into a second intermediate result image;
the seventh segmentation convolution block is configured to process the stitching result of the second intermediate result image and the third result image into a seventh result image;
the third upsampling layer is configured to process the seventh result image into a third intermediate result image;
the eighth segmentation convolution block is configured to process the stitching result of the third intermediate result image and the second result image into an eighth result image;
the fourth upsampling layer is configured to process the eighth result image into a fourth intermediate result image;
the ninth segmentation convolution block and the tenth segmentation convolution block are configured to process the stitching result of the fourth intermediate result image and the first result image into the lung segmentation image.
Preferably, the lesion detection module is specifically configured to:
the input residual block is configured to process the lung segmentation image into an input image;
the first of the second convolution units is configured to process the input image into a first intermediate image, the second of the second convolution units is configured to process the first intermediate image into a second intermediate image, the third of the second convolution units is configured to process the second intermediate image into a third intermediate image, and the second pooling unit and the second normalization unit are configured to process the third intermediate image and pass the result to the second residual block;
the first of the third convolution units is configured to process the third intermediate image, as processed by the second pooling unit and the second normalization unit, into a fourth intermediate image, the second of the third convolution units is configured to process the fourth intermediate image into a fifth intermediate image, the third of the third convolution units is configured to process the fifth intermediate image into a sixth intermediate image, and the third pooling unit and the third normalization unit are configured to process the sixth intermediate image and input the result into the first deconvolution unit;
the first deconvolution unit is configured to process the sixth intermediate image processed by the third pooling unit and the third normalization unit into a seventh intermediate image;
the seventh intermediate image is processed by a Sigmoid function and then multiplied by a superposed image of the fourth intermediate image, the fifth intermediate image and the sixth intermediate image to obtain an eighth intermediate image;
adding the eighth intermediate image and the superposed image of the fourth intermediate image, the fifth intermediate image and the sixth intermediate image to obtain a ninth intermediate image;
the fourth convolution unit, fifth convolution unit, and fourth normalization unit are configured to process the ninth intermediate image into a tenth intermediate image;
the tenth intermediate image is processed by a Sigmoid function and then multiplied by a superposed image of the first intermediate image, the second intermediate image and the third intermediate image to obtain an eleventh intermediate image;
adding the eleventh intermediate image and a superposed image of the first intermediate image, the second intermediate image and the third intermediate image to obtain a twelfth intermediate image;
the two sixth convolution units and the fifth normalization unit are configured to process the twelfth intermediate image into a feature map.
Preferably, the positioning result is determined by the following formula:
(P, Ox, Oy, Oz, Sx, Sy, Sz);
in the formula:
P is the lung nodule probability;
Ox is the X-axis coordinate of the lung nodule;
Oy is the Y-axis coordinate of the lung nodule;
Oz is the Z-axis coordinate of the lung nodule;
Sx is the length, in the X-axis direction, of the region where the lung nodule is located;
Sy is the length, in the Y-axis direction, of the region where the lung nodule is located;
Sz is the length, in the Z-axis direction, of the region where the lung nodule is located.
Preferably, the dimensions of the lung segmentation image are 200 × 224 × 320, the dimensions of the feature map are 100 × 112 × 160, and the stride of the feature map is 2.
The lung nodule size detection system based on 3DUnet has the beneficial effects that the lung nodule probability, the lung nodule center point and the lung nodule size can be accurately given through the feature map and the positioning result; moreover, the lesion detection module dispenses with the configuration and computation of anchors and non-maximum suppression (NMS), which reduces its training difficulty.
Drawings
The invention is further illustrated with reference to the following figures and examples.
Fig. 1 is a schematic structural diagram of a lung lobe segmentation module of a lung nodule size detection system based on 3DUnet according to the present invention.
Fig. 2 is a schematic structural diagram of a lesion detection module of a lung nodule size detection system based on 3DUnet according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "axial", "radial", "circumferential", and the like, indicate orientations and positional relationships based on the orientations and positional relationships shown in the drawings, and are used merely for convenience of description and for simplicity of description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and therefore, should not be considered as limiting the present invention.
Furthermore, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the term "connected" is to be interpreted broadly, e.g., as a fixed, detachable, or integral connection; as a mechanical or electrical connection; or as a direct connection or an indirect connection through an intermediary. The specific meanings of the above terms in the present invention can be understood by those skilled in the art on a case-by-case basis. In addition, in the description of the present invention, "a plurality" means two or more unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
As shown in fig. 1-2, the present invention provides a lung nodule size detection system based on 3DUnet, which includes a lung lobe segmentation module and a lesion detection module.
The lung lobe segmentation module includes ten segmentation convolution blocks, four maximum pooling layers, and four upsampling layers. It is configured to read a lung CT image, remove irrelevant regions from it, and output a lung segmentation image; specifically:
the first segmented volume block is configured to process the lung CT image into a first resultant image having dimensions 96 x 96, the first segmented volume block having a number of output channels of 32.
The first max pooling layer and the second split volume block are configured to process the first result image into a second result image having dimensions of 48 x 48, the first max pooling layer has a number of output channels of 32, and the second split volume block has a number of output channels of 64.
The second max pooling layer and the third split volume block are configured to process the second resultant image into a third resultant image having dimensions of 24 x 24, the second max pooling layer has a number of output channels of 64, and the third split volume block has a number of output channels of 128.
The third maximum pooling layer and the fourth split volume block are configured to process the third result image into a fourth result image having dimensions of 12 × 12 × 12, the third maximum pooling layer has a number of output channels of 128, and the fourth split volume block has a number of output channels of 256.
The fourth maximum pooling layer and the fifth split volume block are configured to process the fourth resultant image into a fifth resultant image having dimensions of 6 × 6 × 6, the number of output channels of the fourth maximum pooling layer is 256, and the number of output channels of the fifth split volume block is 512.
The first upsampling layer is configured to process the fifth result image into a first intermediate result image.
The sixth segmentation convolution block is configured to process the stitching result of the first intermediate result image and the fourth result image into a sixth result image with dimensions 12 × 12 × 12; the number of output channels of the first upsampling layer is 512, and that of the sixth segmentation convolution block is 256.
The second upsampling layer is configured to process the sixth result image into a second intermediate result image.
The seventh segmentation convolution block is configured to process the stitching result of the second intermediate result image and the third result image into a seventh result image with dimensions 24 × 24 × 24; the number of output channels of the second upsampling layer is 256, and that of the seventh segmentation convolution block is 128.
The third upsampling layer is configured to process the seventh result image into a third intermediate result image.
The eighth segmentation convolution block is configured to process the stitching result of the third intermediate result image and the second result image into an eighth result image with dimensions 48 × 48 × 48; the number of output channels of the third upsampling layer is 128, and that of the eighth segmentation convolution block is 64.
The fourth upsampling layer is configured to process the eighth result image into a fourth intermediate result image.
The ninth and tenth segmentation convolution blocks are configured to process the stitching result of the fourth intermediate result image and the first result image into the lung segmentation image. The dimensions of the lung segmentation image are 96 × 96 × 96; the number of output channels of the fourth upsampling layer is 64, that of the ninth segmentation convolution block is 32, and that of the tenth segmentation convolution block is 6.
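Assuming, as is standard for a 3DUnet, that each maximum pooling layer halves the spatial resolution and each upsampling layer doubles it (kernel sizes are not specified in the text), the shape flow described above can be traced with a short sketch. The channel counts follow the description; everything else is illustrative.

```python
# Trace spatial size and output channels through the described lung lobe
# segmentation module: five encoder stages, four decoder stages, and a
# final 6-channel output block. Assumption: pooling halves and upsampling
# doubles each spatial dimension (standard 3DUnet behavior).

def trace_3dunet(size=96):
    shapes = []
    s = size
    for i, c in enumerate([32, 64, 128, 256, 512]):   # blocks 1-5
        if i > 0:
            s //= 2                                   # maximum pooling layer
        shapes.append((s, c))                         # result images 1-5
    for c in [256, 128, 64, 32]:                      # blocks 6-9
        s *= 2                                        # upsampling layer
        shapes.append((s, c))                         # result images 6-9
    shapes.append((s, 6))                             # tenth block: 6 channels
    return shapes

for s, c in trace_3dunet(96):
    print(f"{s} x {s} x {s}, {c} channels")
```

The trace reproduces the dimensions stated in the text: the fifth result image is 6 × 6 × 6 with 512 channels, and the final lung segmentation image is 96 × 96 × 96 with 6 output channels.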
In this embodiment, the obtained lung segmentation image with dimensions 96 × 96 × 96 is then resized to 200 × 224 × 320 before being input to the lesion detection module.
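The stated feature-map dimensions of the lesion detection module are consistent with its stride of 2, assuming the stride means each spatial dimension of the 200 × 224 × 320 input is halved exactly once overall; a one-line check:

```python
# Check that a stride of 2 maps the lung segmentation image dimensions
# (200 x 224 x 320) to the stated feature map dimensions (100 x 112 x 160).
input_dims = (200, 224, 320)
stride = 2
feature_dims = tuple(d // stride for d in input_dims)
print(feature_dims)  # (100, 112, 160), matching the dimensions in the text
```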
The lesion detection module comprises an input residual block, a first residual block, a second residual block, a first deconvolution unit, a third residual block, a second deconvolution unit and an output residual block, arranged in sequence.
The input residual block comprises two first convolution units, a first pooling unit and a first normalization unit which are arranged in sequence.
The first residual block includes three second convolution units, a second pooling unit, and a second normalization unit.
The second residual block includes three third convolution units, a third pooling unit, and a third normalization unit.
The third residual block includes a fourth convolution unit, a fifth convolution unit, and a fourth normalization unit.
The output residual block includes two sixth convolution units and a fifth normalization unit.
The lesion detection module is configured to read the lung segmentation image, locate lesions, output a feature map, and output a positioning result; specifically:
the input residual block is configured to process the lung segmentation image into an input image.
The first of the second convolution units is configured to process the input image into a first intermediate image, the second of the second convolution units processes the first intermediate image into a second intermediate image, and the third of the second convolution units processes the second intermediate image into a third intermediate image; the second pooling unit and the second normalization unit then process the third intermediate image and pass the result to the second residual block.
The first of the third convolution units is configured to process the third intermediate image, as processed by the second pooling unit and the second normalization unit, into a fourth intermediate image, the second of the third convolution units processes the fourth intermediate image into a fifth intermediate image, and the third of the third convolution units processes the fifth intermediate image into a sixth intermediate image; the third pooling unit and the third normalization unit then process the sixth intermediate image and input the result into the first deconvolution unit.
The first deconvolution unit is configured to process the sixth intermediate image processed by the third pooling unit and the third normalization unit into a seventh intermediate image.
After being processed by a Sigmoid function, the seventh intermediate image is multiplied by the superposed image of the fourth, fifth and sixth intermediate images to obtain an eighth intermediate image.
The eighth intermediate image is added to the superposed image of the fourth, fifth and sixth intermediate images to obtain a ninth intermediate image.
The fourth convolution unit, the fifth convolution unit, and the fourth normalization unit are configured to process the ninth intermediate image into a tenth intermediate image.
After being processed by a Sigmoid function, the tenth intermediate image is multiplied by the superposed image of the first, second and third intermediate images to obtain an eleventh intermediate image.
The eleventh intermediate image is added to the superposed image of the first, second and third intermediate images to obtain a twelfth intermediate image.
The two sixth convolution units and the fifth normalization unit are configured to process the twelfth intermediate image into a feature map.
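The multiply-then-add steps above amount to a sigmoid attention gate over a superposed (summed) skip image: gated = sigmoid(gate) × skip, then output = gated + skip. A minimal NumPy sketch, with toy tensors standing in for the intermediate images (shapes and values are illustrative, not taken from the text):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_fuse(gate, skip):
    """Sigmoid-gated fusion used twice in the lesion detection module:
    multiply the sigmoid of the (deconvolved) gate image with the superposed
    skip image, then add the skip image back (a residual attention step)."""
    gated = sigmoid(gate) * skip   # eighth / eleventh intermediate image
    return gated + skip            # ninth / twelfth intermediate image

# Toy example: the "superposed image" is the sum of three intermediate images.
rng = np.random.default_rng(0)
i4, i5, i6 = (rng.standard_normal((2, 4, 4, 4)) for _ in range(3))
superposed = i4 + i5 + i6                       # superposition of images 4-6
seventh = rng.standard_normal((2, 4, 4, 4))     # deconvolved gate image
ninth = attention_fuse(seventh, superposed)
assert ninth.shape == superposed.shape
```

The same `attention_fuse` pattern applies to the second gate, where the tenth intermediate image gates the superposition of the first, second, and third intermediate images.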
In this embodiment, the dimensions of the feature map are 100 × 112 × 160, and the stride of the feature map is 2. The positioning result is determined by the following formula:
(P, Ox, Oy, Oz, Sx, Sy, Sz);
in the formula:
P is the lung nodule probability;
Ox is the X-axis coordinate of the lung nodule;
Oy is the Y-axis coordinate of the lung nodule;
Oz is the Z-axis coordinate of the lung nodule;
Sx is the length, in the X-axis direction, of the region where the lung nodule is located;
Sy is the length, in the Y-axis direction, of the region where the lung nodule is located;
Sz is the length, in the Z-axis direction, of the region where the lung nodule is located.
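Since the positioning result is expressed on the feature map, whose stride relative to the lung segmentation image is 2, mapping a detection back to input-image coordinates plausibly amounts to scaling the center and extents by the stride. The helper below encodes that assumed convention; it is not stated explicitly in the text, and the example values are illustrative only.

```python
def to_input_coords(result, stride=2):
    """Map a positioning result (P, Ox, Oy, Oz, Sx, Sy, Sz) expressed on the
    feature map back to lung-segmentation-image coordinates by scaling the
    nodule center and region extents by the feature-map stride.
    The probability P passes through unchanged."""
    p, ox, oy, oz, sx, sy, sz = result
    return (p, ox * stride, oy * stride, oz * stride,
            sx * stride, sy * stride, sz * stride)

detection = (0.93, 50, 56, 80, 6, 6, 5)   # hypothetical feature-map detection
print(to_input_coords(detection))         # (0.93, 100, 112, 160, 12, 12, 10)
```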
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, a schematic representation of the term does not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
In light of the foregoing description of the preferred embodiment of the present invention, many modifications and variations will be apparent to those skilled in the art without departing from the spirit and scope of the invention. The technical scope of the present invention is not limited to the content of the specification, and must be determined according to the scope of the claims.

Claims (5)

1. A pulmonary nodule size detection system based on 3DUnet, characterized by comprising a lung lobe segmentation module and a lesion detection module;
the lung lobe segmentation module is configured to read a lung CT image, remove irrelevant regions from it, and output a lung segmentation image; the lung lobe segmentation module comprises ten segmentation convolution blocks, four maximum pooling layers and four upsampling layers;
the lesion detection module is configured to read the lung segmentation image, locate lesions, output a feature map, and output a positioning result; the lesion detection module comprises an input residual block, a first residual block, a second residual block, a first deconvolution unit, a third residual block, a second deconvolution unit and an output residual block, arranged in sequence;
the input residual block comprises two first convolution units, a first pooling unit and a first normalization unit which are arranged in sequence;
the first residual block comprises three second convolution units, a second pooling unit and a second normalization unit;
the second residual block comprises three third convolution units, a third pooling unit and a third normalization unit;
the third residual block comprises a fourth convolution unit, a fifth convolution unit and a fourth normalization unit;
the output residual block includes two sixth convolution units and a fifth normalization unit.
2. The 3DUnet-based pulmonary nodule size detection system of claim 1, wherein:
the lung lobe segmentation module is specifically configured to:
a first segmentation convolution block is configured to process the lung CT image into a first result image;
the first maximum pooling layer and the second segmentation convolution block are configured to process the first result image into a second result image;
the second maximum pooling layer and the third segmentation convolution block are configured to process the second result image into a third result image;
the third maximum pooling layer and the fourth segmentation convolution block are configured to process the third result image into a fourth result image;
the fourth maximum pooling layer and the fifth segmentation convolution block are configured to process the fourth result image into a fifth result image;
the first upsampling layer is configured to process the fifth result image into a first intermediate result image;
the sixth segmentation convolution block is configured to process the stitching result of the first intermediate result image and the fourth result image into a sixth result image;
the second upsampling layer is configured to process the sixth result image into a second intermediate result image;
the seventh segmentation convolution block is configured to process the stitching result of the second intermediate result image and the third result image into a seventh result image;
the third upsampling layer is configured to process the seventh result image into a third intermediate result image;
the eighth segmentation convolution block is configured to process the stitching result of the third intermediate result image and the second result image into an eighth result image;
the fourth upsampling layer is configured to process the eighth result image into a fourth intermediate result image;
the ninth segmentation convolution block and the tenth segmentation convolution block are configured to process the stitching result of the fourth intermediate result image and the first result image into the lung segmentation image.
3. The 3DUnet-based pulmonary nodule size detection system of claim 2, wherein:
the lesion detection module is specifically configured to:
the input residual block is configured to process the lung segmentation image into an input image;
the first of the second convolution units is configured to process the input image into a first intermediate image, the second of the second convolution units is configured to process the first intermediate image into a second intermediate image, the third of the second convolution units is configured to process the second intermediate image into a third intermediate image, and the second pooling unit and the second normalization unit are configured to process the third intermediate image and pass the result to the second residual block;
the first of the third convolution units is configured to process the third intermediate image, as processed by the second pooling unit and the second normalization unit, into a fourth intermediate image, the second of the third convolution units is configured to process the fourth intermediate image into a fifth intermediate image, the third of the third convolution units is configured to process the fifth intermediate image into a sixth intermediate image, and the third pooling unit and the third normalization unit are configured to process the sixth intermediate image and input the result into the first deconvolution unit;
the first deconvolution unit is configured to process the sixth intermediate image processed by the third pooling unit and the third normalization unit into a seventh intermediate image;
the seventh intermediate image is processed by a Sigmoid function and then multiplied by a superposed image of the fourth intermediate image, the fifth intermediate image and the sixth intermediate image to obtain an eighth intermediate image;
adding the eighth intermediate image and the superposed image of the fourth intermediate image, the fifth intermediate image and the sixth intermediate image to obtain a ninth intermediate image;
the fourth convolution unit, fifth convolution unit, and fourth normalization unit are configured to process the ninth intermediate image into a tenth intermediate image;
the tenth intermediate image is processed by a Sigmoid function and then multiplied by a superposed image of the first intermediate image, the second intermediate image and the third intermediate image to obtain an eleventh intermediate image;
adding the eleventh intermediate image and a superposed image of the first intermediate image, the second intermediate image and the third intermediate image to obtain a twelfth intermediate image;
the two sixth convolution units and the fifth normalization unit are configured to process the twelfth intermediate image into a feature map.
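Both decoder stages above repeat the same gating pattern: an image is passed through a Sigmoid, multiplied by a superposed skip image, and the product is added back to that skip image. A minimal NumPy sketch of this pattern follows; the element-wise reading of "multiplied by", and all shapes and function names, are illustrative assumptions rather than details fixed by the claim:

```python
import numpy as np

def sigmoid(x):
    """Element-wise logistic function, as applied to the seventh and tenth intermediate images."""
    return 1.0 / (1.0 + np.exp(-x))

def attention_gate(x, skip_superposed):
    """Gate a superposed skip image with a Sigmoid attention map.

    Mirrors the claimed steps: sigmoid(x) * skip gives the eighth (or
    eleventh) intermediate image; adding the skip back gives the ninth
    (or twelfth) intermediate image.
    """
    gated = sigmoid(x) * skip_superposed
    return gated + skip_superposed

# Hypothetical (depth, height, width) feature volumes.
rng = np.random.default_rng(0)
up = rng.standard_normal((4, 4, 4))
skip = rng.standard_normal((4, 4, 4))
out = attention_gate(up, skip)
assert out.shape == (4, 4, 4)
```

Because the Sigmoid output lies in (0, 1), the addition keeps the skip features while the product emphasizes the spatial locations the decoder attends to.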
4. A 3DUnet-based lung nodule size detection system as claimed in claim 3, wherein:
the positioning result is determined by the following formula:
(P, Ox, Oy, Oz, Sx, Sy, Sz);
in the formula:
P is the lung nodule probability;
Ox is the X-axis coordinate of the lung nodule in the feature map;
Oy is the Y-axis coordinate of the lung nodule in the feature map;
Oz is the Z-axis coordinate of the lung nodule in the feature map;
Sx is the length, in the X-axis direction, of the region where the lung nodule is located in the feature map;
Sy is the length, in the Y-axis direction, of the region where the lung nodule is located in the feature map;
Sz is the length, in the Z-axis direction, of the region where the lung nodule is located in the feature map.
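The positioning result above is a seven-element record. A small sketch of such a record type follows; the field names mirror the claimed symbols, while the class name and example values are hypothetical:

```python
from typing import NamedTuple

class NoduleResult(NamedTuple):
    """One localization result: probability plus a 3-D box in feature-map coordinates."""
    P: float   # lung nodule probability
    Ox: float  # X-axis coordinate of the nodule in the feature map
    Oy: float  # Y-axis coordinate of the nodule in the feature map
    Oz: float  # Z-axis coordinate of the nodule in the feature map
    Sx: float  # extent of the nodule region along X in the feature map
    Sy: float  # extent of the nodule region along Y in the feature map
    Sz: float  # extent of the nodule region along Z in the feature map

# Hypothetical detection: 93% probability, centered at (40, 56, 80).
r = NoduleResult(P=0.93, Ox=40.0, Oy=56.0, Oz=80.0, Sx=6.0, Sy=5.0, Sz=4.0)
assert 0.0 <= r.P <= 1.0
```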
5. The 3DUnet-based lung nodule size detection system of claim 4, wherein:
the dimensions of the lung segmentation image are 200 × 224 × 320, the dimensions of the feature map are 100 × 112 × 160, and the stride of the feature map is 2.
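The dimensions in claim 5 are consistent with a single downsampling by the stated stride of 2: each axis of the 200 × 224 × 320 lung segmentation image is halved to give the 100 × 112 × 160 feature map. A sketch of that relation and the corresponding coordinate mapping; the helper names are illustrative, not from the patent:

```python
STRIDE = 2
SEG_SHAPE = (200, 224, 320)  # lung segmentation image (X, Y, Z)

# Each axis is divided by the stride to obtain the feature-map shape.
FEAT_SHAPE = tuple(d // STRIDE for d in SEG_SHAPE)

def feat_to_image(coord_feat):
    """Map a feature-map coordinate back to segmentation-image voxels."""
    return tuple(c * STRIDE for c in coord_feat)

assert FEAT_SHAPE == (100, 112, 160)
assert feat_to_image((40, 56, 80)) == (80, 112, 160)
```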
CN201911352399.9A 2019-12-25 Lung nodule size detecting system based on 3DUnet Active CN111091564B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911352399.9A CN111091564B (en) 2019-12-25 Lung nodule size detecting system based on 3DUnet


Publications (2)

Publication Number Publication Date
CN111091564A true CN111091564A (en) 2020-05-01
CN111091564B CN111091564B (en) 2024-04-26


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114820584A (en) * 2022-05-27 2022-07-29 北京安德医智科技有限公司 Lung lesion locating device


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018222755A1 (en) * 2017-05-30 2018-12-06 Arterys Inc. Automated lesion detection, segmentation, and longitudinal identification
CN108765369A (en) * 2018-04-20 2018-11-06 平安科技(深圳)有限公司 Lung nodule detection method, apparatus, computer device and storage medium
WO2019200740A1 (en) * 2018-04-20 2019-10-24 平安科技(深圳)有限公司 Pulmonary nodule detection method and apparatus, computer device, and storage medium
CN109685776A (en) * 2018-12-12 2019-04-26 华中科技大学 Pulmonary nodule detection method and system based on CT images

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MIDL 2018: "Recurrent Residual Convolutional Neural Network based on U-Net (R2U-Net) for Medical Image Segmentation", <MIDL 2018>, pages 2 - 4 *
Özgün Çiçek: "3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation", <MICCAI 2016> *
CHU Jinghui; LI Xiaochuan; ZHANG Jiaqi; LÜ Wei: "Fine Segmentation of 3D Brain Tumors Based on a Cascaded Convolutional Network", Laser & Optoelectronics Progress, no. 10 *


Similar Documents

Publication Publication Date Title
JP5683065B2 (en) Improved system and method for positive displacement registration
JP4800127B2 (en) Medical image segmentation device and medical image segmentation program
CN1890689A (en) Elastic image registration
ES2875919T3 (en) Similar case image search program, similar case image search device and similar case image search method
US20080069477A1 (en) Result filter and method for selecting the result data of an application for automatic pattern recognition
CN104586418B (en) medical image data processing apparatus and medical image data processing method
CN113674253B (en) Automatic segmentation method for rectal cancer CT image based on U-transducer
CN110363802B (en) Prostate image registration system and method based on automatic segmentation and pelvis alignment
US10603016B2 (en) Image processing apparatus, method of controlling the same, and non-transitory computer-readable storage medium
CN105913442A (en) Method for automatically matching pulmonary nodules
US11200443B2 (en) Image processing apparatus, image processing method, and image processing system
JP7101809B2 (en) Image processing equipment, image processing methods, and programs
WO2019037654A1 (en) 3d image detection method and apparatus, electronic device, and computer readable medium
CN109191465A (en) A kind of system for being determined based on deep learning network, identifying human body or so the first rib cage
CN113436173A (en) Abdomen multi-organ segmentation modeling and segmentation method and system based on edge perception
CN110910348B (en) Method, device, equipment and storage medium for classifying positions of pulmonary nodules
US20100215228A1 (en) Medical image displaying apparatus, medical image displaying method, and medical image displaying program
Costa et al. Data augmentation for detection of architectural distortion in digital mammography using deep learning approach
CN111091564B (en) Lung nodule size detecting system based on 3DUnet
CN111091564A (en) Pulmonary nodule size detection system based on 3DUnet
CN110992310A (en) Method and device for determining partition where mediastinal lymph node is located
US20100260393A1 (en) Navigation guide
Foo et al. Interactive segmentation for covid-19 infection quantification on longitudinal ct scans
CN111751371B (en) Immunohistochemical digital slide reading system and method
JP4393135B2 (en) Radiation image processing apparatus, radiation image processing method, computer program, and computer-readable recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant