WO2021179702A1 - Method and apparatus for segmenting bone fragments from a three-dimensional image, computer device, and storage medium - Google Patents
Method and apparatus for segmenting bone fragments from a three-dimensional image, computer device, and storage medium
- Publication number
- WO2021179702A1 WO2021179702A1 PCT/CN2020/134546 CN2020134546W WO2021179702A1 WO 2021179702 A1 WO2021179702 A1 WO 2021179702A1 CN 2020134546 W CN2020134546 W CN 2020134546W WO 2021179702 A1 WO2021179702 A1 WO 2021179702A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- result
- segmentation
- dimensional
- sampling
- map
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30008—Bone
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Definitions
- This application relates to the field of intelligent medical treatment, and in particular to a method, an apparatus, a computer device, and a storage medium for segmenting bone fragments from three-dimensional images.
- Fractures are a common injury. More severe fractures often produce bone fragments; if bone fragments are missed during surgical treatment, serious consequences may follow.
- Existing traditional segmentation methods mainly rely on subjective human interpretation of the image to extract specific feature information, such as gray-level, texture, and symmetry information, to achieve bone fragment segmentation.
- Such methods achieve good segmentation results only on specific images; the results are too coarse and the segmentation efficiency is low.
- Convolutional neural networks, as representatives of supervised learning, can learn feature representations directly from data.
- Layer by layer, simple low-level features such as edges and corners are combined into more abstract high-level representations; this approach has achieved remarkable results in the field of image recognition and is widely used in medical image processing.
- The inventor realizes that the current mainstream machine segmentation methods mainly segment MRI images slice by slice, which fails to reflect the spatial correlation of tissues and yields low segmentation efficiency.
- The present application provides a method for segmenting bone fragments from a three-dimensional image, including: acquiring a three-dimensional image to be segmented; and using a segmentation model to recognize the three-dimensional image to be segmented to obtain a three-dimensional bone fragment segmentation result. Using the segmentation model to recognize the three-dimensional image to be segmented includes: performing feature extraction on the three-dimensional image to be segmented to obtain a basic feature map; performing feature extraction on the basic feature map to obtain a segmentation map; and fusing the segmentation map with the basic feature map to generate the three-dimensional bone fragment segmentation result.
- The present application also provides a device for segmenting bone fragments from a three-dimensional image, including: a receiving unit for acquiring a three-dimensional image to be segmented; and a segmentation unit for using a segmentation model to recognize the three-dimensional image to be segmented to obtain a three-dimensional bone fragment segmentation result.
- The segmentation model includes a feature extraction module, an intermediate module, and a segmentation module.
- The segmentation unit uses the feature extraction module to perform feature extraction on the three-dimensional image to be segmented to obtain a basic feature map, uses the intermediate module to perform feature extraction on the basic feature map to obtain a segmentation map, and uses the segmentation module to fuse the segmentation map with the basic feature map to generate the three-dimensional bone fragment segmentation result.
- The present application also provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the following method when executing the computer program: acquiring a three-dimensional image to be segmented; and using a segmentation model to recognize the three-dimensional image to be segmented to obtain a three-dimensional bone fragment segmentation result, which includes: performing feature extraction on the three-dimensional image to be segmented to obtain a basic feature map; performing feature extraction on the basic feature map to obtain a segmentation map; and fusing the segmentation map with the basic feature map to generate the three-dimensional bone fragment segmentation result.
- The present application also provides a computer-readable storage medium on which a computer program is stored.
- When the computer program is executed by a processor, the following method is implemented: acquiring a three-dimensional image to be segmented; and using a segmentation model to recognize the three-dimensional image to be segmented to obtain a three-dimensional bone fragment segmentation result, which includes: performing feature extraction on the three-dimensional image to be segmented to obtain a basic feature map; performing feature extraction on the basic feature map to obtain a segmentation map; and fusing the segmentation map with the basic feature map to generate the three-dimensional bone fragment segmentation result.
- This application takes the correlation of tissues in the stereo image into account, saves segmentation time, and improves the accuracy and efficiency of segmentation.
- FIG. 1 is a flowchart of an embodiment of the method for segmenting bone fragments of a three-dimensional image according to this application.
- FIG. 2 is a flowchart of an embodiment of training an initial classification model to obtain a segmentation model in this application.
- FIG. 3 is a block diagram of an embodiment of the device for segmenting bone fragments of three-dimensional images described in this application.
- Fig. 4 is an internal block diagram of the segmentation model described in this application.
- FIG. 5 is a hardware architecture diagram of an embodiment of the computer device of this application.
- the technical solution of this application can be applied to the fields of artificial intelligence, smart city, digital medical care, blockchain and/or big data technology to realize smart medical care.
- the data involved in this application can be stored in a database, or can be stored in a blockchain, such as distributed storage through a blockchain, which is not limited in this application.
- the three-dimensional image bone fragmentation method, device, computer equipment and storage medium provided in this application are suitable for the field of intelligent medical treatment.
- This application takes a three-dimensional stereo image as input and uses the feature extraction module in the segmentation model to perform feature extraction on the three-dimensional image to be segmented to obtain the basic feature map, taking the correlation of tissues in the stereo image into account; feature extraction is performed on the basic feature map through the intermediate module to obtain a segmentation map of the same size as the original three-dimensional image to be segmented.
- The segmentation module is used to fuse the segmentation map with the basic feature map to obtain the three-dimensional bone fragment segmentation result, which saves segmentation time and improves the accuracy and efficiency of segmentation.
- a method for segmenting bone fragments of a three-dimensional image in this embodiment includes the following steps.
- The three-dimensional image to be segmented is a three-dimensional MRI (Magnetic Resonance Imaging) image.
- MRI displays the internal structures of the body as images and has the advantages of being non-invasive, multi-modal, and accurately localized.
- the segmentation model includes a feature extraction module, an intermediate module, and a segmentation module.
- Step S2, using a segmentation model to recognize the three-dimensional image to be segmented to obtain a three-dimensional bone fragment segmentation result, includes the following steps.
- Step S21, performing feature extraction on the three-dimensional image to be segmented to obtain a basic feature map, may include the following steps: S211. Convolve the three-dimensional image to be segmented to obtain a first feature result. S212. Down-sample the first feature result to obtain a first sampling result. S213. Add the first sampling result and the first feature result element by element and then convolve to obtain a second feature result. S214. Down-sample the second feature result to obtain a second sampling result. S215. Add the second sampling result and the second feature result element by element and then convolve to obtain a third feature result. S216. Down-sample the third feature result to obtain a third sampling result. S217. Add the third sampling result and the third feature result element by element and then convolve to obtain a fourth feature result. S218. Down-sample the fourth feature result to obtain a fourth sampling result. S219. Add the fourth sampling result and the fourth feature result element by element and then convolve to obtain the basic feature map.
- a feature extraction module is used to perform feature extraction on the three-dimensional image to be segmented to obtain a basic feature map.
- The feature extraction module sequentially includes: a first context layer, a first down-sampling layer, a second context layer, a second down-sampling layer, a third context layer, a third down-sampling layer, a fourth context layer, a fourth down-sampling layer, and a fifth context layer.
- Feature extraction is performed on the three-dimensional image to be segmented through the first context layer to obtain the first feature result; the first feature result is input into the first down-sampling layer to obtain the first sampling result; and the first sampling result and the first feature result are added element by element as the input of the second context layer to obtain the second feature result.
- The second feature result is input into the second down-sampling layer to obtain the second sampling result; the second sampling result and the second feature result are added element by element as the input of the third context layer to obtain the third feature result.
- The third feature result is input into the third down-sampling layer to obtain the third sampling result; the third sampling result and the third feature result are added element by element as the input of the fourth context layer to obtain the fourth feature result.
- The fourth feature result is input into the fourth down-sampling layer to obtain the fourth sampling result; the fourth sampling result and the fourth feature result are added element by element as the input of the fifth context layer to obtain the basic feature map.
- a 128 ⁇ 128 ⁇ 128 voxel three-dimensional image to be segmented can be input to the input layer of the segmentation model.
- Adjacent context layers are connected by a down-sampling layer; the result of each down-sampling layer and the preceding context layer are added element by element as the input of the next context layer, finally yielding the basic feature map, that is, the rough segmentation map.
- The above-mentioned voxel, also called a volume element, is the smallest unit of digital data in the partitioning of three-dimensional space.
- the voxel is mainly used in fields such as three-dimensional imaging, scientific data, and medical imaging.
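- For illustration, the following is a minimal PyTorch sketch of the encoder path described above. It is not the patent's implementation: the layer types, channel counts, and normalization are assumptions, and the element-wise addition of each sampling result with the convolved features is modelled as a residual connection so that tensor shapes match.

```python
import torch
import torch.nn as nn

class ContextLayer(nn.Module):
    """A plausible 'context layer': two 3x3x3 convolutions with normalization."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(ch, ch, 3, padding=1), nn.InstanceNorm3d(ch), nn.LeakyReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1), nn.InstanceNorm3d(ch), nn.LeakyReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)

class FeatureExtractor(nn.Module):
    """Five context layers joined by four down-sampling layers (strided convolutions).

    The element-wise additions of steps S213/S215/S217/S219 are read here as
    residual adds around each context layer.
    """
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        chs = [base * 2 ** i for i in range(5)]  # assumed channel widths, e.g. 16..256
        self.stem = nn.Conv3d(in_ch, chs[0], 3, padding=1)
        self.ctx = nn.ModuleList(ContextLayer(c) for c in chs)
        self.down = nn.ModuleList(
            nn.Conv3d(chs[i], chs[i + 1], 3, stride=2, padding=1) for i in range(4)
        )

    def forward(self, x):
        f = self.ctx[0](self.stem(x))   # first feature result
        feats = [f]
        for down, ctx in zip(self.down, self.ctx[1:]):
            s = down(f)                 # sampling result at half resolution
            f = ctx(s) + s              # element-wise addition with the convolved features
            feats.append(f)
        return feats                    # feats[-1] is the basic feature map (rough segmentation map)

# The patent feeds 128x128x128 voxel volumes; a 64^3 volume keeps this demo light.
feats = FeatureExtractor()(torch.randn(1, 1, 64, 64, 64))
print([tuple(f.shape[2:]) for f in feats])  # [(64,64,64), (32,32,32), ..., (4,4,4)]
```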
- Step S22 may include the following steps: S221. Up-sample the basic feature map to obtain a first segmentation result. S222. Fuse the first segmentation result with the fourth feature result, then decode and up-sample to obtain a second segmentation result. S223. Fuse the second segmentation result with the third feature result and decode to obtain the first output result, which is up-sampled to obtain the third segmentation result. S224. Fuse the third segmentation result with the second feature result, then decode and up-sample to obtain the fourth segmentation result. S225. Fuse the fourth segmentation result with the first feature result, then convolve and segment to obtain a fifth segmentation result, and use the fifth segmentation result as the segmentation map.
- an intermediate module is used to perform feature extraction on the basic feature map to obtain a segmentation map.
- The intermediate module sequentially includes: a first up-sampling layer, a first decoding layer, a second up-sampling layer, a second decoding layer, a third up-sampling layer, a third decoding layer, a fourth up-sampling layer, a three-dimensional convolution layer, and a first segmentation layer.
- The basic feature map is input into the first up-sampling layer to obtain the first segmentation result; the first segmentation result is fused with the fourth feature result and passes through the first decoding layer and the second up-sampling layer to obtain the second segmentation result; the second segmentation result is fused with the third feature result and passes through the second decoding layer and the third up-sampling layer to obtain the third segmentation result.
- The third segmentation result is fused with the second feature result and passes through the third decoding layer and the fourth up-sampling layer to obtain the fourth segmentation result.
- The fourth segmentation result is fused with the first feature result and passes through the three-dimensional convolution layer and the first segmentation layer to obtain a fifth segmentation result, which is used as the segmentation map.
- Specifically, the first segmentation result is obtained by the first up-sampling layer following the fifth context layer; the first segmentation result is fused with the fourth feature result, and after fusion passes through the first decoding layer and the second up-sampling layer to obtain the second segmentation result; the second segmentation result is fused with the third feature result and passes through the second decoding layer and the third up-sampling layer to obtain the third segmentation result, and so on, to obtain the fourth segmentation result.
- The fourth segmentation result is fused with the first feature result, and the fifth segmentation result is obtained through the three-dimensional convolution layer and the first segmentation layer.
- the up-sampling of the segmentation results is achieved through the up-sampling layer, so as to obtain a segmentation map with the same size as the original three-dimensional image to be segmented.
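- A matching sketch of the intermediate module, continuing the encoder sketch above, might look as follows. The patent describes "fusion" without specifying the operation; channel concatenation followed by a decoding convolution is assumed here, as is common in 3D U-Net-style models, and the channel widths mirror the assumed encoder.

```python
import torch
import torch.nn as nn

class DecodeLayer(nn.Module):
    """A plausible 'decoding layer': a 3x3x3 convolution applied after fusion."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, 3, padding=1),
            nn.InstanceNorm3d(out_ch),
            nn.LeakyReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)

class IntermediateModule(nn.Module):
    """Up-sampling layers restore resolution stage by stage; each stage fuses the
    up-sampled result with the matching feature result and decodes it."""
    def __init__(self, chs=(16, 32, 64, 128, 256), num_classes=3):
        super().__init__()
        self.up = nn.ModuleList(
            nn.ConvTranspose3d(chs[i + 1], chs[i], 2, stride=2) for i in reversed(range(4))
        )
        self.dec = nn.ModuleList(
            DecodeLayer(2 * chs[i], chs[i]) for i in reversed(range(1, 4))
        )
        self.final_conv = nn.Conv3d(2 * chs[0], chs[0], 3, padding=1)  # three-dimensional convolution layer
        self.seg1 = nn.Conv3d(chs[0], num_classes, 1)                  # first segmentation layer

    def forward(self, feats):
        f1, f2, f3, f4, basic = feats
        x = self.up[0](basic)                           # first up-sampling -> first segmentation result
        decoded = []                                    # decoder outputs, reused for deep supervision
        for up, dec, skip in zip(self.up[1:], self.dec, (f4, f3, f2)):
            x = dec(torch.cat([x, skip], dim=1))        # fuse with the feature result, then decode
            decoded.append(x)
            x = up(x)                                   # next up-sampling layer
        x = self.final_conv(torch.cat([x, f1], dim=1))  # fuse with the first feature result
        return self.seg1(x), decoded                    # fifth segmentation result = segmentation map

# seg_map, decoded = IntermediateModule()(feats)  # feats from the encoder sketch above
```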
- Step S23 may include the following steps: S231. Segment the first output result to obtain a second output result, and up-sample it to obtain a first additional segmentation result. S232. Add the second output result and the first additional segmentation result element by element and up-sample to obtain a second additional result. S233. Add the segmentation map and the second additional result element by element and then classify to obtain the three-dimensional bone fragment segmentation result.
- a segmentation module is used to fuse the segmentation map and the basic feature map to generate the three-dimensional bone fragmentation result.
- the segmentation module sequentially includes: a second segmentation layer, a fifth up-sampling layer, a sixth up-sampling layer, and a classification layer.
- The first output result of the second decoding layer passes through the second segmentation layer and the fifth up-sampling layer to obtain the first additional segmentation result; the second output result of the second segmentation layer and the first additional segmentation result are added element by element, and the second additional result is obtained through the sixth up-sampling layer; the segmentation map and the second additional result are added element by element and then input into the classification layer to obtain the three-dimensional bone fragment segmentation result.
- Specifically, the second segmentation layer and the fifth up-sampling layer are used in turn to segment the first output result output by the second decoding layer to obtain the first additional segmentation result; the second output result is added to the first additional segmentation result element by element, and the second additional result is obtained through the sixth up-sampling layer.
- The segmentation map and the second additional result are added element by element, and the final segmentation result is output through the classification layer.
- The segmentation result is a probability score matrix corresponding to the number of semantic segmentation categories and of the same size as the original image.
- The final classification of each voxel is determined by retrieving the probability that it belongs to each category, forming the final three-dimensional bone fragment segmentation result.
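- The deep-supervision wiring of the segmentation module might then look as follows. This reading — auxiliary segmentation maps from two decoder scales, up-sampled and summed into the main segmentation map before a per-voxel softmax — is one interpretation of the wording above, with `decoded` taken from the intermediate-module sketch; the extra segmentation head and shared up-sampling are assumptions.

```python
import torch
import torch.nn as nn

class SegmentationModule(nn.Module):
    """Auxiliary segmentation maps from two decoder scales are up-sampled and added
    element by element into the main segmentation map; a softmax classification
    layer then scores every voxel per category."""
    def __init__(self, dec_chs=(64, 32), num_classes=3):
        super().__init__()
        self.seg2 = nn.Conv3d(dec_chs[0], num_classes, 1)  # second segmentation layer
        self.seg3 = nn.Conv3d(dec_chs[1], num_classes, 1)  # assumed extra segmentation head
        # The patent names a fifth and a sixth up-sampling layer; a parameter-free
        # trilinear up-sampling is reused here for both.
        self.up = nn.Upsample(scale_factor=2, mode='trilinear', align_corners=False)

    def forward(self, seg_map, decoded):
        d_quarter, d_half = decoded[1], decoded[2]   # decoder outputs at 1/4 and 1/2 scale
        a1 = self.up(self.seg2(d_quarter))           # first additional segmentation result
        a2 = self.up(a1 + self.seg3(d_half))         # second additional result
        probs = torch.softmax(seg_map + a2, dim=1)   # probability score matrix per category
        return probs.argmax(dim=1)                   # final per-voxel class labels

# labels = SegmentationModule()(seg_map, decoded)  # from the intermediate-module sketch
```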
- Before the segmentation model is used to recognize the three-dimensional image to be segmented to obtain the three-dimensional bone fragment segmentation result, the following steps are also included: training an initial classification model to obtain the segmentation model.
- The initial classification model includes a feature extraction module, an intermediate module, and a segmentation module.
- The training step A may include the following steps.
- A1. Reconstruct a three-dimensional training image from the two-dimensional sequence diagrams in the samples.
- The two-dimensional sequence diagrams are sparse; they are reconstructed by trilinear interpolation or by a super-resolution reconstruction method to obtain an isotropic sequence diagram.
- Trilinear interpolation is mainly used within a 3D cube: given the values at the cube's vertices, the values of other points inside the cube are computed.
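- As a self-contained illustration of the principle (not the patent's reconstruction code), trilinear interpolation inside a unit cube can be written as:

```python
import numpy as np

def trilinear_interpolate(corners, x, y, z):
    """Value at fractional position (x, y, z) in [0, 1]^3 given the 8 cube-corner values.

    corners[i, j, k] holds the value at vertex (i, j, k) of the unit cube.
    """
    c = np.asarray(corners, dtype=float)
    c = c[0] * (1 - x) + c[1] * x     # collapse the x axis: 8 corner values -> 4 edge values
    c = c[0] * (1 - y) + c[1] * y     # collapse the y axis: 4 values -> 2
    return c[0] * (1 - z) + c[1] * z  # collapse the z axis: 2 values -> 1

# The centre of a cube whose vertex values are 0..7 interpolates to their mean.
vals = np.arange(8, dtype=float).reshape(2, 2, 2)
assert trilinear_interpolate(vals, 0.5, 0.5, 0.5) == 3.5
```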
- During training, random batches of samples can be used, for example a batch size of 2 with voxel volumes of size 128×128×128, more than 100 images per round, and 300 rounds of training in total.
- The three-dimensional training images are normalized to unify the pixel ranges of the three-dimensional sample images, which facilitates subsequent model training.
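- The patent does not specify the normalization scheme; a common choice for MRI volumes, sketched here under that assumption, is z-score normalization over foreground voxels:

```python
import numpy as np

def normalize_volume(vol):
    """Z-score normalization over foreground (non-zero) voxels of an MRI volume."""
    vol = np.asarray(vol, dtype=np.float32)
    foreground = vol > 0                 # crude foreground mask, for illustration only
    mean, std = vol[foreground].mean(), vol[foreground].std()
    return (vol - mean) / (std + 1e-8)   # zero mean, unit variance over the foreground
```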
- In step A2, field (bias) deviation correction can also be applied to the two modalities of the three-dimensional training image, namely spin-lattice relaxation time (T1) and spin-spin relaxation time (T2).
- The feature extraction module sequentially includes: a first context layer, a first down-sampling layer, a second context layer, a second down-sampling layer, a third context layer, a third down-sampling layer, a fourth context layer, a fourth down-sampling layer, and a fifth context layer.
- feature extraction is performed on the three-dimensional sample image by the feature extraction module in the initial classification model to obtain a basic training feature map.
- the three-dimensional sample image can be input to the input layer of the initial classification model.
- Adjacent context layers are connected by a down-sampling layer; the results of each down-sampling layer and the preceding context layer are added element by element as the input of the next context layer to obtain the basic feature map, that is, the rough segmentation map.
- A4. Perform feature extraction on the basic training feature map to obtain a segmentation training image.
- the intermediate module includes in sequence: the first upsampling layer, the first decoding layer, the second upsampling layer, the second decoding layer, the third upsampling layer, the third decoding layer, the fourth upsampling layer, the three-dimensional Convolutional layer and first segmentation layer.
- feature extraction is performed on the basic training feature map through the intermediate module to obtain a segmentation training image.
- The first segmentation result is obtained by the first up-sampling layer behind the fifth context layer; the first segmentation result is fused with the fourth feature result, and after fusion the first decoding layer and the second up-sampling layer yield the second segmentation result; the second segmentation result and the third feature result are merged, and after fusion the second decoding layer and the third up-sampling layer yield the third segmentation result, and so on, to obtain the fourth segmentation result; the fourth segmentation result is fused with the first feature result, and the fifth segmentation result is obtained through the three-dimensional convolution layer and the first segmentation layer.
- the up-sampling of the segmentation results is achieved through the up-sampling layer, so as to obtain a segmentation map with the same size as the original three-dimensional image to be segmented.
- the segmentation module sequentially includes: a second segmentation layer, a fifth up-sampling layer, a sixth up-sampling layer, and a classification layer.
- the segmentation training map and the basic training feature map are fused by a segmentation module to generate the three-dimensional bone fragmentation training segmentation result.
- The first output result of the second decoding layer is segmented by the second segmentation layer and up-sampled by the fifth up-sampling layer to obtain the first additional segmentation result; the second output result and the first additional segmentation result are added element by element, and the second additional result is obtained through the sixth up-sampling layer.
- The segmentation training map and the second additional result are added element by element, and the final training segmentation result is output by the classification layer.
- The training segmentation result is a probability score matrix corresponding to the number of semantic segmentation categories and of the same size as the original image.
- The final classification is determined by retrieving the probability that each pixel belongs to each category, forming the final three-dimensional bone fragment segmentation result.
- A6. Adjust the parameters in the initial classification model according to the training segmentation result to obtain the segmentation model.
- Step A6, adjusting the parameters in the feature extraction module, the intermediate module, and the segmentation module according to the training segmentation result to obtain the segmentation model, may include: using an Adam optimizer to adjust, according to the training segmentation result, the parameters in the feature extraction module, the intermediate module, and the segmentation module to obtain the segmentation model.
- The Adam optimizer can adapt a different learning rate for each parameter, updating frequently changing parameters with smaller steps and sparse parameters with larger steps.
- The traditional cross-entropy loss function is abandoned, and a multi-category Dice loss function can be used for bone fragment segmentation.
- The Dice loss function is a set-similarity measure, usually used to calculate the similarity of two samples (with a similarity range of [0, 1]); it penalizes low-confidence predictions and improves prediction performance.
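- A plausible sketch of the multi-category Dice loss and the optimizer setup described here (the exact smoothing term, class weighting, and learning rate are not specified by the patent, so `eps`, the uniform class averaging, and `lr=1e-3` are assumptions):

```python
import torch
import torch.nn.functional as F

def multiclass_dice_loss(logits, target, eps=1e-6):
    """Multi-category Dice loss: one minus the mean Dice overlap across classes.

    logits: (B, C, D, H, W) raw scores; target: (B, D, H, W) integer class labels.
    """
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes=probs.shape[1]).permute(0, 4, 1, 2, 3).float()
    dims = (0, 2, 3, 4)                                    # sum over batch and spatial axes
    intersection = (probs * onehot).sum(dims)
    cardinality = probs.sum(dims) + onehot.sum(dims)
    dice = (2 * intersection + eps) / (cardinality + eps)  # per-class Dice in [0, 1]
    return 1.0 - dice.mean()

# Assumed training wiring with the sketches above:
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # per-parameter adaptive steps
# loss = multiclass_dice_loss(model(volume), labels)
# loss.backward(); optimizer.step(); optimizer.zero_grad()
```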
- The bone fragment segmentation method for 3D images converts the segmentation of MRI images into a pixel-level 3D image semantic annotation problem: the context modules and down-sampling modules in the trained segmentation model extract features, the intermediate module outputs rough segmentation maps corresponding to the number of semantic segmentation categories, and up-sampling modules after the intermediate module up-sample the rough segmentation maps to obtain segmentation maps of the same size as the original image.
- This not only improves segmentation accuracy but also greatly improves segmentation efficiency; because the method uses the entire stereo image as the input image, it saves segmentation time, takes spatial correlation into account, and achieves higher segmentation accuracy.
- In summary, the bone fragment segmentation method for 3D images takes a 3D stereo image as input and uses the feature extraction module in the segmentation model to perform feature extraction on the 3D image to be segmented to obtain a basic feature map, taking the correlation of tissues in the stereo image into account.
- Feature extraction is performed on the basic feature map through the intermediate module to obtain a segmentation map of the same size as the original 3D image to be segmented, and the segmentation module is used to fuse the segmentation map with the basic feature map to obtain the 3D bone fragment segmentation result, saving segmentation time and improving the accuracy and efficiency of segmentation.
- The device 1 for segmenting bone fragments from a three-dimensional image in this embodiment includes: a receiving unit 11 and a segmentation unit 12.
- The receiving unit 11 is used to obtain a three-dimensional image to be segmented.
- the three-dimensional image to be segmented is a three-dimensional MRI image.
- MRI displays the internal structures of the body as images and has the advantages of being non-invasive, multi-modal, and accurately localized.
- the segmentation unit 12 is configured to use a segmentation model to recognize the three-dimensional image to be segmented to obtain a three-dimensional bone fragmentation result.
- the segmentation model shown in FIG. 4 includes a feature extraction module 121, an intermediate module 122, and a segmentation module 123.
- The segmentation unit 12 uses the feature extraction module 121 to perform feature extraction on the three-dimensional image to be segmented to obtain a basic feature map, uses the intermediate module 122 to perform feature extraction on the basic feature map to obtain a segmentation map, and uses the segmentation module 123 to fuse the segmentation map with the basic feature map to generate the three-dimensional bone fragment segmentation result.
- The feature extraction module 121 sequentially includes: a first context layer, a first down-sampling layer, a second context layer, a second down-sampling layer, a third context layer, a third down-sampling layer, a fourth context layer, a fourth down-sampling layer, and a fifth context layer.
- The specific process of performing feature extraction on the three-dimensional image to be segmented through the feature extraction module 121 to obtain a basic feature map includes: performing feature extraction on the three-dimensional image to be segmented through the first context layer, and inputting the first feature result into the first down-sampling layer to obtain a first sampling result.
- The first sampling result and the first feature result are added element by element as the input of the second context layer to obtain a second feature result; the second feature result is input into the second down-sampling layer to obtain a second sampling result.
- The second sampling result and the second feature result are added element by element as the input of the third context layer to obtain a third feature result; the third feature result is input into the third down-sampling layer to obtain a third sampling result.
- The third sampling result and the third feature result are added element by element as the input of the fourth context layer to obtain a fourth feature result; the fourth feature result is input into the fourth down-sampling layer to obtain a fourth sampling result.
- The fourth sampling result and the fourth feature result are added element by element as the input of the fifth context layer to obtain the basic feature map.
- a 128 ⁇ 128 ⁇ 128 voxel three-dimensional image to be segmented can be input to the input layer of the segmentation model.
- Adjacent context layers are connected by a down-sampling layer; the result of each down-sampling layer and the preceding context layer are added element by element as the input of the next context layer, finally yielding the basic feature map, that is, the rough segmentation map.
- The above-mentioned voxel, also called a volume element, is the smallest unit of digital data in the partitioning of three-dimensional space.
- the voxel is mainly used in fields such as three-dimensional imaging, scientific data, and medical imaging.
- The intermediate module 122 sequentially includes: a first up-sampling layer, a first decoding layer, a second up-sampling layer, a second decoding layer, a third up-sampling layer, a third decoding layer, a fourth up-sampling layer, a three-dimensional convolution layer, and a first segmentation layer.
- The specific process of performing feature extraction on the basic feature map through the intermediate module 122 to obtain a segmentation map includes: inputting the basic feature map into the first up-sampling layer to obtain a first segmentation result; fusing the first segmentation result with the fourth feature result and obtaining a second segmentation result through the first decoding layer and the second up-sampling layer; fusing the second segmentation result with the third feature result and obtaining a third segmentation result through the second decoding layer and the third up-sampling layer; fusing the third segmentation result with the second feature result and obtaining a fourth segmentation result through the third decoding layer and the fourth up-sampling layer; and fusing the fourth segmentation result with the first feature result, obtaining a fifth segmentation result through the three-dimensional convolution layer and the first segmentation layer, and using the fifth segmentation result as the segmentation map.
- Specifically, the first segmentation result is obtained by the first up-sampling layer following the fifth context layer; the first segmentation result is fused with the fourth feature result, and after fusion passes through the first decoding layer and the second up-sampling layer to obtain the second segmentation result; the second segmentation result is fused with the third feature result and passes through the second decoding layer and the third up-sampling layer to obtain the third segmentation result, and so on, to obtain the fourth segmentation result.
- The fourth segmentation result is fused with the first feature result, and the fifth segmentation result is obtained through the three-dimensional convolution layer and the first segmentation layer.
- the up-sampling of the segmentation results is achieved through the up-sampling layer, so as to obtain a segmentation map with the same size as the original three-dimensional image to be segmented.
- the segmentation module 123 sequentially includes: a second segmentation layer, a fifth upsampling layer, a sixth upsampling layer, and a classification layer.
- The specific process of fusing the segmentation map with the basic feature map through the segmentation module 123 to generate the three-dimensional bone fragment segmentation result includes: passing the first output result of the second decoding layer through the second segmentation layer and the fifth up-sampling layer to obtain the first additional segmentation result; adding the second output result of the second segmentation layer and the first additional segmentation result element by element and passing the sum through the sixth up-sampling layer to obtain a second additional result; and adding the segmentation map and the second additional result element by element and then inputting the sum into the classification layer to obtain the three-dimensional bone fragment segmentation result.
- Specifically, the second segmentation layer and the fifth up-sampling layer are used in turn to segment the first output result output by the second decoding layer to obtain the first additional segmentation result; the second output result is added to the first additional segmentation result element by element, and the second additional result is obtained through the sixth up-sampling layer.
- The segmentation map and the second additional result are added element by element, and the final segmentation result is output through the classification layer.
- The segmentation result is a probability score matrix corresponding to the number of semantic segmentation categories and of the same size as the original image.
- The final classification of each voxel is determined by retrieving the probability that it belongs to each category, forming the final three-dimensional bone fragment segmentation result.
- The device 1 for segmenting bone fragments from a three-dimensional image receives the three-dimensional stereo image through the receiving unit 11 and uses the feature extraction module 121 of the segmentation model in the segmentation unit 12 to perform feature extraction on the three-dimensional image to be segmented to obtain the basic feature map.
- The intermediate module 122 performs feature extraction on the basic feature map to obtain a segmentation map of the same size as the original three-dimensional image to be segmented, and the segmentation module 123 fuses the segmentation map with the basic feature map to obtain the three-dimensional bone fragment segmentation result, which saves segmentation time and improves the accuracy and efficiency of segmentation.
- The present application also provides a computer device 2; the components of the device 1 for segmenting bone fragments from a three-dimensional image of the second embodiment may be distributed across multiple computer devices 2.
- The computer device 2 may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a rack server, a blade server, a tower server, or a cabinet server (an independent server, or a server cluster composed of multiple servers) that executes the program, and so on.
- The computer device 2 of this embodiment at least includes, but is not limited to: a memory, a processor, and a computer program stored in the memory and capable of running on the processor; when the processor executes the computer program, part or all of the steps of the method are implemented.
- The computer device may also include a network interface and/or the device for segmenting bone fragments from a three-dimensional image, for example: a memory 21, a processor 23, a network interface 22, and a device 1 for segmenting bone fragments from three-dimensional images (refer to FIG. 5), which can be communicably connected to each other through a system bus.
- The memory 21 includes at least one type of computer-readable storage medium, including flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, and so on.
- the memory 21 may be an internal storage unit of the computer device 2, for example, a hard disk or a memory of the computer device 2.
- the memory 21 may also be an external storage device of the computer device 2, for example, a plug-in hard disk, a smart memory card (Smart Media Card, SMC), and a secure digital (Secure Digital, SD) card, flash card (Flash Card), etc.
- the memory 21 may also include both the internal storage unit of the computer device 2 and its external storage device.
- the memory 21 is generally used to store an operating system and various application software installed in the computer device 2, for example, the program code of the method for segmenting bone fragments of a three-dimensional image in the first embodiment.
- the memory 21 can also be used to temporarily store various types of data that have been output or will be output.
- the processor 23 may be a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, a microprocessor, or other data processing chips.
- The processor 23 is generally used to control the overall operation of the computer device 2, for example, to perform control and processing related to data interaction or communication with other computer devices 2.
- the processor 23 is used to run the program code or processed data stored in the memory 21, for example, to run the bone fragmentation device 1 of the three-dimensional image.
- the network interface 22 may include a wireless network interface or a wired network interface, and the network interface 22 is generally used to establish a communication connection between the computer device 2 and other computer devices 2.
- the network interface 22 is used to connect the computer device 2 with an external terminal through a network, and establish a data transmission channel and a communication connection between the computer device 2 and the external terminal.
- The network may be an intranet, the Internet, a Global System for Mobile Communications (GSM) network, a Wideband Code Division Multiple Access (WCDMA) network, a 4G network, a 5G network, Bluetooth, Wi-Fi, or another wireless or wired network.
- FIG. 5 only shows the computer device 2 with components 21-23, but it should be understood that it is not required to implement all the components shown, and more or fewer components may be implemented instead.
- The device 1 for segmenting bone fragments from a three-dimensional image stored in the memory 21 may also be divided into one or more program modules, which are stored in the memory 21 and executed by one or more processors (the processor 23 in this embodiment) to complete the present application.
- This application also provides a computer-readable storage medium, which includes multiple storage media, such as flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic storage, magnetic disks, optical disks, servers, and application marketplaces, on which a computer program is stored that, when executed by the processor 23, realizes the corresponding functions.
- The computer-readable storage medium of this embodiment is used to store the device 1 for segmenting bone fragments from a three-dimensional image and, when executed by the processor 23, realizes the method for segmenting bone fragments from a three-dimensional image of the first embodiment.
- the storage medium involved in this application may be non-volatile or volatile.
Abstract
Disclosed are a method and an apparatus for segmenting bone fragments from a three-dimensional image, a computer device, and a storage medium, relating to the field of intelligent medical treatment. According to the method for segmenting bone fragments from a three-dimensional image, a three-dimensional stereo image is taken as input, and a feature extraction module in a segmentation model is used to perform feature extraction on a three-dimensional image to be segmented so as to acquire a basic feature map, which takes the correlation of tissues in the stereo image into account; feature extraction is performed on the basic feature map by means of an intermediate module to obtain a segmentation map of the same size as the original three-dimensional image to be segmented; and a segmentation module is used to fuse the segmentation map with the basic feature map, so as to acquire a three-dimensional bone fragment segmentation result, reduce segmentation time, and improve the accuracy and efficiency of segmentation.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011161212.X | 2020-10-27 | ||
CN202011161212.XA CN112241955B (zh) | 2020-10-27 | 2020-10-27 | Method and apparatus for segmenting bone fragments from a three-dimensional image, computer device, and storage medium
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021179702A1 (fr) | 2021-09-16 |
Family
ID=74169897
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/134546 WO2021179702A1 (fr) | 2020-12-08 | Method and apparatus for segmenting bone fragments from a three-dimensional image, computer device, and storage medium
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112241955B (fr) |
WO (1) | WO2021179702A1 (fr) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116187476A (zh) * | 2023-05-04 | 2023-05-30 | 珠海横琴圣澳云智科技有限公司 | Lung lobe segmentation model training and lung lobe segmentation method and apparatus based on hybrid supervision |
WO2023160157A1 (fr) * | 2022-02-28 | 2023-08-31 | 腾讯科技(深圳)有限公司 | Three-dimensional medical image recognition method and apparatus, device, storage medium, and product |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113240681B (zh) * | 2021-05-20 | 2022-07-08 | 推想医疗科技股份有限公司 | Image processing method and apparatus |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108257118A (zh) * | 2018-01-08 | 2018-07-06 | 浙江大学 | Fracture adhesion segmentation method based on normal-direction erosion and random walk |
US20190311223A1 (en) * | 2017-03-13 | 2019-10-10 | Beijing Sensetime Technology Development Co., Ltd. | Image processing methods and apparatus, and electronic devices |
CN111127636A (zh) * | 2019-12-24 | 2020-05-08 | 诸暨市人民医院 | Intelligent desktop-level three-dimensional diagnosis system for complex intra-articular fractures |
CN111192277A (zh) * | 2019-12-31 | 2020-05-22 | 华为技术有限公司 | Instance segmentation method and apparatus |
CN111402216A (zh) * | 2020-03-10 | 2020-07-10 | 河海大学常州校区 | Three-dimensional bone fragment segmentation method and apparatus based on deep learning |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109872328B (zh) * | 2019-01-25 | 2021-05-07 | 腾讯科技(深圳)有限公司 | Brain image segmentation method, apparatus, and storage medium |
CN109903298B (zh) * | 2019-03-12 | 2021-03-02 | 数坤(北京)网络科技有限公司 | Method, system, and computer storage medium for repairing breaks in vessel segmentation images |
CN111598893B (zh) * | 2020-04-17 | 2021-02-09 | 哈尔滨工业大学 | Grading diagnosis system for endemic skeletal fluorosis based on a multi-type image fusion neural network |
CN111429460B (zh) * | 2020-06-12 | 2020-09-22 | 腾讯科技(深圳)有限公司 | Image segmentation method, image segmentation model training method, apparatus, and storage medium |
- 2020-10-27: CN application CN202011161212.XA — published as CN112241955B (zh), status: active
- 2020-12-08: WO application PCT/CN2020/134546 — published as WO2021179702A1 (fr), status: application filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190311223A1 (en) * | 2017-03-13 | 2019-10-10 | Beijing Sensetime Technology Development Co., Ltd. | Image processing methods and apparatus, and electronic devices |
CN108257118A (zh) * | 2018-01-08 | 2018-07-06 | 浙江大学 | Fracture adhesion segmentation method based on normal-direction erosion and random walk |
CN111127636A (zh) * | 2019-12-24 | 2020-05-08 | 诸暨市人民医院 | Intelligent desktop-level three-dimensional diagnosis system for complex intra-articular fractures |
CN111192277A (zh) * | 2019-12-31 | 2020-05-22 | 华为技术有限公司 | Instance segmentation method and apparatus |
CN111402216A (zh) * | 2020-03-10 | 2020-07-10 | 河海大学常州校区 | Three-dimensional bone fragment segmentation method and apparatus based on deep learning |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023160157A1 (fr) * | 2022-02-28 | 2023-08-31 | 腾讯科技(深圳)有限公司 | Three-dimensional medical image recognition method and apparatus, device, storage medium, and product |
CN116187476A (zh) * | 2023-05-04 | 2023-05-30 | 珠海横琴圣澳云智科技有限公司 | Lung lobe segmentation model training and lung lobe segmentation method and apparatus based on hybrid supervision |
CN116187476B (zh) * | 2023-05-04 | 2023-07-21 | 珠海横琴圣澳云智科技有限公司 | Lung lobe segmentation model training and lung lobe segmentation method and apparatus based on hybrid supervision |
Also Published As
Publication number | Publication date |
---|---|
CN112241955A (zh) | 2021-01-19 |
CN112241955B (zh) | 2023-08-25 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20924052; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20924052; Country of ref document: EP; Kind code of ref document: A1 |