CN112241955B - Broken bone segmentation method and device for three-dimensional image, computer equipment and storage medium


Info

Publication number
CN112241955B
CN112241955B (application number CN202011161212.XA)
Authority
CN
China
Prior art keywords
result
segmentation
dimensional
feature
map
Prior art date
Legal status
Active
Application number
CN202011161212.XA
Other languages
Chinese (zh)
Other versions
CN112241955A (en)
Inventor
洪振厚
王健宗
瞿晓阳
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202011161212.XA
Priority to PCT/CN2020/134546 (WO2021179702A1)
Publication of CN112241955A
Application granted
Publication of CN112241955B

Classifications

    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 7/00: Image analysis
                    • G06T 7/0002: Inspection of images, e.g. flaw detection
                        • G06T 7/0012: Biomedical image inspection
                    • G06T 7/10: Segmentation; Edge detection
                • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
                • G06T 2207/00: Indexing scheme for image analysis or image enhancement
                    • G06T 2207/10: Image acquisition modality
                        • G06T 2207/10072: Tomographic images
                            • G06T 2207/10088: Magnetic resonance imaging [MRI]
                    • G06T 2207/30: Subject of image; Context of image processing
                        • G06T 2207/30004: Biomedical image processing
                            • G06T 2207/30008: Bone
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
        • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
            • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
                • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
                    • Y02P 90/30: Computing systems specially adapted for manufacturing

Abstract

The invention discloses a broken bone segmentation method and device for three-dimensional images, computer equipment and a storage medium, belonging to the field of intelligent medical treatment. The method takes a three-dimensional volumetric image as input and extracts features from the three-dimensional image to be segmented through a feature extraction module in a segmentation model to obtain a basic feature map, so that the spatial correlation of tissues in the volume is taken into account. An intermediate module then performs feature extraction on the basic feature map to obtain a segmentation map of the same size as the original three-dimensional image to be segmented, and a segmentation module fuses the segmentation map with the basic feature map to produce a three-dimensional broken bone segmentation result. This saves segmentation time and improves both segmentation precision and segmentation efficiency.

Description

Broken bone segmentation method and device for three-dimensional image, computer equipment and storage medium
Technical Field
The invention relates to the field of intelligent medical treatment, and in particular to a broken bone segmentation method and device for three-dimensional images, computer equipment and a storage medium.
Background
Fracture is a common condition, and severe fractures often produce bone fragments; missing a fragment during surgical treatment can have serious consequences. Existing traditional segmentation methods rely mainly on subjective human interpretation of the image to extract specific feature information. Such methods work only for specific images and suffer from overly coarse segmentation results and low segmentation efficiency. With the rise of artificial intelligence, and deep learning in particular, new directions have opened up for automatic broken bone segmentation, and traditional methods have gradually been replaced by machine-based approaches. The convolutional neural network, as a representative of supervised learning, can learn feature representations directly from data: through layer-by-layer feature extraction it combines simple low-level image features such as edges and corner points into progressively more abstract high-level features. It has achieved remarkable results in image recognition and is widely applied to medical image processing. However, current machine segmentation methods mainly segment MRI images slice by slice, and therefore cannot reflect the spatial correlation of tissues and have low segmentation efficiency.
Disclosure of Invention
To address the inability of existing broken bone segmentation methods to take the spatial correlation of tissues into account, the invention provides a broken bone segmentation method and device for three-dimensional images, computer equipment and a storage medium that consider this spatial correlation.
In order to achieve the above object, the present invention provides a fractured-bone segmentation method of a three-dimensional image, comprising:
acquiring a three-dimensional image to be segmented;
identifying the three-dimensional image to be segmented by adopting a segmentation model to obtain a three-dimensional broken bone segmentation result;
the step of identifying the three-dimensional image to be segmented by adopting the segmentation model to obtain a three-dimensional broken bone segmentation result comprises the following steps:
extracting features of the three-dimensional image to be segmented to obtain a basic feature map;
extracting features of the basic feature map to obtain a segmentation map;
and fusing the segmentation map with the basic feature map to generate the three-dimensional broken bone segmentation result.
Optionally, extracting features of the three-dimensional image to be segmented to obtain a basic feature map, including:
convolving the three-dimensional image to be segmented to obtain a first feature result;
downsampling the first feature result to obtain a first sampling result;
adding the first sampling result and the first feature result element by element and then convolving to obtain a second feature result;
downsampling the second feature result to obtain a second sampling result;
adding the second sampling result and the second feature result element by element and then convolving to obtain a third feature result;
downsampling the third feature result to obtain a third sampling result;
adding the third sampling result and the third feature result element by element and then convolving to obtain a fourth feature result;
downsampling the fourth feature result to obtain a fourth sampling result;
and adding the fourth sampling result and the fourth feature result element by element and then convolving to obtain the basic feature map.
Optionally, performing feature extraction on the basic feature map to obtain a segmentation map, including:
upsampling the basic feature map to obtain a first segmentation result;
fusing, decoding and upsampling the first segmentation result and the fourth feature result to obtain a second segmentation result;
fusing and decoding the second segmentation result and the third feature result to obtain a first output result, and upsampling to obtain a third segmentation result;
fusing, decoding and upsampling the third segmentation result and the second feature result to obtain a fourth segmentation result;
and fusing the fourth segmentation result with the first feature result, convolving and segmenting to obtain a fifth segmentation result, and taking the fifth segmentation result as the segmentation map.
Optionally, fusing the segmentation map with the basic feature map to generate the three-dimensional fractured-bone segmentation result, including:
segmenting the first output result to obtain a second output result, and upsampling to obtain a first additional segmentation result;
adding the second output result and the first additional segmentation result element by element, and upsampling to obtain a second additional result;
and adding the segmentation map and the second additional result element by element, and classifying to obtain the three-dimensional broken bone segmentation result.
Optionally, before the step of identifying the three-dimensional image to be segmented by using the segmentation model to obtain the three-dimensional broken bone segmentation result, the method further comprises:
an initial classification model is trained to obtain the segmentation model.
Optionally, training the initial classification model to obtain the segmentation model includes:
performing three-dimensional reconstruction on the two-dimensional sequence images in the samples to obtain a three-dimensional training image;
normalizing the three-dimensional training image to obtain a three-dimensional sample image;
extracting features of the three-dimensional sample image to obtain a basic training feature map;
extracting features of the basic training feature map to obtain a segmentation training map;
fusing the segmentation training map with the basic training feature map to generate a three-dimensional broken bone training segmentation result;
and adjusting parameters in the feature extraction module, the intermediate module and the segmentation module of the initial classification model according to the training segmentation result to obtain the segmentation model.
Optionally, adjusting parameters in the initial classification model according to the training segmentation result to obtain the segmentation model, including:
and adjusting parameters in the initial classification model by adopting an Adam optimizer according to the training segmentation result to obtain the segmentation model.
In order to achieve the above object, the present invention also provides a fractured-bone segmentation apparatus of a three-dimensional image, comprising:
the receiving unit is used for acquiring a three-dimensional image to be segmented;
the segmentation unit is used for identifying the three-dimensional image to be segmented by adopting a segmentation model to obtain a three-dimensional broken bone segmentation result;
the segmentation model comprises a feature extraction module, a middle module and a segmentation module;
the segmentation unit performs feature extraction on the three-dimensional image to be segmented through the feature extraction module to obtain a basic feature map, performs feature extraction on the basic feature map through the intermediate module to obtain a segmentation map, and fuses the segmentation map with the basic feature map through the segmentation module to generate the three-dimensional broken bone segmentation result.
To achieve the above object, the present invention also provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the above method when executing the computer program.
To achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the above method.
According to the broken bone segmentation method and device, computer equipment and storage medium for three-dimensional images, a three-dimensional volumetric image is taken as input, and the feature extraction module in the segmentation model extracts features from the three-dimensional image to be segmented to obtain a basic feature map, so that the spatial correlation of tissues in the volume is taken into account. The intermediate module performs feature extraction on the basic feature map to obtain a segmentation map of the same size as the original three-dimensional image to be segmented, and the segmentation module fuses the segmentation map with the basic feature map to obtain a three-dimensional broken bone segmentation result, saving segmentation time and improving segmentation precision and efficiency.
Drawings
FIG. 1 is a flow chart of an embodiment of a method for fractured-bone segmentation of a three-dimensional image according to the present application;
FIG. 2 is a flow chart of one embodiment of training an initial classification model to obtain a segmentation model in accordance with the present application;
FIG. 3 is a block diagram of one embodiment of a three-dimensional image fractured-bone segmentation apparatus according to the present application;
FIG. 4 is an internal block diagram of a segmentation model according to the present application;
FIG. 5 is a hardware architecture diagram of one embodiment of a computer device of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other.
The broken bone segmentation method and device, computer equipment and storage medium for three-dimensional images of the present application are suitable for the field of intelligent medical treatment. The application takes a three-dimensional volumetric image as input and extracts features from the three-dimensional image to be segmented through the feature extraction module in the segmentation model to obtain a basic feature map, so that the spatial correlation of tissues in the volume is taken into account. The intermediate module performs feature extraction on the basic feature map to obtain a segmentation map of the same size as the original three-dimensional image to be segmented, and the segmentation module fuses the segmentation map with the basic feature map to obtain a three-dimensional broken bone segmentation result, saving segmentation time and improving segmentation precision and efficiency.
Example 1
Referring to fig. 1, a method for segmenting broken bones of a three-dimensional image according to the present embodiment includes:
s1, acquiring a three-dimensional image to be segmented.
It should be noted that the three-dimensional image to be segmented is a three-dimensional MRI (Magnetic Resonance Imaging) image. MRI presents internal information of the body as images and offers advantages such as being non-invasive, supporting multiple modalities and providing accurate localization.
S2, identifying the three-dimensional image to be segmented by adopting a segmentation model to obtain a three-dimensional broken bone segmentation result.
The segmentation model comprises a feature extraction module, an intermediate module and a segmentation module.
Specifically, the step S2 of identifying the three-dimensional image to be segmented by using a segmentation model to obtain a three-dimensional fractured bone segmentation result includes:
s21, extracting features of the three-dimensional image to be segmented to obtain a basic feature map.
Further, the step S21 of extracting features from the three-dimensional image to be segmented to obtain a basic feature map may include the following steps:
S211, convolving the three-dimensional image to be segmented to obtain a first feature result;
S212, downsampling the first feature result to obtain a first sampling result;
S213, adding the first sampling result and the first feature result element by element, and then convolving to obtain a second feature result;
S214, downsampling the second feature result to obtain a second sampling result;
S215, adding the second sampling result and the second feature result element by element, and then convolving to obtain a third feature result;
S216, downsampling the third feature result to obtain a third sampling result;
S217, adding the third sampling result and the third feature result element by element, and then convolving to obtain a fourth feature result;
S218, downsampling the fourth feature result to obtain a fourth sampling result;
S219, adding the fourth sampling result and the fourth feature result element by element, and then convolving to obtain the basic feature map.
In this embodiment, a feature extraction module is used to perform feature extraction on the three-dimensional image to be segmented to obtain a basic feature map. The feature extraction module comprises, in order: a first context layer, a first downsampling layer, a second context layer, a second downsampling layer, a third context layer, a third downsampling layer, a fourth context layer, a fourth downsampling layer and a fifth context layer. The first context layer performs feature extraction on the three-dimensional image to be segmented to produce a first feature result, which is input into the first downsampling layer to obtain a first sampling result. The first sampling result and the first feature result are added element by element and fed to the second context layer, producing a second feature result; the second feature result is input into the second downsampling layer to obtain a second sampling result. The second sampling result and the second feature result are added element by element and fed to the third context layer, producing a third feature result; the third feature result is input into the third downsampling layer to obtain a third sampling result. The third sampling result and the third feature result are added element by element and fed to the fourth context layer, producing a fourth feature result; the fourth feature result is input into the fourth downsampling layer to obtain a fourth sampling result. Finally, the fourth sampling result and the fourth feature result are added element by element and fed to the fifth context layer, producing the basic feature map. A sketch of this encoder is given below.
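As a minimal illustration (not the patent's reference implementation), the following PyTorch sketch shows one plausible reading of this encoder: five context blocks joined by strided-convolution downsampling layers, with the element-wise additions realized as residual connections. The channel widths, normalization and activation choices, and the exact placement of the element-wise addition are assumptions, since the translated text leaves them ambiguous.

```python
import torch
import torch.nn as nn

class ContextBlock(nn.Module):
    """A context layer: two 3x3x3 convolutions with an element-wise
    residual addition of the block input to its output."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels),
            nn.LeakyReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels),
            nn.LeakyReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # element-wise addition described in the patent

class Encoder(nn.Module):
    """Five context layers joined by strided-convolution downsampling layers."""
    def __init__(self, in_channels: int = 1, base: int = 16):
        super().__init__()
        chs = [base * 2 ** i for i in range(5)]  # assumed widths: 16..256
        self.stem = nn.Conv3d(in_channels, chs[0], kernel_size=3, padding=1)
        self.contexts = nn.ModuleList([ContextBlock(c) for c in chs])
        self.downs = nn.ModuleList([
            nn.Conv3d(chs[i], chs[i + 1], kernel_size=3, stride=2, padding=1)
            for i in range(4)
        ])

    def forward(self, x: torch.Tensor) -> list:
        feats = []
        x = self.stem(x)
        for i, context in enumerate(self.contexts):
            x = context(x)            # first .. fifth feature results
            feats.append(x)
            if i < 4:
                x = self.downs[i](x)  # first .. fourth sampling results
        return feats                  # feats[-1] is the basic feature map
```

With a one-channel 128 x 128 x 128 input, `feats[-1]` would have spatial size 8 x 8 x 8 and play the role of the basic feature map.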
By way of example, and not limitation, a three-dimensional image to be segmented of 128 x 128 x 128 voxels may be input into the input layer of the segmentation model and passed through the five context layers, each pair connected by a downsampling layer; the results of each downsampling layer and the corresponding context layer are added element by element as the input to the next downsampling layer, yielding the basic feature map, i.e., a coarse segmentation map.
A voxel, also called a volume element, is the smallest unit of digital data in a partition of three-dimensional space; voxels are mainly used in fields such as three-dimensional imaging, scientific data and medical imaging.
S22, carrying out feature extraction on the basic feature map to obtain a segmentation map.
Further, step S22 may include the steps of:
S221, upsampling the basic feature map to obtain a first segmentation result;
S222, fusing, decoding and upsampling the first segmentation result and the fourth feature result to obtain a second segmentation result;
S223, fusing and decoding the second segmentation result and the third feature result to obtain a first output result, and upsampling to obtain a third segmentation result;
S224, fusing, decoding and upsampling the third segmentation result and the second feature result to obtain a fourth segmentation result;
S225, fusing the fourth segmentation result with the first feature result, and convolving and segmenting to obtain a fifth segmentation result, which is taken as the segmentation map.
In this embodiment, an intermediate module is used to perform feature extraction on the basic feature map to obtain a segmentation map. The intermediate module comprises, in order: a first upsampling layer, a first decoding layer, a second upsampling layer, a second decoding layer, a third upsampling layer, a third decoding layer, a fourth upsampling layer, a three-dimensional convolution layer and a first segmentation layer. The basic feature map is input into the first upsampling layer (after the fifth context layer) to obtain a first segmentation result. The first segmentation result is fused with the fourth feature result and passed through the first decoding layer and the second upsampling layer to obtain a second segmentation result. The second segmentation result is fused with the third feature result and passed through the second decoding layer and the third upsampling layer to obtain a third segmentation result. The third segmentation result is fused with the second feature result and passed through the third decoding layer and the fourth upsampling layer to obtain a fourth segmentation result. The fourth segmentation result is fused with the first feature result and passed through the three-dimensional convolution layer and the first segmentation layer to obtain a fifth segmentation result, which is taken as the segmentation map. The upsampling layers progressively restore the resolution of the segmentation results, so that the segmentation map has the same size as the original three-dimensional image to be segmented. A decoder sketch matching this structure is given below.
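Continuing the sketch above, and under the same caveats (the channel widths and the use of concatenation for "fusing" are assumptions), a decoder mirroring the intermediate module might look like the following. The intermediate tensor `out1` corresponds to the first output result of the second decoding layer, which the segmentation module reuses later.

```python
class Decoder(nn.Module):
    """The intermediate module: four upsampling layers, three decoding
    layers, a 3D convolution layer and the first segmentation layer."""
    def __init__(self, chs=(16, 32, 64, 128, 256), num_classes: int = 2):
        super().__init__()
        self.up = nn.ModuleList([
            nn.ConvTranspose3d(chs[i + 1], chs[i], kernel_size=2, stride=2)
            for i in range(4)
        ])
        # decoding layers fuse (here: concatenate) the upsampled tensor with
        # the matching encoder feature result, then convolve
        self.decode = nn.ModuleList([
            nn.Conv3d(2 * chs[i], chs[i], kernel_size=3, padding=1)
            for i in range(1, 4)
        ])
        self.final_conv = nn.Conv3d(2 * chs[0], chs[0], kernel_size=3, padding=1)
        self.seg = nn.Conv3d(chs[0], num_classes, kernel_size=1)  # first segmentation layer

    def forward(self, feats):
        f1, f2, f3, f4, base = feats
        x = self.up[3](base)                              # first segmentation result
        x = self.decode[2](torch.cat([x, f4], dim=1))     # first decoding layer
        x = self.up[2](x)                                 # second segmentation result
        out1 = self.decode[1](torch.cat([x, f3], dim=1))  # first output result
        x = self.up[1](out1)                              # third segmentation result
        x = self.decode[0](torch.cat([x, f2], dim=1))     # third decoding layer
        x = self.up[0](x)                                 # fourth segmentation result
        x = self.final_conv(torch.cat([x, f1], dim=1))    # 3D convolution layer
        seg_map = self.seg(x)                             # fifth segmentation result
        return seg_map, out1
```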
S23, fusing the segmentation map and the basic feature map to generate the three-dimensional broken bone segmentation result.
Further, step S23 may include the steps of:
S231, segmenting the first output result to obtain a second output result, and upsampling to obtain a first additional segmentation result;
S232, adding the second output result and the first additional segmentation result element by element, and upsampling to obtain a second additional result;
S233, adding the segmentation map and the second additional result element by element, and classifying to obtain the three-dimensional broken bone segmentation result.
In this embodiment, a segmentation module is used to fuse the segmentation map with the basic feature map and generate the three-dimensional broken bone segmentation result. The segmentation module comprises, in order: a second segmentation layer, a fifth upsampling layer, a sixth upsampling layer and a classification layer. The first output result of the second decoding layer is passed through the second segmentation layer to obtain a second output result, which is upsampled by the fifth upsampling layer to obtain a first additional segmentation result. The second output result is added element by element with the first additional segmentation result and passed through the sixth upsampling layer to obtain a second additional result. Finally, the segmentation map is added element by element with the second additional result and input into the classification layer, which outputs the final segmentation result. This result is a probability score matrix with one channel per semantic segmentation category and the same size as the original image; the final class of each pixel is determined by taking the category with the highest probability, forming the final three-dimensional broken bone segmentation result. A sketch of this deep-supervision-style head is given below.
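The element-wise wiring above is ambiguous in translation (the summed tensors would need matching resolutions), so the following sketch adopts one self-consistent reading in the spirit of deep supervision: an auxiliary segmentation map computed from the first output result is upsampled in two stages and added to the full-resolution segmentation map before classification. Names and shapes follow the sketches above and are assumptions, not the patent's reference code.

```python
class SegmentationHead(nn.Module):
    """Sketch of the segmentation module: second segmentation layer,
    fifth/sixth upsampling layers, and a softmax classification layer."""
    def __init__(self, mid_channels: int = 64, num_classes: int = 2):
        super().__init__()
        self.seg2 = nn.Conv3d(mid_channels, num_classes, kernel_size=1)  # second segmentation layer
        self.up5 = nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False)
        self.up6 = nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False)

    def forward(self, seg_map: torch.Tensor, out1: torch.Tensor) -> torch.Tensor:
        aux = self.seg2(out1)   # second output result (1/4 resolution)
        a1 = self.up5(aux)      # first additional segmentation result (1/2 resolution)
        a2 = self.up6(a1)       # second additional result (full resolution)
        logits = seg_map + a2   # element-wise fusion with the segmentation map
        # classification layer: per-voxel probabilities over the categories
        return torch.softmax(logits, dim=1)
```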
In one embodiment, before step S2 of identifying the three-dimensional image to be segmented by using the segmentation model to obtain the three-dimensional broken bone segmentation result is executed, the method further includes:
A. an initial classification model is trained to obtain the segmentation model.
The initial classification model comprises a feature extraction module, an intermediate module and a segmentation module.
Specifically, step a may include:
A1. Performing three-dimensional reconstruction on the two-dimensional sequence images in the samples to obtain a three-dimensional training image.
In this embodiment, the two-dimensional sequence images form a sparse two-dimensional stack, and trilinear interpolation (also called three-dimensional linear interpolation) or super-resolution reconstruction is applied to them to obtain an isotropic three-dimensional training image. Trilinear interpolation is a linear interpolation method on a 3D grid: given the values at the eight vertices of a cube, it computes the value at any interior point of that cube.
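As a small illustration (the dimensions are hypothetical, and PyTorch's built-in trilinear mode stands in for whatever resampling routine was actually used), a sparse slice stack can be resampled into an isotropic volume like this:

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: a sparse stack of 30 MRI slices of 256 x 256 pixels,
# resampled into an isotropic 256 x 256 x 256 training volume.
stack = torch.randn(1, 1, 30, 256, 256)        # (batch, channel, D, H, W)
volume = F.interpolate(stack, size=(256, 256, 256),
                       mode="trilinear", align_corners=False)
print(volume.shape)                            # torch.Size([1, 1, 256, 256, 256])
```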
In practice, randomly drawn batches of samples may be used, for example: a batch size of 2, a voxel size of 128 x 128 x 128, and more than 100 images per epoch, for a total of 300 training epochs.
A2. Normalizing the three-dimensional training image to obtain a three-dimensional sample image.
In this embodiment, normalizing the three-dimensional training image unifies its intensity values so that the resulting three-dimensional sample images share a common value range, which facilitates subsequent model training.
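The patent does not state the normalization formula; z-score intensity normalization, shown below, is one common choice for MRI volumes and is offered only as an assumption:

```python
import torch

def normalize(volume: torch.Tensor) -> torch.Tensor:
    """Zero-mean, unit-variance intensity normalization of one volume."""
    mean, std = volume.mean(), volume.std()
    return (volume - mean) / (std + 1e-8)  # epsilon avoids division by zero
```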
Before step A2 is executed, bias field correction may also be applied to both modalities of the three-dimensional training image, i.e., the spin-lattice relaxation time (T1) and spin-spin relaxation time (T2) modes.
A3. Extracting features of the three-dimensional sample image to obtain a basic training feature map. As above, the feature extraction module comprises, in order: a first context layer, a first downsampling layer, a second context layer, a second downsampling layer, a third context layer, a third downsampling layer, a fourth context layer, a fourth downsampling layer and a fifth context layer.
In this embodiment, the feature extraction module in the initial classification model performs feature extraction on the three-dimensional sample image to obtain the basic training feature map. The three-dimensional sample image is input into the input layer of the initial classification model and passed through the five context layers, each pair connected by a downsampling layer; the results of each downsampling layer and the corresponding context layer are added element by element as the input to the next downsampling layer, yielding the basic training feature map, i.e., a coarse segmentation map.
A4. Extracting features of the basic training feature map to obtain a segmentation training map.
The intermediate module comprises, in order: a first upsampling layer, a first decoding layer, a second upsampling layer, a second decoding layer, a third upsampling layer, a third decoding layer, a fourth upsampling layer, a three-dimensional convolution layer and a first segmentation layer.
In this embodiment, the intermediate module performs feature extraction on the basic training feature map to obtain the segmentation training map. The first upsampling layer after the fifth context layer produces a first segmentation result, which is fused with the fourth feature result and passed through the first decoding layer and second upsampling layer to obtain a second segmentation result; the second segmentation result is fused with the third feature result and passed through the second decoding layer and third upsampling layer to obtain a third segmentation result; continuing in this way yields a fourth segmentation result, which is fused with the first feature result and passed through the three-dimensional convolution layer and the first segmentation layer to obtain a fifth segmentation result. The upsampling layers restore the resolution of the segmentation results, so that the segmentation training map has the same size as the original input image.
A5. Fusing the segmentation training map with the basic training feature map to generate a three-dimensional broken bone training segmentation result.
The segmentation module comprises, in order: a second segmentation layer, a fifth upsampling layer, a sixth upsampling layer and a classification layer.
In this embodiment, the segmentation module fuses the segmentation training map with the basic training feature map to generate the three-dimensional broken bone training segmentation result. The first output result of the second decoding layer is passed through the second segmentation layer to obtain a second output result, which is upsampled by the fifth upsampling layer to obtain a first additional segmentation result; the second output result is added element by element with the first additional segmentation result and sampled by the sixth upsampling layer to obtain a second additional result; the segmentation training map is added element by element with the second additional result, and the classification layer outputs the final segmentation result. This result is a probability distribution matrix with one channel per semantic segmentation category and the same size as the original image; the final class of each pixel is the category with the highest probability, forming the final three-dimensional broken bone training segmentation result.
A6. Adjusting parameters in the initial classification model according to the training segmentation result to obtain the segmentation model.
Specifically, step A6 of adjusting parameters in the initial classification model according to the training segmentation result to obtain the segmentation model may include:
and adjusting parameters in the feature extraction module, the intermediate module and the segmentation module by adopting an Adam optimizer according to the training segmentation result so as to obtain the segmentation model.
In this embodiment, the Adam optimizer adjusts a separate learning rate for each parameter, updating frequently changing parameters with smaller steps and sparse parameters with larger steps. To address class imbalance in the training data, the traditional cross-entropy loss function is abandoned in favor of a multi-class Dice loss for broken bone segmentation. The Dice loss is based on a set-similarity measure commonly used to compute the similarity of two samples (with values in [0, 1]); it penalizes low-confidence predictions and thereby improves the prediction effect. A sketch of such a loss is given below.
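The patent does not give the loss formula; the following multi-class soft Dice loss is a standard formulation consistent with the description (the epsilon value, the reduction, and the training-loop names are assumptions):

```python
import torch
import torch.nn.functional as F

def dice_loss(probs: torch.Tensor, target: torch.Tensor,
              eps: float = 1e-5) -> torch.Tensor:
    """Multi-class soft Dice loss.
    probs:  (N, C, D, H, W) softmax probabilities from the model
    target: (N, D, H, W) integer class labels"""
    num_classes = probs.shape[1]
    one_hot = F.one_hot(target, num_classes).permute(0, 4, 1, 2, 3).float()
    dims = (0, 2, 3, 4)                        # sum over batch and voxels
    intersection = (probs * one_hot).sum(dims)
    cardinality = probs.sum(dims) + one_hot.sum(dims)
    dice = (2.0 * intersection + eps) / (cardinality + eps)
    return 1.0 - dice.mean()                   # 0 means perfect overlap

# Training-step sketch; `model`, `images` and `labels` are assumed to exist:
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# probs = model(images)                        # already softmax-normalized
# loss = dice_loss(probs, labels)
# optimizer.zero_grad(); loss.backward(); optimizer.step()
```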
Compared with traditional segmentation methods, this broken bone segmentation method for three-dimensional images converts broken bone segmentation of MRI images into a pixel-level 3D semantic annotation problem. The context and downsampling modules of the trained segmentation model serve as the feature extraction module of the 3D convolutional network; the intermediate module outputs coarse segmentation maps corresponding to the number of semantic segmentation categories, and upsampling modules after it restore these coarse maps to the size of the original image. The advantages of this method are as follows: compared with existing single-slice segmentation methods, it requires neither manual intervention nor slice-by-slice segmentation, making it an automatic and simple broken bone segmentation method that improves segmentation precision and greatly improves segmentation efficiency; and because it takes the whole three-dimensional image as the input image, it saves segmentation time, takes spatial correlation into account and achieves higher segmentation accuracy.
In this embodiment, the broken bone segmentation method for three-dimensional images takes a three-dimensional volumetric image as input and extracts features from the three-dimensional image to be segmented through the feature extraction module in the segmentation model to obtain a basic feature map, so that the spatial correlation of tissues in the volume is taken into account. The intermediate module performs feature extraction on the basic feature map to obtain a segmentation map of the same size as the original three-dimensional image to be segmented, and the segmentation module fuses the segmentation map with the basic feature map to obtain the three-dimensional broken bone segmentation result, saving segmentation time and improving segmentation precision and efficiency. The sketches above can be assembled into one end-to-end model as shown below.
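Tying the earlier sketches together (again, an assumed reading rather than the patent's reference implementation, and relying on the `Encoder`, `Decoder` and `SegmentationHead` classes defined above), the whole pipeline can be assembled and exercised as follows:

```python
class FractureSegNet(nn.Module):
    """End-to-end assembly of the Encoder, Decoder and SegmentationHead sketches."""
    def __init__(self, in_channels: int = 1, num_classes: int = 2, base: int = 16):
        super().__init__()
        self.encoder = Encoder(in_channels, base)
        self.decoder = Decoder(chs=tuple(base * 2 ** i for i in range(5)),
                               num_classes=num_classes)
        self.head = SegmentationHead(mid_channels=base * 4, num_classes=num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(x)             # skip features + basic feature map
        seg_map, out1 = self.decoder(feats)
        return self.head(seg_map, out1)     # per-voxel class probabilities

model = FractureSegNet()
volume = torch.randn(1, 1, 128, 128, 128)   # one 128^3-voxel MRI volume
probs = model(volume)
fragments = probs.argmax(dim=1)             # three-dimensional segmentation result
print(probs.shape, fragments.shape)         # (1, 2, 128, 128, 128), (1, 128, 128, 128)
```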
Example two
Referring to fig. 3, a three-dimensional image broken bone segmentation apparatus 1 according to the present embodiment includes: a receiving unit 11 and a dividing unit 12.
A receiving unit 11 for acquiring a three-dimensional image to be segmented.
It should be noted that the three-dimensional image to be segmented is a three-dimensional MRI image. MRI presents internal information of the body as images and offers advantages such as being non-invasive, supporting multiple modalities and providing accurate localization.
The segmentation unit 12 is configured to identify the three-dimensional image to be segmented by using a segmentation model to obtain a three-dimensional fractured bone segmentation result.
The segmentation model shown in fig. 4 includes a feature extraction module 121, an intermediate module 122, and a segmentation module 123.
The segmentation unit 12 performs feature extraction on the three-dimensional image to be segmented through the feature extraction module 121 to obtain a basic feature map, performs feature extraction on the basic feature map through the intermediate module 122 to obtain a segmentation map, and fuses the segmentation map with the basic feature map through the segmentation module 123 to generate the three-dimensional broken bone segmentation result.
In this embodiment, the feature extraction module 121 sequentially includes: the first context layer, the first downsampling layer, the second context layer, the second downsampling layer, the third context layer, the third downsampling layer, the fourth context layer, the fourth downsampling layer, and the fifth context layer.
The specific process of extracting the features of the three-dimensional image to be segmented by the feature extraction module 121 to obtain a basic feature map includes:
the first context layer performs feature extraction on the three-dimensional image to be segmented to produce a first feature result, which is input into the first downsampling layer to obtain a first sampling result. The first sampling result and the first feature result are added element by element and fed to the second context layer, producing a second feature result; the second feature result is input into the second downsampling layer to obtain a second sampling result. The second sampling result and the second feature result are added element by element and fed to the third context layer, producing a third feature result; the third feature result is input into the third downsampling layer to obtain a third sampling result. The third sampling result and the third feature result are added element by element and fed to the fourth context layer, producing a fourth feature result; the fourth feature result is input into the fourth downsampling layer to obtain a fourth sampling result. Finally, the fourth sampling result and the fourth feature result are added element by element and fed to the fifth context layer, producing the basic feature map.
By way of example, and not limitation, a three-dimensional image to be segmented of 128 x 128 x 128 voxels may be input into the input layer of the segmentation model and passed through the five context layers, each pair connected by a downsampling layer; the results of each downsampling layer and the corresponding context layer are added element by element as the input to the next downsampling layer, yielding the basic feature map, i.e., a coarse segmentation map.
A voxel, also called a volume element, is the smallest unit of digital data in a partition of three-dimensional space; voxels are mainly used in fields such as three-dimensional imaging, scientific data and medical imaging.
In this embodiment, the intermediate module 122 sequentially includes: a first upsampling layer, a first decoding layer, a second upsampling layer, a second decoding layer, a third upsampling layer, a third decoding layer, a fourth upsampling layer, a three dimensional convolution layer, and a first segmentation layer.
The specific process of extracting the features of the basic feature map by the intermediate module 122 to obtain a segmentation map includes:
inputting the basic feature map into the first upsampling layer to obtain a first segmentation result; fusing the first segmentation result with the fourth feature result and obtaining a second segmentation result through the first decoding layer and the second upsampling layer; fusing the second segmentation result with the third feature result and obtaining a third segmentation result through the second decoding layer and the third upsampling layer; fusing the third segmentation result with the second feature result and obtaining a fourth segmentation result through the third decoding layer and the fourth upsampling layer; and fusing the fourth segmentation result with the first feature result, obtaining a fifth segmentation result through the three-dimensional convolution layer and the first segmentation layer, and taking the fifth segmentation result as the segmentation map.
In this embodiment, the first upsampling layer after the fifth context layer produces the first segmentation result, which is fused with the fourth feature result and passed through the first decoding layer and second upsampling layer to obtain the second segmentation result; the second segmentation result is fused with the third feature result and passed through the second decoding layer and third upsampling layer to obtain the third segmentation result; continuing in this way yields the fourth segmentation result, which is fused with the first feature result and passed through the three-dimensional convolution layer and the first segmentation layer to obtain the fifth segmentation result. The upsampling layers restore the resolution of the segmentation results, so that the segmentation map has the same size as the original three-dimensional image to be segmented.
In this embodiment, the dividing module 123 sequentially includes: a second segmentation layer, a fifth upsampling layer, a sixth upsampling layer, and a classification layer.
The specific process of generating the three-dimensional broken bone segmentation result by fusing the segmentation map with the basic feature map through the segmentation module 123 includes:
the first output result of the second decoding layer is passed through the second segmentation layer to obtain a second output result, which is upsampled by the fifth upsampling layer to obtain a first additional segmentation result; the second output result is added element by element with the first additional segmentation result and passed through the sixth upsampling layer to obtain a second additional result; and the segmentation map is added element by element with the second additional result and input into the classification layer to obtain the three-dimensional broken bone segmentation result.
In this embodiment, the final segmentation result output by the classification layer is a probability score matrix with one channel per semantic segmentation category and the same size as the original image; the final class of each pixel is determined by taking the category with the highest probability, forming the final three-dimensional broken bone segmentation result.
In the present embodiment, the broken bone segmentation device 1 for three-dimensional images receives a three-dimensional volumetric image through the receiving unit 11 and extracts features from the three-dimensional image to be segmented through the feature extraction module 121 of the segmentation model in the segmentation unit 12 to obtain a basic feature map, taking the spatial correlation of tissues in the volume into account. The intermediate module 122 extracts features from the basic feature map to obtain a segmentation map of the same size as the original three-dimensional image to be segmented, and the segmentation module 123 fuses the segmentation map with the basic feature map to obtain the three-dimensional broken bone segmentation result, saving segmentation time and improving segmentation precision and efficiency.
Example III
In order to achieve the above objective, the present invention further provides a computer device 2; the components of the broken bone segmentation device 1 for three-dimensional images of the second embodiment may also be distributed across multiple computer devices 2. A computer device 2 may be a smartphone, tablet computer, notebook computer, desktop computer, rack server, blade server, tower server or cabinet server (including an independent server, or a server cluster formed by multiple servers) that executes programs, and so on. The computer device 2 of the present embodiment includes at least, but is not limited to: a memory 21, a processor 23, a network interface 22 and the broken bone segmentation device 1 for three-dimensional images, which can be communicatively connected to each other through a system bus (see fig. 5). It should be noted that fig. 5 only shows a computer device 2 with certain components, but not all of the illustrated components are required; more or fewer components may be implemented instead.
In this embodiment, the memory 21 includes at least one type of computer-readable storage medium, including flash memory, hard disk, multimedia card, card memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, etc. In some embodiments, the memory 21 may be an internal storage unit of the computer device 2, such as a hard disk or memory of the computer device 2. In other embodiments, the memory 21 may also be an external storage device of the computer device 2, such as a plug-in hard disk, Smart Media Card (SMC), Secure Digital (SD) card or flash card provided on the computer device 2. Of course, the memory 21 may also include both an internal storage unit and an external storage device of the computer device 2. In this embodiment, the memory 21 is generally used to store the operating system and the various application software installed on the computer device 2, such as the program code of the broken bone segmentation method of the three-dimensional image of the first embodiment. In addition, the memory 21 may be used to temporarily store various types of data that have been output or are to be output.
The processor 23 may in some embodiments be a central processing unit (CPU), controller, microcontroller, microprocessor or other data processing chip. The processor 23 is typically used to control the overall operation of the computer device 2, for example performing control and processing related to data interaction or communication with the computer device 2. In this embodiment, the processor 23 is used to run the program code stored in the memory 21 or to process data, for example to run the broken bone segmentation device 1 for three-dimensional images.
The network interface 22 may comprise a wireless network interface or a wired network interface, which network interface 22 is typically used to establish a communication connection between the computer device 2 and other computer devices 2. For example, the network interface 22 is used to connect the computer device 2 to an external terminal through a network, establish a data transmission channel and a communication connection between the computer device 2 and the external terminal, and the like. The network may be an Intranet (Intranet), the Internet (Internet), a global system for mobile communications (Global System of Mobile communication, GSM), wideband code division multiple access (Wideband Code Division Multiple Access, WCDMA), a 4G network, a 5G network, bluetooth (Bluetooth), wi-Fi, or other wireless or wired network.
It is noted that fig. 5 only shows a computer device 2 having components 21-23, but it is understood that not all of the illustrated components are required to be implemented, and that more or fewer components may alternatively be implemented.
In the present embodiment, the broken bone segmentation device 1 for three-dimensional images stored in the memory 21 may also be divided into one or more program modules, which are stored in the memory 21 and executed by one or more processors (in this embodiment, the processor 23) to complete the present invention.
Example IV
To achieve the above object, the present invention also provides a computer-readable storage medium, including a plurality of storage media such as flash memory, hard disk, multimedia card, card memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, server or App application store, on which a computer program is stored that performs the corresponding functions when executed by the processor 23. The computer-readable storage medium of this embodiment is used for storing the broken bone segmentation device 1 for three-dimensional images and, when executed by the processor 23, implements the broken bone segmentation method of the three-dimensional image of the first embodiment.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or alternatively by hardware, though in many cases the former is the preferred implementation.
The foregoing description covers only preferred embodiments of the present invention and does not limit the scope of the invention; any equivalent structural or process transformation made using the contents of this description and drawings, whether applied directly or indirectly in other related technical fields, likewise falls within the scope of patent protection of the invention.

Claims (7)

1. A method for fractured bone segmentation of a three-dimensional image, comprising:
acquiring a three-dimensional image to be segmented;
identifying the three-dimensional image to be segmented by adopting a segmentation model to obtain a three-dimensional broken bone segmentation result;
the step of identifying the three-dimensional image to be segmented by adopting the segmentation model to obtain a three-dimensional broken bone segmentation result comprises the following steps:
extracting features of the three-dimensional image to be segmented to obtain a basic feature map, which comprises: convolving the three-dimensional image to be segmented to obtain a first feature result; downsampling the first feature result to obtain a first sampling result; adding the first sampling result and the first feature result element by element and then convolving to obtain a second feature result; downsampling the second feature result to obtain a second sampling result; adding the second sampling result and the second feature result element by element and then convolving to obtain a third feature result; downsampling the third feature result to obtain a third sampling result; adding the third sampling result and the third feature result element by element and then convolving to obtain a fourth feature result; downsampling the fourth feature result to obtain a fourth sampling result; and adding the fourth sampling result and the fourth feature result element by element and then convolving to obtain the basic feature map;
performing feature extraction on the basic feature map to obtain a segmentation map, which comprises: upsampling the basic feature map to obtain a first segmentation result; fusing, decoding and upsampling the first segmentation result and the fourth feature result to obtain a second segmentation result; fusing and decoding the second segmentation result and the third feature result to obtain a first output result, and upsampling to obtain a third segmentation result; fusing, decoding and upsampling the third segmentation result and the second feature result to obtain a fourth segmentation result; and fusing the fourth segmentation result with the first feature result, convolving and segmenting to obtain a fifth segmentation result, and taking the fifth segmentation result as the segmentation map;
and fusing the segmentation map with the basic feature map to generate the three-dimensional broken bone segmentation result, which comprises: segmenting the first output result to obtain a second output result, and upsampling to obtain a first additional segmentation result; adding the second output result and the first additional segmentation result element by element, and upsampling to obtain a second additional result; and adding the segmentation map and the second additional result element by element, and classifying to obtain the three-dimensional broken bone segmentation result.
2. The method for fractured-bone segmentation of a three-dimensional image according to claim 1, wherein, before the three-dimensional image to be segmented is identified by the segmentation model to obtain the three-dimensional fractured-bone segmentation result, the method further comprises:
an initial classification model is trained to obtain the segmentation model.
3. The method of claim 2, wherein training an initial classification model to obtain the segmentation model comprises:
performing three-dimensional reconstruction on the two-dimensional sequence images in the samples to obtain a three-dimensional training image;
normalizing the three-dimensional training image to obtain a three-dimensional sample image;
extracting features of the three-dimensional sample image to obtain a basic training feature map;
extracting features of the basic training feature map to obtain a segmentation training map;
fusing the segmentation training map with the basic training feature map to generate a three-dimensional broken bone training segmentation result;
and adjusting parameters in the initial classification model according to the training segmentation result to obtain the segmentation model.
4. A method of fractured-bone segmentation of a three-dimensional image according to claim 3, wherein adjusting parameters in the initial classification model based on the training segmentation result to obtain the segmentation model comprises:
adjusting parameters in the initial classification model by using an Adam optimizer according to the training segmentation result to obtain the segmentation model.
5. A broken bone segmentation device for a three-dimensional image, comprising:
a receiving unit, configured to acquire a three-dimensional image to be segmented; and
a segmentation unit, configured to identify the three-dimensional image to be segmented by using a segmentation model to obtain a three-dimensional broken bone segmentation result;
wherein the segmentation model comprises a feature extraction module, an intermediate module and a segmentation module;
the segmentation unit performs feature extraction on the three-dimensional image to be segmented through the feature extraction module to obtain a basic feature map, by:
convolving the three-dimensional image to be segmented to obtain a first feature result;
downsampling the first feature result to obtain a first sampling result;
adding the first sampling result and the first feature result element by element and then convolving to obtain a second feature result;
downsampling the second feature result to obtain a second sampling result;
adding the second sampling result and the second feature result element by element and then convolving to obtain a third feature result;
downsampling the third feature result to obtain a third sampling result;
adding the third sampling result and the third feature result element by element and then convolving to obtain a fourth feature result;
downsampling the fourth feature result to obtain a fourth sampling result;
and adding the fourth sampling result and the fourth feature result element by element and then convolving to obtain the basic feature map;
the segmentation unit performs feature extraction on the basic feature map through the intermediate module to obtain a segmentation map, by:
upsampling the basic feature map to obtain a first segmentation result;
fusing, decoding and upsampling the first segmentation result and the fourth feature result to obtain a second segmentation result;
fusing and decoding the second segmentation result and the third feature result to obtain a first output result, and upsampling the first output result to obtain a third segmentation result;
fusing, decoding and upsampling the third segmentation result and the second feature result to obtain a fourth segmentation result;
and fusing the fourth segmentation result with the first feature result, then convolving and segmenting to obtain a fifth segmentation result, and taking the fifth segmentation result as the segmentation map;
and the segmentation unit fuses the segmentation map with the basic feature map through the segmentation module to generate the three-dimensional broken bone segmentation result, by:
segmenting the first output result to obtain a second output result, and upsampling the second output result to obtain a first additional segmentation result;
adding the second output result and the first additional segmentation result element by element, and upsampling to obtain a second additional result;
and adding the segmentation map and the second additional result element by element, and classifying the sum to obtain the three-dimensional broken bone segmentation result.
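As a usage illustration of the receiving and segmentation units, the sketch below runs the claim-1 model on a stand-in volume; the 64-voxel cube is an arbitrary size chosen to be divisible by the encoder's sixteen-fold downsampling.

    import torch

    model = FractureSegNet(in_ch=1, n_classes=2).eval()  # segmentation model from the claim-1 sketch
    volume = torch.rand(1, 1, 64, 64, 64)                # receiving unit: a stand-in 3-D image
    with torch.no_grad():
        labels = model(volume).argmax(dim=1)             # segmentation unit: (1, 64, 64, 64) voxel labels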
6. A computer device, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 4.
7. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 4.
CN202011161212.XA 2020-10-27 2020-10-27 Broken bone segmentation method and device for three-dimensional image, computer equipment and storage medium Active CN112241955B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011161212.XA CN112241955B (en) 2020-10-27 2020-10-27 Broken bone segmentation method and device for three-dimensional image, computer equipment and storage medium
PCT/CN2020/134546 WO2021179702A1 (en) 2020-10-27 2020-12-08 Method and apparatus for segmenting bone fragments from three-dimensional image, computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011161212.XA CN112241955B (en) 2020-10-27 2020-10-27 Broken bone segmentation method and device for three-dimensional image, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112241955A (en) 2021-01-19
CN112241955B (en) 2023-08-25

Family

ID=74169897

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011161212.XA Active CN112241955B (en) 2020-10-27 2020-10-27 Broken bone segmentation method and device for three-dimensional image, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN112241955B (en)
WO (1) WO2021179702A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113240681B * 2021-05-20 2022-07-08 Infervision Medical Technology Co Ltd Image processing method and device
CN114581396A * 2022-02-28 2022-06-03 Tencent Technology (Shenzhen) Co Ltd Method, device, equipment, storage medium and product for identifying three-dimensional medical image
CN116187476B * 2023-05-04 2023-07-21 Zhuhai Hengqin Shengao Yunzhi Technology Co Ltd Lung lobe segmentation model training and lung lobe segmentation method and device based on mixed supervision

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106934397B * 2017-03-13 2020-09-01 Beijing SenseTime Technology Development Co Ltd Image processing method and device and electronic equipment
CN108257118B * 2018-01-08 2020-07-24 Zhejiang University Fracture adhesion segmentation method based on normal-direction erosion and random walk
CN111192277A * 2019-12-31 2020-05-22 Huawei Technologies Co Ltd Instance segmentation method and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109872328A * 2019-01-25 2019-06-11 Tencent Technology (Shenzhen) Co Ltd Brain image segmentation method, device and storage medium
CN109903298A * 2019-03-12 2019-06-18 Shukun (Beijing) Network Technology Co Ltd Method, system and computer storage medium for repairing breaks in blood vessel segmentation images
CN111127636A * 2019-12-24 2020-05-08 Zhuji People's Hospital Intelligent desktop-level three-dimensional diagnosis system for complex intra-articular fracture
CN111402216A * 2020-03-10 2020-07-10 Changzhou Campus, Hohai University Three-dimensional broken bone segmentation method and device based on deep learning
CN111598893A * 2020-04-17 2020-08-28 Harbin Institute of Technology Regional skeletal fluorosis grading diagnosis system based on multi-type image fusion neural network
CN111429460A * 2020-06-12 2020-07-17 Tencent Technology (Shenzhen) Co Ltd Image segmentation method, image segmentation model training method, device and storage medium

Also Published As

Publication number Publication date
CN112241955A (en) 2021-01-19
WO2021179702A1 (en) 2021-09-16

Similar Documents

Publication Publication Date Title
CN112241955B (en) Broken bone segmentation method and device for three-dimensional image, computer equipment and storage medium
CA3041140C (en) Systems and methods for segmenting an image
CN111429460B (en) Image segmentation method, image segmentation model training method, device and storage medium
EP3937124A1 (en) Image processing method, device and apparatus, and storage medium
DE102019000171A1 (en) Digital environment for the location of semantic classes
US11636570B2 (en) Generating digital images utilizing high-resolution sparse attention and semantic layout manipulation neural networks
CN111291825A (en) Focus classification model training method and device, computer equipment and storage medium
CN111627024A Kidney tumor segmentation method based on an improved U-net
CN116030259B (en) Abdominal CT image multi-organ segmentation method and device and terminal equipment
CN110659667A (en) Picture classification model training method and system and computer equipment
CN114494296A (en) Brain glioma segmentation method and system based on fusion of Unet and Transformer
CN112150470B (en) Image segmentation method, device, medium and electronic equipment
CN115375999B (en) Target detection model, method and device applied to hazardous chemical vehicle detection
CN114742802B Pancreas CT image segmentation method based on a 3D Transformer hybrid convolutional neural network
CN112348819A (en) Model training method, image processing and registering method, and related device and equipment
US20230386067A1 (en) Systems and methods for segmenting 3d images
CN114692725A (en) Decoupling representation learning method and system for multi-temporal image sequence
CN113409324B (en) Brain segmentation method fusing differential geometric information
CN116433970A (en) Thyroid nodule classification method, thyroid nodule classification system, intelligent terminal and storage medium
CN116091763A (en) Apple leaf disease image semantic segmentation system, segmentation method, device and medium
CN115115900A (en) Training method, device, equipment, medium and program product of image reconstruction model
CN112633285A (en) Domain adaptation method, domain adaptation device, electronic equipment and storage medium
CN117079015A (en) Training and classifying method, device, medium and equipment for image classifying model
CN114283277B (en) Disparity map acquisition method, occlusion detection network acquisition method and electronic equipment
CN113012152B (en) Image tampering chain detection method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40043401
Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant