WO2023066364A1 - Three-dimensional image processing method, device, computer equipment and storage medium - Google Patents

Three-dimensional image processing method, device, computer equipment and storage medium

Info

Publication number
WO2023066364A1
WO2023066364A1 PCT/CN2022/126618 CN2022126618W
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional image
training
image processing
processed
image
Prior art date
Application number
PCT/CN2022/126618
Other languages
English (en)
French (fr)
Inventor
刘赫
张朗
刘鹏飞
Original Assignee
苏州微创畅行机器人有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏州微创畅行机器人有限公司
Publication of WO2023066364A1

Classifications

    • G06T 7/0012 Biomedical image inspection
    • G06N 3/04 Neural networks; architecture, e.g. interconnection topology
    • G06N 3/08 Neural networks; learning methods
    • G06T 5/30 Erosion or dilatation, e.g. thinning
    • G06T 5/70 Denoising; smoothing
    • G06T 7/12 Edge-based segmentation
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/20036 Morphological image processing
    • G06T 2207/20076 Probabilistic image processing
    • G06T 2207/20081 Training; learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20132 Image cropping
    • G06T 2207/30008 Bone

Definitions

  • the present application relates to the field of medical technology, and in particular to a three-dimensional image processing method, device, computer equipment and storage medium.
  • Medical images, such as CT images, are acquired, and the computer uses the CT image sequence to reconstruct a three-dimensional model of the patient's leg bones.
  • Algorithms such as 2D networks (e.g., U-Net), SVM, and graph-cut image segmentation methods are used to segment the two-dimensional CT images and obtain bone image subsequences.
  • the image subsequence of the bone is then used to reconstruct the three-dimensional model of the corresponding bone.
  • In this approach, each 2D image must be segmented individually, after which it is difficult to reconstruct the three-dimensional bone model automatically: because the spatial information shared among the images in the CT sequence is not used, segmentation accuracy and generalization vary from image to image. Not only is segmentation slow, since its speed depends on the number of CT images, but the bone images may also need to be trimmed manually after each per-image segmentation.
  • When the contours of key bones, such as joint areas, are extracted from two-dimensional CT images and then reconstructed in three dimensions, the reconstructed joint model is prone to defects such as bone loss or abnormal discontinuous protrusions.
  • This is related both to the fact that the segmented CT images do not take spatial information into account and to the algorithms used. How to establish a model with high segmentation efficiency and high segmentation accuracy is therefore an urgent problem for the industry.
  • An embodiment of the present application provides a three-dimensional image processing method, and the three-dimensional image processing method includes:
  • Processing the three-dimensional image to be processed by using a pre-trained three-dimensional image processing model to obtain the probability that each voxel in the three-dimensional image to be processed belongs to at least one target object;
  • the three-dimensional image to be processed is segmented according to the probability to obtain each target object.
  • the pre-trained three-dimensional image processing model is used to process the three-dimensional image to be processed, including:
  • the target object is obtained according to the segmentation result of the voxels.
  • the processing of the three-dimensional image to be processed through the pre-trained three-dimensional image processing model further includes:
  • the target object corresponding to the maximum probability of each voxel is obtained as the segmentation result of the voxel under the target category.
  • the processing of the three-dimensional image to be processed through the pre-trained three-dimensional image processing model includes:
  • The first preprocessing includes at least one of setting window width and window level, resampling, data normalization, and adaptive adjustment of image size.
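The windowing and normalization steps named above can be sketched in a few lines; the window width/level values and function names below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def apply_window(volume_hu, window_width=400.0, window_level=50.0):
    """Clip a CT volume (in Hounsfield units) to a window and rescale to [0, 1].

    The window values here are illustrative; the patent does not fix them.
    """
    lo = window_level - window_width / 2.0
    hi = window_level + window_width / 2.0
    clipped = np.clip(volume_hu, lo, hi)
    return (clipped - lo) / (hi - lo)

def normalize(volume):
    """Zero-mean, unit-variance normalization to unify the data distribution."""
    return (volume - volume.mean()) / (volume.std() + 1e-8)
```

A volume would typically be windowed first and then normalized, e.g. `normalize(apply_window(ct_volume))`.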
  • the segmenting the three-dimensional image to be processed according to the probability to obtain each target object includes:
  • Post-processing is performed on the three-dimensional segmentation mask image to obtain each target object.
  • post-processing is performed on the three-dimensional segmentation mask map to obtain each target object, including:
  • At least one of morphological operation, resampling and smoothing is performed on the segmentation mask map to obtain each target object; wherein the morphological operation includes connected domain marking and/or hole filling.
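A minimal sketch of the connected-domain labeling and hole filling named above, using `scipy.ndimage` (the library choice is an assumption; the patent does not name an implementation):

```python
import numpy as np
from scipy import ndimage

def postprocess_mask(mask):
    """Keep the largest connected domain of a binary mask, then fill its holes."""
    labeled, num = ndimage.label(mask)
    if num == 0:
        return mask
    sizes = np.bincount(labeled.ravel())       # sizes[0] is the background
    largest = int(np.argmax(sizes[1:])) + 1    # largest non-background label
    return ndimage.binary_fill_holes(labeled == largest)
```

In the multi-class setting described above, this would be applied per segmentation category.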
  • the processing of the three-dimensional image to be processed through the pre-trained three-dimensional image processing model includes:
  • The network processing includes performing neural-network-based layer processing on the three-dimensional data represented by the multiple slice images, so as to extract image features of the three-dimensional image region described by the plurality of slice images; the image features are used to identify the probability that each corresponding voxel belongs to at least one target object.
  • the pre-trained three-dimensional image processing model includes image processing channels for identifying the probability of at least one target object; wherein, each of the image processing channels is used to calculate the three-dimensional image to be processed The probability that each voxel of is belonging to the corresponding target object.
  • the three-dimensional image to be processed includes a slice image sequence obtained based on CT medical imaging equipment capturing bones.
  • a training method of a three-dimensional image processing model including:
  • the training data including training three-dimensional images and labels corresponding to the training three-dimensional images; the labels represent the attribute relationship between each voxel in the training three-dimensional images and the target object;
  • The segmentation probability map is processed until a preset training cut-off condition is met.
  • The processing of the segmentation probability map until a preset training cut-off condition is met includes:
  • the three-dimensional image processing model is iteratively trained by using the deviation information until the obtained deviation information meets a preset training cut-off condition.
  • the calculating the deviation information of the segmentation probability map by using the corresponding label includes: calculating the loss function of the segmentation probability map by using the corresponding label to obtain the deviation information.
  • the inputting the training three-dimensional image into the three-dimensional image processing model to be trained to output the segmentation probability map corresponding to the training three-dimensional image comprises:
  • the training feature image is sequentially up-sampled through the up-sampling layer to obtain a training segmentation probability map.
  • Before inputting the training three-dimensional image into the feature extraction layer and performing feature extraction to obtain the initial training feature image, the training method further includes:
  • the second preprocessing includes at least one of setting window width and level, resampling, data enhancement, data normalization, and adaptive adjustment of image size.
  • the data enhancement includes:
  • At least one of random rotation, random horizontal or vertical flip, and random cropping is performed on the training three-dimensional image.
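The listed augmentations can be sketched as follows; rotation is restricted to 90-degree steps here for simplicity (arbitrary angles would need interpolation, e.g. `scipy.ndimage.rotate`), and the crop size is an arbitrary example:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(volume, crop_shape=(32, 32, 32)):
    """Randomly flip, rotate (by 90-degree steps), and crop a 3D volume."""
    if rng.random() < 0.5:
        volume = volume[:, :, ::-1]            # random horizontal flip
    if rng.random() < 0.5:
        volume = volume[:, ::-1, :]            # random vertical flip
    k = int(rng.integers(0, 4))
    volume = np.rot90(volume, k, axes=(1, 2))  # random in-plane rotation
    starts = [int(rng.integers(0, s - c + 1))
              for s, c in zip(volume.shape, crop_shape)]
    slices = tuple(slice(st, st + c) for st, c in zip(starts, crop_shape))
    return volume[slices]                      # random crop
```

In training, the same random transform would be applied to the image and its label mask so that they stay aligned.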
  • the self-adaptive adjustment of image size includes:
  • a three-dimensional image processing device is also provided, and the three-dimensional image processing device includes:
  • an acquisition unit configured to acquire a three-dimensional image to be processed
  • a processing unit configured to process the three-dimensional image to be processed through a pre-trained three-dimensional image processing model, so as to obtain the probability that each voxel in the three-dimensional image to be processed belongs to the target object;
  • a segmentation unit configured to segment the three-dimensional image to be processed according to the probability to obtain each target object.
  • A computer device includes a memory and a processor; the memory stores a computer program, and the processor implements the steps of the method described in any one of the above embodiments when executing the computer program.
  • a computer storage medium stores a computer program thereon, and when the computer program is executed by a processor, the steps of the method described in any one of the above embodiments are implemented.
  • The above three-dimensional image processing method, device, computer equipment and storage medium acquire the three-dimensional image to be processed; process it through the pre-trained three-dimensional image processing model to obtain the probability that each voxel in the image belongs to the target object; and segment the three-dimensional image to be processed according to the probability to obtain each target object.
  • The application realizes automatic segmentation of joint bone structure, directly segments the three-dimensional image to be processed, improves segmentation accuracy and work efficiency, is applicable to knee- or hip-joint replacement surgical robots, and improves the degree of automation and intelligence.
  • FIG. 1 is a schematic diagram of an application of a three-dimensional image processing device in an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a three-dimensional image processing method in an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a processing flow of a 3D image to be processed in a 3D image processing method according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of window width and window level adjustment of a three-dimensional image processing method in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of data resampling in a three-dimensional image processing method in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of an image segmentation process of a three-dimensional image processing method in an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a segmented joint area in a three-dimensional image processing method according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of post-processing of a three-dimensional image processing method in an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a model training process of a three-dimensional image processing method in an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a model training network of a three-dimensional image processing method in an embodiment of the present application.
  • FIG. 11 is a schematic diagram of model training steps of a three-dimensional image processing method in an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a loss function of a three-dimensional image processing method in an embodiment of the present application.
  • FIG. 13 is a schematic diagram of the second image preprocessing of the three-dimensional image processing method in an embodiment of the present application.
  • FIG. 14 is a schematic diagram of data enhancement of a three-dimensional image processing method in an embodiment of the present application.
  • FIG. 15 is a schematic diagram of adaptive size adjustment of a three-dimensional image processing method in an embodiment of the present application.
  • FIG. 16 is a schematic diagram of data orientation adjustment in a three-dimensional image processing method in an embodiment of the present application.
  • FIG. 17 is a schematic structural diagram of a three-dimensional image processing device in an embodiment of the present application.
  • FIG. 18 is an internal structure diagram of a computer device in an embodiment of the present application.
  • FIG. 1 is a schematic diagram of the application of the three-dimensional image processing device provided by the present application. The device acquires the three-dimensional image to be processed; the image is processed through the pre-trained three-dimensional image processing model to obtain the probability that each voxel belongs to the target object; and the image is segmented according to the probability to obtain each target object.
  • The application realizes automatic segmentation of joint bone structure, directly segments the three-dimensional image to be processed, improves segmentation accuracy and work efficiency, is applicable to knee- or hip-joint replacement surgical robots, and improves the degree of automation and intelligence.
  • a three-dimensional image processing method is provided.
  • the application of the method to the three-dimensional image processing device shown in FIG. 1 is used as an example for illustration, including the following steps:
  • the 3D image processing device acquires the 3D image to be processed, that is, collects data of the 3D image to be processed, and what is acquired here may be a CT 3D image of the knee joint or a CT 3D image of the hip joint.
  • S204 Process the 3D image to be processed by using the pre-trained 3D image processing model to obtain the probability that each voxel in the 3D image to be processed belongs to the target object.
  • A voxel (volume element, or volume pixel) is the smallest unit of digital data in a partition of three-dimensional space; voxels are used in three-dimensional imaging, scientific visualization, medical imaging and other fields.
  • The 3D image processing device acquires the 3D image to be processed and collects its data, then inputs the data into the pre-trained 3D image processing model. The preprocessing unit of the device preprocesses the image, forward calculation of the network parameters is performed on the three-dimensional image to be processed to obtain the probability that each voxel belongs to the target object, and a probability map is generated from these probabilities.
  • S206 Segment the three-dimensional image to be processed according to the probability to obtain each target object.
  • the three-dimensional image processing apparatus divides the three-dimensional image to be processed according to the probability to obtain each target object.
  • The pre-trained 3D image processing model outputs a multi-channel segmentation probability map with the same size as the image; each channel represents one defined target category. The category label with the highest probability for each image voxel is then found to obtain the coarse segmentation mask map, that is, each target object.
  • The three-dimensional image processing method obtains the three-dimensional image to be processed; processes it through the pre-trained three-dimensional image processing model to obtain the probability that each voxel belongs to the target object; and segments the three-dimensional image to be processed according to the probability to obtain each target object.
  • The application realizes automatic segmentation of joint bone structure, directly segments the three-dimensional image to be processed, improves segmentation accuracy and work efficiency, is applicable to knee- or hip-joint replacement surgical robots, and improves the degree of automation and intelligence.
  • the pre-trained 3D image processing model is used to process the 3D image to be processed to obtain the probability that each voxel in the 3D image to be processed belongs to the target object, including:
  • the 3D image processing model processes the 3D image to be processed to obtain the probability that each voxel in the 3D image to be processed belongs to the target object.
  • the three-dimensional image processing model obtains each voxel again, and uses the target object corresponding to the maximum probability of each voxel as the segmentation result of the voxel.
  • the three-dimensional image processing model acquires each voxel again, uses the target object corresponding to the maximum probability of each voxel as the voxel segmentation result, and obtains the target object according to the voxel segmentation result.
  • the target object corresponding to the maximum probability of each voxel is acquired as the voxel segmentation result, and the target object is obtained according to the voxel segmentation result.
  • the target object corresponding to the maximum probability of each voxel is used as the segmentation result of the voxel, which realizes the precise segmentation and reconstruction of the articular bone, and improves the segmentation accuracy at the same time.
  • processing the three-dimensional image to be processed by using a pre-trained three-dimensional image processing model further includes: obtaining the target object corresponding to the maximum probability of each voxel as the target object under the target category of different channels. Segmentation results for voxels under the target category.
  • After the 3D image processing device preprocesses the 3D image to be processed, the image is input to the pre-trained 3D image processing model for forward calculation, yielding a multi-channel segmentation probability map of the same 3D image size.
  • Each channel represents one defined target category; then, across the target categories of the different channels, the category label (channel index) with the highest probability for each image voxel is found to obtain the segmentation result.
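The per-voxel maximum over channels is a single `argmax`; the channel-first layout below is an assumed convention:

```python
import numpy as np

def probs_to_mask(prob_map):
    """Reduce a (C, D, H, W) probability map to a label volume.

    Each voxel gets the channel index (category label) of its highest
    probability, producing the coarse segmentation mask described above.
    """
    return np.argmax(prob_map, axis=0)
```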
  • the method before processing the 3D image to be processed by using a pre-trained 3D image processing model to obtain the probability that each voxel in the 3D image to be processed belongs to the target object, the method further includes: performing a first preprocessing.
  • the first preprocessing includes setting at least one of window width and level, resampling, data normalization and adaptive adjustment of image size.
  • The 3D image processing device acquires the 3D image to be processed and performs the first preprocessing on it before processing, where the first preprocessing includes setting the window width and window level, resampling, data normalization, and adaptive adjustment of image size.
  • Setting the window width and window level applies a specific window to the 3D image to be processed input into the 3D image processing model, compressing its HU value range and filtering the image, which is beneficial for the 3D image processing model.
  • the 3D image to be processed is resampled to unify the resolution of different 3D image data to be processed.
  • data normalization needs to be performed on the 3D image to be processed.
  • the specific method is not limited here.
  • the function of data normalization is to unify the distribution of data and accelerate network convergence.
  • the last step is to adaptively adjust the size of the 3D image to be processed, in order to meet the requirements of the segmentation network for the size of the input image.
  • the adaptive adjustment includes edge cropping and padding.
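A sketch of that edge cropping and padding; the center-aligned scheme below is an assumption, since the patent does not specify how crop and pad offsets are chosen:

```python
import numpy as np

def fit_to_shape(volume, target):
    """Center-crop or zero-pad each axis so the volume matches `target`."""
    out = volume
    for axis, (cur, tgt) in enumerate(zip(out.shape, target)):
        if cur > tgt:                            # crop this axis
            start = (cur - tgt) // 2
            out = np.take(out, range(start, start + tgt), axis=axis)
        elif cur < tgt:                          # pad this axis with zeros
            before = (tgt - cur) // 2
            after = tgt - cur - before
            pad = [(0, 0)] * out.ndim
            pad[axis] = (before, after)
            out = np.pad(out, pad)
    return out
```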
  • the data of the three-dimensional image to be processed is resampled.
  • each black dot in the figure represents a voxel.
  • the resampling process is implemented by interpolation, which does not change the physical size, but can change the resolution of the image.
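Resampling by interpolation can be sketched with `scipy.ndimage.zoom` (an implementation assumption); the 1 mm isotropic target spacing is only an example:

```python
import numpy as np
from scipy import ndimage

def resample(volume, spacing, new_spacing=(1.0, 1.0, 1.0)):
    """Resample a volume to a new voxel spacing by linear interpolation.

    The physical size is preserved while the voxel grid (resolution)
    changes, as described above.
    """
    zoom_factors = [s / ns for s, ns in zip(spacing, new_spacing)]
    return ndimage.zoom(volume, zoom_factors, order=1)
```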
  • the 3D image processing device acquires the 3D image to be processed, and performs first preprocessing on the 3D image to be processed before processing the 3D image to be processed, wherein the first preprocessing includes setting the window width and level, resampling, Data normalization and adaptive adjustment of image size.
  • the 3D image processing device can improve the data processing speed and accelerate the convergence of the 3D image processing device.
  • the segmentation of the three-dimensional image to be processed according to the probability to obtain each target object includes:
  • S402 Process the 3D image to be processed according to the probability to obtain a 3D segmentation mask.
  • the 3D image processing model processes the 3D image to be processed to obtain the probability that each voxel in the 3D image to be processed belongs to the target object.
  • the 3D image processing model acquires each voxel again, takes the target object corresponding to the maximum probability of each voxel as the segmentation result of the voxel, and processes the 3D image to be processed according to the segmentation result to obtain a 3D segmentation mask.
  • the segmentation result is a rough segmentation mask
  • the coarse three-dimensional segmentation mask is post-processed to obtain a fine three-dimensional segmentation mask.
  • S404 Perform post-processing on the three-dimensional segmentation mask image to obtain each target object.
  • The 3D image processing model processes the 3D image to be processed according to the probability to obtain a 3D segmentation mask; this segmentation result is a coarse segmentation mask. The coarse 3D segmentation mask is post-processed to obtain a fine 3D segmentation mask, and the fine three-dimensional segmentation masks so obtained are the target objects.
  • the 3D image processing model processes the 3D image to be processed according to the probability to obtain a 3D segmentation mask map, and performs post-processing on the 3D segmentation mask to obtain each target object.
  • the 3D image to be processed can be directly segmented, and the segmentation accuracy and time performance can be improved at the same time. It can be applied to surgical replacement robots for knee joints or hip joints, improving automation and intelligence, and improving work efficiency.
  • Post-processing is performed on the three-dimensional segmentation mask to obtain each target object, including performing at least one of morphological operations, resampling, and smoothing on the segmentation mask; the morphological operations include connected-domain labeling and/or hole filling.
  • the 3D image processing model performs post-processing on the 3D segmentation mask to obtain each target object.
  • The specific post-processing is as follows. First, binary-image morphological operations are performed on the 3D segmentation mask: connected-domain labeling is applied for each segmentation category, and only the largest connected domain is retained. Holes in the 3D segmentation mask are then filled to repair gaps caused by incomplete segmentation. Resampling is then performed to restore the resolution of the original CT image. Finally, the three-dimensional segmentation mask is smoothed, which removes the jagged effect in the sagittal or coronal plane that resampling or other causes may introduce. After this optimization, segmentation yields each target object.
  • the 3D image processing model performs post-processing on the 3D segmentation mask to obtain each target object, and specifically performs at least one of morphological operation, resampling and smoothing on the segmentation mask; the morphological operation Include connected domain labeling and/or hole filling.
  • Processing the 3D image to be processed through the pre-trained 3D image processing model includes performing at least one layer of network processing on multiple adjacent slice images in the 3D image to be processed. The network processing includes performing neural-network-based layer processing on the three-dimensional data represented by the multiple slice images to extract image features of the three-dimensional image region described by those slice images; the image features are used to identify the probability that each corresponding voxel belongs to at least one target object.
  • the three-dimensional image processing device processes the three-dimensional image to be processed through a pre-trained three-dimensional image processing model.
  • The 3D image processing model performs at least one layer of network processing on multiple adjacent slice images in the 3D image to be processed, extracting image features of the 3D image regions described by those slice images.
  • the extracted image feature is used to identify the probability that its corresponding voxel belongs to at least one target object.
  • the image feature may be an abstracted confidence degree, which is normalized to obtain a corresponding probability.
  • the image feature may be an abstracted feature value representing the target object, and the corresponding probability is obtained after evaluating its possibility.
  • the pre-trained three-dimensional image processing model includes image processing channels for identifying the probability of at least one target object; wherein, each image processing channel is used to calculate the belongingness of each voxel in the three-dimensional image to be processed The probability of the corresponding target object.
  • the pre-trained three-dimensional image processing model includes an image processing channel for identifying the probability of a single target object, and may also be an image processing channel for identifying multiple target objects.
  • each image processing channel is used to calculate the probability that each voxel in the three-dimensional image to be processed belongs to the corresponding target object.
  • the three-dimensional image to be processed includes a slice image sequence obtained based on a CT medical imaging device capturing bones.
  • the training method of the three-dimensional image processing model in the three-dimensional image processing method includes:
  • S502 Acquire training data; the training data includes training 3D images and labels corresponding to the training 3D images; the labels represent attribute relationships between voxels in the training 3D images and target objects.
  • the 3D image processing model acquires training data, wherein the training data includes training 3D images and labels corresponding to the training 3D images; the labels represent attribute relationships between voxels in the training 3D images and target objects.
  • After the joint CT image data are obtained, they are first divided into training data and test data according to a specific ratio. The training data must be manually annotated by doctors or qualified medical staff to obtain the bone mask map of each 3D image. The annotated training data are then divided again according to a specific ratio into a training set and a validation set; each set of data includes CT images and the annotated three-dimensional segmentation masks.
  • S504 Input the training 3D image into the 3D image processing model to be trained, so as to output a segmentation probability map corresponding to the training 3D image.
  • the 3D image processing model acquires training data, inputs the training 3D image into the feature extraction layer, and performs feature extraction to obtain an initial training feature image.
  • S506 Process the segmentation probability map until the preset training cut-off condition is met.
  • the loss function is calculated according to the training segmentation probability map and the corresponding labels, and then the parameters can be adjusted using the optimization method according to the loss function.
  • the optimization method here can be the Adam method, the gradient descent algorithm, etc. It is not limited here.
  • the loss function is defined as:
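The formula itself does not survive in this extract. Given that the model is later evaluated with an average Dice coefficient, a soft Dice loss is one plausible candidate; the sketch below is an assumption for illustration, not the patent's stated definition.

```python
# Assumed soft Dice loss (NOT the patent's stated formula, which is elided here):
#
#     L = 1 - (2 * sum(p_i * g_i) + eps) / (sum(p_i) + sum(g_i) + eps)

def soft_dice_loss(pred, target, eps=1e-6):
    """pred: predicted probabilities in [0, 1]; target: binary ground truth."""
    inter = sum(p * g for p, g in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (total + eps)

# Perfect prediction → loss ≈ 0; disjoint prediction → loss ≈ 1.
print(round(soft_dice_loss([1.0, 0.0], [1, 0]), 3))  # → 0.0
print(round(soft_dice_loss([1.0, 0.0], [0, 1]), 3))  # → 1.0
```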
  • processing the segmentation probability map until the preset training cut-off condition is met includes: using the corresponding label to calculate the deviation information of the segmentation probability map; wherein the deviation information is used to evaluate the prediction accuracy of the three-dimensional image processing model to be trained.
  • the 3D image processing model uses the corresponding labels to calculate the deviation information of the segmentation probability map, wherein the deviation information is used to evaluate the prediction accuracy of the 3D image processing model to be trained, until the segmentation probability map satisfies the preset training cut-off condition.
  • the three-dimensional image processing model is iteratively trained by using the deviation information until the obtained deviation information meets the preset training cut-off condition. Specifically, the three-dimensional image processing model is trained iteratively according to the loss function, and the parameters are updated through repeated iterations over the training set data, which gradually reduces the loss.
  • using the corresponding labels to calculate the deviation information of the segmentation probability map includes: using the corresponding labels to calculate a loss function of the segmentation probability map to obtain the deviation information.
  • the loss function of the segmentation probability map is calculated to obtain the deviation information.
  • the verification set is used to verify the 3D image processing model to obtain an average dice coefficient, which is used as the evaluation coefficient of the 3D image processing model.
  • the training can be stopped to obtain a pre-trained 3D image processing model.
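A minimal sketch of such a stopping rule, assuming early stopping on the validation Dice score; the patience value and the per-epoch scores below are illustrative, since the patent does not specify the cut-off condition in detail:

```python
# Early-stopping sketch: keep training while the validation Dice keeps
# improving; stop after `patience` epochs without improvement.

def epochs_until_stop(dice_per_epoch, patience=2):
    best, stale = -1.0, 0
    for epoch, dice in enumerate(dice_per_epoch):
        if dice > best:
            best, stale = dice, 0
        else:
            stale += 1
            if stale >= patience:
                return epoch + 1  # number of epochs actually run
    return len(dice_per_epoch)

print(epochs_until_stop([0.70, 0.80, 0.85, 0.84, 0.84, 0.86]))  # → 5
```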
  • the training three-dimensional image is input into the three-dimensional image processing model to be trained, so as to output a segmentation probability map corresponding to the training three-dimensional image, including:
  • S602 Input the training three-dimensional image into the feature extraction layer, and perform feature extraction to obtain an initial training feature image.
  • the 3D image processing model acquires training data, inputs the training 3D image into the feature extraction layer, and performs feature extraction to obtain an initial training feature image.
  • S604 Perform downsampling on the initial training feature image sequentially through the downsampling layer.
  • the 3D image processing model sequentially down-samples the initial training feature image through the down-sampling layer.
  • S606 Perform reverse residual calculation on the downsampled initial training feature image through the residual convolution block to obtain the training feature image.
  • the 3D image processing model sequentially down-samples the initial training feature image through the down-sampling layer, and then performs reverse residual calculation on the down-sampled initial training feature image through the residual convolution block to obtain the training feature image.
  • the convolutional block in the network structure of the three-dimensional image processing model may be a residual convolutional block, or alternatively use other structures as the convolutional block.
  • S608 Upsampling the training feature images sequentially through the upsampling layer to obtain a training segmentation probability map.
  • the 3D image processing model performs reverse residual calculation on the downsampled initial training feature image through the residual convolution block to obtain the training feature image, and then sequentially upsamples the training feature image through the upsampling layer to obtain the training segmentation probability map.
  • the upsampling layer can be implemented by interpolation or deconvolution.
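Of the two options mentioned, interpolation is the simpler; a nearest-neighbour 2x upsampling of a 2D feature map can be sketched as below (deconvolution would instead learn its kernel weights). Shapes and the 2x factor are illustrative.

```python
# Nearest-neighbour 2x upsampling of a 2D feature map: duplicate each column,
# then duplicate each row.

def upsample2x_nearest(fmap):
    out = []
    for row in fmap:
        wide = [v for v in row for _ in (0, 1)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                   # duplicate each row
    return out

print(upsample2x_nearest([[1, 2], [3, 4]]))
# → [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```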
  • the 3D image processing model inputs the training 3D image into the feature extraction layer, performs feature extraction to obtain the initial training feature image;
  • the initial training feature image after downsampling is subjected to reverse residual calculation to obtain the training feature image.
  • the training feature image is sequentially upsampled to obtain the training segmentation probability map, the loss function is calculated according to the training segmentation probability map and the corresponding label, and the three-dimensional image processing model is trained according to the loss function iteration.
  • a large amount of training data is used to train the convolutional neural network to achieve precise segmentation and reconstruction of articular bones.
  • the generalization of the algorithm is greatly improved, so that the 3D image processing model can segment various types of bones at the same time; applied to knee or hip joint surgical replacement robots, it improves the degree of automation and intelligence and improves work efficiency.
  • the training three-dimensional image is input into the feature extraction layer, and before feature extraction is performed to obtain the initial training feature image, the method also includes: performing a second preprocessing on the training three-dimensional image; wherein the second preprocessing includes at least one of setting window width and level, resampling, data enhancement, data normalization and adaptive adjustment of image size.
  • the 3D image processing device acquires the training 3D image, and performs the second preprocessing on the training 3D image before performing feature extraction to obtain the initial training feature image; wherein the second preprocessing includes setting the window width and level, resampling, data enhancement , data normalization and adaptive adjustment of image size.
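For the window width/level item above, a common sketch is to clip intensities to the window and rescale to [0, 1]. The bone-window values used here (level 400, width 1800 HU) are assumptions; the patent does not fix them.

```python
# CT windowing sketch: clip Hounsfield values to [level - width/2, level + width/2]
# and rescale to [0, 1]. Window values are illustrative assumptions.

def apply_window(hu_values, level=400.0, width=1800.0):
    lo, hi = level - width / 2.0, level + width / 2.0
    return [(min(max(v, lo), hi) - lo) / (hi - lo) for v in hu_values]

print(apply_window([-1000.0, 400.0, 2000.0]))  # → [0.0, 0.5, 1.0]
```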
  • the 3D image processing device acquires the training 3D image, performs second preprocessing on the training 3D image, and adjusts the training 3D image to an optimal observation orientation, thereby improving the learning ability of the 3D image processing model network and accelerating convergence.
  • data enhancement includes: performing at least one of random rotation, random horizontal or vertical flip, and random cropping on the training three-dimensional image.
  • data enhancement mainly involves three steps: random rotation of the training 3D image; random horizontal or vertical flipping of the training 3D image, where horizontal flipping refers to rotation about the horizontal axis and vertical flipping refers to rotation about the vertical axis; and random cropping of the training 3D image.
  • the three-dimensional image processing model randomly rotates, flips, and crops the training three-dimensional image to expand the training data.
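The three augmentation steps can be sketched on a 2D slice as follows. A real implementation would rotate the 3D volume with interpolation; the 90-degree rotations and the 2x2 crop size are simplifications for illustration.

```python
# Augmentation sketch on one 2D slice: random rotation, random flips,
# random crop. All magnitudes are illustrative.
import random

def augment(slice2d, rng):
    # random 90-degree rotation (0-3 quarter turns)
    for _ in range(rng.randrange(4)):
        slice2d = [list(row) for row in zip(*slice2d[::-1])]
    # random horizontal / vertical flip
    if rng.random() < 0.5:
        slice2d = [row[::-1] for row in slice2d]   # horizontal
    if rng.random() < 0.5:
        slice2d = slice2d[::-1]                    # vertical
    # random 2x2 crop
    y = rng.randrange(len(slice2d) - 1)
    x = rng.randrange(len(slice2d[0]) - 1)
    return [row[x:x + 2] for row in slice2d[y:y + 2]]

out = augment([[1, 2, 3], [4, 5, 6], [7, 8, 9]], random.Random(0))
print(len(out), len(out[0]))  # → 2 2
```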
  • adaptively adjusting the image size includes: performing edge filling and/or edge cropping on the training three-dimensional image.
  • the adaptive size adjustment is to make the size of the training 3D image input by the network meet the requirements of the 3D image processing device.
  • the left half of the network performs down-sampling to obtain image feature maps at different resolution levels, the right half uses the feature maps at the different levels to perform up-sampling for image restoration, and the middle is compensated by cross-layer feature connections.
  • the size of the image must meet the requirement that after each downsampling the output is exactly 1/2 of the size of the input training 3D image, so the input image needs to be adaptively adjusted; the adaptive image size adjustment can be done in two ways: edge filling or edge cropping.
  • the cropping method can crop both sides of the image at the same time or crop only one edge.
  • the three-dimensional image processing device adaptively adjusts the image size of the training three-dimensional image, including performing edge filling and/or edge cropping on the training three-dimensional image, so that the size of the input training three-dimensional image data meets the requirements of the three-dimensional image processing device.
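The requirement that each downsampling halve the size exactly amounts to making each spatial extent a multiple of 2^n for n downsampling layers. A sketch of symmetric edge filling toward that target (cropping would round down instead); the symmetric split is a design choice, not something the patent specifies:

```python
# Pad a 1D extent so that n_down halvings produce integer sizes, i.e. the
# extent becomes a multiple of 2**n_down. Pads symmetrically (extra voxel on
# the trailing side when the amount is odd).

def pad_to_multiple(length, n_down):
    m = 2 ** n_down
    target = ((length + m - 1) // m) * m   # round up to next multiple
    extra = target - length
    return extra // 2, extra - extra // 2  # (pad_before, pad_after)

print(pad_to_multiple(37, 3))  # → (1, 2): 37 + 3 = 40, divisible by 8
```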
  • the last step of preprocessing is volume data orientation adjustment.
  • the slice direction of the 3D image has changed from the transverse plane to the coronal plane (or sagittal plane).
  • the purpose is to enable the knee joint to be better globally observed.
  • each slice along the direction of the slice plane can cover as many different types of bones as possible, which is conducive to the learning of the network and accelerates the convergence of model training.
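The orientation adjustment can be sketched as an axis transposition of the volume. The [z][y][x] index convention below is an assumption; the patent only states that the slice direction changes from the transverse to the coronal (or sagittal) plane.

```python
# Reslicing sketch: a volume stored as [z][y][x] (axial slices) viewed as
# coronal slices by transposing to [y][z][x].

def axial_to_coronal(vol):
    depth, height, width = len(vol), len(vol[0]), len(vol[0][0])
    return [[[vol[z][y][x] for x in range(width)]
             for z in range(depth)]
            for y in range(height)]

vol = [[[1, 2]], [[3, 4]]]      # 2 axial slices, each 1x2
print(axial_to_coronal(vol))    # → [[[1, 2], [3, 4]]]: 1 coronal slice, 2x2
```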
  • a three-dimensional image processing device including: an acquisition unit, a processing unit, and a segmentation unit, wherein:
  • an acquisition unit configured to acquire a three-dimensional image to be processed
  • the processing unit is used to process the three-dimensional image to be processed through the pre-trained three-dimensional image processing model, so as to obtain the probability that each voxel in the three-dimensional image to be processed belongs to the target object;
  • the segmentation unit is configured to segment the three-dimensional image to be processed according to the probability to obtain each target object.
  • the three-dimensional image processing device further includes:
  • the obtaining unit is used to obtain the target object corresponding to the maximum probability of each voxel as the segmentation result of the voxel;
  • the target object unit is used to obtain the target object according to the segmentation result of the voxels.
  • before the pre-trained 3D image processing model is used to process the 3D image to be processed to obtain the probability that each voxel in the 3D image to be processed belongs to the target object, the 3D image processing device further includes:
  • the first pre-processing unit is configured to perform first pre-processing on the 3D image to be processed, wherein the first pre-processing includes at least one of setting window width and level, resampling, data normalization, and adaptive image size adjustment.
  • the 3D image processing device further includes:
  • the three-dimensional segmentation mask unit is used to process the three-dimensional image to be processed according to the probability to obtain a three-dimensional segmentation mask
  • the post-processing unit is configured to perform post-processing on the three-dimensional segmentation mask map to obtain each target object.
  • the three-dimensional image processing device includes:
  • a post-processing unit configured to perform at least one of morphological operations, resampling and smoothing on the three-dimensional segmentation mask; wherein the morphological operations include connected domain marking and/or hole filling.
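For the connected-domain marking mentioned above, a common post-processing step is to keep only the largest connected component of the mask, dropping small spurious islands. The 2D, 4-connected sketch below is an illustrative simplification of the 3D case.

```python
# Post-processing sketch: label 4-connected components of a binary 2D mask
# with BFS and keep only the largest one.
from collections import deque

def largest_component(mask):
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                comp, q = [], deque([(sy, sx)])
                seen[sy][sx] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    out = [[0] * w for _ in range(h)]
    for y, x in best:
        out[y][x] = 1
    return out

mask = [[1, 1, 0], [0, 0, 0], [0, 0, 1]]
print(largest_component(mask))  # → [[1, 1, 0], [0, 0, 0], [0, 0, 0]]
```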
  • the 3D image processing device further includes:
  • a network processing unit configured to perform at least one layer of network processing on multiple adjacent slice images in the three-dimensional image to be processed, wherein the network processing includes: performing neural-network-based layer processing on the three-dimensional data represented by the multiple slice images to extract image features covering the three-dimensional image region described by the multiple slice images; wherein the image features are used to identify the probability that their corresponding voxels belong to at least one target object.
  • the 3D image processing device further includes:
  • the image processing channel unit is used to identify the image processing channel corresponding to the probability of at least one target object; wherein each image processing channel is used to calculate the probability that each voxel in the three-dimensional image to be processed belongs to the corresponding target object.
  • the 3D image processing device further includes:
  • the slice image acquisition unit is used to acquire slice image sequences obtained by a CT medical imaging device capturing bones.
  • the three-dimensional image processing device includes:
  • an acquisition unit configured to acquire training data, wherein the training data includes training three-dimensional images and labels corresponding to the training three-dimensional images;
  • the output unit is used to input the training three-dimensional image into the three-dimensional image processing model to be trained, so as to output the segmentation probability map corresponding to the training three-dimensional image;
  • the segmentation probability map processing unit is configured to process the segmentation probability map to meet the preset training cut-off condition.
  • the 3D image processing device further includes:
  • the analysis unit is used to calculate the deviation information of the segmentation probability map by using the corresponding label; wherein the deviation information is used to evaluate the prediction accuracy of the three-dimensional image processing model to be trained;
  • the deviation information iteration unit is configured to use the deviation information to iteratively train the 3D image processing model until the obtained deviation information meets the preset training cut-off condition.
  • the 3D image processing device further includes:
  • the deviation information unit is used to calculate the loss function of the segmentation probability map by using the corresponding label, so as to obtain the deviation information.
  • the 3D image processing device further includes:
  • a feature extraction unit is used to input the training three-dimensional image into the feature extraction layer, and perform feature extraction to obtain an initial training feature image;
  • the downsampling layer unit is used for downsampling the initial training feature image successively through the downsampling layer;
  • the reverse residual calculation unit is used to perform reverse residual calculation on the down-sampled initial training feature image through the residual convolution block to obtain the training feature image;
  • the upsampling layer unit is used to sequentially upsample the training feature image through the upsampling layer to obtain a training segmentation probability map.
  • the training three-dimensional image is input into the feature extraction layer, and before performing feature extraction to obtain the initial training feature image, the three-dimensional image processing device further includes:
  • the second pre-processing unit is used to perform second preprocessing on the training three-dimensional image; wherein the second preprocessing includes at least one of setting window width and level, resampling, data enhancement, data normalization, and adaptive adjustment of image size.
  • the 3D image processing device further includes:
  • the data enhancement unit is used to perform at least one of random rotation, random horizontal flip, and random cropping on the training three-dimensional image.
  • the 3D image processing device further includes:
  • the image size adjustment unit is adapted to perform edge filling and/or edge cutting on the training 3D image.
  • Each module in the above-mentioned three-dimensional image processing device may be fully or partially realized by software, hardware or a combination thereof.
  • the above-mentioned modules can be embedded in or independent of the processor in the computer device in the form of hardware, and can also be stored in the memory of the computer device in the form of software, so that the processor can invoke and execute the corresponding operations of the above-mentioned modules.
  • the division of modules in the embodiment of the present application is schematic, and is only a logical function division, and there may be other division methods in actual implementation.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure may be as shown in FIG. 18 .
  • the computer device includes a processor, a memory, a network interface and a database connected by a system bus, wherein the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, computer programs and databases.
  • the internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium.
  • the database of the computer device is used to store periodic task assignment data, such as configuration files, theoretical operating parameters and theoretical deviation ranges, task attribute information, and the like.
  • the network interface of the computer device is used to communicate with an external terminal via a network connection. When the computer program is executed by the processor, a three-dimensional image processing method is realized.
  • Figure 18 is only a block diagram of a partial structure related to the solution of this application, and does not constitute a limitation on the computer equipment to which the solution of this application is applied.
  • the specific computer equipment may include more or fewer components than shown in the figures, or combine certain components, or have a different arrangement of components.
  • a computer device including a memory and a processor, the memory stores a computer program, and the processor implements the following steps when executing the computer program:
  • the three-dimensional image to be processed is segmented according to the probability to obtain each target object.
  • the pre-trained three-dimensional image processing model is used to process the three-dimensional image to be processed, so as to further realize:
  • the target object is obtained according to the segmentation result of the voxel.
  • the pre-trained 3D image processing model is used to process the 3D image to be processed to obtain the probability that each voxel in the 3D image to be processed belongs to the target object, and the processor executes the computer program to further implement:
  • performing first preprocessing on the 3D image to be processed, wherein the first preprocessing includes at least one of setting window width and level, resampling, data normalization, and adaptive image size adjustment.
  • the processor executes the computer program to further realize:
  • the 3D segmentation mask is post-processed to obtain each target object.
  • the processor executes the computer program to further realize:
  • At least one of morphological operation, resampling and smoothing is performed on the segmentation mask map; wherein the morphological operation includes connected domain marking and/or hole filling.
  • the network processing includes: performing neural-network-based layer processing on the three-dimensional data represented by the multiple slice images to extract the image features of the three-dimensional image region described by the multiple slice images; wherein the image features are used to identify the probability that their corresponding voxels belong to at least one target object.
  • the pre-trained three-dimensional image processing model includes image processing channels for identifying the probability of at least one target object; wherein each image processing channel is used to calculate the probability that each voxel in the three-dimensional image to be processed belongs to the corresponding target object.
  • the processor executes the computer program, it is realized that the three-dimensional image to be processed includes a slice image sequence obtained based on a CT medical imaging device capturing bones.
  • when the processor executes the computer program, it further implements a training method for a three-dimensional image processing model, including:
  • the training data includes a training three-dimensional image and a label corresponding to the training three-dimensional image, and the label represents the attribute relationship between each voxel in the training three-dimensional image and the target object;
  • the segmentation probability map is processed to meet the preset training cut-off conditions.
  • the deviation information is used to evaluate the prediction accuracy of the three-dimensional image processing model to be trained;
  • the three-dimensional image processing model is iteratively trained by using the deviation information until the obtained deviation information meets the preset training cut-off condition.
  • when the processor executes the computer program, it further implements: calculating a loss function of the segmentation probability map by using corresponding labels to obtain deviation information.
  • the initial training feature image is sequentially down-sampled through the down-sampling layer;
  • the training feature image is sequentially up-sampled through the up-sampling layer to obtain the training segmentation probability map.
  • the training three-dimensional image is input into the feature extraction layer, and before the feature extraction is performed to obtain the initial training feature image, the processor executes the computer program to further realize:
  • performing second preprocessing on the training 3D image, wherein the second preprocessing includes at least one of setting window width and level, resampling, data enhancement, data normalization, and adaptive image size adjustment.
  • At least one of random rotation, random horizontal flip and random cropping is performed on the training 3D image.
  • a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the following steps are implemented:
  • the three-dimensional image to be processed is segmented according to the probability to obtain each target object.
  • the target object is obtained according to the segmentation result of the voxel.
  • the pre-trained 3D image processing model is used to process the 3D image to be processed to obtain the probability that each voxel in the 3D image to be processed belongs to the target object, and the computer program is further implemented when the computer program is executed by the processor:
  • performing first preprocessing on the 3D image to be processed, wherein the first preprocessing includes at least one of setting window width and level, resampling, data normalization, and adaptive image size adjustment.
  • the 3D segmentation mask is post-processed to obtain each target object.
  • morphological operations include connected domain marking and/or hole filling.
  • the network processing includes: performing neural-network-based layer processing on the three-dimensional data represented by the multiple slice images to extract the image features of the three-dimensional image region described by the multiple slice images; wherein the image features are used to identify the probability that their corresponding voxels belong to at least one target object.
  • when the computer program is executed by the processor, it is further implemented that the pre-trained three-dimensional image processing model includes image processing channels for identifying the probability of at least one target object; wherein each image processing channel is used to calculate the probability that each voxel in the three-dimensional image to be processed belongs to the corresponding target object.
  • the three-dimensional image to be processed includes a sequence of sliced images obtained based on CT medical imaging equipment capturing bones.
  • a training method for a three-dimensional image processing model is further implemented, including:
  • the training data includes a training three-dimensional image and a label corresponding to the training three-dimensional image, and the label represents the attribute relationship between each voxel in the training three-dimensional image and the target object;
  • the segmentation probability map is processed to meet the preset training cut-off conditions.
  • the three-dimensional image processing model is iteratively trained by using the deviation information until the obtained deviation information meets the preset training cut-off condition.
  • when the computer program is executed by the processor, it further implements: calculating a loss function of the segmentation probability map by using corresponding labels to obtain deviation information.
  • the initial training feature image is sequentially down-sampled through the down-sampling layer;
  • the training feature image is sequentially up-sampled through the up-sampling layer to obtain the training segmentation probability map.
  • the training three-dimensional image is input into the feature extraction layer, and before the feature extraction is performed to obtain the initial training feature image, the computer program is further implemented when executed by the processor:
  • performing second preprocessing on the training 3D image, wherein the second preprocessing includes at least one of setting window width and level, resampling, data enhancement, data normalization, and adaptive image size adjustment.
  • At least one of random rotation, random horizontal flip and random cropping is performed on the training 3D image.
  • Nonvolatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.


Abstract

A three-dimensional image processing method, apparatus, computer device and storage medium, relating to the field of medical technology. The three-dimensional image processing method includes: acquiring a three-dimensional image to be processed; processing the three-dimensional image to be processed by a pre-trained three-dimensional image processing model to obtain the probability that each voxel in the three-dimensional image to be processed belongs to a target object; and segmenting the three-dimensional image to be processed according to the probability to obtain each target object. The method enables automated segmentation of articular bone structures by segmenting the three-dimensional image to be processed directly, improving segmentation accuracy while improving work efficiency; it is applicable to surgical replacement robots for the knee joint or hip joint, improving the degree of automation and intelligence.

Description

Three-dimensional image processing method, apparatus, computer device and storage medium
Related Application
This application claims priority to the Chinese patent application No. 202111228710.6, filed on October 21, 2021 and entitled "Three-dimensional image processing method, apparatus, computer device and storage medium", the entire content of which is incorporated herein by reference.
Technical Field
This application relates to the field of medical technology, and in particular to a three-dimensional image processing method, apparatus, computer device and storage medium.
Background
Before knee bone surgery, a medical imaging device is used to acquire images of the patient that contain bones such as joints (e.g. CT images), and a computer reconstructs a three-dimensional model of the patient's leg bones from the CT image sequence. At present, in order to extract bone contours from the CT image sequence, algorithms such as 2D networks (U-Net), SVM, and graph-cut-based image segmentation methods are used to segment the two-dimensional CT images to obtain image sub-sequences of the bones, from which the three-dimensional model of the corresponding bone is then reconstructed.
However, in the process of obtaining the three-dimensional bone model, every two-dimensional image must be segmented, and after the individual images are segmented it is difficult to reconstruct the three-dimensional bone model automatically. This is because the images in the CT sequence lack spatial information relative to one another, so the segmentation accuracy and generalization are not uniform across images. As a result, image segmentation is slow and affected by the number of CT images, and manual interaction may also be needed to trim the bone images obtained from each segmentation.
Therefore, when the contours of key bones such as joint regions are extracted from two-dimensional CT images according to the prior art and a three-dimensional reconstruction is then performed, the reconstructed joint model is prone to defects such as missing bone or abnormal, discontinuous protrusions. This is related to the fact that the segmented CT images do not take spatial information into account, as well as to the algorithms used. How to build a model with both high segmentation efficiency and high segmentation accuracy is a problem to be urgently solved in the industry.
Summary
On this basis, in view of the above technical problems, it is necessary to provide a three-dimensional image processing method, apparatus, computer device and storage medium.
An embodiment of this application provides a three-dimensional image processing method, the method including:
acquiring a three-dimensional image to be processed;
processing the three-dimensional image to be processed by a pre-trained three-dimensional image processing model to obtain the probability that each voxel in the three-dimensional image to be processed belongs to at least one target object;
segmenting the three-dimensional image to be processed according to the probability to obtain each target object.
In one embodiment, processing the three-dimensional image to be processed by the pre-trained three-dimensional image processing model includes:
taking the target object corresponding to the maximum probability of each voxel as the segmentation result of that voxel;
obtaining the target object according to the segmentation result of the voxels.
In one embodiment, processing the three-dimensional image to be processed by the pre-trained three-dimensional image processing model further includes:
under the target categories of different channels, taking the target object corresponding to the maximum probability of each voxel as the segmentation result of that voxel under that target category.
In one embodiment, processing the three-dimensional image to be processed by the pre-trained three-dimensional image processing model includes:
performing first preprocessing on the three-dimensional image to be processed so as to provide input data to the three-dimensional image processing model; wherein the first preprocessing includes at least one of setting window width and window level, resampling, data normalization, and adaptive image size adjustment.
In one embodiment, segmenting the three-dimensional image to be processed according to the probability to obtain each target object includes:
processing the three-dimensional image to be processed according to the probability to obtain a three-dimensional segmentation mask map;
post-processing the three-dimensional segmentation mask map to obtain each target object.
In one embodiment, post-processing the three-dimensional segmentation mask map to obtain each target object includes:
performing at least one of a morphological operation, resampling, and smoothing on the segmentation mask map to obtain each target object; wherein the morphological operation includes connected-domain labeling and/or hole filling.
In one embodiment, processing the three-dimensional image to be processed by the pre-trained three-dimensional image processing model includes:
performing at least one layer of network processing on multiple adjacent slice images in the three-dimensional image to be processed, wherein the network processing includes: performing neural-network-based layer processing on the three-dimensional data represented by the multiple slice images to extract image features covering the three-dimensional image region described by the multiple slice images; the image features are used to identify the probability that their corresponding voxels belong to at least one target object.
In one embodiment, the pre-trained three-dimensional image processing model includes image processing channels for identifying the probability of at least one target object; wherein each image processing channel is used to calculate the probability that each voxel in the three-dimensional image to be processed belongs to the corresponding target object.
In one embodiment, the three-dimensional image to be processed includes a slice image sequence obtained by a CT medical imaging device capturing bones.
In one embodiment, a training method for a three-dimensional image processing model is also provided, including:
acquiring training data, the training data including a training three-dimensional image and a label corresponding to the training three-dimensional image; the label represents the attribute relationship between each voxel in the training three-dimensional image and a target object;
inputting the training three-dimensional image into the three-dimensional image processing model to be trained to output a segmentation probability map corresponding to the training three-dimensional image;
processing the segmentation probability map until the preset training cut-off condition is met.
In one embodiment, processing the segmentation probability map until the preset training cut-off condition is met includes:
calculating deviation information of the segmentation probability map using the corresponding label; wherein the deviation information is used to evaluate the prediction accuracy of the three-dimensional image processing model to be trained;
iteratively training the three-dimensional image processing model using the deviation information until the obtained deviation information meets the preset training cut-off condition.
In one embodiment, calculating the deviation information of the segmentation probability map using the corresponding label includes: calculating a loss function of the segmentation probability map using the corresponding label to obtain the deviation information.
In one embodiment, inputting the training three-dimensional image into the three-dimensional image processing model to be trained to output a segmentation probability map corresponding to the training three-dimensional image includes:
inputting the training three-dimensional image into a feature extraction layer and performing feature extraction to obtain an initial training feature image;
sequentially down-sampling the initial training feature image through the down-sampling layers;
performing inverse residual calculation on the down-sampled initial training feature image through a residual convolution block to obtain a training feature image;
sequentially up-sampling the training feature image through the up-sampling layers to obtain a training segmentation probability map.
In one embodiment, before inputting the training three-dimensional image into the feature extraction layer and performing feature extraction to obtain the initial training feature image, the training method further includes:
performing second preprocessing on the training three-dimensional image, wherein the second preprocessing includes at least one of setting window width and window level, resampling, data augmentation, data normalization, and adaptive image size adjustment.
In one embodiment, the data augmentation includes:
performing at least one of random rotation, random horizontal or vertical flipping, and random cropping on the training three-dimensional image.
In one embodiment, the adaptive image size adjustment includes:
performing edge filling and/or edge cropping on the training three-dimensional image.
In one embodiment, a three-dimensional image processing apparatus is also provided, the apparatus including:
an acquisition unit configured to acquire a three-dimensional image to be processed;
a processing unit configured to process the three-dimensional image to be processed by a pre-trained three-dimensional image processing model to obtain the probability that each voxel in the three-dimensional image to be processed belongs to a target object;
a segmentation unit configured to segment the three-dimensional image to be processed according to the probability to obtain each target object.
In one embodiment, a computer device includes a memory and a processor, the memory storing a computer program, and the processor implementing the steps of the method of any of the above embodiments when executing the computer program.
In one embodiment, a computer storage medium has a computer program stored thereon, and the computer program, when executed by a processor, implements the steps of the method of any of the above embodiments.
The above three-dimensional image processing method, apparatus, computer device and storage medium acquire a three-dimensional image to be processed; process the three-dimensional image to be processed by a pre-trained three-dimensional image processing model to obtain the probability that each voxel in the image belongs to a target object; and segment the image according to the probability to obtain each target object. This application realizes automated segmentation of articular bone structures by segmenting the three-dimensional image to be processed directly, improving segmentation accuracy while improving work efficiency; it is applicable to knee or hip joint surgical replacement robots and improves the degree of automation and intelligence.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of this application or in the conventional technology more clearly, the drawings required in the description of the embodiments or the conventional technology are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic diagram of the application of a three-dimensional image processing apparatus in an embodiment of this application.
Fig. 2 is a schematic flowchart of a three-dimensional image processing method in an embodiment of this application.
Fig. 3 is a schematic flowchart of processing the three-dimensional image to be processed in the three-dimensional image processing method in an embodiment of this application.
Fig. 4 is a schematic diagram of window width and window level adjustment in the three-dimensional image processing method in an embodiment of this application.
Fig. 5 is a schematic diagram of data resampling in the three-dimensional image processing method in an embodiment of this application.
Fig. 6 is a schematic flowchart of image segmentation in the three-dimensional image processing method in an embodiment of this application.
Fig. 7 is a schematic diagram of the segmented joint region in the three-dimensional image processing method in an embodiment of this application.
Fig. 8 is a schematic diagram of post-processing in the three-dimensional image processing method in an embodiment of this application.
Fig. 9 is a schematic flowchart of model training in the three-dimensional image processing method in an embodiment of this application.
Fig. 10 is a schematic diagram of the model training network in the three-dimensional image processing method in an embodiment of this application.
Fig. 11 is a schematic diagram of the model training steps in the three-dimensional image processing method in an embodiment of this application.
Fig. 12 is a schematic diagram of the loss function in the three-dimensional image processing method in an embodiment of this application.
Fig. 13 is a schematic diagram of the second image preprocessing in the three-dimensional image processing method in an embodiment of this application.
Fig. 14 is a schematic diagram of data augmentation in the three-dimensional image processing method in an embodiment of this application.
Fig. 15 is a schematic diagram of adaptive size adjustment in the three-dimensional image processing method in an embodiment of this application.
Fig. 16 is a schematic diagram of data orientation adjustment in the three-dimensional image processing method in an embodiment of this application.
Fig. 17 is a schematic structural diagram of a three-dimensional image processing apparatus in an embodiment of this application.
Fig. 18 is an internal structure diagram of a computer device in an embodiment of this application.
具体实施方式
为了便于理解本申请,下面将参照相关附图对本申请进行更全面的描述。附图中给出了本申请的实施例。但是,本申请可以以许多不同的形式来实现,并不限于本文所描述的实施例。相反地,提供这些实施例的目的是使本申请的公开内容更加透彻全面。
除非另有定义,本文所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同。本文中在本申请的说明书中所使用的术语只是为了描述具体的实施例的目的,不是旨在于限制本申请。本文所使用的术语“及/或”包括一个或多个相关的所列项目的任意的和所有的组合。
可以理解,本申请所使用的术语“第一”、“第二”等可在本文中用于描述各种元件,但这些元件不受这些术语限制。这些术语仅用于将第一个元件与另一个元件区分。
如图1所示,本申请所提供的一种三维图像处理装置的应用示意图,其中,三维图像处理装置的图像特征提取层获取待处理三维图像;通过预先训练的三维图像处理模型对待处理三维图像进行处理,以得到待处理三维图像中每个体素属于目标对象的概率;根据概率对待处理三维图像进行分割以得到各个目标对象。本申请实现关节骨结构自动化分割,直接对待处理三维图像进行分割,提高分割精度的同时提高工作效率,适用于膝关节或者髋关节的手术置换机器人,提高自动化、智能化程度。
在一个实施例中,如图2所示,提供了一种三维图像处理方法,以该方法应用于如图1所示的三维图像处理装置为例进行说明,包括以下步骤:
S202:获取待处理三维图像。
具体地,三维图像处理装置获取待处理三维图像,即进行待处理三维图像的数据收集,这里获取的可以是膝关节CT三维图像,也可以是髋关节CT三维图像。
S204:通过预先训练的三维图像处理模型对待处理三维图像进行处理,以得到待处理三维图像中每个体素属于目标对象的概率。
其中,体素(voxel)是体积元素(volume pixel)的简称,是指数字数据在三维空间分割上的最小单位,体素用于三维成像、科学数据与医学影像等领域。
具体地,三维图像处理装置获取待处理三维图像并收集待处理三维图像的数据,将待处理三维图像的数据输入至预先训练的三维图像处理模型,由三维图像处理装置的预处理单元对待处理三维图像进行处理,具体对待处理三维图像进行网络参数前向计算,得到待处理三维图像中每个体素属于目标对象的概率,并根据每个体素属于目标对象的概率生成概率图。
S206:根据概率对待处理三维图像进行分割以得到各个目标对象。
具体地,三维图像处理装置根据概率,对待处理三维图像进行分割以得到各个目标对象。具体为三维图像处理装置向预先训练的三维图像处理模型输入相同图像尺寸的多通道的分割概率图,每个通道表示定义的每个目标类别,找到与每个图像体素对应的概率最大的类别标签即得到粗分割掩模图,也即得到各个目标对象。
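上述“为每个图像体素找到概率最大的类别标签,得到粗分割掩模图”的过程,可以用如下示意代码表示(仅为基于NumPy的最小示例,其中通道数、图像尺寸与概率取值均为假设,并非本申请限定的实现):

```python
import numpy as np

# 假设模型输出形状为 (C, D, H, W):C 个通道对应 C 个目标类别(此处 C=4 仅为示意)
np.random.seed(0)
prob = np.random.rand(4, 2, 3, 3)
prob /= prob.sum(axis=0, keepdims=True)   # 归一化,使每个体素在各通道上的概率和为 1

# 对每个体素取概率最大的通道索引(类别标签),即得到粗分割掩模图
mask = np.argmax(prob, axis=0)            # 形状 (D, H, W),取值为 0..C-1
```

每个通道表示定义的一个目标类别,沿通道维取最大概率的通道索引即为该体素的分割结果。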
在该实施例中,三维图像处理方法通过获取待处理三维图像;通过预先训练的三维图像处理模型对待处理三维图像进行处理,以得到待处理三维图像中每个体素属于目标对象的概率;根据概率对待处理三维图像进行分割以得到各个目标对象。本申请实现关节骨结构自动化分割,直接对待处理三维图像进行分割,提高分割精度的同时提高工作效率,适用于膝关节或者髋关节的手术置换机器人,提高自动化、智能化程度。
如图3所示,在本申请一个实施例中,通过预先训练的三维图像处理模型对待处理三维图像进行处理,以得到待处理三维图像中每个体素属于目标对象的概率,包括:
S302:获取每一体素的最大的概率所对应的目标对象作为体素的分割结果。
具体地,三维图像处理模型对待处理三维图像进行处理,以得到待处理三维图像中每个体素属于目标对象的概率。三维图像处理模型再获取每一体素,将每一体素的最大的概率所对应的目标对象作为体素的分割结果。
S304:根据体素的分割结果得到目标对象。
具体地,三维图像处理模型再获取每一体素,将每一体素的最大的概率所对应的目标对象作为体素的分割结果,根据体素的分割结果得到目标对象。将三维分割掩模图中最大概率对应的关节区域设置为前景(其体素值为1),其他区域设置为背景。
在本实施例中,通过获取每一体素的最大的概率所对应的目标对象作为体素的分割结果,根据体素的分割结果得到目标对象。每一体素的最大的概率所对应的目标对象作为体素的分割结果,实现了关节骨的精准分割和重建,同时提高分割精度。
在一个实施例中,通过预先训练的三维图像处理模型对所述待处理三维图像进行处理,还包括:在不同通道的目标类别下,获取每个体素的最大的概率所对应的目标对象作为该目标类别下的体素的分割结果。
具体地,三维图像处理装置对待处理三维图像进行预处理后,将其输入至预先训练的三维图像处理模型进行前向计算,得到输入相同三维图像尺寸的多通道的分割概率图,每一个通道表示定义的每一个目标类别;然后在不同通道的目标类别下,找到每一个图像体素对应的概率最大的类别标签(通道索引)以得到分割结果。
在一个实施例中,通过预先训练的三维图像处理模型对待处理三维图像进行处理,以得到待处理三维图像中每个体素属于目标对象的概率之前,该方法还包括:对待处理三维图像进行第一预处理。第一预处理包括设置窗宽窗位、重采样、数据归一化以及自适应调节图像尺寸中的至少一项。
具体地,三维图像处理装置获取待处理三维图像,在对待处理三维图像进行处理之前,对待处理三维图像进行第一预处理,其中第一预处理包括设置窗宽窗位、重采样、数据归一化以及自适应调节图像尺寸。
如图4所示,其中,设置窗宽窗位是对输入到三维图像处理模型中的待处理三维图像设置特定的窗宽窗位,以此压缩待处理三维图像的HU值范围,实现对待处理三维图像的滤波,有利于三维图像处理模型的处理。对待处理三维图像设置特定的窗宽窗位后,对待处理三维图像进行重采样,以统一不同待处理三维图像数据的分辨率。接下来需要对待处理三维图像进行数据归一化,具体方式在此不限,数据归一化的作用在于统一数据的分布,加速网络收敛。最后一步就是自适应调节待处理三维图像的尺寸,目的是为了符合分割网络对输入图像尺寸的要求,这里的自适应调节包括边缘裁剪和填充两种方式。
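上述设置窗宽窗位压缩HU值范围、以及数据归一化统一数据分布的步骤,可以概括为如下示意代码(示例中的窗位、窗宽参数仅为假设取值,并非本申请限定的参数):

```python
import numpy as np

def apply_window(hu, center, width):
    """设置窗宽窗位:将 HU 值裁剪到 [center-width/2, center+width/2],实现对图像的滤波。"""
    lo, hi = center - width / 2, center + width / 2
    return np.clip(hu, lo, hi)

def normalize(img):
    """数据归一化:零均值、单位方差,用于统一数据分布、加速网络收敛。"""
    return (img - img.mean()) / (img.std() + 1e-8)

hu = np.array([-1000.0, 0.0, 300.0, 2000.0])         # 示例 HU 值(空气、水、骨等)
windowed = apply_window(hu, center=300, width=1500)   # 窗位 300 / 窗宽 1500 仅为示例假设
normalized = normalize(windowed)
```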
具体地,如图5所示,待处理三维图像的数据重采样,以二维图像为例,图中每个黑点,代表的是一个体素。重采样过程是通过插值实现,不改变物理尺寸,但是可以改变图像的分辨率。
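重采样通过插值实现、不改变物理尺寸但改变图像分辨率,这一点可以用如下最近邻插值的示意实现说明(仅为示意,实际实现也可采用线性等其他插值方式):

```python
import numpy as np

def resample_nn(vol, spacing, new_spacing):
    """最近邻插值重采样:物理尺寸不变,仅改变体素分辨率(示意实现)。"""
    vol = np.asarray(vol)
    # 新的体素数 = 原体素数 * 原体素间距 / 目标体素间距
    new_shape = np.round(np.array(vol.shape) * np.array(spacing)
                         / np.array(new_spacing)).astype(int)
    idx = [np.minimum(np.arange(n) * vol.shape[d] // n, vol.shape[d] - 1)
           for d, n in enumerate(new_shape)]
    return vol[np.ix_(*idx)]

vol = np.arange(8.0).reshape(2, 2, 2)                          # 2x2x2 示例体数据
out = resample_nn(vol, spacing=(2.0, 2.0, 2.0), new_spacing=(1.0, 1.0, 1.0))
```

体素间距从 2.0 重采样到 1.0 后,各维体素数加倍,但体数据覆盖的物理范围不变。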
在该实施例中,三维图像处理装置获取待处理三维图像,在对待处理三维图像进行处理之前,对待处理三维图像进行第一预处理,其中第一预处理包括设置窗宽窗位、重采样、数据归一化以及自适应调节图像尺寸。通过对待处理三维图像进行第一预处理有利于三维图像处理装置提高数据处理速度,同时加速三维图像处理装置的收敛。
如图6所示,在本申请一个实施例中,根据概率对待处理三维图像进行分割以得到各个目标对象包括:
S402:根据概率对待处理三维图像进行处理以得到三维分割掩模。
具体地,三维图像处理模型对待处理三维图像进行处理,以得到待处理三维图像中每个体素属于目标对象的概率。三维图像处理模型再获取每一体素,将每一体素的最大的概率所对应的目标对象作为体素的分割结果,根据该分割结果对待处理三维图像进行处理以得到三维分割掩模图。该分割结果即为粗分割掩模图,粗三维分割掩模图经过后处理得到细三维分割掩模图。
S404:对三维分割掩模图进行后处理以得到各个目标对象。
具体地,三维图像处理模型根据概率对待处理三维图像进行处理以得到三维分割掩模图,该分割结果即为粗三维分割掩模图;对粗三维分割掩模图进行后处理,所得到的细三维分割掩模图即为各个目标对象。
在该实施例中,三维图像处理模型根据概率对待处理三维图像进行处理以得到三维分割掩模图,对三维分割掩模进行后处理以得到各个目标对象。由此实现了直接对待处理三维图像进行分割,同时提高分割精度和时间性能,可应用于膝关节或者髋关节的手术置换机器人,提高自动化、智能化程度,提升工作效率。
如图7和图8所示,在本申请一个实施例中,对三维分割掩模图进行后处理以得到各个目标对象,包括:对分割掩模图进行形态学操作、重采样以及平滑处理中的至少一种;形态学操作包括连通域标记和/或填洞。
具体地,三维图像处理模型对三维分割掩模图进行后处理以得到各个目标对象,具体后处理包括:首先,对三维分割掩模图做一些二值化图像的形态学操作,具体对每个三维分割掩模图的分割类别进行连通域标记,并保留最大的一个连通域;然后对三维分割掩模图进行填洞,修复一些因分割不全而导致的空洞现象;再然后进行重采样,恢复原始CT图像的分辨率;最后对三维分割掩模图进行平滑处理,以消除重采样或其他原因可能带来的矢状面或者冠状面的锯齿效果,优化后便可以进行分割以得到目标对象。
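上述后处理中“连通域标记并保留最大连通域”这一步骤,可以用如下基于广度优先搜索的示意实现表达(采用6邻域连通,仅为示意,实际实现可直接使用现成的图像处理库):

```python
import numpy as np
from collections import deque

def largest_component(mask):
    """保留三维二值掩模中体素数最多的 6 邻域连通域(BFS 示意实现)。"""
    mask = np.asarray(mask, bool)
    visited = np.zeros_like(mask)
    best = np.zeros_like(mask)
    best_size = 0
    offs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for seed in zip(*np.nonzero(mask)):
        if visited[seed]:
            continue
        comp = [seed]; visited[seed] = True; q = deque([seed])
        while q:                                   # 广度优先遍历当前连通域
            z, y, x = q.popleft()
            for dz, dy, dx in offs:
                p = (z + dz, y + dy, x + dx)
                if all(0 <= p[i] < mask.shape[i] for i in range(3)) \
                        and mask[p] and not visited[p]:
                    visited[p] = True; comp.append(p); q.append(p)
        if len(comp) > best_size:                  # 只保留最大的一个连通域
            best_size = len(comp)
            best[:] = False
            for p in comp:
                best[p] = True
    return best

m = np.zeros((1, 5, 5), bool)
m[0, 0:1, 0:2] = True     # 小连通域(2 个体素,噪声)
m[0, 2:5, 2:5] = True     # 大连通域(9 个体素,目标)
keep = largest_component(m)
```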
在本实施例中,三维图像处理模型对三维分割掩模图进行后处理以得到各个目标对象,具体对分割掩模图进行形态学操作、重采样以及平滑处理中的至少一种;形态学操作包括连通域标记和/或填洞。通过对三维分割掩模图进行后处理的优化流程,使其具有更高的分割精度,提高系统工作效率。
在本申请一个实施例中,通过预先训练的三维图像处理模型对待处理三维图像进行处理包括:将待处理三维图像中相邻的多个切片图像进行至少一层网络处理;其中,网络处理包括:对多个切片图像所表示的三维数据进行基于神经网络的层处理,以提取包含多个切片图像所描述的三维图像区域的图像特征;其中图像特征用于标识其对应的体素属于至少一个目标对象的概率。
具体地,三维图像处理装置通过预先训练的三维图像处理模型对待处理三维图像进行处理。三维图像处理模型将待处理三维图像中相邻的多个切片图像进行至少一层网络处理;在网络处理中,对多个切片图像所表示的三维数据进行基于神经网络的层处理,以此提取包含多个切片图像所描述的三维图像区域的图像特征。提取得到的图像特征用于标识其对应的体素属于至少一个目标对象的概率。
例如,图像特征可以是经抽象后的置信度,其经过归一化处理后,得到相应的概率。又如,图像特征可以是表征目标对象的经抽象的特征值,通过对其可能性评价后,得到相应概率。
在本申请一个实施例中,预先训练的三维图像处理模型包含用于识别至少一个目标对象的概率的图像处理通道;其中,每一图像处理通道用于计算待处理三维图像中的各体素属于相应目标对象的概率。
具体地,预先训练的三维图像处理模型包含用于识别单一目标对象的概率的图像处理通道,还可以是一个可识别多个目标对象的图像处理通道。其中,每一图像处理通道用于计算待处理三维图像中的各体素属于相应目标对象的概率。
在本申请一个实施例中,待处理三维图像包含基于CT医学影像设备摄取骨骼而得到的切片图像序列。如图9、图10和图11所示,在一些实施例中,三维图像处理方法中的三维图像处理模型的训练方法包括:
S502:获取训练数据;训练数据包括训练三维图像以及训练三维图像对应的标签;标签表示训练三维图像中各体素与目标对象之间的属性关系。
具体地,三维图像处理模型获取训练数据,其中,训练数据包括训练三维图像以及训练三维图像对应的标签;标签表示训练三维图像中各体素与目标对象之间的属性关系。获取到关节CT影像数据之后,首先,按照特定比例分成训练数据和测试数据;训练数据需要由具有相关资质的医生或者医护人员对目标区域(关节骨骼)进行人工标注,得到每个三维图像的骨骼掩模图。然后,对标注过的训练数据再次按照特定比例进行划分得到训练集和验证集;其中每一组数据包括CT图像和标注的三维分割掩模图。
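上述“先按特定比例分成训练数据与测试数据,再对训练数据划分训练集与验证集”的流程,可以概括为如下示意代码(划分比例0.8、0.9与随机种子均为示例假设):

```python
import numpy as np

def split(items, ratio, seed=0):
    """按特定比例随机划分数据集(比例与随机种子均为示例假设)。"""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(items))
    cut = int(len(items) * ratio)
    return [items[i] for i in idx[:cut]], [items[i] for i in idx[cut:]]

data = list(range(100))                       # 100 组 (CT图像, 标注掩模) 的占位数据
train_val, test_set = split(data, 0.8)        # 先按比例分成训练数据和测试数据
train, val = split(train_val, 0.9, seed=1)    # 再对训练数据划分训练集与验证集
```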
S504:将训练三维图像输入待训练的三维图像处理模型,以输出对应训练三维图像的分割概率图。
具体地,三维图像处理模型获取训练数据,将训练三维图像输入特征提取层,进行特征提取得到初始训练特征图像。
S506:对分割概率图进行处理得到符合预设的训练截止条件。
如图12所示,具体地,根据训练分割概率图以及对应的标签计算损失函数,然后根据损失函数可以利用优化方法进行参数的调整,这里的优化方法可以为Adam方法,梯度下降算法等等,在此不做限定。
损失函数定义为:
L=w 1*l dice+w 2*l focal
其中,w i(i=1,2)为各项权重,在训练过程中进行动态调节;l dice和l focal为两种常用的损失函数,这里使用两者的加权求和作为总损失函数。
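上述加权损失函数可以用如下示意实现表达(仅为基于NumPy的二值分割情形示意,focal损失的gamma参数与权重w1、w2的取值均为示例假设):

```python
import numpy as np

def dice_loss(p, g, eps=1e-6):
    """l_dice = 1 - 2|P∩G| / (|P| + |G|),衡量预测概率与标注掩模的重叠程度。"""
    inter = (p * g).sum()
    return 1.0 - (2.0 * inter + eps) / (p.sum() + g.sum() + eps)

def focal_loss(p, g, gamma=2.0, eps=1e-6):
    """l_focal:对难分样本加大权重的交叉熵变体(gamma=2.0 为示例取值)。"""
    p = np.clip(p, eps, 1.0 - eps)
    return float(np.mean(-g * (1 - p) ** gamma * np.log(p)
                         - (1 - g) * p ** gamma * np.log(1 - p)))

def total_loss(p, g, w1=0.5, w2=0.5):
    """总损失 L = w1*l_dice + w2*l_focal,权重 w_i 可在训练过程中动态调节。"""
    return w1 * dice_loss(p, g) + w2 * focal_loss(p, g)

p = np.array([0.9, 0.1, 0.8, 0.2])   # 示例预测概率
g = np.array([1.0, 0.0, 1.0, 0.0])   # 示例标注(前景=1,背景=0)
```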
在本申请一个实施例中,对分割概率图进行处理得到符合预设的训练截止条件,包括:利用相应标签,计算分割概率图的偏差信息;其中,偏差信息用于评价待训练的三维图像处理模型的预测准确性。
具体地,三维图像处理模型利用相应标签,计算分割概率图的偏差信息;其中,偏差信息用于评价待训练的三维图像处理模型的预测准确性,直至三维图像处理模型对分割概率图进行处理得到符合预设的训练截止条件为止。
利用偏差信息迭代地对三维图像处理模型进行训练,直至所得到的偏差信息符合预设的训练截止条件。具体地,三维图像处理模型根据损失函数迭代以对三维图像处理模型进行训练,具体经过训练集数据的反复迭代进行参数更新,会使得损失逐渐降低。
在本申请一个实施例中,利用相应标签,计算分割概率图的偏差信息,包括:利用相应标签,计算分割概率图的损失函数,以得到偏差信息。
具体地,在三维图像处理模型利用相应标签,计算分割概率图的偏差信息过程中,计算分割概率图的损失函数,以得到偏差信息。每个迭代周期结束后利用验证集对三维图像处理模型进行验证,以得到一个平均dice系数,作为三维图像处理模型的评价系数。当dice系数达到了预设的期望值,则可以停止训练,得到预训练的三维图像处理模型。
在本申请一个实施例中,将训练三维图像输入待训练的三维图像处理模型,以输出对应训练三维图像的分割概率图,包括:
S602:将训练三维图像输入特征提取层,进行特征提取得到初始训练特征图像。
具体地,三维图像处理模型获取训练数据,将训练三维图像输入特征提取层,进行特征提取得到初始训练特征图像。
S604:通过下采样层依次对初始训练特征图像进行下采样。
具体地,三维图像处理模型获取训练数据,将训练三维图像输入特征提取层,进行特征提取得到初始训练特征图像。通过下采样层依次对初始训练特征图像进行下采样,其中下采样可以采用池化层实现,也可以通过卷积步长(stride=2)实现。
S606:通过残差卷积块对下采样后的初始训练特征图像进行反向残差计算,得到训练特征图像。
具体地,三维图像处理模型通过下采样层依次对初始训练特征图像进行下采样,再通过残差卷积块对下采样后的初始训练特征图像进行反向残差计算,得到训练特征图像。三维图像处理模型网络结构中的卷积块可以为残差卷积块,也可以替代地使用其他结构作为卷积块。
S608:通过上采样层依次对训练特征图像进行上采样,得到训练分割概率图。
具体地,三维图像处理模型通过残差卷积块对下采样后的初始训练特征图像进行反向残差计算,得到训练特征图像,然后再通过上采样层依次对训练特征图像进行上采样,得到训练分割概率图。其中,上采样层可以通过插值实现,也可以通过反卷积实现。
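上述“下采样层依次将特征图缩小、上采样层再依次将其恢复”的尺寸变化,可以用如下示意代码说明(下采样以最大池化示意stride=2的效果,上采样以最近邻插值示意,均为简化的示例实现,并非本申请限定的网络结构):

```python
import numpy as np

def downsample2(x):
    """2 倍下采样(示意 stride=2 的效果,此处用最大池化实现,要求各维为偶数)。"""
    d, h, w = x.shape
    return x.reshape(d // 2, 2, h // 2, 2, w // 2, 2).max(axis=(1, 3, 5))

def upsample2(x):
    """2 倍上采样(最近邻插值,对应通过插值或反卷积实现的上采样层)。"""
    return x.repeat(2, 0).repeat(2, 1).repeat(2, 2)

x = np.random.rand(8, 8, 8)
down = downsample2(x)     # (4, 4, 4):每次下采样输出为输入尺寸的 1/2
up = upsample2(down)      # (8, 8, 8):上采样逐级恢复原尺寸
```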
在本实施例中,三维图像处理模型将训练三维图像输入特征提取层,进行特征提取得到初始训练特征图像;再通过下采样层依次对初始训练特征图像进行下采样,通过残差卷积块对下采样后的初始训练特征图像进行反向残差计算,得到训练特征图像。再通过上采样层依次对训练特征图像进行上采样,得到训练分割概率图,根据训练分割概率图以及对应的标签计算损失函数,根据损失函数迭代以对三维图像处理模型进行训练。
在本实施例中,利用了大量训练数据对卷积神经网络进行训练,实现关节骨的精准分割和重建。通过多样且充分的数据训练,极大提高算法泛化性,使三维图像处理模型可同时实现对多种类型的骨头进行分割,应用于膝关节或者髋关节的手术置换机器人,提高自动化、智能化程度,提升工作效率。
如图13所示,在其中一个实施例中,将训练三维图像输入特征提取层,进行特征提取得到初始训练特征图像之前,还包括:对训练三维图像进行第二预处理;其中第二预处理包括设置窗宽窗位、重采样、数据增强、数据归一化以及自适应调节图像尺寸中的至少一项。
具体地,三维图像处理装置获取训练三维图像,在进行特征提取得到初始训练特征图像之前,对训练三维图像进行第二预处理;其中第二预处理包括设置窗宽窗位、重采样、数据增强、数据归一化以及自适应调节图像尺寸。
在本实施例中,三维图像处理装置获取训练三维图像,对训练三维图像进行第二预处理,使训练三维图像调整至最佳观测方位,从而提高三维图像处理模型网络学习的能力以及加速收敛。
如图14所示,在本申请一个实施例中,数据增强包括:对训练三维图像进行随机旋转、随机水平或竖直方向翻转以及随机裁剪中的至少一项。
具体地,数据增强主要包括三个步骤:对训练三维图像进行随机旋转;对训练三维图像进行随机水平或竖直方向翻转,其中水平方向翻转是指以水平方向为轴进行旋转,竖直方向翻转是指以竖直方向为轴进行旋转;对训练三维图像进行随机裁剪。
在本实施例中,三维图像处理模型对训练三维图像进行随机旋转、翻转和裁剪,扩充了训练数据。
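上述随机旋转、翻转与裁剪的组合可以概括为如下示意实现(旋转角度限定为90°的整数倍、裁剪量等参数均为示例假设):

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(img):
    """数据增强示意:随机旋转 + 随机水平/竖直方向翻转 + 随机裁剪(参数均为假设)。"""
    img = np.rot90(img, k=int(rng.integers(0, 4)), axes=(1, 2))  # 沿切片平面随机旋转
    if rng.random() < 0.5:
        img = np.flip(img, axis=1)   # 随机以水平方向为轴翻转
    if rng.random() < 0.5:
        img = np.flip(img, axis=2)   # 随机以竖直方向为轴翻转
    y, x = rng.integers(0, 3), rng.integers(0, 3)                # 随机裁剪起点
    return img[:, y:y + img.shape[1] - 2, x:x + img.shape[2] - 2]

out = augment(np.zeros((4, 8, 8)))   # 裁剪后切片平面各维减小 2 个体素
```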
如图15所示,在本申请一个实施例中,自适应调节图像尺寸包括:对训练三维图像进行边缘填充和/或边缘裁剪。
具体地,自适应尺寸调节是为了让网络输入的训练三维图像尺寸满足三维图像处理装置的需要。网络左半边进行下采样得到不同分辨率级别的图像特征图,右半边利用不同级别特征图进行上采样进行图像恢复,中间由跨层连接进行特征补偿。为了实现这样的连接结构,图像尺寸必须满足每一次下采样之后都刚好输出为输入训练三维图像尺寸的1/2,因此需要对输入图像进行自适应调节,自适应图像尺寸调节可以采用以下两种方式:
1、边缘填充
对训练三维图像不满足输入要求维度进行0值填充,使其满足要求。
2、边缘裁剪
对训练三维图像不满足输入要求维度进行裁剪,使其满足要求,裁剪方式可以是图像两边同时裁剪或者只裁剪一个边缘。
在本实施例中,三维图像处理装置对训练三维图像进行自适应调节图像尺寸,包括:对训练三维图像进行边缘填充和/或边缘裁剪,使输入的训练三维图像数据尺寸满足三维图像处理装置的需要。
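上述边缘填充与边缘裁剪两种自适应尺寸调节方式可以概括为如下示意代码(示例中以16为倍数约束,对应4次下采样的2^4,该取值仅为示例假设):

```python
import numpy as np

def pad_to_multiple(img, multiple=16):
    """边缘填充:对不满足要求的维度进行 0 值填充,使各维为 multiple 的整数倍。"""
    width = []
    for n in img.shape:
        pad = (-n) % multiple                      # 需要补齐的体素数
        width.append((pad // 2, pad - pad // 2))   # 两侧均匀填充
    return np.pad(img, width)

def crop_to_multiple(img, multiple=16):
    """边缘裁剪:将各维裁到 multiple 的整数倍(此处只裁一侧,也可两侧同时裁)。"""
    return img[tuple(slice(0, (n // multiple) * multiple) for n in img.shape)]

img = np.ones((30, 33, 47))
padded = pad_to_multiple(img)     # 各维向上补齐到 16 的整数倍
cropped = crop_to_multiple(img)   # 各维向下裁剪到 16 的整数倍
```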
如图16所示,预处理最后一个步骤为体数据方位调整,以膝关节数据为例,可以看到三维图像的切片方向从横断面变成了冠状面(或者矢状面),这样做的目的是使得膝关节可以被更好地全局观测到。图像三维卷积运算的时候,每一次沿切片平面方向的滑动可以尽量覆盖到不同类型的骨骼,有利于网络的学习,加速模型训练的收敛。
应该理解的是,虽然图2至图9的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,图2至图9的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
如图17所示,在本申请一个实施例中,提供了一种三维图像处理装置,包括:获取单元、处理单元及分割单元,其中:
获取单元,用于获取待处理三维图像;
处理单元,用于通过预先训练的三维图像处理模型对待处理三维图像进行处理,以得到待处理三维图像中每个体素属于目标对象的概率;
分割单元,用于根据概率对待处理三维图像进行分割以得到各个目标对象。
在本申请一个实施例中,三维图像处理装置还包括:
获取单元,用于获取每一体素的最大的概率所对应的目标对象作为体素的分割结果;
目标对象单元,用于根据体素的分割结果得到目标对象。
在本申请一个实施例中,通过预先训练的三维图像处理模型对待处理三维图像进行处理,以得到待处理三维图像中每个体素属于目标对象的概率之前,所述三维图像处理装置还包括:
第一预处理单元,用于对待处理三维图像进行第一预处理,其中第一预处理包括设置窗宽窗位、重采样、数据归一化以及自适应调节图像尺寸中的至少一项。
在本申请一个实施例中,所述三维图像处理装置还包括:
三维分割掩模单元,用于根据概率对待处理三维图像进行处理以得到三维分割掩模图;
后处理单元,用于对三维分割掩模图进行后处理以得到各个目标对象。
在本申请一个实施例中,所述三维图像处理装置包括:
后处理单元,用于对三维分割掩模图进行形态学操作、重采样以及平滑处理中的至少一种;其中形态学操作包括连通域标记和/或填洞。
在本申请一个实施例中,所述三维图像处理装置还包括:
网络处理单元,用于将待处理三维图像中相邻的多个切片图像进行至少一层网络处理,其中,所述网络处理包括:对所述多个切片图像所表示的三维数据进行基于神经网络的层处理,以提取包含所述多个切片图像所描述的三维图像区域的图像特征;其中所述图像特征用于标识其对应的体素属于至少一个目标对象的概率。
在本申请一个实施例中,所述三维图像处理装置还包含:
图像处理通道单元,用于识别至少一个目标对象的概率对应的图像处理通道;其中,每一所述图像处理通道用于计算待处理三维图像中的各体素属于相应目标对象的概率。
在本申请一个实施例中,所述三维图像处理装置还包含:
切片图像获取单元,用于基于CT医学影像设备摄取骨骼而得到的切片图像序列。
在本申请一个实施例中,所述三维图像处理装置包括:
获取单元,用于获取训练数据,其中所述训练数据包括训练三维图像以及训练三维图像对应的标签;
输出单元,用于将训练三维图像输入待训练的三维图像处理模型,以输出对应训练三维图像的分割概率图;
分割概率图处理单元,用于对分割概率图进行处理得到符合预设的训练截止条件。
在本申请一个实施例中,所述三维图像处理装置还包括:
分析单元,用于利用相应标签,计算分割概率图的偏差信息;其中,所述偏差信息用于评价待训练的三维图像处理模型的预测准确性;
偏差信息迭代单元,用于利用偏差信息迭代地对三维图像处理模型进行训练,直至所得到的偏差信息符合预设的训练截止条件。
在本申请一个实施例中,所述三维图像处理装置还包括:
偏差信息单元,用于利用相应标签计算分割概率图的损失函数,以得到偏差信息。
在本申请一个实施例中,所述三维图像处理装置还包括:
特征提取单元,用于将训练三维图像输入特征提取层,进行特征提取得到初始训练特征图像;
下采样层单元,用于通过下采样层依次对初始训练特征图像进行下采样;
反向残差计算单元,用于通过残差卷积块对下采样后的初始训练特征图像进行反向残差计算,得到训练特征图像;
上采样层单元,用于通过上采样层依次对训练特征图像进行上采样,得到训练分割概率图。
在本申请一个实施例中,将训练三维图像输入特征提取层,进行特征提取得到初始训练特征图像之前,所述三维图像处理装置还包括:
第二预处理单元,用于对训练三维图像进行第二预处理,其中第二预处理包括设置窗宽窗位、重采样、数据增强、数据归一化以及自适应调节图像尺寸中的至少一项。
在本申请一个实施例中,所述三维图像处理装置还包括:
数据增强单元,用于对训练三维图像进行随机旋转、随机水平或竖直方向翻转以及随机裁剪中的至少一项。
在本申请一个实施例中,所述三维图像处理装置还包括:
自适应调节图像尺寸单元,用于对训练三维图像进行边缘填充和/或边缘裁剪。
关于三维图像处理装置的具体限定可以参见上文中对于三维图像处理方法的限定,在此不再赘述。上述三维图像处理装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。需要说明的是,本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
在本申请一个实施例中,提供了一种计算机设备,该计算机设备可以是服务器,其内部结构图可以如图18所示。该计算机设备包括通过系统总线连接的处理器、存储器、网络接口和数据库。其中,该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质存储有操作系统、计算机程序和数据库。该内存储器为非易失性存储介质中的操作系统和计算机程序的运行提供环境。该计算机设备的数据库用于存储周期任务分配数据,例如配置文件、理论运行参数和理论偏差值范围、任务属性信息等。该计算机设备的网络接口用于与外部的终端通过网络连接通信。该计算机程序被处理器执行时以实现一种三维图像处理方法。
本领域技术人员可以理解,图18中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
在一个实施例中,提供了一种计算机设备,包括存储器和处理器,该存储器存储有计算机程序,该处理器执行计算机程序时实现以下步骤:
获取待处理三维图像;
通过预先训练的三维图像处理模型对待处理三维图像进行处理,以得到待处理三维图像中每个体素属于目标对象的概率;
根据概率对待处理三维图像进行分割以得到各个目标对象。
在本申请一个实施例中,处理器执行计算机程序时通过预先训练的三维图像处理模型对待处理三维图像进行处理,以进一步实现:
获取每一体素的最大的概率所对应的目标对象作为体素的分割结果;
根据体素的分割结果得到目标对象。
在本申请一个实施例中,通过预先训练的三维图像处理模型对待处理三维图像进行处理,以得到待处理三维图像中每个体素属于目标对象的概率之前,处理器执行计算机程序时进一步实现:
对待处理三维图像进行第一预处理,其中第一预处理包括设置窗宽窗位、重采样、数据归一化以及自适应调节图像尺寸中的至少一项。
在本申请一个实施例中,处理器执行计算机程序以进一步实现:
根据概率对待处理三维图像进行处理以得到三维分割掩模图;
对三维分割掩模图进行后处理以得到各个目标对象。
在本申请一个实施例中,处理器执行计算机程序以进一步实现:
对分割掩模图进行形态学操作、重采样以及平滑处理中的至少一种;其中形态学操作包括连通域标记和/或填洞。
在本申请一个实施例中,处理器执行计算机程序时进一步实现:
将待处理三维图像中相邻的多个切片图像进行至少一层网络处理,其中,网络处理包括:对多个切片图像所表示的三维数据进行基于神经网络的层处理,以提取包含多个切片图像所描述的三维图像区域的图像特征;其中图像特征用于标识其对应的体素属于至少一个目标对象的概率。
在本申请一个实施例中,处理器执行计算机程序时进一步实现:预先训练的三维图像处理模型包含用于识别至少一个目标对象的概率的图像处理通道;其中,每一图像处理通道用于计算待处理三维图像中的各体素属于相应目标对象的概率。
在本申请一个实施例中,处理器执行计算机程序时实现待处理三维图像包含基于CT医学影像设备摄取骨骼而得到的切片图像序列。
在本申请一个实施例中,处理器执行计算机程序时进一步实现一种三维图像处理模型的训练方法,包括:
获取训练数据;其中训练数据包括训练三维图像以及训练三维图像对应的标签,标签表示训练三维图像中各体素与目标对象之间的属性关系;
将训练三维图像输入待训练的三维图像处理模型,以输出对应训练三维图像的分割概率图;
对分割概率图进行处理得到符合预设的训练截止条件。
在本申请一个实施例中,处理器执行计算机程序时进一步实现:
利用相应标签,计算分割概率图的偏差信息;其中,偏差信息用于评价待训练的三维图像处理模型的预测准确性;
利用偏差信息迭代地对三维图像处理模型进行训练,直至所得到的偏差信息符合预设的训练截止条件。
在本申请一个实施例中,处理器执行计算机程序时进一步实现:利用相应标签,计算分割概率图的损失函数,以得到偏差信息。
在本申请一个实施例中,处理器执行计算机程序时进一步实现:
将训练三维图像输入特征提取层,进行特征提取得到初始训练特征图像;
通过下采样层依次对初始训练特征图像进行下采样;
通过残差卷积块对下采样后的初始训练特征图像进行反向残差计算,得到训练特征图像;
通过上采样层依次对训练特征图像进行上采样,得到训练分割概率图。
在本申请一个实施例中,将训练三维图像输入特征提取层,进行特征提取得到初始训练特征图像之前,处理器执行计算机程序时进一步实现:
对训练三维图像进行第二预处理,其中第二预处理包括设置窗宽窗位、重采样、数据增强、数据归一化以及自适应调节图像尺寸中的至少一项。
在本申请一个实施例中,处理器执行计算机程序时进一步实现:
对训练三维图像进行随机旋转、随机水平或竖直方向翻转以及随机裁剪中的至少一项。
在本申请一个实施例中,处理器执行计算机程序时进一步实现:
对训练三维图像进行边缘填充和/或边缘裁剪。
在本申请一个实施例中,还提供了一种计算机可读存储介质,其上存储有计算机程序,计算机程序被处理器执行时实现以下步骤:
获取待处理三维图像;
通过预先训练的三维图像处理模型对待处理三维图像进行处理,以得到待处理三维图像中每个体素属于目标对象的概率;
根据概率对待处理三维图像进行分割以得到各个目标对象。
在本申请一个实施例中,计算机程序被处理器执行时进一步实现:
获取每一体素的最大的概率所对应的目标对象作为体素的分割结果;
根据体素的分割结果得到目标对象。
在本申请一个实施例中,通过预先训练的三维图像处理模型对待处理三维图像进行处理,以得到待处理三维图像中每个体素属于目标对象的概率之前,计算机程序被处理器执行时进一步实现:
对待处理三维图像进行第一预处理,其中第一预处理包括设置窗宽窗位、重采样、数据归一化以及自适应调节图像尺寸中的至少一项。
在本申请一个实施例中,计算机程序被处理器执行时进一步实现:
根据概率对待处理三维图像进行处理以得到三维分割掩模图;
对三维分割掩模图进行后处理以得到各个目标对象。
在本申请一个实施例中,计算机程序被处理器执行时进一步实现:
对分割掩模图进行形态学操作、重采样以及平滑处理中的至少一种;其中所述形态学操作包括连通域标记和/或填洞。
在本申请一个实施例中,计算机程序被处理器执行时进一步实现:
将待处理三维图像中相邻的多个切片图像进行至少一层网络处理,其中,网络处理包括:对多个切片图像所表示的三维数据进行基于神经网络的层处理,以提取包含多个切片图像所描述的三维图像区域的图像特征;其中图像特征用于标识其对应的体素属于至少一个目标对象的概率。
在本申请一个实施例中,计算机程序被处理器执行时进一步实现预先训练的三维图像处理模型包含用于识别至少一个目标对象的概率的图像处理通道;其中,每一图像处理通道用于计算待处理三维图像中的各体素属于相应目标对象的概率。
在本申请一个实施例中,计算机程序被处理器执行时进一步实现待处理三维图像包含基于CT医学影像设备摄取骨骼而得到的切片图像序列。
在本申请一个实施例中,计算机程序被处理器执行时进一步实现一种三维图像处理模型的训练方法,包括:
获取训练数据,其中所述训练数据包括训练三维图像以及训练三维图像对应的标签,所述标签表示训练三维图像中各体素与目标对象之间的属性关系;
将训练三维图像输入待训练的三维图像处理模型,以输出对应训练三维图像的分割概率图;
对分割概率图进行处理得到符合预设的训练截止条件。
在本申请一个实施例中,计算机程序被处理器执行时进一步实现:
利用相应标签,计算分割概率图的偏差信息;其中,偏差信息用于评价待训练的三维图像处理模型的预测准确性;
利用偏差信息迭代地对三维图像处理模型进行训练,直至所得到的偏差信息符合预设的训练截止条件。
在本申请一个实施例中,计算机程序被处理器执行时进一步实现:利用相应标签,计算分割概率图的损失函数,以得到偏差信息。
在本申请一个实施例中,计算机程序被处理器执行时进一步实现:
将训练三维图像输入特征提取层,进行特征提取得到初始训练特征图像;
通过下采样层依次对初始训练特征图像进行下采样;
通过残差卷积块对下采样后的初始训练特征图像进行反向残差计算,得到训练特征图像;
通过上采样层依次对训练特征图像进行上采样,得到训练分割概率图。
在本申请一个实施例中,将训练三维图像输入特征提取层,进行特征提取得到初始训练特征图像之前,计算机程序被处理器执行时进一步实现:
对训练三维图像进行第二预处理,其中第二预处理包括设置窗宽窗位、重采样、数据增强、数据归一化以及自适应调节图像尺寸中的至少一项。
在本申请一个实施例中,计算机程序被处理器执行时进一步实现:
对训练三维图像进行随机旋转、随机水平或竖直方向翻转以及随机裁剪中的至少一项。
在本申请一个实施例中,计算机程序被处理器执行时进一步实现:
对训练三维图像进行边缘填充和/或边缘裁剪。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的计算机程序可存储于一非易失性计算机可读取存储介质中,该计算机程序在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和/或易失性存储器。非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM)或者外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDRSDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)等。
以上实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对发明专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。

Claims (19)

  1. 一种三维图像处理方法,包括:
    获取待处理三维图像;
    通过预先训练的三维图像处理模型对所述待处理三维图像进行处理,以得到所述待处理三维图像中每个体素属于至少一个目标对象的概率;
    根据所述概率对所述待处理三维图像进行分割以得到各个目标对象。
  2. 根据权利要求1所述的三维图像处理方法,其中,所述通过预先训练的三维图像处理模型对所述待处理三维图像进行处理,包括:
    获取每个体素的最大的概率所对应的目标对象作为所述体素的分割结果;
    根据所述体素的分割结果得到目标对象。
  3. 根据权利要求1所述的三维图像处理方法,其中,所述通过预先训练的三维图像处理模型对所述待处理三维图像进行处理,包括:
    在不同通道的目标类别下,获取每个体素的最大的概率所对应的目标对象作为该目标类别下的所述体素的分割结果。
  4. 根据权利要求1所述的三维图像处理方法,其中,通过预先训练的三维图像处理模型对所述待处理三维图像进行处理包括:
    对所述待处理三维图像进行第一预处理以供向所述三维图像处理模型提供输入数据;其中,所述第一预处理包括设置窗宽窗位、重采样、数据归一化以及自适应调节图像尺寸中的至少一项。
  5. 根据权利要求1所述的三维图像处理方法,其中,所述根据所述概率对所述待处理三维图像进行分割以得到各个目标对象包括:
    根据所述概率对所述待处理三维图像进行处理以得到三维分割掩模图;
    对所述三维分割掩模图进行后处理以得到各个目标对象。
  6. 根据权利要求5所述的三维图像处理方法,其中,所述对所述三维分割掩模图进行后处理以得到各个目标对象,包括:
    对所述分割掩模图进行形态学操作、重采样以及平滑处理中的至少一种,以得到各个目标对象;其中所述形态学操作包括连通域标记和/或填洞。
  7. 根据权利要求1所述的三维图像处理方法,其中,所述通过预先训练的三维图像处理模型对所述待处理三维图像进行处理包括:
    将待处理三维图像中相邻的多个切片图像进行至少一层网络处理,其中,所述网络处理包括:对所述多个切片图像所表示的三维数据进行基于神经网络的层处理,以提取包含所述多个切片图像所描述的三维图像区域的图像特征;所述图像特征用于标识其对应的体素属于至少一个目标对象的概率。
  8. 根据权利要求1所述的三维图像处理方法,其中,所述预先训练的三维图像处理模型包含用于识别至少一个目标对象的概率的图像处理通道;其中,每一所述图像处理通道用于计算所述待处理三维图像中的各体素属于相应目标对象的概率。
  9. 根据权利要求1所述的三维图像处理方法,其中,所述待处理三维图像包含基于CT医学影像设备摄取骨骼而得到的切片图像序列。
  10. 一种三维图像处理模型的训练方法,包括:
    获取训练数据,其中所述训练数据包括训练三维图像以及所述训练三维图像对应的标签,所述标签表示所述训练三维图像中各体素与目标对象之间的属性关系;
    将所述训练三维图像输入待训练的三维图像处理模型,以输出对应所述训练三维图像的分割概率图;
    对所述分割概率图进行处理得到符合预设的训练截止条件。
  11. 根据权利要求10所述的三维图像处理模型的训练方法,其中,所述对所述分割概率图进行处理得到符合预设的训练截止条件,包括:
    利用相应标签,计算所述分割概率图的偏差信息;其中,所述偏差信息用于评价所述待训练的三维图像处理模型的预测准确性;
    利用所述偏差信息迭代地对所述三维图像处理模型进行训练,直至所得到的偏差信息符合预设的训练截止条件。
  12. 根据权利要求11所述的三维图像处理模型的训练方法,其中,所述利用相应标签,计算所述分割概率图的偏差信息,包括:利用相应标签,计算所述分割概率图的损失函数,以得到所述偏差信息。
  13. 根据权利要求11所述的三维图像处理模型的训练方法,其中,所述将所述训练三维图像输入待训练的三维图像处理模型,以输出对应所述训练三维图像的分割概率图,包括:
    将所述训练三维图像输入特征提取层,进行特征提取得到初始训练特征图像;
    通过下采样层依次对所述初始训练特征图像进行下采样;
    通过残差卷积块对下采样后的所述初始训练特征图像进行反向残差计算,得到训练特征图像;
    通过上采样层依次对所述训练特征图像进行上采样,得到训练分割概率图。
  14. 根据权利要求11所述的三维图像处理模型的训练方法,其中所述将所述训练三维图像输入特征提取层,进行特征提取得到初始训练特征图像之前,还包括:
    对所述训练三维图像进行第二预处理,所述第二预处理包括设置窗宽窗位、重采样、数据增强、数据归一化以及自适应调节图像尺寸中的至少一项。
  15. 根据权利要求14所述的三维图像处理模型的训练方法,其中,所述数据增强,包括:
    对所述训练三维图像进行随机旋转、随机水平或竖直方向翻转以及随机裁剪中的至少一项。
  16. 根据权利要求14所述的三维图像处理模型的训练方法,其中,所述自适应调节图像尺寸,包括:
    对所述训练三维图像进行边缘填充和/或边缘裁剪。
  17. 一种三维图像处理装置,包括:
    获取单元,用于获取待处理三维图像;
    处理单元,用于通过预先训练的三维图像处理模型对所述待处理三维图像进行处理,以得到所述待处理三维图像中每个体素属于目标对象的概率;
    分割单元,用于根据所述概率对所述待处理三维图像进行分割以得到各个目标对象。
  18. 一种计算机设备,包括存储器和处理器,所述存储器存储有计算机程序,其中所述处理器执行所述计算机程序时实现权利要求1至9或权利要求10至16中任一项所述的方法的步骤。
  19. 一种计算机存储介质,其上存储有计算机程序,其中所述计算机程序被处理器执行时实现权利要求1至9或权利要求10至16中任一项所述的方法的步骤。
PCT/CN2022/126618 2021-10-21 2022-10-21 三维图像处理方法、装置、计算机设备和存储介质 WO2023066364A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111228710.6 2021-10-21
CN202111228710.6A CN113962959A (zh) 2021-10-21 2021-10-21 三维图像处理方法、装置、计算机设备和存储介质

Publications (1)

Publication Number Publication Date
WO2023066364A1 true WO2023066364A1 (zh) 2023-04-27

Family

ID=79465506

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/126618 WO2023066364A1 (zh) 2021-10-21 2022-10-21 三维图像处理方法、装置、计算机设备和存储介质

Country Status (2)

Country Link
CN (1) CN113962959A (zh)
WO (1) WO2023066364A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113962959A (zh) * 2021-10-21 2022-01-21 苏州微创畅行机器人有限公司 三维图像处理方法、装置、计算机设备和存储介质
CN115272206B (zh) * 2022-07-18 2023-07-04 深圳市医未医疗科技有限公司 医学图像处理方法、装置、计算机设备及存储介质
CN116958552A (zh) * 2023-07-25 2023-10-27 强联智创(北京)科技有限公司 血管分割方法、电子设备及存储介质
CN117689683B (zh) * 2024-02-01 2024-05-03 江苏一影医疗设备有限公司 一种双腿膝关节运动状态图像处理方法、设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109685763A (zh) * 2018-11-12 2019-04-26 浙江德尔达医疗科技有限公司 三维ct多骨块自动分割方法、装置、计算机设备及存储介质
CN111145181A (zh) * 2019-12-25 2020-05-12 华侨大学 一种基于多视角分离卷积神经网络的骨骼ct图像三维分割方法
CN111563902A (zh) * 2020-04-23 2020-08-21 华南理工大学 一种基于三维卷积神经网络的肺叶分割方法及系统
US20210004956A1 (en) * 2018-03-12 2021-01-07 Persimio Ltd. Automated bone segmentation in images
CN113962959A (zh) * 2021-10-21 2022-01-21 苏州微创畅行机器人有限公司 三维图像处理方法、装置、计算机设备和存储介质

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117274557A (zh) * 2023-11-20 2023-12-22 首都师范大学 三维图像的数据增强方法、装置、电子设备及存储介质
CN117274557B (zh) * 2023-11-20 2024-03-26 首都师范大学 三维图像的数据增强方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN113962959A (zh) 2022-01-21


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22882969

Country of ref document: EP

Kind code of ref document: A1