CN111192269B - Model training and medical image segmentation method and device - Google Patents


Info

Publication number: CN111192269B
Application number: CN202010001039.0A
Authority: CN (China)
Prior art keywords: voxel, vector, medical image, feature extraction, extraction model
Legal status: Active (an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN111192269A
Inventors: 杨斌斌, 魏东, 马锴, 郑冶枫
Current and original assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd; priority to CN202010001039.0A; application published as CN111192269A; granted and published as CN111192269B

Classifications

    • G06T7/10 — Image analysis; Segmentation; Edge detection
    • G16H30/20 — ICT specially adapted for the handling or processing of medical images, e.g. DICOM, HL7 or PACS
    • G06T2207/10081 — Image acquisition modality: Computed x-ray tomography [CT]
    • G06T2207/10088 — Image acquisition modality: Magnetic resonance imaging [MRI]
    • G06T2207/10132 — Image acquisition modality: Ultrasound image
    • G06T2207/20081 — Special algorithmic details: Training; Learning
    • G06T2207/30016 — Biomedical image processing: Brain
    • G06T2207/30061 — Biomedical image processing: Lung
    • Y02T10/40 — Climate change mitigation technologies related to transportation: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Image Analysis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The application relates to a model training and medical image segmentation method and device. The method comprises the following steps: acquiring a labeled medical image; inputting the voxel patches of the labeled medical image into a feature extraction model to obtain the voxel feature vectors output by the feature extraction model; obtaining, according to the voxel feature vectors of the support set voxel patches, a class prototype vector corresponding to each voxel class of the support set voxel patches; determining the vector distances between the voxel feature vectors of the prediction set voxel patches and the class prototype vectors; and training the feature extraction model according to the vector distances to obtain a trained feature extraction model, which is used for image segmentation of medical images to be labeled. This method saves the time and effort otherwise spent producing a large number of labeled medical images for model training, and thus improves model training efficiency.

Description

Model training and medical image segmentation method and device
Technical Field
The present application relates to the field of medical image processing technology, and in particular to a model training method and apparatus for medical image segmentation, a medical image segmentation method and apparatus, a computer-readable storage medium, and a computer device.
Background
Medical image segmentation is an important technique in imaging diagnosis. Medical personnel rely on segmented medical images to analyze and study a patient's condition.
At present, common medical image segmentation methods mainly rely on semantic segmentation with a convolutional neural network model. Specifically, a medical image is input into a convolutional neural network model, the model extracts a feature vector for each voxel patch in the medical image and labels each voxel patch with a voxel class according to its feature vector; once every voxel patch in the medical image has been labeled with a voxel class, the medical image segmentation is complete.
To segment medical images accurately, the convolutional neural network model must be trained with a large number of labeled medical images as training samples.
However, unlike labeling ordinary images, labeling a medical image requires a professional to label a large number of voxels one by one. This manual labeling process generally consumes substantial manpower and material resources before enough training samples are available to train a convolutional neural network model for medical image segmentation.
Therefore, related-art image segmentation methods suffer from low model training efficiency.
Disclosure of Invention
Based on this, it is necessary to provide a model training method and apparatus for medical image segmentation, a medical image segmentation method and apparatus, a computer-readable storage medium, and a computer device that address the technical problem of inefficient model training.
A model training method for medical image segmentation, comprising:
acquiring a labeled medical image, the labeled medical image comprising a plurality of voxel patches, the voxel patches being labeled with voxel classes, and the plurality of voxel patches comprising support set voxel patches and prediction set voxel patches;
inputting the voxel patches of the labeled medical image into a feature extraction model to obtain voxel feature vectors output by the feature extraction model;
obtaining, according to the voxel feature vectors of the support set voxel patches, a class prototype vector corresponding to each voxel class of the support set voxel patches;
determining vector distances between the voxel feature vectors of the prediction set voxel patches and the class prototype vectors; and
training the feature extraction model according to the vector distances to obtain a trained feature extraction model, the trained feature extraction model being used for image segmentation of a medical image to be labeled.
A medical image segmentation method, comprising:
receiving medical images uploaded by a terminal, the medical images comprising a labeled medical image and medical images to be labeled, the labeled medical image comprising a plurality of voxel patches labeled with voxel classes, and the voxel patches in the labeled medical image comprising support set voxel patches and prediction set voxel patches;
inputting the voxel patches of the labeled medical image into a feature extraction model to obtain voxel feature vectors output by the feature extraction model;
obtaining, according to the voxel feature vectors of the support set voxel patches, a class prototype vector corresponding to each voxel class of the support set voxel patches;
determining vector distances between the voxel feature vectors of the prediction set voxel patches and the class prototype vectors;
training the feature extraction model according to the vector distances to obtain a trained feature extraction model;
dividing a medical image to be labeled into voxel patches and inputting them into the trained feature extraction model to obtain the to-be-predicted voxel feature vectors output by the trained feature extraction model;
determining the vector distances between the to-be-predicted voxel feature vectors and the class prototype vectors, and determining a target class prototype vector according to those vector distances;
labeling each voxel patch of the medical image to be labeled with the voxel class corresponding to its target class prototype vector, and performing image segmentation on the labeled medical image to be labeled to obtain a segmented medical image; and
sending the segmented medical image to the terminal.
A model training apparatus for medical image segmentation, comprising:
an image acquisition module, configured to acquire a labeled medical image, the labeled medical image comprising a plurality of voxel patches, the voxel patches being labeled with voxel classes, and the plurality of voxel patches comprising support set voxel patches and prediction set voxel patches;
an input module, configured to input the voxel patches of the labeled medical image into a feature extraction model to obtain voxel feature vectors output by the feature extraction model;
a prototype vector determining module, configured to obtain, according to the voxel feature vectors of the support set voxel patches, a class prototype vector corresponding to each voxel class of the support set voxel patches;
a vector distance determining module, configured to determine vector distances between the voxel feature vectors of the prediction set voxel patches and the class prototype vectors; and
a training module, configured to train the feature extraction model according to the vector distances to obtain a trained feature extraction model, the trained feature extraction model being used for image segmentation of a medical image to be labeled.
A medical image segmentation apparatus, comprising:
an image receiving module, configured to receive medical images uploaded by a terminal, the medical images comprising a labeled medical image and medical images to be labeled, the labeled medical image comprising a plurality of voxel patches labeled with voxel classes, and the voxel patches in the labeled medical image comprising support set voxel patches and prediction set voxel patches;
an input module, configured to input the voxel patches of the labeled medical image into a feature extraction model to obtain voxel feature vectors output by the feature extraction model;
a prototype vector determining module, configured to obtain, according to the voxel feature vectors of the support set voxel patches, a class prototype vector corresponding to each voxel class of the support set voxel patches;
a vector distance determining module, configured to determine vector distances between the voxel feature vectors of the prediction set voxel patches and the class prototype vectors;
a training module, configured to train the feature extraction model according to the vector distances to obtain a trained feature extraction model;
a segmentation module, configured to divide a medical image to be labeled into voxel patches and input them into the trained feature extraction model to obtain the to-be-predicted voxel feature vectors output by the trained feature extraction model; determine the vector distances between the to-be-predicted voxel feature vectors and the class prototype vectors, and determine a target class prototype vector according to those vector distances; and label each voxel patch of the medical image to be labeled with the voxel class corresponding to its target class prototype vector and perform image segmentation on the labeled medical image to be labeled to obtain a segmented medical image; and
a feedback module, configured to send the segmented medical image to the terminal.
A computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
acquiring a labeled medical image, the labeled medical image comprising a plurality of voxel patches, the voxel patches being labeled with voxel classes, and the plurality of voxel patches comprising support set voxel patches and prediction set voxel patches;
inputting the voxel patches of the labeled medical image into a feature extraction model to obtain voxel feature vectors output by the feature extraction model;
obtaining, according to the voxel feature vectors of the support set voxel patches, a class prototype vector corresponding to each voxel class of the support set voxel patches;
determining vector distances between the voxel feature vectors of the prediction set voxel patches and the class prototype vectors; and
training the feature extraction model according to the vector distances to obtain a trained feature extraction model, the trained feature extraction model being used for image segmentation of a medical image to be labeled.
A computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of:
acquiring a labeled medical image, the labeled medical image comprising a plurality of voxel patches, the voxel patches being labeled with voxel classes, and the plurality of voxel patches comprising support set voxel patches and prediction set voxel patches;
inputting the voxel patches of the labeled medical image into a feature extraction model to obtain voxel feature vectors output by the feature extraction model;
obtaining, according to the voxel feature vectors of the support set voxel patches, a class prototype vector corresponding to each voxel class of the support set voxel patches;
determining vector distances between the voxel feature vectors of the prediction set voxel patches and the class prototype vectors; and
training the feature extraction model according to the vector distances to obtain a trained feature extraction model, the trained feature extraction model being used for image segmentation of a medical image to be labeled.
In the model training method and apparatus for medical image segmentation, the medical image segmentation method and apparatus, the computer-readable storage medium, and the computer device described above, class prototype vectors for each voxel class are first obtained from the voxel feature vectors of the support set voxel patches in a labeled medical image, and the feature extraction model is then trained according to the vector distances between the voxel feature vectors of the prediction set voxel patches in the same labeled medical image and the class prototype vectors of the voxel classes. The feature extraction model is thus trained using the relatedness of the voxel feature vectors of the voxel patches within a single labeled medical image, and can be trained to convergence from the vector distances between voxel feature vectors. A large number of labeled medical images is therefore not required as training samples, which saves the time and effort of producing large numbers of labeled medical images for model training and improves model training efficiency.
Drawings
FIG. 1 is a diagram of an application environment for a model training method in one embodiment;
FIG. 2 is a flow chart of a model training method for medical image segmentation in one embodiment;
FIG. 3A is a schematic illustration of a brain MRI image according to one embodiment;
FIG. 3B is a schematic illustration of MRI image segmentation of the brain according to one embodiment;
FIG. 3C is a schematic illustration of a CT image of a lung, according to one embodiment;
FIG. 3D is a schematic illustration of a lung CT image segmentation, according to one embodiment;
FIG. 4 is a schematic illustration of determining the voxel class of a voxel patch based on vector distances, according to one embodiment;
FIG. 5 is a flowchart of a medical image segmentation method according to an embodiment;
FIG. 6 is a block diagram of a model training apparatus for medical image segmentation in one embodiment;
FIG. 7 is a block diagram illustrating a medical image segmentation apparatus according to an embodiment;
FIG. 8 is a flow chart of a brain MRI image segmentation method according to an embodiment;
FIG. 9 is a schematic diagram of an application scenario of brain MRI image segmentation in one embodiment;
FIG. 10 is a block diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
FIG. 1 is a diagram of an application environment for a model training method in one embodiment. Referring to fig. 1, the model training method is applied to a medical image segmentation system. The medical image segmentation system includes a terminal 110 and a server 120. Wherein the terminal 110 and the server 120 are connected through a network. The terminal 110 may be a desktop terminal or a mobile terminal, and the mobile terminal may be at least one of a mobile phone, a tablet computer, a notebook computer, and the like. The server 120 may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers.
As shown in FIG. 2, in one embodiment, a model training method for medical image segmentation is provided. This embodiment is described mainly with the method applied to the server 120 in FIG. 1.
First, it should be noted that the model training method of the present application can be applied in medical image segmentation scenarios. Image modalities of medical images generally include computed tomography (CT), magnetic resonance imaging (MRI), ultrasound imaging, and the like. In such a scenario, the user may log in to the medical image processing platform through the terminal 110; the platform provides an upload port for medical images, through which the user may upload a group of medical images of the same image modality of a specific body part of a patient to the server 120. For example, multiple MRI images of a patient's brain may be uploaded to the server 120.
The user can annotate one of the medical images through the annotation function provided by the medical image processing platform. For example, for a brain MRI image, the user may label each voxel with a brain tissue structure category according to the brain tissue in which that voxel is located, thereby completing the labeling of the medical image.
Fig. 3A is a schematic diagram of a brain MRI image according to one embodiment. The brain MRI image shown contains brain tissues of various brain tissue structure categories. In an unsegmented brain MRI image, the contours of the individual brain tissues are unclear, which makes the image difficult for a user to analyze and study. The user can label each voxel in the brain MRI image by manual annotation.
Fig. 3B is a schematic diagram of brain MRI image segmentation according to one embodiment. As shown in the figure, after labeling, the brain MRI image can be segmented according to the labels, yielding a brain MRI image in which the contours of the brain tissues are clearly visible, which facilitates the user's analysis and study.
Fig. 3C is a schematic diagram of a lung CT image according to one embodiment. As shown, the lung CT image contains lung tissues of various lung tissue structure categories. In an unsegmented lung CT image, the contours of the individual lung tissues are unclear, which makes the image difficult for a user to analyze and study. The user can label each voxel in the lung CT image by manual annotation.
Fig. 3D is a schematic diagram of lung CT image segmentation according to one embodiment. As shown in the figure, after labeling, the lung CT image can be segmented according to the labels, yielding a lung CT image in which the contours of the lung tissues are clearly visible, which facilitates the user's analysis and study.
The server 120 obtains a group of medical images, one of which is a labeled medical image while the rest are medical images to be labeled. The server 120 may train a preset feature extraction model using the labeled medical image as a training sample, label the remaining medical images to be labeled with the trained feature extraction model, and segment the images according to the labeling results.
When the model training method is applied in a medical image segmentation scenario, a small number of labeled medical images suffice to train the model for image segmentation; there is no need to produce a large number of training samples, which saves the time and effort of model training and improves model training efficiency.
Referring to fig. 2, the model training method specifically includes the following steps:
s202, obtaining a marked medical image; the annotated medical image includes a plurality of voxel segments; the voxel slices are marked with voxel categories; the plurality of voxel cuts includes a support set voxel cut and a prediction set voxel cut.
The marked medical image may be a marked three-dimensional medical image. Such as CT images, MRI images, and the like.
Wherein a voxel may be the smallest unit of Volume in a three-dimensional medical image segmentation, also commonly referred to as Volume element (Volume Pixel). The concept of a voxel is that a pixel is typically a two-dimensional plane and a voxel is typically a three-dimensional cube, relative to a pixel in a two-dimensional image.
The voxel segmentation may be an image block containing one or more voxels segmented in the labeled medical image. For example, a voxel tile may be an image block of 23 pixels by 23 pixels.
The voxel class may be a class to which a voxel included in the voxel block belongs. For example, the MRI image of the brain includes different brain tissues, the different brain tissues have corresponding brain tissue structure categories, and the voxel category of the voxel block is determined according to the brain tissue structure category of the brain tissue where the voxel is located in the image.
Wherein the support set voxel segmentation may be a voxel segmentation of a class prototype vector in the labeled medical image for determining different voxel classes.
The prediction set voxel dicing may be a voxel dicing in the labeled medical image other than the support voxel dicing.
In a specific implementation, the server 120 may receive the manually annotated medical image uploaded by the user through the terminal 110 as the labeled medical image. The server 120 may divide the labeled medical image into a plurality of blocks, each of which may contain one or more voxels, thereby obtaining a plurality of voxel patches.
Since the voxels in the labeled medical image have already been labeled with voxel classes, the voxel class of each voxel patch can be determined. For a voxel patch containing multiple voxels, the class of the voxel at the center of the patch may be taken as the voxel class of the patch.
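The patch extraction and center-voxel labeling rule described above can be sketched as follows (a minimal NumPy illustration; the patch size, array names, and toy volume are illustrative and not taken from the patent):

```python
import numpy as np

def extract_patches(volume, labels, patch=3):
    """Cut a labeled 3-D volume into overlapping cubic voxel patches.

    Each patch is labeled with the class of its center voxel,
    mirroring the rule described above.
    """
    r = patch // 2
    patches, classes = [], []
    d, h, w = volume.shape
    for z in range(r, d - r):
        for y in range(r, h - r):
            for x in range(r, w - r):
                patches.append(volume[z-r:z+r+1, y-r:y+r+1, x-r:x+r+1])
                classes.append(labels[z, y, x])  # center-voxel class
    return np.stack(patches), np.array(classes)

# Toy 4x4x4 "medical image" with two voxel classes
vol = np.random.rand(4, 4, 4)
lab = np.zeros((4, 4, 4), dtype=int)
lab[2:] = 1                      # lower half belongs to class 1
patches, classes = extract_patches(vol, lab, patch=3)
print(patches.shape)             # (8, 3, 3, 3): 2x2x2 interior centers
print(classes)                   # [0 0 0 0 1 1 1 1]
```

A real implementation would also handle the volume boundary (e.g. by padding); here only interior centers are kept for brevity.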
The server 120 may use the voxel patches in the labeled medical image to construct a training sample set for training the feature extraction model.
First, the server 120 may group the voxel patches by voxel class to obtain a plurality of same-class patch sample sets, where the voxel patches in each same-class patch sample set belong to the same voxel class. The same-class patch sample sets together form the complete training sample set.
The server 120 may then randomly select K same-class patch sample sets from the training sample set, and randomly select N voxel patches from each selected sample set to form the support set. The voxel patches in the support set serve as the support set voxel patches described above. The server 120 may then randomly select some of the voxel patches in the K same-class patch sample sets other than the support set voxel patches to form the prediction set. The voxel patches in the prediction set serve as the prediction set voxel patches described above.
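Sampling one training episode in this way can be sketched as follows (a hypothetical helper; the values of K, N, and the prediction set size are illustrative):

```python
import random
from collections import defaultdict

def sample_episode(patch_classes, k=2, n=3, n_query=4, seed=0):
    """Pick K classes, N support patch indices per class, and a
    disjoint prediction set drawn from the remaining patches."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, c in enumerate(patch_classes):
        by_class[c].append(idx)
    classes = rng.sample(sorted(by_class), k)            # K same-class sample sets
    support = {c: rng.sample(by_class[c], n) for c in classes}
    remaining = [i for c in classes for i in by_class[c]
                 if i not in support[c]]                 # excludes support patches
    query = rng.sample(remaining, min(n_query, len(remaining)))
    return support, query

# 10 patches of class 0 and 10 of class 1
labels = [0] * 10 + [1] * 10
support, query = sample_episode(labels, k=2, n=3, n_query=4)
print({c: len(v) for c, v in support.items()})  # {0: 3, 1: 3} in some order
print(len(query))                               # 4
```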
S204, inputting the voxel patches of the labeled medical image into the feature extraction model to obtain the voxel feature vectors output by the feature extraction model.
The feature extraction model may be a deep neural network for extracting feature vectors from voxel patches. The model produces a vector that reflects the features of a voxel patch in a feature space, thereby realizing a mapping of voxel patches from data space to feature space.
In a specific implementation, the server 120 may preset a feature extraction model and input the support set voxel patches and the prediction set voxel patches of the labeled medical image into it as training samples; the model then outputs a feature vector for each voxel patch, yielding the voxel feature vectors of the support set voxel patches and the prediction set voxel patches.
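As a stand-in for the deep network, the data-space-to-feature-space mapping can be illustrated with a single linear layer (purely illustrative; the patent does not specify the network architecture or feature dimension):

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(patches, weights):
    """Map flattened voxel patches to feature vectors: f(x) = x @ W."""
    flat = patches.reshape(len(patches), -1)   # (batch, voxels per patch)
    return flat @ weights                      # (batch, feature dim)

patches = rng.random((8, 3, 3, 3))             # 8 patches of 3x3x3 voxels
W = rng.standard_normal((27, 16)) * 0.1        # 27 voxels -> 16-D feature space
features = embed(patches, W)
print(features.shape)                          # (8, 16)
```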
S206, obtaining, according to the voxel feature vectors of the support set voxel patches, the class prototype vector corresponding to each voxel class of the support set voxel patches.
A class prototype vector may be a feature vector obtained from the voxel feature vectors of multiple support set voxel patches; it reflects the common features of voxel patches belonging to the same voxel class.
In a specific implementation, the server 120 may take the support set voxel patches belonging to the same voxel class and obtain the class prototype vector of that voxel class from their voxel feature vectors, repeating the process to obtain the class prototype vector of every voxel class.
The class prototype vector may be obtained from the voxel feature vectors of multiple support set voxel patches in several ways. In one embodiment, a target voxel class is selected, the mean of the voxel feature vectors of the support set voxel patches belonging to that class is determined, and the mean is used as the class prototype vector of the target voxel class. In another embodiment, the median of those voxel feature vectors is determined and used as the class prototype vector of the target voxel class.
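Both variants can be sketched in a few lines (illustrative NumPy; the feature values are toy numbers, not outputs of a real model):

```python
import numpy as np

def class_prototypes(features, classes, reduce=np.mean):
    """Compute one prototype vector per voxel class by reducing the
    support feature vectors of that class (mean or median)."""
    return {c: reduce(features[classes == c], axis=0)
            for c in np.unique(classes)}

feats = np.array([[0.0, 0.0], [2.0, 2.0],   # class 0 support features
                  [4.0, 4.0], [6.0, 8.0]])  # class 1 support features
labels = np.array([0, 0, 1, 1])

protos_mean = class_prototypes(feats, labels, np.mean)
protos_med = class_prototypes(feats, labels, np.median)
print(protos_mean[0])  # [1. 1.]
print(protos_mean[1])  # [5. 6.]
```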
Of course, a person skilled in the art may obtain the class prototype vector from the voxel feature vectors of multiple support set voxel patches in other ways, so long as the class prototype vector of a voxel class reflects the common features of the voxel feature vectors of the voxel patches in that class.
S208, determining the vector distances between the voxel feature vectors of the prediction set voxel patches and the class prototype vectors.
The vector distance may be used to measure the distance between feature vectors in the feature space; it may specifically be the Euclidean distance, the Mahalanobis distance, or the like.
In a specific implementation, the server 120 may first determine the target voxel class of a prediction set voxel patch and then look up the class prototype vector corresponding to that class. The server 120 may then compute the distance in feature space between the voxel feature vector of the prediction set voxel patch and that class prototype vector, obtaining the vector distance described above.
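Computing the Euclidean distances from one query feature vector to each class prototype, and picking the nearest prototype, can be sketched as (illustrative values):

```python
import numpy as np

def distances_to_prototypes(feature, prototypes):
    """Euclidean distance from one feature vector to every class prototype."""
    return {c: float(np.linalg.norm(feature - p))
            for c, p in prototypes.items()}

prototypes = {0: np.array([1.0, 1.0]), 1: np.array([5.0, 6.0])}
query = np.array([4.0, 2.0])

dists = distances_to_prototypes(query, prototypes)
nearest = min(dists, key=dists.get)   # class of the closest prototype
print(dists)                          # {0: ~3.16, 1: ~4.12}
print(nearest)                        # 0
```

The nearest-prototype rule shown here is also what Fig. 4 illustrates for labeling a voxel patch from vector distances.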
S210, training the feature extraction model according to the vector distances to obtain the trained feature extraction model; the trained feature extraction model is used for image segmentation of the medical image to be labeled.
It should be noted that, the vector distance may measure the similarity between the predicted voxel segment and the class prototype vector of the voxel class to which the predicted voxel segment belongs. If the voxel feature vector of a voxel block of a certain prediction set has a larger vector distance between class prototype vectors corresponding to the voxel classes to which the voxel feature vector belongs, the voxel feature vector extracted from the voxel block by the feature extraction model is not accurate, and training is needed until the vector distance between the voxel feature vector of the prediction set voxel block and the class prototype vector corresponding to the voxel class to which the voxel feature vector belongs is smaller, and the vector distance between the class prototype vectors corresponding to other voxel classes is larger.
In a specific implementation, after determining the vector distance, the server 120 may adjust model parameters of the feature extraction model, such as weight values and bias values, according to the magnitude of the vector distance, thereby obtaining the trained feature extraction model.
In one embodiment, the vector distances between the voxel feature vector of a prediction-set voxel patch and the class prototype vectors of a plurality of voxel classes may be determined, resulting in a plurality of vector distances. Expected vector distances between the voxel feature vector of the prediction-set voxel patch and the class prototype vectors of the plurality of voxel classes are then determined, and the model parameters of the feature extraction model are trained by stochastic gradient descent optimization according to the differences between the vector distances and the expected vector distances.
In another embodiment, the prediction probability that a prediction-set voxel patch belongs to each voxel class, that is, the probability distribution of the patch over the different voxel classes, can be determined according to the vector distances between the voxel feature vector of the prediction-set voxel patch and the class prototype vectors of the voxel classes. A cross-entropy loss value is then determined from the probability distribution, and the model parameters of the feature extraction model are trained by stochastic gradient descent optimization according to the cross-entropy loss value.
Of course, one skilled in the art may also train the feature extraction model in other ways, depending on the vector distance.
In practical applications, the feature extraction model may be trained for multiple rounds according to the vector distance until it converges, that is, until the vector distance between the voxel feature vector of a prediction-set voxel patch and the class prototype vector of its own voxel class is small while the vector distances to the class prototype vectors of the other voxel classes are large. When the feature extraction model converges, it is the trained feature extraction model.
After obtaining the trained feature extraction model, the server 120 may input the voxel patches of the labeled medical image into the trained feature extraction model, which outputs optimized voxel feature vectors, and an optimized class prototype vector corresponding to each voxel class may be obtained from the optimized voxel feature vectors. When the medical image to be labeled is segmented, the voxel patches in the medical image to be labeled may be input into the trained feature extraction model to obtain the to-be-predicted voxel feature vectors of those patches. The vector distances between each to-be-predicted voxel feature vector and the optimized class prototype vectors of the respective voxel classes are then calculated, and the voxel class with the minimum vector distance is labeled as the voxel class of that patch. In this way all voxel patches in the medical image to be labeled are labeled, and finally the image segmentation of the medical image to be labeled is completed according to the labeling result.
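A minimal sketch of this nearest-prototype labeling step (the function name, the toy prototype dictionary, the class names, and the 2-D feature vectors are illustrative assumptions, not the patent's implementation):

```python
import numpy as np

def label_patch(patch_feature, prototypes):
    """Label a voxel patch with the voxel class whose (optimized) class
    prototype vector is nearest in feature space, by Euclidean distance.
    `prototypes` maps voxel class -> prototype vector (assumed layout)."""
    patch_feature = np.asarray(patch_feature, dtype=float)
    return min(prototypes,
               key=lambda c: np.linalg.norm(patch_feature - np.asarray(prototypes[c])))

# a to-be-predicted feature vector close to the "lesion" prototype
label = label_patch([0.9, 0.1], {"background": [0.0, 0.0], "lesion": [1.0, 0.0]})
```

Repeating this for every patch of the image to be labeled yields the labeling result from which the segmentation is assembled.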
In the above model training method, the voxel feature vectors of the support-set voxel patches in a labeled medical image are first used to obtain the class prototype vector of each voxel class, and the feature extraction model is then trained according to the vector distances between the voxel feature vectors of the prediction-set voxel patches in the same labeled medical image and the class prototype vectors of the voxel classes. The feature extraction model is thus trained by exploiting the correlation between the voxel feature vectors of the voxel patches within the same labeled medical image, and can be trained to convergence according to the vector distances between voxel feature vectors. A large number of labeled medical images is therefore not required as training samples, which saves the time and effort of producing a large number of labeled medical images for model training and improves model training efficiency.
In one embodiment, after the step S202, the method further includes:
randomly selecting K target voxel classes from the plurality of voxel classes; K is greater than or equal to 1; extracting the voxel patches labeled with the target voxel classes in the labeled medical image to obtain K same-class patch sample sets; selecting N support-set voxel patches from each of the K same-class patch sample sets; N is greater than 1; and selecting the prediction-set voxel patches from the voxel patches other than the support-set voxel patches in the K same-class patch sample sets;
When the feature extraction model converges, the method further comprises: and returning to the step of randomly selecting K target voxel categories from the voxel categories until the trained feature extraction model is obtained.
In a specific implementation, after obtaining the labeled medical image, the server 120 may randomly select K target voxel classes. Then, from the voxel patches of the labeled medical image, the voxel patches whose voxel class is one of the target voxel classes are extracted to form the same-class patch sample sets. K same-class patch sample sets are thus obtained, and the voxel patches within each same-class patch sample set share the same voxel class.
The server 120 may select N voxel patches from each of the K same-class patch sample sets, obtaining K×N voxel patches that form a support set (support set). Each voxel patch in the support set is a support-set voxel patch.
The server 120 may select, from the K same-class patch sample sets, the voxel patches other than the support-set voxel patches to form a prediction set (query set). Each voxel patch in the prediction set is a prediction-set voxel patch.
After obtaining the support-set voxel patches and the prediction-set voxel patches, the server 120 trains the feature extraction model with them; when the feature extraction model converges, the method returns to the step of randomly selecting K target voxel classes, so that the feature extraction model is trained with voxel patches of different voxel classes until the trained feature extraction model is obtained.
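The episode construction above (K random classes, N support patches per class, remainder as the prediction set) can be sketched as follows; the function name and the toy `patches_by_class` layout are illustrative assumptions, not the patent's data structures:

```python
import random

def sample_episode(patches_by_class, k, n):
    """Sample one meta-task: randomly pick K voxel classes, take N support
    patches per class, and use the remaining patches of those classes as
    the prediction (query) set. `patches_by_class` maps voxel class ->
    list of voxel patches (an assumed, illustrative layout)."""
    target_classes = random.sample(list(patches_by_class), k)
    support, query = {}, {}
    for cls in target_classes:
        patches = list(patches_by_class[cls])
        random.shuffle(patches)
        support[cls] = patches[:n]   # N support-set voxel patches
        query[cls] = patches[n:]     # the rest become prediction-set patches
    return support, query

random.seed(0)  # for a reproducible demonstration
data = {"liver": list(range(5)), "kidney": list(range(5)), "tumor": list(range(5))}
support, query = sample_episode(data, k=2, n=3)
```

Each call produces one meta-task; repeating the call after convergence corresponds to returning to the class-selection step with a fresh set of K classes.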
The above approach of selecting N voxel patches for each of K voxel classes and training the model on them is commonly called few-shot learning. Training the model with a small number of samples through few-shot learning guarantees the training effect without requiring a large number of training samples to be produced, which improves the training efficiency of the model.
It should be further noted that the support set and prediction set described above are also referred to as a Meta-Task in model training. Model training with Meta-Tasks is also known as meta learning (Meta Learning), or learning how to learn (Learning to Learn). During the whole model training process, the sample set is decomposed into a number of different Meta-Tasks, and after the model has been trained on one Meta-Task, it is trained on a new Meta-Task. Through this mechanism of training on different Meta-Tasks, the feature extraction model can learn what the Meta-Tasks have in common, that is, the features common to the voxel feature vectors of voxel patches of different voxel classes. As a result, even for voxel patches of a new voxel class, the feature extraction model can still extract voxel feature vectors that accurately reflect the features of those patches, which improves the generalization ability of the feature extraction model.
In the above model training method, K voxel classes are randomly selected from the plurality of voxel classes, the corresponding K same-class patch sample sets are obtained according to the K voxel classes, N voxel patches are selected from each of the K same-class patch sample sets as support-set voxel patches, and prediction-set voxel patches are selected from the remaining patches of the K same-class patch sample sets. After the feature extraction model has been trained to convergence with the support-set and prediction-set voxel patches obtained from these K voxel classes, another K voxel classes are randomly selected so that the next round of training uses support-set and prediction-set voxel patches of different voxel classes. Few-shot learning and meta learning are thereby combined in the training of the feature extraction model for medical image segmentation, improving the generalization ability of the feature extraction model while improving the training efficiency of the model.
In one embodiment, there are a plurality of support-set voxel patches, and the step S206 includes:
determining the patch count of the support-set voxel patches; calculating the vector sum of the voxel feature vectors of the plurality of support-set voxel patches; and calculating the ratio of the vector sum to the patch count to obtain the class prototype vector.
In a specific implementation, the server 120 may first count the number of support-set voxel patches belonging to the same voxel class and compute the vector sum of their voxel feature vectors, and then divide the vector sum by the patch count to obtain the class prototype vector corresponding to the voxel class to which the plurality of support-set voxel patches belong.
In practical applications, the class prototype vector c_k of the k-th voxel class can be calculated by the following formula:

c_k = (1 / |S_k|) · Σ_{(x_i, y_i) ∈ S_k} f_φ(x_i)

where |S_k| denotes the number of support-set voxel patches contained in the support set S_k of the k-th voxel class; x_i denotes the i-th voxel patch; y_i denotes the voxel class of the i-th voxel patch; f_φ(x_i) denotes the voxel feature vector of the i-th voxel patch; and c_k denotes the class prototype vector of the k-th voxel class.
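A minimal numerical sketch of this prototype computation (the function name and the toy 2-D feature vectors are illustrative assumptions):

```python
import numpy as np

def class_prototype(support_features):
    """Class prototype c_k: vector sum of the support-set voxel feature
    vectors of one voxel class divided by the patch count |S_k|."""
    support_features = np.asarray(support_features, dtype=float)
    return support_features.sum(axis=0) / len(support_features)

# three support-set feature vectors of the same voxel class
c_k = class_prototype([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
```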
In the above model training method, the vector sum of the voxel feature vectors of the support-set voxel patches and their patch count are computed, and the mean of the voxel feature vectors of the support-set voxel patches is calculated from the vector sum and the patch count. The class prototype vector of a voxel class can thus be obtained without a complex computation, which saves computing resources and improves model training efficiency.
In one embodiment, the step S208 includes:
determining first feature space coordinates of voxel feature vectors of the prediction set voxel block; determining second feature space coordinates of the class prototype vector; and obtaining the vector distance according to the coordinate distance between the first feature space coordinate and the second feature space coordinate.
In a specific implementation, the server 120 may calculate the vector distance between the voxel feature vector and the class prototype vector by using the euclidean distance method. Specifically, the feature space coordinates of the voxel feature vectors of the prediction voxel segments in the feature space may be determined as the first feature space coordinates described above. The server 120 may also determine feature space coordinates of the class prototype vector in the feature space as the second feature space coordinates described above. The coordinate distance between the first feature space coordinate and the second feature space coordinate may be calculated, resulting in a vector distance between the voxel feature vector and the class prototype vector.
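A minimal sketch of this Euclidean-distance calculation between a prediction-set feature vector and a class prototype vector (illustrative function name; the coordinates are the feature-space coordinates described above):

```python
import numpy as np

def vector_distance(query_feature, prototype):
    """Euclidean (L2) distance between the first feature-space coordinates
    (query feature vector) and the second (class prototype vector)."""
    diff = np.asarray(query_feature, dtype=float) - np.asarray(prototype, dtype=float)
    return float(np.sqrt(np.sum(diff ** 2)))

d = vector_distance([0.0, 3.0], [4.0, 0.0])  # a 3-4-5 right triangle
```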
In one embodiment, there are a plurality of vector distances, and the step S210 includes:
obtaining a distance feature value sum from the exponential feature values of the plurality of vector distances; obtaining the class probability distribution of the prediction-set voxel patch from the ratios of the exponential feature values of the vector distances to the distance feature value sum; and training the feature extraction model according to the class probability distribution to obtain the trained feature extraction model. The class probability distribution includes a plurality of candidate voxel classes and their corresponding actual prediction probabilities.
The class probability distribution may consist of a plurality of candidate voxel classes and their corresponding actual prediction probabilities.
The actual prediction probability may be the probability with which a prediction-set voxel patch is actually predicted to belong to a particular candidate voxel class.
In a specific implementation, for the plurality of candidate voxel classes, the server 120 calculates the vector distances between the voxel feature vector of a prediction-set voxel patch and the class prototype vectors of the respective candidate voxel classes, obtaining a plurality of vector distances. The server 120 may first take the negatives of the plurality of vector distances and apply an exponential operation to them to obtain the exponential feature value of each vector distance. The exponential feature values of the plurality of vector distances are then summed to obtain the distance feature value sum. Next, the prediction probability that the prediction-set voxel patch belongs to each candidate voxel class is obtained from the ratio between the exponential feature value of the corresponding vector distance and the distance feature value sum.
In practical applications, the class probability distribution p_φ of a prediction-set voxel patch can be calculated according to the following formula:

p_φ(y = k | x) = exp(−d(f_φ(x), c_k)) / Σ_{k'} exp(−d(f_φ(x), c_{k'}))

where exp(·) denotes the exponential function with the natural constant e as base; d(f_φ(x), c_k) denotes the vector distance between the voxel feature vector f_φ(x) of voxel patch x and the class prototype vector c_k of the k-th voxel class; y denotes the voxel class of voxel patch x; and the summation index k' runs over all candidate voxel classes. In the formula, the negative of each vector distance is taken and an exp operation is applied to it to obtain the exponential feature value. The ratio of the exponential feature value of one vector distance to the cumulative sum of the exponential feature values of all the vector distances gives the prediction probability that the voxel patch belongs to the corresponding voxel class.
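A minimal sketch of this distance-to-probability conversion (illustrative function name; the max-subtraction is a standard numerical-stability step that leaves the ratios in the formula unchanged):

```python
import numpy as np

def class_probabilities(distances):
    """Softmax over negated vector distances:
    p_phi(y=k|x) = exp(-d_k) / sum_k' exp(-d_k').
    Smaller distance -> larger predicted probability."""
    neg = -np.asarray(distances, dtype=float)
    neg -= neg.max()          # stability shift; ratios are unaffected
    e = np.exp(neg)           # exponential feature values
    return e / e.sum()        # ratio to the distance feature value sum

p = class_probabilities([1.0, 2.0, 3.0])  # class 0 is nearest
```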
After the server 120 obtains the class probability distribution, the feature extraction model may be trained according to the class probability distribution. Over multiple rounds of iterative training, when the actual prediction probability of the actual voxel class of a prediction-set voxel patch in the class probability distribution saturates, i.e. no longer increases, and the actual prediction probabilities of the other candidate voxel classes approach 0%, the feature extraction model has converged and can be used as the trained feature extraction model.
The smaller the vector distance between the voxel feature vector of a prediction-set voxel patch and the class prototype vector of a candidate voxel class, the higher the actual prediction probability that the prediction-set voxel patch belongs to that candidate voxel class. Converting the vector distances into the prediction probabilities of the candidate voxel classes simplifies the calculation and avoids operating directly on the vector distances, which involve a larger amount of data.
In the above model training method, the vector distances are converted into the prediction probabilities of the candidate voxel classes to obtain the class probability distribution, and the feature extraction model is trained based on the class probability distribution. Operating on the larger-volume vector distances is thereby avoided, which simplifies the computation required for model training and improves model training efficiency.
In one embodiment, the training the feature extraction model according to the category probability distribution to obtain the trained feature extraction model includes:
determining the actual voxel class of the prediction-set voxel patch; determining the expected prediction probability of each candidate voxel class in the class probability distribution according to the actual voxel class; calculating the difference between the actual prediction probability and the expected prediction probability of each candidate voxel class to obtain a plurality of cross-entropy loss values; and adjusting the model parameters of the feature extraction model according to the plurality of cross-entropy loss values and returning to the step of inputting the voxel patches of the labeled medical image into the feature extraction model, until the cross-entropy loss values meet a preset model convergence condition, and taking the feature extraction model with the adjusted model parameters as the trained feature extraction model.
In particular implementations, the server 120 may determine an actual voxel class of the prediction set voxel segment based on a labeling result of the prediction set voxel segment in the labeled medical image.
After the actual voxel class of the prediction-set voxel patch is determined, the expected prediction probability of each candidate voxel class in the class probability distribution can be determined. For example, suppose the class probability distribution includes candidate voxel classes A, B and C, with actual prediction probabilities of 20%, 30% and 50%, respectively. If the actual voxel class is determined to be A, the expected prediction probabilities of candidate voxel classes A, B and C are 100%, 0% and 0%, respectively.
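The expected prediction probabilities in this example form a one-hot distribution over the candidate voxel classes; a tiny illustrative sketch (function name assumed):

```python
def expected_probabilities(candidate_classes, actual_class):
    """Expected prediction probability: 1 for the actual voxel class
    of the prediction-set voxel patch, 0 for every other candidate."""
    return [1.0 if c == actual_class else 0.0 for c in candidate_classes]

expected = expected_probabilities(["A", "B", "C"], "A")
```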
The server 120 may adjust the model parameters of the feature extraction model through a cross-entropy loss function and a stochastic gradient descent algorithm based on the actual prediction probabilities and the expected prediction probabilities.
In practical applications, the following cross-entropy loss function may be used to calculate the cross-entropy loss value:

J(φ) = −(1/Q) · Σ_x log p_φ(y = k | x)

where Q denotes the number of prediction-set voxel patches in the prediction set; x denotes a prediction-set voxel patch; k denotes the actual voxel class of the prediction-set voxel patch x; p_φ denotes the class probability distribution; and J(φ) denotes the cross-entropy loss value over the prediction-set voxel patches.
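A minimal sketch of this averaged cross-entropy loss (illustrative names; `probabilities` plays the role of p_φ over the candidate classes and `true_classes` gives the actual voxel class k of each prediction-set patch):

```python
import numpy as np

def cross_entropy_loss(probabilities, true_classes):
    """J(phi) = -(1/Q) * sum_x log p_phi(y=k|x), averaged over the
    Q prediction-set voxel patches."""
    probabilities = np.asarray(probabilities, dtype=float)
    # pick, for each patch, the predicted probability of its actual class
    picked = probabilities[np.arange(len(true_classes)), true_classes]
    return float(-np.mean(np.log(picked)))

# two prediction-set patches, three candidate voxel classes
loss = cross_entropy_loss([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]], [0, 1])
```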
After the model parameters of the feature extraction model have been adjusted according to the cross-entropy loss value, the method may return to the above step S204 to obtain updated voxel feature vectors of the support-set and prediction-set voxel patches, and updated class prototype vectors are obtained from the updated voxel feature vectors of the support-set voxel patches. The model parameters of the feature extraction model are then adjusted again according to the vector distances between the updated voxel feature vectors of the prediction-set voxel patches and the updated class prototype vectors, yielding an updated feature extraction model. When the updated feature extraction model meets the model convergence condition, it is used as the trained feature extraction model.
In one embodiment, further comprising:
obtaining a medical image to be marked, cutting voxels of the medical image to be marked into blocks, and inputting the blocks into the trained feature extraction model to obtain a voxel feature vector to be predicted, which is output by the trained feature extraction model; determining a vector distance between the voxel feature vector to be predicted and the class prototype vector; determining a target category prototype vector according to the vector distance between the voxel feature vector to be predicted and the category prototype vector; and labeling the voxel cut blocks of the medical image to be labeled as voxel categories corresponding to the target category prototype vectors, and performing image segmentation on the labeled medical image to be labeled.
The medical image to be marked may be an unlabeled medical image. The medical image to be annotated comprises a plurality of voxel slices.
In a specific implementation, after model training, the server 120 may input each voxel segment of the medical image to be labeled into a trained feature extraction model, and the trained feature extraction model outputs voxel feature vectors of each voxel segment. For distinguishing the description, the voxel feature vector of each voxel block of the medical image to be marked is named as the voxel feature vector to be predicted.
The server 120 may calculate the vector distance between the to-be-predicted voxel feature vector of a voxel patch and the class prototype vector of each voxel class, determine the class prototype vector with the minimum vector distance to the to-be-predicted voxel feature vector as the target class prototype vector, and label the voxel patch of the medical image to be labeled with the voxel class corresponding to the target class prototype vector.
FIG. 4 is a schematic diagram of determining a voxel class of a voxel slab based on vector distances, according to one embodiment. As shown in the figure, a class prototype vector 401 of a plurality of voxel classes exists in the feature space, and a voxel feature vector 402 of a voxel segment belonging to the target voxel class has a small vector distance from the class prototype vector 401 of the target voxel class. When the voxel feature vector 403 to be predicted of a certain voxel block in the medical image to be marked is obtained, calculating the vector distance between the voxel feature vector 403 to be predicted and the class prototype vector 401 of each voxel class, and searching the class prototype vector 401 with the minimum vector distance between the voxel feature vector 403 to be predicted, thereby determining the target voxel class corresponding to the class prototype vector 401 as the voxel class of the certain voxel block in the medical image to be marked.
After the labeling of each voxel segment in the medical image to be labeled is completed, the server 120 may perform image segmentation on the medical image to be labeled according to the labeling result.
In one embodiment, as shown in fig. 5, a medical image segmentation method is provided. The present embodiment is mainly exemplified by the application of the method to the server 120 in fig. 1. Referring to fig. 5, the medical image segmentation method specifically includes the following steps:
S502, receiving the medical images uploaded by the terminal; the medical images comprise a labeled medical image and a medical image to be labeled; the labeled medical image comprises a plurality of voxel patches labeled with voxel classes; the voxel patches in the labeled medical image comprise support-set voxel patches and prediction-set voxel patches;
s504, cutting voxels of the marked medical image into blocks, and inputting the blocks into a feature extraction model to obtain voxel feature vectors output by the feature extraction model;
S506, obtaining a class prototype vector corresponding to the voxel class of the support-set voxel patches according to the voxel feature vectors of the support-set voxel patches;
s508, determining vector distances between voxel feature vectors of the prediction set voxel block and the category prototype vector;
S510, training the feature extraction model according to the vector distance to obtain a trained feature extraction model;
s512, cutting voxels of the medical image to be marked into blocks, and inputting the blocks into the trained feature extraction model to obtain a voxel feature vector to be predicted, which is output by the trained feature extraction model; determining a vector distance between the voxel feature vector to be predicted and the class prototype vector, and determining a target class prototype vector according to the vector distance between the voxel feature vector to be predicted and the class prototype vector; labeling the voxel cut blocks of the medical image to be labeled as voxel categories corresponding to the target category prototype vectors, and performing image segmentation on the labeled medical image to be labeled to obtain segmented medical images;
and S514, the segmented medical image is sent to the terminal.
In a specific implementation, a user may log in to the medical image processing platform through the terminal 110, the medical image processing platform provides an uploading port of a medical image, and the user may upload a medical image with the same image modality of a specific portion of a patient to the server 120 through the uploading port. The medical image uploaded by the terminal 110 may include a marked medical image and a medical image to be marked.
The server 120 may obtain support-set voxel patches and prediction-set voxel patches from the voxel patches in the labeled medical image, and train the feature extraction model with them to obtain a trained feature extraction model.
Since the process of training the feature extraction model by the server 120 is described in detail in the above embodiments, the description thereof is omitted here.
The server 120 may input the voxel patches of the medical image to be labeled into the trained feature extraction model, which outputs the to-be-predicted voxel feature vectors of the patches. The server 120 calculates the vector distances between each to-be-predicted voxel feature vector and the class prototype vectors of the respective voxel classes, obtains the target class prototype vector with the minimum vector distance, and labels the corresponding voxel patch in the medical image to be labeled with the voxel class of the target class prototype vector. After each voxel patch in the medical image to be labeled has been labeled in this way, the image segmentation of the medical image to be labeled is finally completed according to the labeling result.
The server 120 performs image segmentation on the medical image to be marked after marking to obtain a segmented medical image, and feeds back the segmented medical image to the terminal 110.
In the above medical image segmentation method, the voxel feature vectors of the support-set voxel patches in the labeled medical image are first used to obtain the class prototype vector of each voxel class, and the feature extraction model is then trained according to the vector distances between the voxel feature vectors of the prediction-set voxel patches in the same labeled medical image and the class prototype vectors of the voxel classes. The feature extraction model is thus trained by exploiting the correlation between the voxel feature vectors of the voxel patches within the same labeled medical image, and can be trained to convergence according to the vector distances between voxel feature vectors, so that a large number of labeled medical images is not required as training samples. This saves the user the time and effort of producing a large number of labeled medical images and improves the efficiency of medical image segmentation.
It should be understood that, although the steps in the flowcharts of fig. 2 and 5 are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in fig. 2 and 5 may include multiple sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times, nor does the order in which the sub-steps or stages are performed necessarily occur in sequence, but may be performed alternately or alternately with at least a portion of the sub-steps or stages of other steps or other steps.
As shown in fig. 6, in one embodiment, there is provided a model training apparatus for medical image segmentation, comprising:
an image acquisition module 602, configured to acquire a labeled medical image; the labeled medical image includes a plurality of voxel patches; the voxel patches are labeled with voxel classes; the plurality of voxel patches comprise support-set voxel patches and prediction-set voxel patches;
the input module 604 is configured to segment the voxels of the labeled medical image, input the segments to a feature extraction model, and obtain voxel feature vectors output by the feature extraction model;
a prototype vector determining module 606, configured to obtain, according to the voxel feature vector of the support set voxel block, a class prototype vector corresponding to the voxel class of the support set voxel block;
a vector distance determination module 608 for determining a vector distance between a voxel feature vector of the prediction set voxel block and the class prototype vector;
the training module 610 is configured to train the feature extraction model according to the vector distance to obtain a trained feature extraction model; the trained feature extraction model is used for image segmentation of the medical image to be marked.
In one embodiment, there are a plurality of vector distances, and the training module 610 is specifically configured to:
obtain a distance feature value sum from the exponential feature values of the plurality of vector distances; obtain the class probability distribution of the prediction-set voxel patch from the ratios of the exponential feature values of the vector distances to the distance feature value sum, the class probability distribution including a plurality of candidate voxel classes and their corresponding actual prediction probabilities; and train the feature extraction model according to the class probability distribution to obtain the trained feature extraction model.
In one embodiment, the training module 610 is further specifically configured to:
determine the actual voxel class of the prediction-set voxel patch; determine the expected prediction probability of each candidate voxel class in the class probability distribution according to the actual voxel class; calculate the difference between the actual prediction probability and the expected prediction probability of each candidate voxel class to obtain a plurality of cross-entropy loss values; and adjust the model parameters of the feature extraction model according to the plurality of cross-entropy loss values and return to the step of inputting the voxel patches of the labeled medical image into the feature extraction model, until the feature extraction model converges.
In one embodiment, the support collective dice have a plurality of support collective dice, and the prototype vector determination module 606 is specifically configured to:
determine the number of support set voxel blocks; compute the vector sum of the voxel feature vectors of the plurality of support set voxel blocks; and calculate the ratio of the vector sum to the number of blocks to obtain the class prototype vector.
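The prototype computation above (vector sum divided by the block count, i.e. the element-wise mean of the support set feature vectors) can be sketched as follows; the function name is illustrative:

```python
def class_prototype(support_vectors):
    """Class prototype vector = (vector sum of the support set voxel
    feature vectors) / (number of support set voxel blocks), i.e. the
    element-wise mean of the support vectors."""
    n = len(support_vectors)
    dim = len(support_vectors[0])
    vector_sum = [sum(v[i] for v in support_vectors) for i in range(dim)]
    return [s / n for s in vector_sum]
```
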
In one embodiment, the vector distance determination module 608 is specifically configured to:
determine first feature space coordinates of the voxel feature vector of the prediction set voxel block; determine second feature space coordinates of the class prototype vector; and obtain the vector distance according to the coordinate distance between the first feature space coordinates and the second feature space coordinates.
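The coordinate distance above is the Euclidean distance between the two feature-space points, as used in the brain MRI embodiment below. A minimal sketch (function name illustrative):

```python
import math

def euclidean_distance(coord_a, coord_b):
    """Vector distance as the coordinate distance between the first
    feature space coordinates (feature vector) and the second feature
    space coordinates (class prototype vector)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(coord_a, coord_b)))
```
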
In one embodiment, the apparatus further comprises:
the image segmentation module is used for acquiring a medical image to be marked, dividing the medical image to be marked into voxel blocks, and inputting the voxel blocks into the trained feature extraction model to obtain the voxel feature vectors to be predicted output by the trained feature extraction model; determining a vector distance between each voxel feature vector to be predicted and each category prototype vector; determining a target category prototype vector according to these vector distances; and labeling the voxel blocks of the medical image to be marked with the voxel categories corresponding to their target category prototype vectors, and performing image segmentation on the medical image to be marked according to the labels.
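The inference rule above amounts to nearest-prototype labeling: an unlabeled voxel block receives the category whose prototype vector has the smallest vector distance to its feature vector. A minimal sketch (names illustrative):

```python
def label_block(feature_vector, prototypes):
    """Label a voxel block with the category whose prototype vector is
    nearest (minimum Euclidean vector distance) to its feature vector.

    `prototypes` maps category name -> category prototype vector.
    """
    def dist(proto):
        return sum((a - b) ** 2 for a, b in zip(feature_vector, proto)) ** 0.5
    return min(prototypes, key=lambda c: dist(prototypes[c]))
```
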
In one embodiment, the apparatus further comprises:
the category selection module is used for randomly selecting K target voxel categories from the plurality of voxel categories; K is greater than or equal to 1;
the sample set construction module is used for extracting the voxel blocks marked as the target voxel categories in the marked medical image to obtain K same-class block sample sets;
the support set construction module is used for selecting N support set voxel blocks from each of the K same-class block sample sets; N is greater than 1;
the prediction set construction module is used for selecting prediction set voxel blocks from the voxel blocks in the K same-class block sample sets other than the support set voxel blocks;
and the return module is used for returning to the step of randomly selecting K target voxel categories from the plurality of voxel categories until the trained feature extraction model is obtained.
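The episodic sampling performed by the category selection, sample set, support set, and prediction set construction modules can be sketched as follows; `build_episode` and the toy set contents are illustrative assumptions, not part of the patent:

```python
import random

def build_episode(sample_sets, k, n):
    """Build one training episode: randomly select K target categories,
    take N support set voxel blocks per category, and use the remaining
    blocks of those categories as the prediction set.

    `sample_sets` maps category name -> list of voxel blocks.
    Returns (support, prediction) dicts keyed by category.
    """
    classes = random.sample(list(sample_sets), k)  # K target categories
    support, prediction = {}, {}
    for c in classes:
        blocks = list(sample_sets[c])
        random.shuffle(blocks)
        support[c] = blocks[:n]       # N support set voxel blocks
        prediction[c] = blocks[n:]    # the rest form the prediction set
    return support, prediction
```
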
As shown in fig. 7, in one embodiment, there is provided a medical image segmentation apparatus including:
the image receiving module 702 is configured to receive a medical image uploaded by the terminal; the medical images comprise a marked medical image and a medical image to be marked; the marked medical image comprises a plurality of voxel blocks marked with voxel categories; the voxel blocks in the marked medical image comprise support set voxel blocks and prediction set voxel blocks;
The input module 704 is configured to divide the marked medical image into voxel blocks and input them into a feature extraction model, so as to obtain the voxel feature vectors output by the feature extraction model;
a prototype vector determining module 706, configured to obtain, according to the voxel feature vector of the support set voxel block, a class prototype vector corresponding to the voxel class of the support set voxel block;
a vector distance determination module 708 for determining a vector distance between a voxel feature vector of the prediction set voxel block and the class prototype vector;
the training module 710 is configured to train the feature extraction model according to the vector distance to obtain a trained feature extraction model;
the segmentation module 712 is configured to divide the medical image to be marked into voxel blocks and input them into the trained feature extraction model, so as to obtain the voxel feature vectors to be predicted output by the trained feature extraction model; determine a vector distance between each voxel feature vector to be predicted and the category prototype vectors, and determine a target category prototype vector according to these vector distances; label the voxel blocks of the medical image to be marked with the voxel categories corresponding to their target category prototype vectors, and perform image segmentation on the medical image to be marked according to the labels to obtain a segmented medical image;
And a feedback module 714, configured to send the segmented medical image to the terminal.
For specific limitations of the model training and medical image segmentation apparatus, reference may be made to the above limitations of the model training and medical image segmentation methods, which are not repeated here. The modules in the above model training and medical image segmentation apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can call and execute the operations corresponding to each module.
The model training and medical image segmentation device provided by the above can be used for executing the model training and medical image segmentation method provided by any embodiment, and has corresponding functions and beneficial effects.
Fig. 8 is a flowchart of a brain MRI image segmentation method according to an embodiment. As shown in the figure, the medical image segmentation method can specifically comprise the following steps:
S802, receiving an annotated brain MRI image and a plurality of brain MRI images to be annotated, uploaded by a terminal;
S804, extracting a plurality of voxel blocks from the annotated brain MRI image, and determining the brain tissue structure category of each extracted voxel block according to the annotations of the brain MRI image;
S806, constructing a complete training sample set D = {(x, y)} from the extracted voxel blocks;
S808, selecting voxel blocks of K brain tissue structure categories from the training sample set D = {(x, y)} to obtain K same-class block sample sets {N_1, ..., N_K};
S810, extracting N voxel blocks from each of the K same-class block sample sets to form support sets S_k;
S812, extracting the remaining voxel blocks in the K same-class block sample sets to form prediction sets Q_k;
S814, inputting the voxel blocks in the support sets and the prediction sets into a preset feature extraction model to obtain the feature vector of each voxel block;
S816, obtaining prototype vectors c_k of the brain tissue structure categories from the feature vectors of the voxel blocks in the support sets;
S818, calculating the Euclidean distance between the feature vector of each voxel block in the prediction sets and the prototype vector of each brain tissue structure category;
S820, obtaining, from these Euclidean distances, the probability that each voxel block in the prediction sets is predicted to belong to each brain tissue structure category;
S822, calculating a cross entropy loss value from the predicted probabilities and the actual brain tissue structure categories of the voxel blocks in the prediction sets;
S824, adjusting the weight values and bias values of the feature extraction model according to the cross entropy loss value to obtain an intermediate feature extraction model;
S826, when the intermediate feature extraction model converges, taking it as the trained feature extraction model;
S828, extracting a plurality of voxel blocks from a brain MRI image to be annotated, inputting them into the trained feature extraction model, and obtaining the feature vector of each voxel block output by the model;
S830, calculating the Euclidean distance between each output feature vector and the prototype vector of each brain tissue structure category;
S832, taking the prototype vector with the minimum Euclidean distance to the output feature vector as the target prototype vector;
S834, labeling each voxel block in the brain MRI image to be annotated with the brain tissue structure category corresponding to its target prototype vector;
S836, performing image segmentation on the brain MRI image to be annotated according to the labeling results of its voxel blocks, and returning the segmented brain MRI image to the terminal for display.
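Steps S810 through S822 can be sketched end to end as a single episode-loss computation. This is a plain-Python illustration: `embed` stands in for the feature extraction model, the data is a toy example, and all names are assumptions rather than the patent's implementation:

```python
import math

def episode_loss(support, prediction, embed):
    """Mean cross-entropy loss of one training episode: compute each
    class prototype as the mean of the embedded support blocks, then
    score prediction blocks with a softmax over negative Euclidean
    distances to the prototypes (smaller distance -> higher probability).
    """
    # Prototype c_k per category: element-wise mean of support embeddings.
    protos = {}
    for c, blocks in support.items():
        vecs = [embed(b) for b in blocks]
        dim = len(vecs[0])
        protos[c] = [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
    classes = list(protos)
    total, count = 0.0, 0
    for c, blocks in prediction.items():
        for b in blocks:
            f = embed(b)
            dists = [math.sqrt(sum((a - p) ** 2 for a, p in zip(f, protos[k])))
                     for k in classes]
            exp_d = [math.exp(-d) for d in dists]
            prob_actual = exp_d[classes.index(c)] / sum(exp_d)
            total += -math.log(prob_actual)  # cross entropy vs one-hot target
            count += 1
    return total / count
```

In the patent's method this loss would drive the adjustment of the feature extraction model's weights and biases (step S824); here `embed` is fixed only to keep the sketch self-contained.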
Fig. 9 is a schematic diagram of an application scenario of brain MRI image segmentation in one embodiment. As shown, the labeled brain MRI image 904 is obtained by the user manually labeling each voxel in the brain MRI image 902. The feature extraction model is trained using the voxel blocks labeled with voxel categories in the labeled brain MRI image 904 as training samples. After training is completed, the brain MRI image 906 to be labeled is input into the feature extraction model, which outputs the feature vector of each voxel block in the brain MRI image 906. The voxel category of each voxel block is determined and labeled according to the Euclidean distance between its feature vector and the prototype vectors of the different voxel categories, the brain MRI image 906 is segmented according to the labels, and the segmented brain MRI image 908 is output.
FIG. 10 illustrates an internal block diagram of a computer device in one embodiment. The computer device may specifically be the server 120 of fig. 1. As shown in fig. 10, the computer device includes a processor, a memory, a network interface, an input device, and a display screen connected by a system bus. The memory includes a nonvolatile storage medium and an internal memory. The nonvolatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement the model training and medical image segmentation methods. The internal memory may also store a computer program which, when executed by the processor, causes the processor to perform the model training and medical image segmentation methods. The display screen of the computer device may be a liquid crystal display or an electronic ink display. The input device of the computer device may be a touch layer covering the display screen; keys, a trackball, or a touchpad arranged on the housing of the computer device; or an external keyboard, touchpad, mouse, or the like.
It will be appreciated by those skilled in the art that the structure shown in FIG. 10 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, the model training and medical image segmentation apparatus provided by the present application may be implemented in the form of a computer program, which may be executed on a computer device as shown in fig. 10. The memory of the computer device may store the various program modules that make up the model training and medical image segmentation apparatus, such as the image acquisition module 602, the input module 604, the prototype vector determination module 606, the vector distance determination module 608, and the training module 610 shown in fig. 6. The computer program composed of these program modules causes the processor to execute the steps of the model training and medical image segmentation methods of the embodiments of the present application described in this specification.
For example, the computer device shown in fig. 10 may divide the marked medical image into voxel blocks through the input module 604 in the model training apparatus shown in fig. 6 and input the voxel blocks into a feature extraction model, so as to obtain the voxel feature vectors output by the feature extraction model.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the model training and medical image segmentation methods described above. These steps may be the steps of the model training and medical image segmentation methods of each of the embodiments above.
In one embodiment, a computer readable storage medium is provided, storing a computer program which, when executed by a processor, causes the processor to perform the steps of the model training and medical image segmentation methods described above. These steps may be the steps of the model training and medical image segmentation methods of each of the embodiments above.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware, where the program may be stored in a non-volatile computer readable storage medium, and where the program, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The nonvolatile memory can include Read Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM), among others.
The foregoing examples illustrate only a few embodiments of the application and are described in detail herein without thereby limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (18)

1. A model training method for medical image segmentation, comprising:
acquiring a marked medical image; the marked medical image includes a plurality of voxel blocks; the voxel blocks are marked with voxel categories; the plurality of voxel blocks comprise support set voxel blocks and prediction set voxel blocks; the support set voxel blocks are voxel blocks used for determining category prototype vectors of different voxel categories in the marked medical image; the prediction set voxel blocks are the voxel blocks in the marked medical image other than the support set voxel blocks;
dividing the marked medical image into voxel blocks, and inputting the voxel blocks into a feature extraction model to obtain voxel feature vectors output by the feature extraction model;
Obtaining a category prototype vector corresponding to the voxel category of the support set voxel block according to the voxel feature vector of the support set voxel block;
determining a vector distance between a voxel feature vector of the prediction set voxel block and the category prototype vector;
training the feature extraction model according to the vector distance to obtain a trained feature extraction model; the trained feature extraction model is used for image segmentation of the medical image to be marked.
2. The method of claim 1, wherein there are a plurality of vector distances, and training the feature extraction model according to the vector distances to obtain a trained feature extraction model comprises:
obtaining a sum of the exponential feature values of the vector distances;
obtaining a category probability distribution of the prediction set voxel block according to the ratio of the exponential feature value of each vector distance to the sum; the category probability distribution comprises a plurality of candidate voxel categories and their corresponding actual prediction probabilities;
and training the feature extraction model according to the category probability distribution to obtain the trained feature extraction model.
3. The method of claim 2, wherein the training the feature extraction model according to the class probability distribution to obtain the trained feature extraction model comprises:
determining an actual voxel class of the prediction set voxel block;
determining expected prediction probabilities of the candidate voxel categories in the category probability distribution according to the actual voxel categories;
obtaining a plurality of cross entropy loss values according to the actual prediction probability and the expected prediction probability of each candidate voxel class;
and adjusting model parameters of the feature extraction model according to the cross entropy loss values, returning to the step of dividing the marked medical image into voxel blocks and inputting them into the feature extraction model, until the feature extraction model converges.
4. The method of claim 1, wherein there are a plurality of support set voxel blocks, each having a voxel feature vector, and the obtaining a category prototype vector corresponding to the voxel category of the support set voxel blocks comprises:
determining the number of support set voxel blocks;
computing the vector sum of the voxel feature vectors of the plurality of support set voxel blocks;
and calculating the ratio of the vector sum to the number of blocks to obtain the category prototype vector.
5. The method of claim 1, wherein the determining a vector distance between a voxel feature vector of the prediction set voxel block and the class prototype vector comprises:
determining first feature space coordinates of voxel feature vectors of the prediction set voxel block;
determining second feature space coordinates of the class prototype vector;
and obtaining the vector distance according to the coordinate distance between the first feature space coordinate and the second feature space coordinate.
6. The method as recited in claim 1, further comprising:
obtaining a medical image to be marked, dividing the medical image to be marked into voxel blocks, and inputting the voxel blocks into the trained feature extraction model to obtain a voxel feature vector to be predicted output by the trained feature extraction model;
determining a vector distance between the voxel feature vector to be predicted and the class prototype vector;
determining a target category prototype vector according to the vector distance between the voxel feature vector to be predicted and the category prototype vector;
and labeling the voxel blocks of the medical image to be marked with the voxel categories corresponding to the target category prototype vectors, and performing image segmentation on the medical image to be marked according to the labels.
7. The method of claim 3, wherein after the acquiring the annotated medical image, the method further comprises:
randomly selecting K target voxel categories from the plurality of voxel categories; K is greater than or equal to 1;
extracting the voxel blocks marked as the target voxel categories in the marked medical image to obtain K same-class block sample sets;
selecting N support set voxel blocks from each of the K same-class block sample sets; N is greater than 1;
selecting the prediction set voxel blocks from the voxel blocks in the K same-class block sample sets other than the support set voxel blocks;
when the feature extraction model converges, the method further comprises:
and returning to the step of randomly selecting K target voxel categories from the voxel categories until the trained feature extraction model is obtained.
8. A medical image segmentation method, comprising:
receiving a medical image uploaded by a terminal; the medical images comprise a marked medical image and a medical image to be marked; the marked medical image comprises a plurality of voxel blocks marked with voxel categories; the voxel blocks in the marked medical image comprise support set voxel blocks and prediction set voxel blocks; the support set voxel blocks are voxel blocks used for determining category prototype vectors of different voxel categories in the marked medical image; the prediction set voxel blocks are the voxel blocks in the marked medical image other than the support set voxel blocks;
dividing the marked medical image into voxel blocks, and inputting the voxel blocks into a feature extraction model to obtain voxel feature vectors output by the feature extraction model;
obtaining a category prototype vector corresponding to the voxel category of the support set voxel block according to the voxel feature vector of the support set voxel block;
determining a vector distance between a voxel feature vector of the prediction set voxel block and the category prototype vector;
training the feature extraction model according to the vector distance to obtain a trained feature extraction model;
dividing the medical image to be marked into voxel blocks, and inputting the voxel blocks into the trained feature extraction model to obtain a voxel feature vector to be predicted output by the trained feature extraction model;
determining a vector distance between the voxel feature vector to be predicted and the class prototype vector, and determining a target class prototype vector according to the vector distance between the voxel feature vector to be predicted and the class prototype vector;
labeling the voxel blocks of the medical image to be marked with the voxel categories corresponding to the target category prototype vectors, and performing image segmentation on the medical image to be marked according to the labels to obtain a segmented medical image;
And sending the segmented medical image to the terminal.
9. A model training apparatus for medical image segmentation, comprising:
the image acquisition module is used for acquiring the marked medical image; the marked medical image includes a plurality of voxel blocks; the voxel blocks are marked with voxel categories; the plurality of voxel blocks comprise support set voxel blocks and prediction set voxel blocks; the support set voxel blocks are voxel blocks used for determining category prototype vectors of different voxel categories in the marked medical image; the prediction set voxel blocks are the voxel blocks in the marked medical image other than the support set voxel blocks;
the input module is used for dividing the marked medical image into voxel blocks and inputting the voxel blocks into the feature extraction model to obtain the voxel feature vectors output by the feature extraction model;
the prototype vector determining module is used for obtaining a category prototype vector corresponding to the voxel category of the support set voxel block according to the voxel characteristic vector of the support set voxel block;
the vector distance determining module is used for determining the vector distance between the voxel characteristic vector of the prediction set voxel block and the category prototype vector;
The training module is used for training the feature extraction model according to the vector distance to obtain a trained feature extraction model; the trained feature extraction model is used for image segmentation of the medical image to be marked.
10. The apparatus of claim 9, wherein there are a plurality of vector distances, and the training module is further configured to: obtain a sum of the exponential feature values of the vector distances; obtain a category probability distribution of the prediction set voxel block according to the ratio of the exponential feature value of each vector distance to the sum; the category probability distribution comprises a plurality of candidate voxel categories and their corresponding actual prediction probabilities; and train the feature extraction model according to the category probability distribution to obtain the trained feature extraction model.
11. The apparatus of claim 10, wherein the training module is further configured to: determine an actual voxel category of the prediction set voxel block; determine expected prediction probabilities of the candidate voxel categories in the category probability distribution according to the actual voxel category; obtain a plurality of cross entropy loss values according to the actual prediction probability and the expected prediction probability of each candidate voxel category; and adjust model parameters of the feature extraction model according to the cross entropy loss values, returning to the step of dividing the marked medical image into voxel blocks and inputting them into the feature extraction model, until the feature extraction model converges.
12. The apparatus of claim 9, wherein there are a plurality of support set voxel blocks, and the prototype vector determination module is further configured to: determine the number of support set voxel blocks; compute the vector sum of the voxel feature vectors of the plurality of support set voxel blocks; and calculate the ratio of the vector sum to the number of blocks to obtain the category prototype vector.
13. The apparatus of claim 9, wherein the vector distance determination module is further to: determining first feature space coordinates of voxel feature vectors of the prediction set voxel block; determining second feature space coordinates of the class prototype vector; and obtaining the vector distance according to the coordinate distance between the first feature space coordinate and the second feature space coordinate.
14. The device according to claim 9, wherein the model training device for medical image segmentation further comprises an image segmentation module, configured to obtain a medical image to be labeled, and input a voxel block of the medical image to be labeled to the trained feature extraction model, so as to obtain a voxel feature vector to be predicted output by the trained feature extraction model; determining a vector distance between the voxel feature vector to be predicted and the class prototype vector; determining a target category prototype vector according to the vector distance between the voxel feature vector to be predicted and the category prototype vector; and labeling the voxel cut blocks of the medical image to be labeled as voxel categories corresponding to the target category prototype vectors, and performing image segmentation on the labeled medical image to be labeled.
15. The apparatus of claim 9, wherein the model training apparatus for medical image segmentation further comprises: a category selection module for randomly selecting K target voxel categories from the plurality of voxel categories, K being greater than or equal to 1; a sample set construction module for extracting the voxel blocks marked as the target voxel categories in the marked medical image to obtain K same-class block sample sets; a support set construction module for selecting N support set voxel blocks from each of the K same-class block sample sets, N being greater than 1; a prediction set construction module for selecting prediction set voxel blocks from the voxel blocks in the K same-class block sample sets other than the support set voxel blocks; and a return module for returning to the step of randomly selecting K target voxel categories from the plurality of voxel categories until the trained feature extraction model is obtained.
16. A medical image segmentation apparatus, comprising:
the image receiving module is used for receiving a medical image uploaded by the terminal; the medical images comprise a marked medical image and a medical image to be marked; the marked medical image comprises a plurality of voxel blocks marked with voxel categories; the voxel blocks in the marked medical image comprise support set voxel blocks and prediction set voxel blocks; the support set voxel blocks are voxel blocks used for determining category prototype vectors of different voxel categories in the marked medical image; the prediction set voxel blocks are the voxel blocks in the marked medical image other than the support set voxel blocks;
The input module is used for dividing the marked medical image into voxel blocks and inputting the voxel blocks into the feature extraction model to obtain the voxel feature vectors output by the feature extraction model;
the prototype vector determining module is used for obtaining a category prototype vector corresponding to the voxel category of the support set voxel block according to the voxel characteristic vector of the support set voxel block;
the vector distance determining module is used for determining the vector distance between the voxel characteristic vector of the prediction set voxel block and the category prototype vector;
the training module is used for training the feature extraction model according to the vector distance to obtain a trained feature extraction model;
the segmentation module is used for dividing the medical image to be marked into voxel blocks and inputting the voxel blocks into the trained feature extraction model to obtain the voxel feature vector to be predicted output by the trained feature extraction model; determining a vector distance between the voxel feature vector to be predicted and the category prototype vectors, and determining a target category prototype vector according to these vector distances; labeling the voxel blocks of the medical image to be marked with the voxel categories corresponding to the target category prototype vectors, and performing image segmentation on the medical image to be marked according to the labels to obtain a segmented medical image;
And the feedback module is used for sending the segmented medical image to the terminal.
17. A computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the method of any one of claims 1 to 8.
18. A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the method of any of claims 1 to 8.
CN202010001039.0A 2020-01-02 2020-01-02 Model training and medical image segmentation method and device Active CN111192269B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010001039.0A CN111192269B (en) 2020-01-02 2020-01-02 Model training and medical image segmentation method and device

Publications (2)

Publication Number Publication Date
CN111192269A CN111192269A (en) 2020-05-22
CN111192269B true CN111192269B (en) 2023-08-22

Family

ID=70709672

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010001039.0A Active CN111192269B (en) 2020-01-02 2020-01-02 Model training and medical image segmentation method and device

Country Status (1)

Country Link
CN (1) CN111192269B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111429460B (en) * 2020-06-12 2020-09-22 Tencent Technology (Shenzhen) Co., Ltd. Image segmentation method, image segmentation model training method, device and storage medium
CN111815764B (en) * 2020-07-21 2022-07-05 西北工业大学 Ultrasonic three-dimensional reconstruction method based on self-supervision 3D full convolution neural network
CN112150471B (en) * 2020-09-23 2023-09-05 创新奇智(上海)科技有限公司 Semantic segmentation method and device based on few samples, electronic equipment and storage medium
CN112561921B (en) * 2020-11-10 2024-07-26 联想(北京)有限公司 Image segmentation method and device
CN112950774A (en) * 2021-04-13 2021-06-11 复旦大学附属眼耳鼻喉科医院 Three-dimensional modeling device, operation planning system and teaching system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109410185A (en) * 2018-10-10 2019-03-01 Tencent Technology (Shenzhen) Co., Ltd. Image segmentation method, device and storage medium
CN109872333A (en) * 2019-02-20 2019-06-11 Tencent Technology (Shenzhen) Co., Ltd. Medical image segmentation method, device, computer equipment and storage medium
CN110148192A (en) * 2019-04-18 2019-08-20 Shanghai United Imaging Intelligence Co., Ltd. Medical image imaging method, device, computer equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11024025B2 (en) * 2018-03-07 2021-06-01 University Of Virginia Patent Foundation Automatic quantification of cardiac MRI for hypertrophic cardiomyopathy

Similar Documents

Publication Publication Date Title
CN111192269B (en) Model training and medical image segmentation method and device
CN111429460B (en) Image segmentation method, image segmentation model training method, device and storage medium
CN110321920A (en) Image classification method, device, computer readable storage medium and computer equipment
Cheng et al. A data-driven point cloud simplification framework for city-scale image-based localization
CN111291813B (en) Image labeling method, device, computer equipment and storage medium
CN102629376A (en) Image registration
US20220253977A1 (en) Method and device of super-resolution reconstruction, computer device and storage medium
US20240148321A1 (en) Predicting Body Composition from User Images Using Deep Learning Networks
CN111507285A (en) Face attribute recognition method and device, computer equipment and storage medium
CN115861248A (en) Medical image segmentation method, medical model training method, medical image segmentation device and storage medium
CN109410189B (en) Image segmentation method, and image similarity calculation method and device
CN112102235B (en) Human body part recognition method, computer device, and storage medium
CN111091010A (en) Similarity determination method, similarity determination device, network training device, network searching device and storage medium
CN113254687B (en) Image retrieval and image quantification model training method, device and storage medium
CN111210465A (en) Image registration method and device, computer equipment and readable storage medium
CN115272250B (en) Method, apparatus, computer device and storage medium for determining focus position
CN116797607A (en) Image segmentation method and device
US20240119750A1 (en) Method of generating language feature extraction model, information processing apparatus, information processing method, and program
CN111191065B (en) Homologous image determining method and device
CN110688516A (en) Image retrieval method, image retrieval device, computer equipment and storage medium
CN113962990B (en) Chest CT image recognition method and device, computer equipment and storage medium
CN112801908B (en) Image denoising method and device, computer equipment and storage medium
CN112669450B (en) Human body model construction method and personalized human body model construction method
CN114241198A (en) Method, device, equipment and storage medium for obtaining local imagery omics characteristics
CN112802012A (en) Pathological image detection method, pathological image detection device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant