CN115131361A - Training of target segmentation model, lesion segmentation method and device

Training of target segmentation model, lesion segmentation method and device

Info

Publication number
CN115131361A
Authority
CN
China
Prior art keywords
sample
image
training
sub
model
Prior art date
Legal status
Pending
Application number
CN202211068494.8A
Other languages
Chinese (zh)
Inventor
尹芳
邓小宁
马杰
郭鹏
Current Assignee
North Health Medical Big Data Technology Co ltd
Original Assignee
North Health Medical Big Data Technology Co ltd
Priority date
Filing date
Publication date
Application filed by North Health Medical Big Data Technology Co ltd filed Critical North Health Medical Big Data Technology Co ltd
Priority to CN202211068494.8A
Publication of CN115131361A
Legal status: Pending

Classifications

    • G06T 7/0012 — Image analysis; inspection of images; biomedical image inspection
    • G06N 3/08 — Computing arrangements based on biological models; neural networks; learning methods
    • G06V 10/26 — Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region
    • G06V 10/40 — Extraction of image or video features
    • G06V 10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/82 — Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06T 2207/20081 — Training; Learning
    • G06T 2207/20084 — Artificial neural networks [ANN]


Abstract

The invention provides a method and an apparatus for training a target segmentation model and for lesion segmentation, relating to the technical field of artificial intelligence. The training method comprises: pre-training an initial model with self-supervised contrastive learning based on a sample feature vector group to obtain a pre-trained self-supervised model; and performing supervised training on the pre-trained self-supervised model based on a sample image and a target segmentation result of the sample image to obtain a trained target segmentation model for target segmentation of an input image, where the sample feature vector group is obtained from the sample image. The method and apparatus can obtain a pixel-level sample feature vector group with a large data volume from a small number of sample images and, through pixel-level self-supervised contrastive learning followed by supervised training, can improve the training efficiency of the target segmentation model without reducing its segmentation accuracy.

Description

Training of target segmentation model, lesion segmentation method and device
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a method and an apparatus for training a target segmentation model and for lesion segmentation.
Background
With the continuous development of artificial intelligence technology, the target segmentation of images based on artificial intelligence technology has become a research hotspot.
In the prior art, a target segmentation model can be trained based on supervised learning, and then a target in an image can be segmented based on the trained target segmentation model. The segmentation accuracy of the target segmentation model obtained based on supervised learning training is positively correlated with the data volume of the labeled training data.
However, labeling the training data after it has been acquired demands substantial labor and time, so model training efficiency is low. With the advent of self-supervised learning methods, model training no longer has to rely entirely on labeled training data. However, because the segmentation of medical images (e.g., lesion segmentation) is more complicated than the classification of medical images or other target segmentation problems, the accuracy of medical image segmentation based on prior-art self-supervised learning methods is not high. Therefore, how to improve the training efficiency of a target segmentation model without reducing its segmentation accuracy is a technical problem to be solved in the field.
Disclosure of Invention
The invention provides a method and an apparatus for training a target segmentation model and for lesion segmentation, which overcome the defect of low training efficiency of target segmentation models in the prior art and improve training efficiency without reducing segmentation accuracy.
The invention provides a training method of a target segmentation model, which comprises the following steps:
pre-training an initial model by using self-supervised contrastive learning based on a sample feature vector group, to obtain a pre-trained self-supervised model;
carrying out supervised training on the pre-trained self-supervised model based on a sample image and a target segmentation result of the sample image to obtain a trained target segmentation model for carrying out target segmentation on an input image;
wherein the group of sample feature vectors is obtained based on the sample image.
According to the training method of the target segmentation model provided by the invention, the sample feature vector group is obtained based on the following steps:
grid division is carried out on a sample image, and the sample image is divided into a plurality of sample sub-images;
carrying out random data enhancement on each sample sub-image to obtain a derivative image of each sample sub-image;
performing feature extraction on each sample sub-image and the derivative image of each sample sub-image to obtain a sample feature image set;
and performing grid division on each sample characteristic image in the sample characteristic image set, dividing each sample characteristic image into a plurality of sample characteristic sub-images, and further acquiring the sample characteristic vector group formed by each sample characteristic sub-image.
According to the training method of the target segmentation model provided by the invention, pre-training the initial model with self-supervised contrastive learning based on the sample feature vector group to obtain the pre-trained self-supervised model comprises the following steps:
determining a positive sample pair and negative samples in the sample feature vector group based on the mapping sample image of the sample feature sub-image and the mapping position of the sample feature sub-image in the mapping sample image;
pre-training the initial model with self-supervised contrastive learning based on the positive sample pair and the negative samples, to obtain the pre-trained self-supervised model;
wherein the mapping sample image is a sample image that has a mapping relationship with the sample feature sub-image.
According to the training method of the target segmentation model provided by the invention, the determining of the positive sample pairs and the negative samples in the sample feature vector group based on the mapping sample image of the sample feature sub-image and the mapping position of the sample feature sub-image in the mapping sample image comprises:
and under the condition that the mapping sample images of any two sample feature sub-images in the sample feature vector group are the same and the mapping positions of the any two sample feature sub-images in the mapping sample images of any two sample feature sub-images are the same, determining the any two sample feature sub-images as a positive sample pair, and determining each sample feature sub-image except the any two sample feature sub-images in the sample feature vector group as a negative sample.
According to the training method of the target segmentation model provided by the invention, the sample feature vector group is obtained based on the following steps:
carrying out random data enhancement on the sample image to obtain a derivative image of the sample image;
performing feature extraction on the sample image and a derivative image of the sample image to obtain a sample feature image set;
and performing grid division on each sample characteristic image in the sample characteristic image set, dividing each sample characteristic image into a plurality of sample characteristic sub-images, and further acquiring the sample characteristic vector group formed by each sample characteristic sub-image.
According to the training method of the target segmentation model provided by the invention, the sample image is a medical image, and the target segmentation result of the sample image is a lesion segmentation result of the sample image;
accordingly, the trained target segmentation model can be used for lesion segmentation of the input image.
The invention also provides a lesion segmentation method, which comprises the following steps:
acquiring a target medical image;
inputting the target medical image into a lesion segmentation model, and acquiring a lesion segmentation result of the target medical image output by the lesion segmentation model;
the lesion segmentation model is obtained by training based on the training method of the target segmentation model.
The invention also provides a training device of the target segmentation model, which comprises:
the pre-training module is used for pre-training an initial model by using self-supervised contrastive learning based on a sample feature vector group, to obtain a pre-trained self-supervised model;
the supervised training module is used for carrying out supervised training on the pre-trained self-supervised model based on a sample image and a target segmentation result of the sample image to obtain a trained target segmentation model for carrying out target segmentation on an input image;
wherein the group of sample feature vectors is obtained based on the sample image.
The present invention also provides a lesion segmentation apparatus, comprising:
the image acquisition module is used for acquiring a target medical image;
the lesion segmentation module is used for inputting the target medical image into a lesion segmentation model and acquiring a lesion segmentation result of the target medical image output by the lesion segmentation model;
wherein the lesion segmentation model is obtained by training based on the above training method of the target segmentation model.
The present invention also provides an electronic device, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements a training method for the target segmentation model as described in any one of the above and/or the lesion segmentation method when executing the program.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of training a target segmentation model as described in any of the above, and/or a method of segmentation of a lesion.
The invention provides a method and an apparatus for training a target segmentation model and for lesion segmentation. An initial model is pre-trained with self-supervised contrastive learning based on a sample feature vector group to obtain a pre-trained self-supervised model; the pre-trained self-supervised model is then supervised-trained based on a sample image and a target segmentation result of the sample image to obtain a trained lesion segmentation model, the sample feature vector group being obtained from the sample image. Because a pixel-level sample feature vector group with a large data volume can be obtained from a small number of sample images, the labor and time costs of acquiring and labeling sample images are reduced; and through pixel-level self-supervised contrastive learning followed by supervised training, the training efficiency of the target segmentation model is improved without reducing its segmentation accuracy.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a training method of a target segmentation model provided by the present invention;
FIG. 2 is a second flowchart illustrating a method for training a target segmentation model according to the present invention;
FIG. 3 is a schematic flow chart of a lesion segmentation method according to the present invention;
FIG. 4 is a schematic structural diagram of a training apparatus for a target segmentation model provided in the present invention;
FIG. 5 is a schematic structural diagram of a lesion segmentation apparatus according to the present invention;
fig. 6 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a mechanical or an electrical connection; as a direct connection or an indirect connection through an intermediate medium; or as internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
It should be noted that, with the continuous development of the artificial intelligence technology, the focus segmentation of the medical image based on the artificial intelligence technology has become a research hotspot.
In general, the lesion segmentation model may be trained based on supervised learning, and then the lesion in the medical image may be segmented based on the trained lesion segmentation model. The segmentation accuracy of the focus segmentation model obtained based on supervised learning training is positively correlated with the data volume of the labeled training data.
However, in the medical field, training data is not easy to obtain, and after the training data is obtained, a doctor with rich experience is required to label the training data, so that the required labor cost and time cost are high, and the model training efficiency is low. Therefore, how to improve the training efficiency of the lesion segmentation model on the basis of not reducing the segmentation accuracy of the lesion segmentation model is a technical problem to be solved in the field.
Approaches based on pre-trained models rely on the large amount of information contained in the pre-trained model, can effectively shorten the time needed to establish a model, and enable a target model to be built rapidly. For example, with the BERT model (Bidirectional Encoder Representations from Transformers) in medical natural language processing, adopting a pre-trained model allows high recognition accuracy to be obtained with only a small amount of labeled training data. Compared with other technical fields, applying pre-trained models in the medical field allows models to be trained without relying entirely on labeled training data.
With the advent of self-supervised learning methods, pre-trained models can be made to not fully rely on labeled training data. However, most of the conventional self-supervised learning methods are designed to solve the problem of medical image classification (e.g. detection and/or classification of lesions), and the problem of medical image segmentation (e.g. segmentation of lesions) is more complicated than the problem of medical image classification, so that it is difficult to directly use a pre-training model to realize the segmentation of medical images.
Moreover, lesion segmentation in medical images is usually pixel-level. Unlike lesion detection (where the lesion position is marked with a rectangular box or the like), lesion segmentation must predict at the pixel level: when labeling training data, the lesion region has to be painted pixel by pixel, and the trained lesion segmentation model must likewise predict at the pixel level, so labeling training data requires even more labor and time.
In view of the above, the present invention provides a training method for a target segmentation model. The method improves the effect of the pre-trained model on the target segmentation task through pixel-level self-supervised learning, and then performs supervised training on the pre-trained model with a small amount of labeled training data; the segmentation accuracy of the trained target segmentation model is thereby further improved, and the training efficiency of the target segmentation model is improved without reducing its segmentation accuracy.
Fig. 1 is a schematic flowchart of a training method of a target segmentation model according to the present invention. The training method of the target segmentation model of the present invention is described below with reference to fig. 1. As shown in fig. 1, the method includes: step 101, pre-training an initial model by using self-supervised contrastive learning based on a sample feature vector group to obtain a pre-trained self-supervised model.
Wherein the sample feature vector group is obtained based on the sample image.
The embodiments of the present invention are executed by a training apparatus for the target segmentation model.
Specifically, the initial model in the embodiment of the present invention may be built on a common self-supervised learning framework, for example the MoCo or SimCLR framework. MoCo (Momentum Contrast) is a self-supervised contrastive learning framework that maintains a queue of negative samples together with a momentum-updated key encoder, so that a large and consistent dictionary of negatives can be used during training. SimCLR is a simple framework for contrastive learning of visual representations; it achieves excellent performance with a plain architecture and requires neither a specialized network structure nor a memory bank.
The sample image may be a generic image acquired based on an image acquisition device.
The sample image may also be an unlabeled medical image acquired in advance by a medical imaging system. The medical imaging system may include, but is not limited to, a projection X-ray imaging system, an X-ray computed tomography imaging system, a radionuclide imaging system, an ultrasound imaging system, a magnetic resonance imaging system, and the like. Accordingly, the sample image may include, but is not limited to, a fundus image, a pathology image, and the like.
It should be noted that the number of sample images is plural in the embodiment of the present invention. The specific number of sample images is not limited in the embodiment of the present invention.
It should be noted that the target in the embodiment of the present invention may be determined according to actual situations, for example: the target may be a lesion, a human body, or other object. The above object is not particularly limited in the embodiments of the present invention.
The target segmentation result of the sample image can be obtained by manually labeling the sample image.
Based on the content of the above embodiments, the sample image is a medical image, and the target segmentation result of the sample image is the lesion segmentation result of the sample image;
accordingly, the trained target segmentation model can be used for lesion segmentation of the input image.
In the following, a method for training a target segmentation model according to the present invention will be described by taking a lesion segmentation model for performing lesion segmentation on an input image as an example.
Based on the sample image, a sample feature vector group can be obtained by image processing, feature extraction and other methods. The embodiment of the present invention does not limit the specific manner of obtaining the sample feature vector group based on the sample image.
Based on the content of the foregoing embodiments, the sample feature vector group is obtained based on the following steps: and grid division is carried out on the sample image, and the sample image is divided into a plurality of sample sub-images.
FIG. 2 is a second flowchart illustrating a method for training a target segmentation model according to the present invention. As shown in FIG. 2, after training begins, unlabeled sample images may be acquired first.
Alternatively, a plurality of unlabeled sample images may be determined in historical medical images that have been acquired by the medical imaging system.
In the embodiment of the present invention, each sample image may be uniformly grid-divided based on a first preset parameter, and each sample image may be divided into a plurality of sample sub-images.
Wherein the first preset parameter may be determined based on a priori knowledge and/or actual conditions. The specific value of the first preset parameter is not limited in the embodiment of the invention.
Alternatively, the first preset parameter may be 9 × 9. Accordingly, each sample image may be uniformly grid-divided based on the first preset parameter, and each sample image may be divided into 9 × 9 sample sub-images.
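For illustration only, the uniform grid division described above might be sketched as follows in PyTorch; the function name, the tensor layout, and the 288 × 288 input size are assumptions made for the sketch, not values fixed by the embodiment:

```python
import torch

def grid_divide(image: torch.Tensor, grid: int = 9) -> torch.Tensor:
    """Uniformly divide a (C, H, W) image into grid x grid sample sub-images.

    Returns a tensor of shape (grid * grid, C, H // grid, W // grid); H and W
    are assumed divisible by `grid` (e.g. after resizing).
    """
    c, h, w = image.shape
    ph, pw = h // grid, w // grid
    # carve non-overlapping patches along H and W, then flatten the grid axes
    patches = image.unfold(1, ph, ph).unfold(2, pw, pw)  # (C, grid, grid, ph, pw)
    return patches.permute(1, 2, 0, 3, 4).reshape(grid * grid, c, ph, pw)

# a 3 x 288 x 288 sample image yields 9 x 9 = 81 sub-images of 3 x 32 x 32
sub_images = grid_divide(torch.rand(3, 288, 288), grid=9)
```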
It should be noted that each sample sub-image obtained by grid-dividing any sample image has a mapping relationship with the sample image.
And carrying out random data enhancement on each sample sub-image to obtain a derivative image of each sample sub-image.
Specifically, after dividing the sample image into 9 × 9 sample sub-images, random data enhancement may be performed on each sample sub-image, and the image obtained by random data enhancement of each sample sub-image may be used as a derivative image of that sample sub-image.
Optionally, the random data enhancement in the embodiment of the present invention may include at least one of color transformation, random noise addition, mixup-like enhancement, and cutmix data enhancement.
Mixup-type enhancement refers to a data enhancement mode in which two random samples are mixed in proportion and their classification labels are distributed in the same proportion; cutmix data enhancement refers to a data enhancement mode in which a region of an image is removed and filled with the pixel values of the corresponding region of another image in the training set, with classification labels distributed in proportion to the mixed areas.
It should be noted that the number of the derived images of any sample sub-image may be one or more.
Alternatively, the number of derived images of any sample sub-image may be determined based on a second preset parameter. The second preset parameter may be 64 in the embodiment of the present invention, and accordingly, the number of the derived images of any sample sub-image may be 64.
It should be noted that when random data enhancement is performed on each sample sub-image that has a mapping relationship with a sample image, the resulting derivative image of each sample sub-image also has a mapping relationship with that sample image.
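For illustration, the random data enhancement step might be sketched as follows; the mixing ratio, noise scale, and patch size are illustrative assumptions, not parameters fixed by the embodiment:

```python
import torch

def mixup(x: torch.Tensor, y: torch.Tensor, lam: float) -> torch.Tensor:
    """Mixup-type enhancement: mix two samples in proportion lam : (1 - lam)."""
    return lam * x + (1.0 - lam) * y

def cutmix(x: torch.Tensor, y: torch.Tensor, size: int = 8) -> torch.Tensor:
    """Cutmix enhancement: fill a removed square region of x with the pixel
    values of the same region taken from another image y."""
    out = x.clone()
    _, h, w = x.shape
    top = int(torch.randint(0, h - size + 1, (1,)))
    left = int(torch.randint(0, w - size + 1, (1,)))
    out[:, top:top + size, left:left + size] = y[:, top:top + size, left:left + size]
    return out

def random_enhance(sub: torch.Tensor, other: torch.Tensor) -> torch.Tensor:
    """Apply one randomly chosen enhancement to a sample sub-image."""
    choice = int(torch.randint(0, 4, (1,)))
    if choice == 0:                                      # color transformation
        return (sub * torch.empty(3, 1, 1).uniform_(0.8, 1.2)).clamp(0, 1)
    if choice == 1:                                      # additive random noise
        return (sub + 0.05 * torch.randn_like(sub)).clamp(0, 1)
    if choice == 2:                                      # mixup-type enhancement
        return mixup(sub, other, lam=0.6 + 0.4 * float(torch.rand(1)))
    return cutmix(sub, other)                            # cutmix enhancement

# 64 derivative images per sub-image (the second preset parameter)
sub, other = torch.rand(3, 32, 32), torch.rand(3, 32, 32)
derivatives = [random_enhance(sub, other) for _ in range(64)]
```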
And performing feature extraction on each sample sub-image and the derivative image of each sample sub-image to obtain a sample feature image set.
Specifically, after the derivative image of each sample sub-image is obtained, each sample sub-image and the derivative image of each sample sub-image may be input to the feature extractor, and a sample feature image set composed of sample feature images output by the feature extractor may be obtained.
It should be noted that the feature extractor may perform feature extraction on each sample sub-image and the derivative image of each sample sub-image, and further may acquire a feature map of each sample sub-image and output the feature map as a sample feature image, and acquire a feature map of the derivative image of each sample sub-image and output the feature map as a sample feature image.
Alternatively, the feature extractor may be a trained Deep residual network (ResNet) or VGG network model.
It should be noted that feature extraction is performed on each sample sub-image corresponding to any sample image and a derivative image of each sample sub-image, and each obtained sample feature image has a mapping relationship with the sample image.
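A sketch of the feature-extraction step using a trained torchvision ResNet backbone (torchvision ≥ 0.13 API); the cut point of the network, the choice of resnet18, and resizing sub-images to 288 × 288 so that each feature map is 9 × 9 are all assumptions — the embodiment only states that a trained ResNet or VGG model may serve as the feature extractor:

```python
import torch
from torchvision.models import resnet18

# keep the backbone up to the last convolutional stage, so the output is a
# spatial feature map rather than a classification vector
backbone = torch.nn.Sequential(*list(resnet18(weights="DEFAULT").children())[:-2])
backbone.eval()

@torch.no_grad()
def extract_feature_images(batch: torch.Tensor) -> torch.Tensor:
    """Map a batch (N, 3, H, W) of sub-images and their derivative images to
    sample feature images; resnet18 downsamples 32x, so 288 x 288 inputs give
    9 x 9 feature maps, whose 9 x 9 grid cells are pixel-level feature vectors."""
    return backbone(batch)  # (N, 512, H / 32, W / 32)

# one sub-image plus its 64 derivative images -> 65 sample feature images
feature_images = extract_feature_images(torch.rand(65, 3, 288, 288))
```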
And performing grid division on each sample characteristic image in the sample characteristic image set, dividing each sample characteristic image into a plurality of sample characteristic sub-images, and further acquiring a sample characteristic vector group formed by each sample characteristic sub-image.
Specifically, after the sample feature image set is acquired, for each sample feature image in the sample feature image set, grid division may be performed on each sample feature image based on a third preset parameter, so as to divide each sample feature image into a plurality of sample feature sub-images.
Wherein the third preset parameter may be determined based on a priori knowledge and/or actual conditions. The specific value of the third preset parameter is not limited in the embodiment of the invention.
Alternatively, the third preset parameter may be 9 × 9. Accordingly, the each sample feature image may be uniformly grid-divided based on the third preset parameter, and the each sample feature image may be divided into 9 × 9 sample feature sub-images.
After each sample feature image is divided into a plurality of sample feature sub-images, a sample feature vector group composed of all the sample feature sub-images can be obtained. The features in each sample feature image in the sample feature vector group described above may reach the pixel level.
It should be noted that, each sample feature sub-image obtained by grid-dividing each sample feature image having a mapping relationship with any sample image has a mapping relationship with the sample image, and the sample image may be referred to as a mapping sample image of each sample feature sub-image.
In the embodiment of the invention, 9 × 9 sample feature images can be obtained from the sample sub-images of each sample image, and each sample feature image can be divided into 9 × 9, i.e. 81, sample feature sub-images. A sample feature vector group with a large data volume can therefore be obtained efficiently and accurately from a small number of sample images. Pre-training the initial model with self-supervised contrastive learning on this sample feature vector group improves pre-training efficiency and the accuracy of the resulting pre-trained self-supervised model, and thereby further improves the segmentation accuracy of the trained target segmentation model.
Based on the content of each embodiment, pre-training the initial model by using self-supervised contrastive learning based on the sample feature vector group to obtain a pre-trained self-supervised model includes: determining a positive sample pair and negative samples in the sample feature vector group based on the mapping sample image of the sample feature sub-image and the mapping position of the sample feature sub-image in the mapping sample image.
Here, the mapping sample image is the sample image that has a mapping relationship with the sample feature sub-image.
Specifically, the mapping position of any sample feature sub-image in its mapping sample image may be determined from the position of that sample feature sub-image within the sample feature image having a mapping relationship with it, together with the position, within the sample image, of the sample sub-image having a mapping relationship with that sample feature image.
For example: suppose the position identifier of a sample feature sub-image within the sample feature image having a mapping relationship with it is (1,1), that sample feature image was obtained from a derivative image of a certain sample sub-image, and the position identifier of that sample sub-image within the sample image having a mapping relationship with it is (5,5); the mapping position of the sample feature sub-image in its mapping sample image can then be determined from these two position identifiers.
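The combination of the two position identifiers can be made concrete with a small helper; the row-major index convention below is an assumption made for illustration:

```python
def mapping_position(sub_image_pos: tuple, feature_sub_pos: tuple, grid: int = 9) -> tuple:
    """Combine the grid cell of the sample sub-image (within the sample image)
    with the grid cell of the feature sub-image (within its sample feature
    image) into one global mapping position in the mapping sample image."""
    (sr, sc), (fr, fc) = sub_image_pos, feature_sub_pos
    return (sr * grid + fr, sc * grid + fc)

# the worked example above: feature sub-image at (1, 1), sample sub-image at (5, 5)
print(mapping_position((5, 5), (1, 1)))  # -> (46, 46), one of 81 x 81 positions
```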
Based on the mapping sample image of the sample feature sub-image and the mapping position of the sample feature sub-image in the mapping sample image, condition judgment can be performed, and a positive sample pair and a negative sample can be determined in the sample feature vector group based on the condition judgment result.
Based on the content of the foregoing embodiments, determining a positive sample pair and negative samples in the sample feature vector group based on the mapping sample image of each sample feature sub-image and its mapping position in that mapping sample image includes: if the mapping sample images of any two sample feature sub-images in the sample feature vector group are the same, and the mapping positions of those two sample feature sub-images in that mapping sample image are the same, determining the two sample feature sub-images as a positive sample pair, and determining every other sample feature sub-image in the sample feature vector group as a negative sample.
Specifically, two sample feature sub-images in the sample feature vector group with the same mapping sample image and the same mapping position may be determined as a positive sample pair, and each sample feature sub-image in the sample feature vector group other than those two may be determined as a negative sample.
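A sketch of this rule, assuming each sample feature sub-image is stored together with an identifier of its mapping sample image and its mapping position (this data structure is an assumption, not part of the original disclosure's wording):

```python
from dataclasses import dataclass
import torch

@dataclass
class FeatureSub:
    vector: torch.Tensor       # flattened sample feature sub-image
    image_id: int              # identifier of the mapping sample image
    position: tuple            # mapping position within the mapping sample image

def is_positive_pair(a: FeatureSub, b: FeatureSub) -> bool:
    """Positive pair: same mapping sample image and same mapping position,
    i.e. the same region seen through different derivative images."""
    return a.image_id == b.image_id and a.position == b.position

def negatives_for(group: list, a: FeatureSub, b: FeatureSub) -> list:
    """Negative samples: every feature sub-image in the group except the pair."""
    return [s for s in group if s is not a and s is not b]
```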
Then, the initial model is pre-trained by using self-supervised contrastive learning based on the positive sample pair and the negative samples, to obtain a pre-trained self-supervised model.
Specifically, after the positive sample pair and the negative samples are determined in the sample feature vector group, the initial model may be pre-trained with self-supervised contrastive learning based on them, so that a pre-trained self-supervised model is obtained.
And 102, performing supervised training on the pre-trained self-supervised model based on the sample image and the target segmentation result of the sample image to obtain a trained target segmentation model for performing target segmentation on the input image.
Specifically, as shown in fig. 2, after the pre-trained self-supervised model is obtained, it may be trained in a supervised manner, using the sample image as the sample and the lesion segmentation result of the sample image as the sample label, so as to obtain a trained lesion segmentation model.
After the trained lesion segmentation model is obtained, the image can be input into the trained lesion segmentation model, and the trained lesion segmentation model can segment a lesion in the input image, so that a lesion segmentation result of the input image output by the trained lesion segmentation model can be obtained.
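A hedged sketch of this supervised stage: a pixel-wise segmentation head is attached to the pre-trained encoder and trained on (sample image, lesion mask) pairs. The 1 × 1 convolution head, the cross-entropy loss, and the two-class setting are illustrative assumptions; the embodiment does not fix them:

```python
import torch
from torch import nn

class LesionSegmenter(nn.Module):
    def __init__(self, pretrained_encoder: nn.Module, feat_dim: int = 512, classes: int = 2):
        super().__init__()
        self.encoder = pretrained_encoder              # from self-supervised pre-training
        self.head = nn.Conv2d(feat_dim, classes, 1)    # 1x1 conv: per-pixel prediction

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.head(self.encoder(x))            # (N, classes, h, w)
        # upsample to the input resolution for pixel-level lesion segmentation
        return nn.functional.interpolate(
            logits, size=x.shape[-2:], mode="bilinear", align_corners=False)

def supervised_step(model, images, masks, optimizer):
    """One step on a batch: images (N, 3, H, W), masks (N, H, W) of class ids."""
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(images), masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```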
In the embodiment of the invention, an initial model is pre-trained with self-supervised contrastive learning based on a sample feature vector group to obtain a pre-trained self-supervised model, and the pre-trained self-supervised model is then supervised-trained based on a sample image and a target segmentation result of the sample image to obtain a well-trained lesion segmentation model, the sample feature vector group being obtained from the sample image. A pixel-level sample feature vector group with a large data volume can thus be obtained from a small number of sample images, reducing the labor and time costs of acquiring and labeling sample images; and through pixel-level self-supervised contrastive learning and supervised training, the training efficiency of the target segmentation model is improved without reducing its segmentation accuracy.
As an alternative embodiment, the sample feature vector group is obtained based on the following steps: and carrying out random data enhancement on the sample image to obtain a derivative image of the sample image.
Specifically, random data enhancement may be performed on each sample image, and an image obtained by performing random data enhancement on each sample image may be used as a derivative image of each sample image.
Optionally, the random data enhancement in the embodiment of the present invention may include at least one of color transformation, random noise addition, mixup-like enhancement, and cutmix data enhancement.
It should be noted that the number of derivative images of any sample image may be one or more.
Alternatively, the number of derived images of any one sample image may be determined based on a fourth preset parameter. The fourth preset parameter in the embodiment of the present invention may be 64, and accordingly, the number of the derived images of any sample image may be 64.
It should be noted that random data enhancement is performed on any sample image, and the derived image of the sample image obtained has a mapping relationship with the sample image.
And performing feature extraction on the sample image and the derivative image of the sample image to obtain a sample feature image set.
Specifically, after obtaining the derivative image of each sample image, each sample image and the derivative image of each sample image may be input to the feature extractor, and a sample feature image set composed of sample feature images output by the feature extractor may be obtained.
The feature extractor may perform feature extraction on each sample image and a derivative image of each sample image, and further may obtain a feature image of each sample image and output the feature image as a sample feature image, and obtain a feature map of the derivative image of each sample image and output the feature map as a sample feature image.
It should be noted that each sample feature image obtained by performing feature extraction on any sample image and a derivative image of the sample image has a mapping relationship with the sample image.
And performing grid division on each sample characteristic image in the sample characteristic image set, dividing each sample characteristic image into a plurality of sample characteristic sub-images, and further acquiring a sample characteristic vector group formed by each sample characteristic sub-image.
Specifically, after the sample feature image set is acquired, for each sample feature image in the sample feature image set, grid division may be performed on each sample feature image based on a fifth preset parameter, so as to divide each sample feature image into a plurality of sample feature sub-images.
Wherein the fifth preset parameter may be determined based on a priori knowledge and/or actual conditions. The specific value of the fifth preset parameter is not limited in the embodiment of the invention.
Alternatively, the fifth preset parameter may be 81 × 81. Accordingly, the each sample feature image may be uniformly grid-divided based on the fifth preset parameter, and the each sample feature image may be divided into 81 × 81 sample feature sub-images.
After each sample feature image is divided into a plurality of sample feature sub-images, a sample feature vector group composed of all the sample feature sub-images can be obtained. The features in each sample feature image in the sample feature vector group described above may reach the pixel level.
It should be noted that, each sample feature sub-image obtained by grid-dividing each sample feature image having a mapping relationship with any sample image has a mapping relationship with the sample image, and the sample image may be referred to as a mapping sample image of each sample feature sub-image.
After the sample feature vector group is obtained in this way, the initial model may be pre-trained with self-supervised contrastive learning on it, following the method of the above embodiments, to obtain a pre-trained self-supervised model.
According to the embodiment of the invention, random data enhancement is performed on the sample image to obtain a derivative image of the sample image; features are extracted from the sample image and its derivative image to obtain a sample feature image set; each sample feature image in the set is grid-divided into a plurality of sample feature sub-images; and the sample feature vector group composed of the sample feature sub-images is then obtained.
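For comparison with the first embodiment, a sketch of this alternative order of operations (whole-image enhancement, then feature extraction, then 81 × 81 division of each feature image); the stand-in noise augmentation and the use of adaptive pooling in place of literal grid division are assumptions:

```python
import torch

def build_vector_group(sample_image: torch.Tensor, extractor,
                       n_derivatives: int = 64, grid: int = 81) -> torch.Tensor:
    """Augment the whole sample image, extract sample feature images, then
    grid-divide each feature image into grid x grid feature sub-images."""
    variants = [sample_image] + [
        (sample_image + 0.05 * torch.randn_like(sample_image)).clamp(0, 1)
        for _ in range(n_derivatives)                 # stand-in for random enhancement
    ]
    feats = extractor(torch.stack(variants))          # (N, C, h, w) feature images
    n, c = feats.shape[:2]
    # adaptive pooling stands in for uniform grid division: one vector per cell
    cells = torch.nn.functional.adaptive_avg_pool2d(feats, grid)   # (N, C, grid, grid)
    return cells.permute(0, 2, 3, 1).reshape(n * grid * grid, c)   # the vector group
```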
Based on the content of each embodiment, the block-vector-group contrastive loss function adopted when pre-training the initial model with self-supervised contrastive learning on the sample feature vector group is the following InfoNCE-style loss:

$L = \sum_i \ell_i, \qquad \ell_i = -\log \dfrac{\exp\left(q_i \cdot k_i^{+} / \tau\right)}{\exp\left(q_i \cdot k_i^{+} / \tau\right) + \sum_{k^{-} \in K_i^{-}} \exp\left(q_i \cdot k^{-} / \tau\right)}$

where $L$ denotes the block-vector-group contrastive loss value; $\ell_i$ denotes the contrastive loss value for the derivative image of the $i$-th sample sub-image; $q_i$ denotes the feature vector at the $i$-th spatial position; $k_i^{+}$ denotes the feature vector of its positive sample pair; $K_i^{-}$ denotes the feature vectors of the negative samples, i.e. of the derivative images of all sample sub-images other than the $i$-th; and $\tau$ denotes a temperature hyper-parameter.
It should be noted that, when pre-training the initial model with self-supervised contrastive learning based on the sample feature vector group, the training goal is to minimize the block-vector-group contrastive loss value $L$.
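The loss above can be written directly in code; a sketch for a single anchor position, with the temperature value and the L2 normalization as assumptions:

```python
import torch
import torch.nn.functional as F

def block_vector_group_loss(anchor: torch.Tensor, positive: torch.Tensor,
                            negs: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """InfoNCE-style contrastive loss for one spatial position.

    anchor: (D,) feature vector q_i;  positive: (D,) its paired k_i+;
    negs:   (M, D) feature vectors of the negative samples.
    """
    anchor, positive = F.normalize(anchor, dim=0), F.normalize(positive, dim=0)
    negs = F.normalize(negs, dim=1)
    logits = torch.cat([(anchor @ positive).view(1), negs @ anchor]) / tau
    # the positive similarity sits at index 0, so the target class is 0
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))

# the total loss L sums these values over all positions i; minimizing L is the
# pre-training goal stated above
```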
Fig. 3 is a schematic flow chart of a lesion segmentation method according to the present invention. The lesion segmentation method of the present invention is described below with reference to fig. 3. As shown in fig. 3, the method includes: step 301, acquiring a target medical image.
The main execution unit of the embodiment of the present invention is a lesion segmentation apparatus.
Specifically, the target medical image is an execution object of the lesion segmentation method provided by the present invention, and based on the lesion segmentation method provided by the present invention, a lesion in the target medical image can be segmented.
The embodiment of the invention can acquire the target medical image based on a medical imaging system.
Step 302, inputting the target medical image into a lesion segmentation model, and obtaining a lesion segmentation result of the target medical image output by the lesion segmentation model.
The lesion segmentation model is obtained by training based on the training method of the target segmentation model.
Specifically, after the target medical image is acquired, the target medical image may be input into the lesion segmentation model.
The lesion segmentation model can label the pixels covered by the lesion in the target medical image, and the labeled target medical image can then be output as the lesion identification image of the target medical image.
The lesion segmentation model in the embodiment of the present invention is obtained by training based on the training method of the target segmentation model in any one of the embodiments. For a specific training process of the lesion segmentation model, the contents in the above embodiments may be referred to, and details are not repeated in the embodiments of the present invention.
According to the embodiment of the invention, the lesion segmentation result of the target medical image is obtained based on the trained lesion segmentation model, so lesions in the target medical image can be segmented more efficiently and more accurately, the labor and time costs of segmenting lesions in target medical images can be reduced, and data support can be provided for disease diagnosis.
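A minimal inference sketch for steps 301-302, assuming a trained model of the form sketched earlier and a preprocessed (3, H, W) input tensor:

```python
import torch

@torch.no_grad()
def segment_lesion(model: torch.nn.Module, target_medical_image: torch.Tensor) -> torch.Tensor:
    """Run a target medical image through the trained lesion segmentation model
    and return a pixel-level lesion label map."""
    model.eval()
    logits = model(target_medical_image.unsqueeze(0))   # (1, classes, H, W)
    return logits.argmax(dim=1).squeeze(0)              # (H, W): class id per pixel

# e.g. mask = segment_lesion(trained_model, image); nonzero pixels mark the lesion
```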
Fig. 4 is a schematic structural diagram of a training apparatus for a target segmentation model provided in the present invention. The following describes the training apparatus of the object segmentation model provided by the present invention with reference to fig. 4, and the training apparatus of the object segmentation model described below and the training method of the object segmentation model provided by the present invention described above may be referred to correspondingly. As shown in fig. 4, the apparatus includes: a pre-training module 401 and a supervised training module 402.
The pre-training module 401 is configured to pre-train the initial model by using self-supervised contrastive learning based on the sample feature vector group, to obtain a pre-trained self-supervised model.
The supervised training module 402 is configured to perform supervised training on the pre-trained self-supervised model based on the sample image and the target segmentation result of the sample image, to obtain a trained target segmentation model for target segmentation of an input image.
Wherein the sample feature vector group is obtained based on the sample image.
Specifically, pre-training module 401 and supervised training module 402 are electrically connected.
Optionally, the training device of the target segmentation model may further include a vector group generation module.
The vector group generation module is configured to grid-divide the sample image into a plurality of sample sub-images; perform random data enhancement on each sample sub-image to obtain a derivative image of each sample sub-image; perform feature extraction on each sample sub-image and the derivative image of each sample sub-image to obtain a sample feature image set; and grid-divide each sample feature image in the sample feature image set into a plurality of sample feature sub-images, thereby acquiring the sample feature vector group composed of the sample feature sub-images.
Optionally, the pre-training module 401 may be specifically configured to determine a positive sample pair and negative samples in the sample feature vector group based on the mapping sample image of the sample feature sub-image and the mapping position of the sample feature sub-image in the mapping sample image; and to pre-train the initial model with self-supervised contrastive learning based on the positive sample pair and the negative samples, to obtain a pre-trained self-supervised model; the mapping sample image being a sample image that has a mapping relationship with the sample feature sub-image.
Optionally, the pre-training module 401 may comprise a sample determination unit.
The sample determining unit is configured to, when the mapping sample images of any two sample feature sub-images in the sample feature vector group are the same and the mapping positions of those two sample feature sub-images in that mapping sample image are the same, determine the two sample feature sub-images as a positive sample pair and determine each sample feature sub-image in the sample feature vector group other than those two as a negative sample.
Optionally, the vector group generating module may be further configured to perform random data enhancement on the sample image to obtain a derivative image of the sample image; carrying out feature extraction on the sample image and a derivative image of the sample image to obtain a sample feature image set; and performing grid division on each sample characteristic image in the sample characteristic image set, dividing each sample characteristic image into a plurality of sample characteristic sub-images, and further acquiring a sample characteristic vector group formed by each sample characteristic sub-image.
The training apparatus for the target segmentation model in the embodiment of the invention pre-trains an initial model by using self-supervised contrastive learning based on the sample feature vector group to obtain a pre-trained self-supervised model, and performs supervised training on the pre-trained self-supervised model based on the sample image and a target segmentation result of the sample image to obtain a trained lesion segmentation model.
Fig. 5 is a schematic structural diagram of a lesion segmentation apparatus according to the present invention. The lesion segmentation apparatus provided in the present invention will be described with reference to fig. 5, and the lesion segmentation apparatus described below and the lesion segmentation method provided in the present invention described above may be referred to in correspondence. As shown in fig. 5, the apparatus includes: an image acquisition module 501 and a lesion segmentation module 502.
An image obtaining module 501, configured to obtain a target medical image.
The lesion segmentation module 502 is configured to input the target medical image into a lesion segmentation model, and obtain a lesion segmentation result of the target medical image output by the lesion segmentation model.
The lesion segmentation model is obtained by training based on the training method of the target segmentation model.
Specifically, the image acquisition module 501 is electrically connected to the lesion segmentation module 502.
The lesion segmentation apparatus in the embodiment of the invention obtains the lesion segmentation result of the target medical image based on the trained lesion segmentation model; it can segment lesions in the target medical image more efficiently and accurately, reduce the labor and time costs of segmenting lesions in target medical images, and provide data support for disease diagnosis.
Fig. 6 illustrates the physical structure of an electronic device. As shown in fig. 6, the electronic device may include: a processor (processor) 610, a communication interface (Communications Interface) 620, a memory (memory) 630 and a communication bus 640, wherein the processor 610, the communication interface 620 and the memory 630 communicate with each other via the communication bus 640. The processor 610 may invoke logic instructions in the memory 630 to perform the training method of the target segmentation model and/or the lesion segmentation method. The training method of the target segmentation model comprises: pre-training an initial model by using self-supervised contrastive learning based on a sample feature vector group to obtain a pre-trained self-supervised model; and performing supervised training on the pre-trained self-supervised model based on a sample image and a target segmentation result of the sample image to obtain a trained target segmentation model for target segmentation of an input image; wherein the sample feature vector group is obtained based on the sample image. The lesion segmentation method comprises: acquiring a target medical image; and inputting the target medical image into a lesion segmentation model and acquiring a lesion segmentation result of the target medical image output by the lesion segmentation model; wherein the lesion segmentation model is obtained by training based on the above training method of the target segmentation model.
In addition, the logic instructions in the memory 630 may be implemented in software functional units and stored in a computer readable storage medium when the logic instructions are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product comprising a computer program that can be stored on a non-transitory computer-readable storage medium and that, when executed by a processor, can perform the training method of the target segmentation model provided by the above methods and/or the lesion segmentation method. The training method of the target segmentation model comprises: pre-training an initial model by using self-supervised contrastive learning based on a sample feature vector group to obtain a pre-trained self-supervised model; and performing supervised training on the pre-trained self-supervised model based on a sample image and a target segmentation result of the sample image to obtain a trained target segmentation model for target segmentation of an input image; wherein the sample feature vector group is obtained based on the sample image. The lesion segmentation method comprises: acquiring a target medical image; and inputting the target medical image into a lesion segmentation model and acquiring a lesion segmentation result of the target medical image output by the lesion segmentation model; wherein the lesion segmentation model is obtained by training based on the above training method of the target segmentation model.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the training method of the target segmentation model provided by the above methods and/or the lesion segmentation method. The training method of the target segmentation model comprises: pre-training an initial model by using self-supervised contrastive learning based on a sample feature vector group to obtain a pre-trained self-supervised model; and performing supervised training on the pre-trained self-supervised model based on a sample image and a target segmentation result of the sample image to obtain a trained target segmentation model for target segmentation of an input image; wherein the sample feature vector group is obtained based on the sample image. The lesion segmentation method comprises: acquiring a target medical image; and inputting the target medical image into a lesion segmentation model and acquiring a lesion segmentation result of the target medical image output by the lesion segmentation model; wherein the lesion segmentation model is obtained by training based on the above training method of the target segmentation model.
The apparatus embodiments described above are merely illustrative. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed across a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly also by hardware. Based on this understanding, the technical solutions above, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a computer-readable storage medium, such as a ROM/RAM, magnetic disk, or optical disk, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the various embodiments or parts thereof.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and that such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (9)

1. A method for training a target segmentation model, comprising:
pre-training an initial model by self-supervised contrastive learning based on a sample feature vector group to obtain a pre-trained self-supervised model;
performing supervised training on the pre-trained self-supervised model based on a sample image and a target segmentation result of the sample image, to obtain a trained target segmentation model for performing target segmentation on an input image;
wherein the sample feature vector group is obtained based on the sample image;
the sample feature vector group is obtained by the following steps:
performing grid division on the sample image to divide the sample image into a plurality of sample sub-images;
performing random data enhancement on each sample sub-image to obtain a derivative image of each sample sub-image;
performing feature extraction on each sample sub-image and the derivative image of each sample sub-image to obtain a sample feature image set;
performing grid division on each sample feature image in the sample feature image set to divide each sample feature image into a plurality of sample feature sub-images, and then obtaining the sample feature vector group formed by the sample feature sub-images;
alternatively, the sample feature vector group is obtained by the following steps:
performing random data enhancement on the sample image to obtain a derivative image of the sample image;
performing feature extraction on the sample image and the derivative image of the sample image to obtain a sample feature image set;
performing grid division on each sample feature image in the sample feature image set to divide each sample feature image into a plurality of sample feature sub-images, and then obtaining the sample feature vector group formed by the sample feature sub-images.
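The following is a minimal sketch of the first variant recited in claim 1 (grid division first, then per-sub-image enhancement and feature extraction). The 4x4 grid, the flip-plus-noise enhancement, the mean-pooled cells, and the single-convolution extractor are all assumptions for illustration; the claim fixes none of them.

```python
import torch

def grid_divide(img, g=4):
    """Split a (C, H, W) tensor into g*g sub-images (assumes H, W divisible by g)."""
    C, H, W = img.shape
    h, w = H // g, W // g
    return [img[:, i*h:(i+1)*h, j*w:(j+1)*w] for i in range(g) for j in range(g)]

def random_augment(patch):
    """Random data enhancement: here, a random horizontal flip plus light noise."""
    if torch.rand(1).item() < 0.5:
        patch = torch.flip(patch, dims=[-1])
    return patch + 0.01 * torch.randn_like(patch)

def feature_vector_group(img, extractor, g=4):
    """Build the sample feature vector group for one sample image."""
    vectors = []
    for sub in grid_divide(img, g):                    # sample sub-images
        for view in (sub, random_augment(sub)):        # sub-image + its derivative
            fmap = extractor(view.unsqueeze(0))[0]     # sample feature image
            for cell in grid_divide(fmap, g):          # sample feature sub-images
                vectors.append(cell.mean(dim=(1, 2)))  # one vector per cell
    return torch.stack(vectors)

# Toy usage: a single-conv "extractor" on a 64x64 single-channel image.
extractor = torch.nn.Conv2d(1, 8, 3, padding=1)
group = feature_vector_group(torch.randn(1, 64, 64), extractor, g=4)
print(group.shape)  # 16 sub-images x 2 views x 16 cells -> (512, 8)
```

Note that, by construction, each cell of the derivative view shares a mapping sample image and grid position with the corresponding cell of the original sub-image, which is what the positive-pair rule of claims 2 and 3 relies on.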
2. The training method for a target segmentation model according to claim 1, wherein pre-training the initial model by self-supervised contrastive learning based on the sample feature vector group to obtain the pre-trained self-supervised model comprises:
determining positive sample pairs and negative samples in the sample feature vector group based on the mapping sample image of each sample feature sub-image and the mapping position of the sample feature sub-image in its mapping sample image;
pre-training the initial model by self-supervised contrastive learning based on the positive sample pairs and the negative samples to obtain the pre-trained self-supervised model;
wherein the mapping sample image is the sample image that has a mapping relationship with the sample feature sub-image.
3. The training method for a target segmentation model according to claim 2, wherein determining the positive sample pairs and the negative samples in the sample feature vector group based on the mapping sample images of the sample feature sub-images and the mapping positions of the sample feature sub-images in the mapping sample images comprises:
in a case where any two sample feature sub-images in the sample feature vector group have the same mapping sample image and the same mapping position in that mapping sample image, determining the two sample feature sub-images as a positive sample pair, and determining each sample feature sub-image in the sample feature vector group other than the two as a negative sample.
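To make the rule concrete, the hedged sketch below forms positive pairs exactly when two vectors share a mapping sample image and a mapping position, treats every other vector in the group as a negative, and scores them with an InfoNCE-style loss. The loss form and the temperature value are assumptions; the claims specify the pairing rule, not the objective.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(vectors, image_ids, positions, temperature=0.1):
    """vectors: (N, D) feature vector group; image_ids / positions: (N,) long
    tensors recording each vector's mapping sample image and grid position."""
    z = F.normalize(vectors, dim=1)
    sim = z @ z.t() / temperature                      # pairwise similarities
    # Positive pair rule from claim 3: same mapping image AND same position.
    same = (image_ids.unsqueeze(0) == image_ids.unsqueeze(1)) & \
           (positions.unsqueeze(0) == positions.unsqueeze(1))
    same.fill_diagonal_(False)                         # a vector is not its own pair
    losses = []
    for i in range(len(z)):
        pos = same[i].nonzero(as_tuple=True)[0]
        if len(pos) == 0:                              # by construction each vector
            continue                                   # has its augmented twin
        logits = sim[i].clone()
        logits[i] = float('-inf')                      # drop self-similarity
        losses.append(-torch.log_softmax(logits, dim=0)[pos].mean())
    return torch.stack(losses).mean()
```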
4. The training method for a target segmentation model according to any one of claims 1 to 3, wherein the sample image is a medical image, and the target segmentation result of the sample image is a lesion segmentation result of the sample image;
accordingly, the trained target segmentation model can be used to perform lesion segmentation on the input image.
5. A method of lesion segmentation, comprising:
acquiring a target medical image;
inputting the target medical image into a lesion segmentation model, and acquiring a lesion segmentation result of the target medical image output by the lesion segmentation model;
wherein the lesion segmentation model is trained based on the training method of the target segmentation model according to any one of claims 1 to 4.
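A usage-style sketch of this method, assuming a hypothetical model interface that maps a batch of images to per-class logits:

```python
import torch

def segment_lesion(model, target_medical_image):
    """target_medical_image: (C, H, W) tensor; returns an (H, W) label map
    where nonzero pixels mark the predicted lesion region."""
    model.eval()
    with torch.no_grad():
        logits = model(target_medical_image.unsqueeze(0))  # add batch dimension
    return logits.argmax(dim=1)[0]
```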
6. An apparatus for training an object segmentation model, comprising:
the pre-training module is configured to pre-train an initial model by self-supervised contrastive learning based on a sample feature vector group to obtain a pre-trained self-supervised model;
the supervised training module is configured to perform supervised training on the pre-trained self-supervised model based on a sample image and a target segmentation result of the sample image, to obtain a trained target segmentation model for performing target segmentation on an input image;
wherein the sample feature vector group is obtained based on the sample image;
the sample feature vector group is obtained by the following steps:
performing grid division on the sample image to divide the sample image into a plurality of sample sub-images;
performing random data enhancement on each sample sub-image to obtain a derivative image of each sample sub-image;
performing feature extraction on each sample sub-image and the derivative image of each sample sub-image to obtain a sample feature image set;
performing grid division on each sample feature image in the sample feature image set to divide each sample feature image into a plurality of sample feature sub-images, and then obtaining the sample feature vector group formed by the sample feature sub-images;
alternatively, the sample feature vector group is obtained by the following steps:
performing random data enhancement on the sample image to obtain a derivative image of the sample image;
performing feature extraction on the sample image and the derivative image of the sample image to obtain a sample feature image set;
performing grid division on each sample feature image in the sample feature image set to divide each sample feature image into a plurality of sample feature sub-images, and then obtaining the sample feature vector group formed by the sample feature sub-images.
7. A lesion segmentation apparatus, comprising:
the image acquisition module is configured to acquire a target medical image;
the lesion segmentation module is configured to input the target medical image into a lesion segmentation model and acquire a lesion segmentation result of the target medical image output by the lesion segmentation model;
wherein the lesion segmentation model is trained based on the training method of the target segmentation model according to any one of claims 1 to 4.
8. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the training method for a target segmentation model according to any one of claims 1 to 4 and/or the lesion segmentation method according to claim 5.
9. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the training method for a target segmentation model according to any one of claims 1 to 4 and/or the lesion segmentation method according to claim 5.
CN202211068494.8A 2022-09-02 2022-09-02 Training of target segmentation model, focus segmentation method and device Pending CN115131361A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211068494.8A CN115131361A (en) 2022-09-02 2022-09-02 Training of target segmentation model, focus segmentation method and device

Publications (1)

Publication Number Publication Date
CN115131361A (en) 2022-09-30

Family

ID=83386983

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211068494.8A Pending CN115131361A (en) 2022-09-02 2022-09-02 Training of target segmentation model, focus segmentation method and device

Country Status (1)

Country Link
CN (1) CN115131361A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113011427A (en) * 2021-03-17 2021-06-22 中南大学 Remote sensing image semantic segmentation method based on self-supervision contrast learning
CN113128591A (en) * 2021-04-14 2021-07-16 中山大学 Rotation robust point cloud classification method based on self-supervision learning
CN113192062A (en) * 2021-05-25 2021-07-30 湖北工业大学 Arterial plaque ultrasonic image self-supervision segmentation method based on image restoration
CN114091572A (en) * 2021-10-26 2022-02-25 上海瑾盛通信科技有限公司 Model training method and device, data processing system and server
CN114387454A (en) * 2022-01-07 2022-04-22 东南大学 Self-supervision pre-training method based on region screening module and multi-level comparison

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115458128A (en) * 2022-11-10 2022-12-09 北方健康医疗大数据科技有限公司 Method, device and equipment for generating digital human body image based on key points
CN115458128B (en) * 2022-11-10 2023-03-24 北方健康医疗大数据科技有限公司 Method, device and equipment for generating digital human body image based on key points

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220930