CN110503654A - Medical image segmentation method, system and electronic device based on a generative adversarial network - Google Patents
Medical image segmentation method, system and electronic device based on a generative adversarial network
- Publication number: CN110503654A
- Application number: CN201910707712.XA
- Authority: CN (China)
- Prior art keywords: image, level, pixel, segmented, sample
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06T7/11: Region-based segmentation (G PHYSICS; G06 COMPUTING; G06T Image data processing or generation; G06T7/00 Image analysis; G06T7/10 Segmentation; Edge detection)
- G06T2207/20081: Training; Learning (G06T2207/00 Indexing scheme for image analysis or image enhancement; G06T2207/20 Special algorithmic details)
- G06T2207/20084: Artificial neural networks [ANN]
- G06T2207/30012: Spine; Backbone (G06T2207/30 Subject of image; G06T2207/30004 Biomedical image processing; G06T2207/30008 Bone)
- G06T2207/30204: Marker (G06T2207/30 Subject of image; Context of image processing)
Abstract
This application relates to a medical image segmentation method, system and electronic device based on a generative adversarial network. First, the generator is designed to extract the pixel-level features of high-quality images of different categories and to represent them structurally with a capsule model, thereby generating pixel-level annotated samples. Second, a suitable discriminator is built to judge whether the generated pixel-level annotated samples are real or fake, and a suitable error optimization function is designed; the discrimination results are fed back into the generator and discriminator models respectively, and continuous adversarial training raises the sample-generation ability of the generator and the discrimination ability of the discriminator. Finally, the trained generator generates pixel-level annotated samples, achieving pixel-level segmentation of image-level annotated medical images. The application effectively reduces the dependence of the segmentation model on pixel-level annotated data, improves the efficiency of adversarial training between generated and real samples, and effectively achieves high-precision pixel-level image segmentation.
Description
Technical Field
The present application relates to the field of medical image processing technologies, and in particular to a medical image segmentation method and system based on a generative adversarial network, and an electronic device.
Background
With the rapid development of medical imaging technology, medical images have been widely and deeply applied in clinical medicine. According to statistics, tens of millions of cases are diagnosed and treated with the aid of medical images worldwide every year. In the conventional mode of image-based diagnosis and treatment, a physician reads and interprets the medical image data and then makes a diagnostic judgment. This mode is inefficient and highly dependent on the individual: relying on personal experience, doctors are prone to missed diagnoses and misdiagnoses, and prolonged film reading causes fatigue that lowers reading accuracy. With the rise of artificial intelligence, machines can pre-screen image data, mark key suspicious regions, and then pass the images to doctors for diagnosis, greatly reducing the doctors' workload while producing comprehensive, stable and efficient results. Artificial intelligence therefore has important application prospects in the field of medical imaging.
Medical imaging technology comprises two parts: medical imaging and medical image processing. Common medical imaging techniques are mainly MRI (Magnetic Resonance Imaging), Computed Tomography (CT), Positron Emission Tomography (PET), Ultrasound (US) and X-ray imaging. Different imaging technologies have advantages in different diagnostic applications, and clinical practice has gradually settled on the imaging technology best suited to each specific disease. For example, magnetic resonance imaging offers excellent resolution for soft tissue, involves no ionizing radiation, and is widely used in the diagnosis and treatment of parts such as the brain and the uterus. The basic flow of applying deep learning to medical image processing is shown in FIG. 1.
In the conventional medical image segmentation task based on a generative adversarial network, sufficiently training the neural network to achieve high accuracy requires a large amount of relevant medical image data, which must be labeled manually pixel by pixel. For example, to study the segmentation of tumor regions in the human brain, the corresponding brain tumor images must be manually labeled. Diseases are numerous and so are the corresponding medical images; segmenting medical images with deep learning requires manual labeling for each disease, consuming a great deal of manpower and material resources. Even the largest public data sets provide pixel-level annotated samples for only a limited set of semantic categories. High-quality data is scarce in medical image data sets, severely limiting the accuracy of semantic segmentation models.
A Generative Adversarial Network (GAN) is a deep learning model and one of the most promising methods of recent years for unsupervised learning on complex distributions. The framework contains (at least) two modules, a Generative Model and a Discriminative Model, whose mutual game-playing produces remarkably good outputs. Existing GAN-based image segmentation models can be applied to cross-category object segmentation, but in the medical image domain they suffer from problems such as insufficient feature extraction and a heavy computational burden in adversarial training.
Disclosure of Invention
The application provides a medical image segmentation method and system based on a generative adversarial network, and an electronic device, aiming to solve, at least to some extent, at least one of the above technical problems in the prior art.
In order to solve the above problems, the present application provides the following technical solutions:
a medical image segmentation method based on a generative adversarial network comprises the following steps:
step a: acquire, respectively, pixel-level annotated samples of other medical images and image-level annotated samples of the medical image to be segmented;
step b: train a capsule-network-based generative adversarial network with the pixel-level annotated samples of the other medical images and the image-level annotated samples of the medical image to be segmented, the generative adversarial network comprising a generator and a discriminator;
step c: the generator extracts pixel-level features from the pixel-level annotated samples of the other medical images, processes the image-level annotated samples of the medical image to be segmented with these pixel-level features to generate pixel-level annotated samples of the medical image to be segmented, and generates segmentation prediction samples of the medical image to be segmented from those pixel-level annotated samples;
step d: input the segmentation prediction samples generated by the generator together with the real annotated samples of the image to be segmented into the discriminator for adversarial training, judge the authenticity of the segmentation prediction samples, and optimize the generator and the discriminator according to the error function to obtain the trained generative adversarial network;
step e: input the image-level annotated medical image to be segmented into the trained generative adversarial network and output the pixel-level segmented image of the medical image to be segmented through the network (a condensed code sketch of steps a through e follows below).
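For orientation, the sketch below condenses steps a through e into code. Every name in it (build_capsule_gan, pretrain, segment, run_adversarial_training) is an illustrative placeholder assumed for this sketch, not an API defined by the application; the concrete modules are detailed in the embodiments.

```python
# A high-level sketch of steps a-e; all functions and attributes are hypothetical
# placeholders that only fix the order of operations.
def segment_with_gan(other_pixel_samples,     # step a: pixel-level annotated samples
                     target_image_samples,    # step a: image-level annotated, to segment
                     test_images,
                     build_capsule_gan,
                     run_adversarial_training):
    gan = build_capsule_gan()                             # step b: generator + discriminator
    gan.generator.pretrain(other_pixel_samples)           # step c: pixel-level feature extraction
    preds = gan.generator.segment(target_image_samples)   # step c: segmentation prediction samples
    run_adversarial_training(gan, preds, target_image_samples)  # step d: optimize both networks
    return gan.generator.segment(test_images)             # step e: pixel-level segmentation
```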
The technical solution adopted in the embodiments of the application further comprises: in step c, the generator includes a capsule network module and a region localization network, and generating the segmentation prediction samples of the medical image to be segmented specifically comprises:
step b1: pre-train the capsule network module with the pixel-level annotated samples of other medical images to obtain semantic-free labeled samples, and process the image-level annotated samples of the image to be segmented with these samples to distinguish the background from the effective segmentation region of the image-level annotated samples;
step b2: input the image-level annotated samples of the image to be segmented into the pre-trained capsule network module, and output reconstructed images of the image-level annotated samples through the capsule network module;
step b3: the region localization network uses the feature extraction of the convolutional layers to generate a feature map containing position information for the image-level annotated sample of the image to be segmented, and uses a global average pooling layer to apply weights (w_1, w_2, ..., w_n) for a weighted average of the feature maps, yielding the region localization feature map of the image-level annotated sample;
step b4: execute a self-diffusion algorithm on the reconstructed image and the region localization feature map, determine the segmentation lines of the region's pixel points, and obtain the segmentation prediction sample of the image-level annotated sample of the image to be segmented.
The technical solution adopted in the embodiments of the application further comprises: in step b2, the capsule network module includes a convolutional layer, a PrimaryCaps layer, a DigitCaps layer and a decoding layer. The capsule network module uses the output vector of a single capsule neuron to record the direction and position information of the edge pixels of the segmentation region of the image-level annotated sample, extracts classification probability values with the vector's nonlinear activation function, determines the segmentation region and background of the image-level annotated sample, computes the margin (edge) loss, and outputs the reconstructed image of the image-level annotated sample of the image to be segmented.
The technical solution adopted in the embodiments of the application further comprises: in step b4, executing the self-diffusion algorithm on the reconstructed image and the region localization feature map specifically comprises: diffusing the pixel points in regions of the localization feature map with larger activation values using a random-walk self-diffusion algorithm; using the input points of the region localization feature map, computing the Gaussian distance from each pixel of the image to the input points and selecting the optimal path among them to obtain the segmentation line of the region's pixel points; and finally generating the segmentation prediction sample.
The technical solution adopted in the embodiments of the application further comprises: in step d, the discriminator comprises a Cascade module, a Capsule network module and a parameter optimization module, and the adversarial training of the discriminator specifically comprises:
step d1: through the Cascade module, extract from the segmentation prediction sample the mislabeled pixels, the key pixels whose confidence is below a set threshold, and the corresponding ground truth, and filter out the correctly labeled pixels whose confidence is above the threshold;
step d2: process the extracted key pixels and the corresponding ground truth through the Capsule network module and generate the errors;
step d3: the parameter optimization module uses the errors generated by the Capsule network module to optimize the network parameters of the generator and the discriminator; for a given segmentation prediction sample {I_f, L_f*} and the corresponding real annotated sample {I_f, L_f}, the overall error function of the network is:

ℓ(θ_S, θ_P) = J_b(O_p((I_f, L_f); θ_P), 1) + J_b(O_p((I_f, L_f*); θ_P), 0)

where θ_S and θ_P denote the parameters of the generator and the discriminator respectively, J_b denotes a binary cross-entropy loss function, and O_s and O_p denote the outputs of the generator and the discriminator respectively; when the input comes from the real annotated sample {I_f, L_f} or from the segmentation prediction sample {I_f, L_f*}, the outputs 1 and 0 respectively mark the pixel classes as real or fake.
Another technical solution adopted in the embodiments of the application is: a medical image segmentation system based on a generative adversarial network, comprising a sample acquisition module and the generative adversarial network,
sample acquisition module: used to acquire, respectively, pixel-level annotated samples of other medical images and image-level annotated samples of the medical image to be segmented;
a capsule-network-based generative adversarial network is trained with the pixel-level annotated samples of the other medical images and the image-level annotated samples of the medical image to be segmented;
the generative adversarial network comprises a generator and a discriminator; the generator performs pixel-level feature extraction on the pixel-level annotated samples of the other medical images, processes the image-level annotated samples of the medical image to be segmented with these pixel-level features to generate pixel-level annotated samples of the medical image to be segmented, and generates segmentation prediction samples of the medical image to be segmented from those pixel-level annotated samples;
the segmentation prediction samples generated by the generator and the real annotated samples of the image to be segmented are input together into the discriminator for adversarial training, which judges the authenticity of the segmentation prediction samples; the generator and the discriminator are optimized according to the error function to obtain the trained generative adversarial network;
the image-level annotated medical image to be segmented is input into the trained generative adversarial network, which outputs the pixel-level segmented image of the medical image to be segmented.
The technical solution adopted in the embodiments of the application further comprises: the generator comprises a pre-training module, a capsule network module, a region localization network module and a sample generation module:
pre-training module: used to pre-train the capsule network module with pixel-level annotated samples of other medical images to obtain semantic-free labeled samples, and to process the image-level annotated samples of the image to be segmented with these samples, distinguishing the background from the effective segmentation region;
capsule network module: used to input the image-level annotated samples of the image to be segmented into the capsule network module after pre-training is complete, and to output reconstructed images of those samples through the capsule network module;
region localization network: used to generate, via the feature extraction of the convolutional layers, a feature map containing position information for the image-level annotated samples of the image to be segmented, and to apply weights (w_1, w_2, ..., w_n) with a global average pooling layer for a weighted average of the feature maps, obtaining the region localization feature map of the image-level annotated samples;
sample generation module: used to execute a self-diffusion algorithm on the reconstructed image and the region localization feature map, determine the segmentation lines of the region's pixel points, and obtain the segmentation prediction samples of the image-level annotated samples of the image to be segmented.
The technical solution adopted in the embodiments of the application further comprises: the capsule network module comprises a convolutional layer, a PrimaryCaps layer, a DigitCaps layer and a decoding layer; the capsule network module uses the output vector of a single capsule neuron to record the direction and position information of the edge pixels of the segmentation region of the image-level annotated sample of the image to be segmented, extracts classification probability values with the vector's nonlinear activation function, determines the segmentation region and background of the image-level annotated sample, computes the margin (edge) loss, and outputs the reconstructed image of the image-level annotated sample of the image to be segmented.
The technical solution adopted in the embodiments of the application further comprises: the sample generation module executes the self-diffusion algorithm on the reconstructed image and the region localization feature map, specifically: diffusing the pixel points in regions of the localization feature map with larger activation values using a random-walk self-diffusion algorithm; using the input points of the region localization feature map, computing the Gaussian distance from each pixel of the image to the input points and selecting the optimal path among them to obtain the segmentation line of the region's pixel points; and finally generating the segmentation prediction sample.
The technical solution adopted in the embodiments of the application further comprises: the discriminator comprises a Cascade module, a Capsule network module and a parameter optimization module:
Cascade module: used to extract from the segmentation prediction sample the mislabeled pixels, the key pixels whose confidence is below a set threshold, and the corresponding ground truth, and to filter out the correctly labeled pixels whose confidence is above the threshold;
Capsule network module: used to process the extracted key pixels and the corresponding ground truth and to generate the errors;
parameter optimization module: used to optimize the network parameters of the generator and the discriminator with the errors generated by the Capsule network module; for a given segmentation prediction sample {I_f, L_f*} and the corresponding real annotated sample {I_f, L_f}, the overall error function of the network is:

ℓ(θ_S, θ_P) = J_b(O_p((I_f, L_f); θ_P), 1) + J_b(O_p((I_f, L_f*); θ_P), 0)

where θ_S and θ_P denote the parameters of the generator and the discriminator respectively, J_b denotes a binary cross-entropy loss function, and O_s and O_p denote the outputs of the generator and the discriminator respectively; when the input comes from the real annotated sample {I_f, L_f} or from the segmentation prediction sample {I_f, L_f*}, the outputs 1 and 0 respectively mark the pixel classes as real or fake.
Another technical solution adopted in the embodiments of the application is: an electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the following operations of the above medical image segmentation method based on a generative adversarial network:
step a: acquire, respectively, pixel-level annotated samples of other medical images and image-level annotated samples of the medical image to be segmented;
step b: train a capsule-network-based generative adversarial network with the pixel-level annotated samples of the other medical images and the image-level annotated samples of the medical image to be segmented, the generative adversarial network comprising a generator and a discriminator;
step c: the generator extracts pixel-level features from the pixel-level annotated samples of the other medical images, processes the image-level annotated samples of the medical image to be segmented with these pixel-level features to generate pixel-level annotated samples of the medical image to be segmented, and generates segmentation prediction samples of the medical image to be segmented from those pixel-level annotated samples;
step d: input the segmentation prediction samples generated by the generator together with the real annotated samples of the image to be segmented into the discriminator for adversarial training, judge the authenticity of the segmentation prediction samples, and optimize the generator and the discriminator according to the error function to obtain the trained generative adversarial network;
step e: input the image-level annotated medical image to be segmented into the trained generative adversarial network and output the pixel-level segmented image of the medical image to be segmented through the network.
Compared with the prior art, the embodiments of the application have the following beneficial effects. The medical image segmentation method, system and electronic device based on a generative adversarial network optimize the deep convolutional neural network through a fused Capsule mechanism, combining the ideas of the capsule network and the cascade waterfall. New training image samples are generated even when the number of medical image samples is small, achieving semantic segmentation of low-quality medical image data that carries only image-level annotations. The learned segmentation knowledge is transferred from fully annotated pixel-level data to weakly annotated image-level data, which improves the feature expression capability of the model and broadens the applicability of medical image annotated samples. The dependence of the segmentation model on pixel-level annotated data is effectively reduced, and the network exhibits little information redundancy and sufficient feature extraction. Given only a small number of pixel-level annotated samples, the efficiency of adversarial training between generated and real samples is improved, and high-precision pixel-level image segmentation is effectively achieved.
Drawings
FIG. 1 is a basic flow diagram of the application of deep learning to medical image processing;
FIG. 2 is a flowchart of a medical image segmentation method based on a generative adversarial network according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the structure of a generative adversarial network according to an embodiment of the present application;
FIG. 4 is a schematic diagram of the structure of a capsule network module;
FIG. 5 is a schematic network structure diagram of a region localization network;
FIG. 6 is a schematic structural diagram of a medical image segmentation system based on a generative adversarial network according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a hardware device for a medical image segmentation method based on a generative adversarial network according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
To remedy the deficiencies of the prior art, the medical image segmentation method based on a generative adversarial network of the application improves the GAN by fusing a capsule mechanism. First, the generator is designed to extract pixel-level features from high-quality images of different categories and to represent them structurally with a capsule model, thereby generating pixel-level annotated samples. Second, a suitable discriminator is built to judge the authenticity of the generated pixel-level annotated samples, and a suitable error optimization function is designed; the discrimination results are fed back into the generator and discriminator models respectively, and continuous adversarial training raises the sample-generation ability of the generator and the discrimination ability of the discriminator. Finally, the trained generator produces pixel-level annotated samples, achieving pixel-level segmentation of image-level annotated medical images. In the following embodiments, the application is described in detail using only the medical image segmentation of Cervical Spondylotic Myelopathy (CSM) as an example; the disease type is not limited to this single case and can be extended to image segmentation scenarios for multiple diseases, such as brain MRI image segmentation. For image segmentation of a different disease, only training samples of the corresponding disease need to be collected in the data collection stage and substituted into the generator of the model.
Please refer to FIG. 2, which is a flowchart of the medical image segmentation method based on a generative adversarial network according to an embodiment of the present application. The method comprises the following steps:
step 100: acquire, respectively, pixel-level annotated samples of other medical images and CSM image-level annotated samples;
In step 100, the other medical images include medical images of the lung and other parts, and the annotated samples are acquired as follows: collect a small number of fully annotated CSM image samples {I_f, L_f, T_f}, 500 DTI (diffusion tensor imaging) samples in total with an image size of 28x28, from a CSM group of 60 cases (27 men and 33 women, aged 20 to 71, mean age 45), comprising pixel-level annotated samples and image-level annotated samples {L_f, T_f}; and collect pixel-level annotated samples {I_O, L_O} of other medical images (e.g., human lungs), 8000 DTI samples with an image size of 28x28. The CSM images and the DTI samples of the other medical images are obtained after deformation correction and threshold selection; a region of interest (ROI) is determined on each DTI sample and centered on the spinal cord lesion area, so that the DTI samples are of uniform size and free of the influence of spinal fluid and artifacts.
Please refer to FIG. 3, which is a schematic structural diagram of the generative adversarial network according to an embodiment of the present application. The generative adversarial network of the embodiment comprises a generator and a discriminator; the generator is responsible for generating pixel-level annotation data for the medical image, and the discriminator is responsible for refining the generated labels. The generator comprises a capsule network module and a region localization network: the pixel-level annotated samples {I_O, L_O} of other medical images serve as pre-training samples for the capsule network module, and the CSM image-level annotated samples {L_f, T_f} serve as training samples for the region localization network.
Step 200: pre-train the capsule network module with the pixel-level annotated samples of other medical images to obtain semantic-free labeled samples, and process the CSM image-level annotated samples with these samples to distinguish the background from the effective segmentation regions of the CSM image-level annotated samples;
In step 200, the capsule network module adopts a transferable semantic segmentation model, which can transfer learned segmentation knowledge from the pixel-level annotations of fully annotated data to the image-level annotations of weakly annotated data. In practice, high-quality medical images are hard to acquire, so the application matches new data to the target-domain data through the trained model, acquiring pixel-level segmentation features and achieving high-precision image segmentation even with a small sample size. The pre-training process of the capsule network module specifically comprises:
step 201: process the pixel-level annotated samples of other medical images and convert them into semantic-free segmentation image labels;
In step 201, the semantic-free segmentation image label marks only the sample's effective segmentation region and the background, not the shape of the sample. A network trained on such data learns to distinguish object from background, which is a broad, high-order feature. Because any image can be divided into object and background, these broad high-order features are highly general and easily transferred to other tasks; hence knowledge learned from high-quality pixel-level annotated medical images can be transferred to low-quality image-level annotated medical images, and the high-quality and low-quality images need not be directly related.
Step 202: process the CSM image-level annotated samples according to the semantic-free segmentation image labels of the other medical images to generate semantic-free labeled samples {I_O, L_O} of the CSM image-level annotated samples, obtaining pixel-level semantic-free labels L_O by filtering out the semantic information;
In step 202, the purpose of obtaining pixel-level semantic-free labeled data is to distinguish the effective segmentation region of the data from the background, so that the learned knowledge transfers more easily between strongly and weakly annotated data.
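As a concrete illustration of steps 201 and 202, the following minimal sketch (assuming NumPy; the rule "any nonzero semantic label is foreground" is an assumption consistent with the description above) collapses a pixel-level semantic label map into a semantic-free foreground/background mask:

```python
# Collapse pixel-level semantic labels into a semantic-free mask that only
# separates the effective segmentation region from the background.
import numpy as np

def to_semantic_free(label_map: np.ndarray) -> np.ndarray:
    # 0 = background, >0 = any semantic class (organ, lesion, ...).
    return (label_map > 0).astype(np.uint8)

# Example: a 28x28 label map with two semantic classes becomes one binary mask.
labels = np.zeros((28, 28), dtype=np.int64)
labels[5:12, 5:12] = 1    # one tissue class
labels[15:20, 15:20] = 2  # another class
mask = to_semantic_free(labels)
assert set(np.unique(mask)) <= {0, 1}
```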
Step 300: input the CSM image-level annotated samples {L_f, T_f} into the pre-trained capsule network module, which outputs reconstructed images of the CSM image-level annotated samples;
In step 300, the capsule network module uses the output vector of a single capsule neuron to record the direction and position information of the edge pixels of the CSM segmentation region, extracts classification probability values with the vector's nonlinear activation function, determines the segmentation region and background of the CSM image-level annotated sample, computes the margin (edge) loss, and outputs the reconstructed image of the CSM image-level annotated sample. The application uses the parameter information of the capsule network module's instantiated pixels, recording the position and angle information of the segmented region in the activated vectors, which effectively sharpens the boundary of the segmented region.
The structure of the capsule network module is shown in FIG. 4. The module comprises a convolutional layer, a PrimaryCaps layer, a DigitCaps layer and a decoding layer. Each capsule represents a function and outputs an activated vector whose length represents the probability that the region segmentation line found by the capsule is correct. The functions of the layers of the capsule network module are as follows:
and (3) rolling layers: by labeling CSM image level with samples { Lf,TfConvolution operation is carried out to obtain primary characteristics of the spinal disc morphology, the pressed position and the like. Taking the CSM image with the input size of 28x28 as an example, the convolutional layer has 256 convolutional kernels of 9x9x1 with the step size of 1, and a feature tensor of 20x20x256 is output through feature extraction of the convolutional layer by using the Relu activation function.
PrimaryCaps layer: comprises 32 primary capsules that receive the primary features obtained by the convolutional layer. Each capsule generates a combination of the features and expresses it as a vector, each dimension of which represents information such as the direction and position of a feature. Each primary capsule applies 8 convolution kernels of size 9x9x256 (with stride 2) to the 20x20x256 input tensor; with 32 primary capsules, this yields a 6x6x8x32 feature tensor.
DigitCaps layer: each digit capsule corresponds to a vector output by the PrimaryCaps layer and receives the 6x6x8x32 tensor as input. Dynamic routing nests and maps the output variables of each primary capsule into the multi-layer digit capsules, activating the key features of the vectors; an 8x16 weight matrix maps the 8-dimensional input space into the 16-dimensional capsule output space.
Decoding layer: the final fully connected layers of the network, comprising ReLU and Sigmoid functions. It receives the correct 16-dimensional vector output by the DigitCaps layer as input, learns the multiple features expressed by the capsule output vector, computes the margin (edge) loss, and learns to reconstruct a 28x28 image with the same pixels as the input image.
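The PyTorch sketch below assembles the four layers with the shapes given above (28x28 input, 256 convolution kernels of 9x9, 32 primary capsules of 8 dimensions, 16-dimensional digit capsules, a fully connected decoder). The stride-2 primary convolution, weight initialization, decoder widths and class count are illustrative assumptions, not values fixed by the application:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(s, dim=-1):
    # Scale the vector length into [0, 1) while keeping its direction (cf. Eq.(6) below).
    n2 = (s * s).sum(dim=dim, keepdim=True)
    return (n2 / (1.0 + n2)) * s / (n2.sqrt() + 1e-8)

class CapsuleModule(nn.Module):
    def __init__(self, n_classes=2, in_ch=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, 256, kernel_size=9, stride=1)    # 28x28 -> 20x20x256
        self.primary = nn.Conv2d(256, 32 * 8, kernel_size=9, stride=2) # -> 6x6x(32*8)
        # One 8->16 transform per (primary capsule position, digit capsule) pair.
        self.W = nn.Parameter(0.01 * torch.randn(1, 32 * 6 * 6, n_classes, 16, 8))
        self.decoder = nn.Sequential(                                  # decoding layer
            nn.Linear(16 * n_classes, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, 28 * 28), nn.Sigmoid(),                    # 28x28 reconstruction
        )

    def forward(self, x, n_routing=3):
        b = x.size(0)
        u = F.relu(self.conv1(x))                           # convolutional layer
        u = self.primary(u).view(b, 32 * 6 * 6, 1, 8, 1)    # PrimaryCaps vectors u_i
        u_hat = (self.W @ u).squeeze(-1)                    # prediction vectors
        logits = torch.zeros(b, u_hat.size(1), u_hat.size(2), device=x.device)
        for _ in range(n_routing):                          # dynamic routing to DigitCaps
            c = logits.softmax(dim=2).unsqueeze(-1)
            v = squash((c * u_hat).sum(dim=1, keepdim=True))
            logits = logits + (u_hat * v).sum(dim=-1)
        v = v.squeeze(1)                                    # (b, n_classes, 16)
        probs = v.norm(dim=-1)                              # vector length = class probability
        recon = self.decoder(v.flatten(1)).view(b, 1, 28, 28)
        return probs, recon
```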
In the embodiment of the present application, the loss function of the capsule network module is as follows. For a given CSM image-level annotated sample {I_f, T_f}, the corresponding loss function is:

ℓ(θ_L) = J_b[O_L((I_f, T_f); θ_L)]    (1)

In equation (1), O_L((I_f, T_f); θ_L) represents the output of the capsule network module, θ_L represents the weights and parameters of the network training, and J_b denotes the binary cross-entropy loss function of the bracketed element.

Applying the capsule network module to the CSM image-level annotated samples {I_f, T_f} outputs a semantic-free rough segmentation map M = O_L((I_f, T_f); θ_L).
Step 400: predict pixel-level labels for the CSM image-level annotated samples {I_f, T_f} through the region localization network and output the region localization feature map;
In step 400, the network structure of the region localization network is shown in FIG. 5. The region localization network uses the feature extraction of the convolutional layers to generate a feature map containing position information, and uses a global average pooling layer to apply weights (w_1, w_2, ..., w_n) for a weighted average of the feature maps, yielding the region localization feature map. Regions with larger activation values in this map are most likely the segmentation positions of the damaged cervical spinal cord region. The region localization network makes full use of the primary features obtained after the convolutional-layer operations on the training samples to locate the hot-spot areas of the feature map; since a convolutional neural network has many parameters to train, reusing the feature maps reduces the number of network parameters and makes model training more efficient.
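A minimal sketch of such a region localization head is given below; a CAM-style construction (global average pooling whose classifier weights re-weight the feature maps) is assumed, and the backbone depth and channel counts are illustrative:

```python
import torch
import torch.nn as nn

class RegionLocalizer(nn.Module):
    def __init__(self, in_ch=1, n_feat=64, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, n_feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(n_feat, n_feat, 3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(n_feat, n_classes, bias=False)  # the GAP weights (w_1 ... w_n)

    def forward(self, x):
        f = self.features(x)         # (b, n_feat, H, W) feature maps with position info
        pooled = f.mean(dim=(2, 3))  # global average pooling
        scores = self.fc(pooled)     # image-level class scores
        # Weighted average of the feature maps: the region localization feature map,
        # whose high-activation areas indicate the probable lesion segmentation position.
        cam = torch.einsum('kc,bchw->bkhw', self.fc.weight, f)
        return scores, cam
```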
Step 500: the sample generation module executes a self-diffusion algorithm on the reconstructed image output by the capsule network and the region localization feature map output by the region localization network, determines the segmentation lines of the region's pixel points, and obtains rough segmentation prediction samples {I_f, L_f*};
In step 500, in order to output the segmentation map M containing the semantic information as a pixel-level annotated segmentation sample, the Random Walk idea is applied to the regions with larger activation values: the pixel points are diffused by a random-walk self-diffusion algorithm; using the input points of the region localization feature map, the Gaussian distance from each pixel of the image to the input points is computed and the optimal path among them is selected, giving the segmentation lines of the region's pixel points. Diffusion proceeds from the points of larger activation value in each class, finally generating the rather coarse segmentation prediction samples {I_f, L_f*}.
Given a CSM image-level annotated sample {I_f, T_f}, it is converted into superpixels p = {p_1, p_2, ..., p_N}, and the image is described by an undirected graph model G in which each node corresponds to a particular superpixel; the self-diffusion algorithm is then executed on G. On the basis of the rough segmentation map M, an objective function of the per-class self-diffusion process is defined (equation (2)).
In equation (2), q = [q_1, q_2, ..., q_N] denotes the label vector of all superpixels p; if p_i ∈ A, q_i is fixed to 1, and otherwise its initial value is 0.
Z_{i,j} = exp(-‖F(p_i) - F(p_j)‖ / 2σ²)    (3)

In equation (3), Z_{i,j} represents the Gaussian distance between two adjacent superpixels.
Through the above operations, semantic segmentation of CSM images carrying only image-level annotations can be achieved using a small number of high-quality pixel-level annotated images.
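A minimal sketch of the self-diffusion step on the superpixel graph G follows. The shortest-path search over Gaussian affinities is an assumed reading of "selecting an optimal path", and superpixel extraction itself is omitted:

```python
import heapq
import numpy as np

def gaussian_distance(fi, fj, sigma=1.0):
    # Eq.(3): affinity between the feature vectors of two adjacent superpixels.
    return np.exp(-np.linalg.norm(fi - fj) / (2.0 * sigma ** 2))

def self_diffuse(features, adjacency, seeds):
    # Dijkstra-style diffusion from high-activation seed superpixels: every node
    # receives the cost of its cheapest path to a seed; crossing a weak-affinity
    # (dissimilar) edge is expensive, so costs jump at the segmentation line.
    cost = {i: (0.0 if i in seeds else float('inf')) for i in adjacency}
    heap = [(0.0, s) for s in seeds]
    while heap:
        d, i = heapq.heappop(heap)
        if d > cost[i]:
            continue
        for j in adjacency[i]:
            step = 1.0 - gaussian_distance(features[i], features[j])
            if d + step < cost[j]:
                cost[j] = d + step
                heapq.heappush(heap, (d + step, j))
    return cost

# Toy usage: node 2 is separated from seed 0 by a weak-affinity boundary.
feats = {0: np.array([0.0]), 1: np.array([0.1]), 2: np.array([2.0])}
adj = {0: [1], 1: [0, 2], 2: [1]}
costs = self_diffuse(feats, adj, seeds={0})
```

Thresholding the returned costs separates the superpixels that diffusion reaches cheaply from those lying across a weak-affinity boundary, which is where the segmentation line is drawn.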
Step 600: input the segmentation prediction samples {I_f, L_f*} output by the generator and the real annotated samples {I_f, L_f} into the discriminator for adversarial training, and optimize the discriminator;
In step 600, the discriminator uses the capsule network module to record the direction and position information of the segmentation region and improve the sharpening of its boundary; it uses the cascade approach to extract the key pixels of the image that are hard to classify correctly and to filter out the simple, clearly flat region pixels; and it performs adversarial training on the processed images until Nash equilibrium forms between the generator and the discriminator and the discriminator can no longer distinguish whether an image is a segmentation prediction sample {I_f, L_f*} generated by the generator or a real annotated sample {I_f, L_f}, at which point training of the generative adversarial network is complete.
As shown in FIG. 3, in the embodiment of the application the discriminator comprises a Cascade module, a Capsule network module and a parameter optimization module; the specific functions of each module are as follows:
Cascade module: responsible for extracting the key pixels of the segmentation prediction samples. In the image segmentation process the labeling difficulty varies from pixel to pixel: a flat background area is easy to distinguish, but the boundary pixels between an object and the background are hard to distinguish. Existing network structures feed all pixels into the network for processing, causing unnecessary redundancy. The application adopts the Cascade idea and treats pixels differently, focusing on the key pixel regions that are hard to classify: the mislabeled pixels and the key pixels whose confidence is below a certain threshold, together with the corresponding ground truth, are extracted from the segmentation prediction samples generated by the generator, while correctly labeled, high-confidence pixels are filtered out. The pixels input to the next training stage are then only the hard-to-distinguish key pixels, which reduces redundant information in the network and improves its working efficiency.
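A minimal sketch of this filtering rule (the threshold value and tensor layout are illustrative assumptions):

```python
import torch

def extract_key_pixels(probs, pred, gt, tau=0.9):
    # probs: (b, h, w) confidence of the predicted label; pred/gt: (b, h, w) label maps.
    # Keep mislabeled pixels and low-confidence pixels; filter out confident, correct ones.
    wrong = pred != gt
    uncertain = probs < tau
    return wrong | uncertain   # boolean mask of key pixels for the next stage
```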
The Capsule network module is responsible for processing the extracted key pixels and the corresponding ground truth and for generating the errors. Specifically, the functions of the Capsule network module include:
step 610: local feature extraction. The key pixels and the corresponding ground truth are fed as input into their respective convolutional layers; several convolutional layers then convolve the input key pixels and the corresponding ground truth separately, extracting the low-level features of the segmentation prediction sample {I_f, L_f*}. The activation function of the convolutional layers is the ReLU function.
Step 611: extracting high-dimensional features; by constructing a PrimaryCaps layer, inputting the extracted low-level features into the PrimaryCaps layer to obtain high-dimensional feature vectors containing spatial position information; constructing Digitcaps layers, nesting and mapping output variables in the Primarycaps layers into the Digitcaps layers by adopting dynamic routing, constructing high-level features which can represent all input features most currently, and inputting the high-level features into the next layer;
The computation between the PrimaryCaps layer and the DigitCaps layer involved in step 611 is as follows:

Let u_i be the low-level feature vector extracted after convolving the key pixels and the corresponding ground truth. The low-level feature vector u_i serves as input to the PrimaryCaps layer and is multiplied by a weight matrix W_ij to obtain the prediction vector û_{j|i}:

û_{j|i} = W_{ij} u_i    (4)

The weighted sum S_j is obtained by a linear combination of the prediction vectors with coupling coefficients c_ij:

S_j = Σ_i c_ij û_{j|i}    (5)

After the weighted sum S_j is obtained, a squashing function limits the vector length to produce the output vector V_j:

V_j = (‖S_j‖² / (1 + ‖S_j‖²)) · (S_j / ‖S_j‖)    (6)

In equation (6), the first factor scales the input vector S_j and the second factor is the unit vector of S_j. In the course of computing S_j, the coefficients c_ij are obtained by a routing softmax:

c_ij = exp(b_ij) / Σ_k exp(b_ik)    (7)

In equation (7), b_ij is a scalar whose value is the b_ij of the previous iteration plus the dot product of V_j and û_{j|i}; that is, b_ij is updated as:

b_ij ← b_ij + û_{j|i} · V_j    (8)
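A NumPy sketch of equations (4) through (8), with illustrative dimensions (6 input capsules of 8 dimensions, 2 output capsules of 16 dimensions, 3 routing iterations):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, d_in, d_out = 6, 2, 8, 16
u = rng.normal(size=(n_in, d_in))                # low-level capsule vectors u_i
W = rng.normal(size=(n_in, n_out, d_out, d_in))  # weight matrices W_ij
u_hat = np.einsum('ijkl,il->ijk', W, u)          # Eq.(4): u_hat_{j|i} = W_ij u_i

def squash(s):
    n2 = (s * s).sum(-1, keepdims=True)
    return (n2 / (1 + n2)) * s / (np.sqrt(n2) + 1e-8)  # Eq.(6)

b = np.zeros((n_in, n_out))
for _ in range(3):                                   # routing iterations
    c = np.exp(b) / np.exp(b).sum(1, keepdims=True)  # Eq.(7): c_ij = softmax(b_ij)
    s = np.einsum('ij,ijk->jk', c, u_hat)            # Eq.(5): S_j = sum_i c_ij u_hat_{j|i}
    v = squash(s)                                    # Eq.(6): output vectors V_j
    b = b + np.einsum('ijk,jk->ij', u_hat, v)        # Eq.(8): b_ij += u_hat_{j|i} . V_j
```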
step 612: the discriminator feeds the high-level feature vector V output by the DigitCaps layer into the decoding layer and finally, through several fully connected layers, outputs the judgment of the image's authenticity. Specifically: an output of 0 is a judgment of fake, indicating the input image was recognized as a generated image; an output of 1 is a judgment of real, indicating the input image successfully fooled the discriminator.
A parameter optimization module: and optimizing the network parameters of the generator and the discriminator by using the error generated by the Capsule network module, so that the generator can output a more optimized segmentation result.
For a given segmentation prediction sample {I_f, L_f*} and the corresponding real annotated sample {I_f, L_f}, the overall error function of the network is:

ℓ(θ_S, θ_P) = J_b(O_p((I_f, L_f); θ_P), 1) + J_b(O_p((I_f, L_f*); θ_P), 0)    (9)

In equation (9), θ_S and θ_P represent the parameters of the generator and the discriminator respectively, J_b represents a binary cross-entropy loss function, and O_s and O_p represent the outputs of the generator and the discriminator respectively, with L_f* = O_s(I_f; θ_S). When the input comes from the real annotated sample {I_f, L_f} or from the segmentation prediction sample {I_f, L_f*}, outputs of 1 and 0 respectively mark the pixel classes as real or fake.
In the embodiment of the present application, the process of parameter optimization includes two parts:
Step 620: fix the generator parameters θ_S and optimize the discriminator parameters θ_P. In the adversarial training process, the generator parameters θ_S are first fixed; the segmentation prediction samples generated by the generator are sent to the discriminator, which judges their authenticity; and the discriminator parameters θ_P are adjusted through the back-propagation algorithm using the discriminator's error function, improving its discrimination ability. The error function corresponding to the discriminator is:

ℓ_P(θ_P) = J_b(O_p((I_f, L_f); θ_P), 1) + J_b(O_p((I_f, L_f*); θ_P), 0)    (10)

During training, the discriminator's parameters are continuously optimized and its discrimination ability continuously strengthened, so that the generator's images become easier and easier to distinguish, at which point the next stage is entered.
Step 621: fixed discriminator parameter thetapOptimization of the Generator parameter θS(ii) a The network brings the discrimination result of the discriminator into the error function of the generator, and adjusts the parameter theta of the generator through a back propagation algorithmSThe generator is caused to generate higher quality segmentation results, and thus the generator generates more accurate results to confuse the discriminator. And the error function for the generator is:
Repeating the two optimization steps finally brings the generator and the discriminator into Nash equilibrium, where the discriminator can no longer distinguish whether an image comes from the segmentation prediction samples {I_f, L_f*} output by the generator or from the real annotated samples {I_f, L_f}; training of the generative adversarial network is then complete.
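A compact sketch of this alternating optimization (the generator/discriminator call signatures and the assumption that the discriminator outputs a probability suitable for binary cross-entropy are illustrative):

```python
import torch
import torch.nn.functional as F

def train_step(generator, discriminator, opt_g, opt_d, img, gt_mask):
    # Step 620: fix theta_S, optimize theta_P (cf. Eq.(10)).
    with torch.no_grad():
        fake = generator(img)
    d_real = discriminator(img, gt_mask)   # should approach 1
    d_fake = discriminator(img, fake)      # should approach 0
    loss_d = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Step 621: fix theta_P, optimize theta_S (cf. Eq.(11)): fool the discriminator.
    fake = generator(img)
    d_fake = discriminator(img, fake)
    loss_g = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```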
Step 700: input the image-level annotated CSM image into the trained generative adversarial network and output the pixel-level segmented image of the CSM image through the network.
Please refer to FIG. 6, which is a schematic structural diagram of the medical image segmentation system based on a generative adversarial network according to an embodiment of the present application. The system comprises a sample acquisition module and the generative adversarial network; the image samples acquired by the sample acquisition module train the generative adversarial network, which comprises a generator and a discriminator. The generator uses the capsule model for structured feature representation and thereby generates pixel-level annotated samples; the discriminator distinguishes the authenticity of the generated pixel-level annotated samples. A suitable error optimization function is designed, the discrimination results are fed back into the generator and discriminator models respectively, continuous adversarial training raises the sample-generation ability of the generator and the discrimination ability of the discriminator, and finally the trained generator generates pixel-level annotated samples, achieving pixel-level segmentation of the image-level annotated medical image. Specifically:
Sample acquisition module: used to acquire, respectively, pixel-level annotated samples of other medical images and CSM image-level annotated samples. The other medical images include medical images of the lung and other parts, and the annotated samples are acquired as follows: collect a small number of fully annotated CSM image samples {I_f, L_f, T_f}, 500 DTI (diffusion tensor imaging) samples in total with an image size of 28x28, from a CSM group of 60 cases (27 men and 33 women, aged 20 to 71, mean age 45), comprising pixel-level annotated samples and image-level annotated samples {L_f, T_f}; and collect pixel-level annotated samples {I_O, L_O} of other medical images (e.g., human lungs), 8000 DTI samples with an image size of 28x28. The CSM images and the DTI samples of the other medical images are obtained after deformation correction and threshold selection; a region of interest (ROI) is determined on each DTI sample and centered on the spinal cord lesion area, so that the DTI samples are of uniform size and free of the influence of spinal fluid and artifacts.
The generator comprises a pre-training module, a capsule network module, a region positioning network module and a sample generating module, wherein the functions of the modules are as follows:
a pre-training module: the method is used for pre-training a capsule network module through pixel-level labeled samples of other medical images to obtain semantic label-free samples, processing CSM image-level labeled samples through the semantic label-free samples, and distinguishing backgrounds and effective segmentation areas of the CSM image-level labeled samples; the capsule network module adopts a migratable semantic segmentation model, and the model can transfer the learned segmentation knowledge from the pixel-level annotation of the fully-annotated data to the image-level annotation of the weakly-annotated data. In practical application, high-quality medical images are difficult to acquire, so that the method and the device enable new data to be matched with data of a target domain through a trained model, acquire the characteristics of pixel-level segmentation, and can also realize high-precision image segmentation under the condition of a small sample size. The pre-training process of the capsule network module specifically comprises the following steps:
1. processing pixel-level labeling samples of other medical images to convert the pixel-level labeling samples into semanteme-free segmentation image labels; the semantic-free segmentation image labeling only distinguishes the effective segmentation area of the sample from the background, but does not distinguish the shape of the sample. The network trained by the training data learns that the knowledge for distinguishing the object from the background belongs to wide high-order features, and the image is distinguished by the object from the background, so the wide high-order features have strong universality and can be easily migrated to other different tasks, the knowledge learned from the medical image labeled at a high-quality pixel level can be migrated to the medical image labeled at a low-quality image level, and the high-quality image and the low-quality image do not need to have direct relevance.
2. Process the CSM image-level labeled samples according to the semantic-free segmentation labels of the other medical images to generate semantic-free labeled samples {IO, LO} of the CSM image-level labeled samples, where the pixel-level semantic-free labels LO are obtained by filtering out semantic information. The purpose of the pixel-level semantic-free labels is to distinguish the effective segmentation region of the data from the background, so that the learned knowledge can be transferred more easily between strongly and weakly labeled data.
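A minimal sketch of this semantic-filtering step, assuming the pixel-level labels are integer class maps with 0 as background (names are illustrative):

```python
import numpy as np

def to_semantic_free_label(pixel_label: np.ndarray) -> np.ndarray:
    """Collapse a multi-class pixel-level label map into a binary mask.

    Every object class becomes 1 (effective segmentation region) and 0
    remains background, discarding class semantics so that the learned
    object-vs-background knowledge transfers across imaging tasks.
    """
    return (pixel_label > 0).astype(np.uint8)
```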
A capsule network module: used for inputting the CSM image-level labeled samples {If, Tf} into the capsule network module after pre-training is completed; the capsule network module then outputs a reconstructed image of the CSM image-level labeled sample. The capsule network module uses the output vector of a single capsule neuron to record the direction and position information of the edge pixels of the segmentation region of the CSM image-level labeled sample, extracts classification probability values with a vector nonlinear activation function, determines the segmentation region and background of the CSM image-level labeled sample, calculates the edge loss, and outputs the reconstructed image. By exploiting the instantiation parameters of the capsule network module's pixels, the activated vectors record the position and angle information of the segmented region, which effectively improves the sharpness of the boundary of the segmented region.
The capsule network module comprises a convolutional layer, a PrimaryCaps layer, a DigitCaps layer and a decoding layer. Each capsule represents a function and outputs an activated vector, whose length represents the probability that the region dividing line found by the capsule is correct. The functions of each layer are as follows:
convolutional layer: performs convolution on the CSM image-level labeled samples {If, Tf} to obtain primary features such as the morphology of the spinal disc and the compressed position. Taking a CSM image of input size 28x28 as an example, the convolutional layer has 256 convolution kernels of 9x9x1 with stride 1 and uses the ReLU activation function; its feature extraction outputs a 20x20x256 feature tensor.
PrimaryCaps layer: comprises 32 primary capsules that receive the primary features from the convolutional layer. Each capsule generates a combination of features and expresses it as a vector, each dimension of which represents information such as the direction and position of a feature. Each primary capsule applies 8 convolution kernels of 9x9x256 to the 20x20x256 input tensor; across the 32 primary capsules this yields a 6x6x8x32 feature tensor.
DigitCaps layer: each digital capsule corresponds to a vector output by the PrimaryCaps layer and receives the 6x6x8x32 tensor as input. Dynamic routing nests and maps the output variables of each primary capsule into the multi-layer digital capsules, activating the key features of the vectors; a weight matrix of size 8x16 maps the 8-dimensional input space into the 16-dimensional capsule output space.
decoding layer: the final fully connected layers of the network, comprising ReLU and Sigmoid functions. It receives the correct 16-dimensional vector output by the DigitCaps layer as input, learns the multiple features expressed by the capsule output vector, calculates the edge loss, and learns to reconstruct a 28x28 image with the same pixels as the input image.
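The layer sizes quoted above can be summarized in a shape-level PyTorch sketch; the stride of 2 in the PrimaryCaps convolution is an inference from the quoted 20x20 to 6x6 size change, and all names are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CapsuleFrontEnd(nn.Module):
    """Conv + PrimaryCaps front end for 28x28 inputs, following the
    layer sizes quoted in the text."""

    def __init__(self):
        super().__init__()
        # 256 convolution kernels of 9x9x1 with stride 1: 28x28x1 -> 20x20x256
        self.conv1 = nn.Conv2d(1, 256, kernel_size=9, stride=1)
        # 32 primary capsules, each applying 8 kernels of 9x9x256:
        # 20x20x256 -> 6x6x(32*8), i.e. the 6x6x8x32 tensor in the text
        self.primary = nn.Conv2d(256, 32 * 8, kernel_size=9, stride=2)

    def forward(self, x):                      # x: (B, 1, 28, 28)
        x = F.relu(self.conv1(x))              # (B, 256, 20, 20)
        u = self.primary(x)                    # (B, 256, 6, 6)
        # One 8-D pose vector per capsule position: (B, 6*6*32, 8)
        return u.permute(0, 2, 3, 1).reshape(x.size(0), -1, 8)
```

Each row of the returned tensor is an 8-D capsule pose that the DigitCaps layer maps into the 16-D output space via the 8x16 weight matrices.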
In the embodiment of the present application, the loss function of the capsule network module is as follows:
for a given CSM image-level labeled sample {If, Tf}, the corresponding loss function is:

$$\ell_L(\theta_L) = J_b\big(O_L((I_f, T_f); \theta_L)\big) \qquad (1)$$

In formula (1), $O_L((I_f,T_f);\theta_L)$ represents the output of the capsule network module, $\theta_L$ represents the weights and parameters of network training, and $J_b$ denotes the binary cross-entropy loss function applied to the element in brackets.
Applying the capsule network module to the CSM image-level labeled samples {If, Tf} outputs a semantic-free rough segmentation map $M = O_L((I_f, T_f); \theta_L)$.
A region positioning network module: used for predicting pixel-level labels for the CSM image-level labeled samples {If, Tf} and outputting a region positioning feature map. The region positioning network uses the feature extraction of the convolutional layers to generate feature maps containing position information, and a global average pooling layer applies weights $(w_1, w_2, \ldots, w_n)$ to compute a weighted average of the feature maps, yielding the region positioning feature map. Regions with larger activation values in this map are most likely the segmentation positions of the damaged cervical spinal cord region. The region positioning network reuses the primary feature maps obtained from the convolutional layers of the training samples to locate hot-spot regions; since convolutional neural networks have many trainable parameters, reusing the feature maps reduces the parameter count and makes model training more efficient.
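A compact sketch of this weighted-average region positioning map (a CAM-style computation); array shapes and names are illustrative:

```python
import numpy as np

def region_positioning_map(feature_maps: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Weighted average of convolutional feature maps.

    feature_maps: (n, H, W) maps reused from the convolutional trunk;
    weights: (n,) global-average-pooling weights (w1, ..., wn).
    High-activation regions indicate likely lesion segmentation sites.
    """
    cam = np.tensordot(weights, feature_maps, axes=(0, 0))  # (H, W)
    cam -= cam.min()
    return cam / (cam.max() + 1e-8)  # normalize to [0, 1]
```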
A sample generation module: used for executing a self-diffusion algorithm on the reconstructed image output by the capsule network module and the region positioning feature map output by the region positioning network, determining the dividing lines between region pixels, and obtaining rough segmentation prediction samples {If, Lf*}. To output a segmentation map M containing semantic information as a pixel-level labeled segmentation sample, the idea of random walk is applied to the regions with larger activation values: pixels are diffused by a random-walk self-diffusion algorithm, the Gaussian distance from each pixel to the input points of the region positioning feature map is calculated, an optimal path is selected among these distances to obtain the dividing lines of the pixels in the region, diffusion proceeds from each classification point with a large activation value, and finally the rough segmentation prediction samples {If, Lf*} are generated.
Given a CSM image-level labeled sample {If, Tf}, it is converted into superpixels p = {p1, p2, …, pN}, and the image is described by an undirected graph model G in which each node corresponds to a particular superpixel; the self-diffusion algorithm is then executed on G. On the basis of the rough segmentation map M, the objective function of the category self-diffusion process is defined as:

$$E(q) = \sum_{i,j} Z_{i,j}\,(q_i - q_j)^2 \qquad (2)$$

In equation (2), $q = [q_1, q_2, \ldots, q_N]$ denotes the label vector of all superpixels p; if $p_i \in A$, $q_i$ is fixed to 1, otherwise its initial value is 0.
$$Z_{i,j} = \exp\!\big(-\|F(p_i) - F(p_j)\| / 2\sigma^2\big) \qquad (3)$$

In formula (3), $Z_{i,j}$ represents the Gaussian distance between two adjacent superpixels.
Through the above operations, semantic segmentation of CSM images that carry only image-level labels can be achieved using only a small number of high-quality pixel-level labeled images.
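The self-diffusion step can be sketched as affinity-weighted label propagation over the superpixel graph; the iteration below is one plausible realization of equations (2) and (3), with illustrative names:

```python
import numpy as np

def self_diffusion(features: np.ndarray, adjacency: list, seeds: np.ndarray,
                   sigma: float = 1.0, iters: int = 50) -> np.ndarray:
    """Random-walk label diffusion over a superpixel graph.

    features: (N, d) mean feature per superpixel; adjacency: list of
    (i, j) neighbor pairs; seeds: boolean mask of high-activation
    superpixels (the set A) whose labels q_i are clamped to 1.
    Affinities follow Z_ij = exp(-||F(p_i) - F(p_j)|| / (2 * sigma**2)).
    """
    n = len(features)
    Z = np.zeros((n, n))
    for i, j in adjacency:
        z = np.exp(-np.linalg.norm(features[i] - features[j]) / (2 * sigma ** 2))
        Z[i, j] = Z[j, i] = z
    q = seeds.astype(float)
    deg = Z.sum(axis=1) + 1e-8
    for _ in range(iters):
        q = Z @ q / deg          # affinity-weighted average over neighbors
        q[seeds] = 1.0           # keep seed labels fixed
    return q                     # per-superpixel foreground probability
```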
The segmentation prediction samples {If, Lf*} generated by the generator and the real labeled samples {If, Lf} are input into the discriminator for adversarial training. The discriminator uses a capsule network to record the direction and position information of the segmentation region, improving the sharpness of its boundary, and uses a cascade mode to extract the key region pixels of the image that are difficult to classify correctly while filtering out simple, flat region pixels. The processed images are used for generation-adversarial training until Nash equilibrium is reached between the generator and the discriminator, i.e., the discriminator can no longer distinguish whether an image is a segmentation prediction sample {If, Lf*} generated by the generator or a real labeled sample {If, Lf}, at which point the training of the generative adversarial network is complete.
Specifically, the discriminator comprises a Cascade module, a Capsule network module and a parameter optimization module; their specific functions are as follows:
Cascade module: responsible for extracting the key pixels of the segmentation prediction samples. In the image segmentation process, the labeling difficulty of each pixel differs: flat background regions are easy to distinguish, while boundary pixels between an object and the background are difficult. Prior network structures feed all pixels into the network for processing, causing unnecessary redundancy. The present application adopts the cascade concept and treats pixels differently, focusing on the key pixel regions that are hard to classify: pixels with wrong labels and pixels whose confidence is below a set threshold in the segmentation prediction samples generated by the generator are extracted, while correctly labeled, high-confidence pixels are filtered out. The pixels input to the next training stage are therefore only the hard-to-distinguish key pixels, which reduces redundant information in the network and improves its efficiency.
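A minimal sketch of this key-pixel selection, assuming a binary task with per-pixel foreground probabilities (the threshold value and names are illustrative):

```python
import numpy as np

def key_pixel_mask(pred_prob: np.ndarray, label: np.ndarray,
                   threshold: float = 0.9) -> np.ndarray:
    """Cascade-style hard-pixel mining.

    Keeps wrongly labeled pixels and pixels whose confidence falls
    below the threshold; correctly labeled, high-confidence (flat)
    pixels are filtered out so later stages see only hard regions.
    """
    pred = (pred_prob > 0.5).astype(label.dtype)
    confidence = np.where(pred == 1, pred_prob, 1.0 - pred_prob)
    wrong = pred != label
    uncertain = confidence < threshold
    return wrong | uncertain  # boolean mask of key pixels
```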
The Capsule network module is responsible for processing the extracted key pixels and generating the error. Specifically, its functions include:
1. Extract local features. The segmentation prediction samples {If, Lf*} output by the generator are filtered by the Cascade module; the key pixels and the corresponding ground truth are extracted and input into their respective convolutional layers. Several convolutional layers then convolve the input key pixels and the corresponding ground truth to extract the low-level features in the segmentation prediction samples {If, Lf*}; the activation function of the convolutional layers is the ReLU function.
2. Extract high-dimensional features. A PrimaryCaps layer is constructed, and the extracted low-level features are input into it to obtain high-dimensional feature vectors containing spatial position information. A DigitCaps layer is constructed, and dynamic routing nests and maps the output variables of the PrimaryCaps layer into the DigitCaps layer, building the high-level features that currently best represent all input features, which are then input to the next layer.
Here, the calculation between the PrimaryCaps layer and the DigitCaps layer proceeds as follows:
Let $u_i$ be the low-level feature vector extracted after convolving the key pixels and the corresponding ground truth. The low-level feature vector $u_i$ serves as the input of the PrimaryCaps layer and is multiplied by a weight matrix $W_{ij}$ to obtain the prediction vector $\hat{u}_{j|i}$, where:

$$\hat{u}_{j|i} = W_{ij}\, u_i \qquad (4)$$
A weighted sum $S_j$ is then obtained by linear combination of the prediction vectors with weight coefficients $c_{ij}$, where:

$$S_j = \sum_i c_{ij}\, \hat{u}_{j|i} \qquad (5)$$
After the weighted sum $S_j$ is obtained, a squashing function compresses $S_j$ to limit the vector length, giving the output vector $V_j$, where:

$$V_j = \frac{\|S_j\|^2}{1 + \|S_j\|^2} \cdot \frac{S_j}{\|S_j\|} \qquad (6)$$
In equation (6), the first factor is the scaling scale of the input vector $S_j$, and the second factor is the unit vector of $S_j$. In the course of calculating $S_j$, the coefficients $c_{ij}$ are coupling coefficients determined by the routing process, calculated as:

$$c_{ij} = \frac{\exp(b_{ij})}{\sum_k \exp(b_{ik})} \qquad (7)$$
In formula (7), $b_{ij}$ is initialized to a constant; in each routing iteration, the new $b_{ij}$ is obtained from its value in the previous iteration by adding the sum of the products of $V_j$ and $\hat{u}_{j|i}$, i.e., the update rule of $b_{ij}$ is:

$$b_{ij} \leftarrow b_{ij} + V_j \cdot \hat{u}_{j|i} \qquad (8)$$
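Equations (4) through (8) correspond to the standard routing-by-agreement loop; a compact PyTorch sketch (assuming the prediction vectors $\hat{u}_{j|i}$ have already been computed, with illustrative names) is:

```python
import torch

def dynamic_routing(u_hat: torch.Tensor, iters: int = 3) -> torch.Tensor:
    """Routing-by-agreement between PrimaryCaps and DigitCaps.

    u_hat: (B, N_in, N_out, D) prediction vectors from equation (4).
    Implements equations (5)-(8): softmax coupling, weighted sum,
    squash, and the agreement update b_ij += v_j . u_hat_ij.
    """
    B, n_in, n_out, d = u_hat.shape
    b = torch.zeros(B, n_in, n_out, device=u_hat.device)
    for _ in range(iters):
        c = torch.softmax(b, dim=2)                       # equation (7)
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)          # equation (5)
        norm2 = (s ** 2).sum(dim=-1, keepdim=True)
        v = (norm2 / (1 + norm2)) * s / torch.sqrt(norm2 + 1e-8)  # eq. (6)
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)      # equation (8)
    return v                                              # (B, N_out, D)
```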
3. The discriminator feeds the high-level feature vector V output by the DigitCaps layer into the decoding layer, and after several fully connected layers finally outputs the judgment of the image's authenticity. Specifically: if the output is 0, the input image is judged false, indicating it was identified as a generated image; if the output is 1, the input image is judged true, indicating it successfully fooled the discriminator.
A parameter optimization module: used for optimizing the network parameters of the generator and the discriminator with the error generated by the Capsule network module, so that the generator outputs better segmentation results.
For a given segmentation prediction sample {If, Lf*} and the corresponding real labeled sample {If, Lf}, the overall error function of the network is:

$$\ell(\theta_S, \theta_P) = J_b\big(O_P((I_f, L_f); \theta_P), 1\big) + J_b\big(O_P((I_f, L_f^{*}); \theta_P), 0\big), \quad L_f^{*} = O_S(I_f; \theta_S) \qquad (9)$$

In formula (9), $\theta_S$ and $\theta_P$ represent the parameters of the generator and the discriminator respectively, $J_b$ represents the binary cross-entropy loss function, and $O_S$ and $O_P$ represent the outputs of the generator and the discriminator respectively. When the input comes from the real labeled sample {If, Lf} or from the segmentation prediction sample {If, Lf*}, the targets 1 and 0 respectively mark the authenticity of the pixel classes.
In the embodiment of the present application, the process of parameter optimization includes two parts:
1. Fix the generator parameters $\theta_S$ and optimize the discriminator parameters $\theta_P$. In the adversarial training process, the generator parameters $\theta_S$ are first fixed; the segmentation prediction samples generated by the generator are sent to the discriminator, which judges their authenticity, and the discriminator parameters $\theta_P$ are adjusted by back-propagation using the discriminator error function, improving its discrimination ability. The error function corresponding to the discriminator is:

$$\ell_P(\theta_P) = J_b\big(O_P((I_f, L_f); \theta_P), 1\big) + J_b\big(O_P((I_f, L_f^{*}); \theta_P), 0\big) \qquad (10)$$
During training, the discriminator's parameters are continuously optimized and its discrimination ability keeps strengthening, so the generator's images become easier and easier to distinguish, at which point training enters the next stage.
2. Fix the discriminator parameters $\theta_P$ and optimize the generator parameters $\theta_S$. The network feeds the discriminator's judgment back into the generator's error function and adjusts the generator parameters $\theta_S$ by back-propagation, driving the generator to produce higher-quality segmentation results that more accurately confuse the discriminator. The error function corresponding to the generator is:

$$\ell_S(\theta_S) = J_b\big(O_P((I_f, O_S(I_f; \theta_S)); \theta_P), 1\big) \qquad (11)$$
The above two optimization steps are repeated until Nash equilibrium is finally reached between the generator and the discriminator, i.e., the discriminator cannot distinguish whether an image is a segmentation prediction sample {If, Lf*} output by the generator or a real labeled sample {If, Lf}; the generative adversarial network then finishes training.
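This two-phase alternation can be sketched as one training round; module and optimizer names are illustrative, and the loss terms follow formulas (10) and (11):

```python
import torch

def train_step(generator, discriminator, opt_g, opt_d, bce, image, real_label):
    """One round of the two-phase alternating optimization.

    generator(image) is assumed to return a predicted label map L_f*,
    and discriminator(image, label) a realness score in (0, 1).
    """
    # Phase 1: fix theta_S, optimize the discriminator parameters theta_P.
    fake_label = generator(image).detach()          # stop gradients into G
    real_score = discriminator(image, real_label)
    fake_score = discriminator(image, fake_label)
    d_loss = bce(real_score, torch.ones_like(real_score)) + \
             bce(fake_score, torch.zeros_like(fake_score))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Phase 2: fix theta_P, optimize the generator parameters theta_S.
    gen_score = discriminator(image, generator(image))
    g_loss = bce(gen_score, torch.ones_like(gen_score))  # aim to be judged real
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```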
The image-level labeled CSM image to be segmented is then input into the trained generative adversarial network, which outputs the pixel-level segmented image of the CSM image to be segmented.
Fig. 7 is a schematic structural diagram of a hardware device for the medical image segmentation method based on a generative adversarial network according to an embodiment of the present application. As shown in fig. 7, the device includes one or more processors and a memory. Taking one processor as an example, the apparatus may further include an input system and an output system.
The processor, memory, input system, and output system may be connected by a bus or other means; fig. 7 exemplifies a bus connection.
The memory, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules. The processor executes various functional applications and data processing of the electronic device, i.e., implements the processing method of the above-described method embodiment, by executing the non-transitory software program, instructions and modules stored in the memory.
The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data and the like. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and these remote memories may be connected to the processing system over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input system may receive input numeric or character information and generate a signal input. The output system may include a display device such as a display screen.
The one or more modules are stored in the memory and, when executed by the one or more processors, perform the following for any of the above method embodiments:
step a: respectively acquiring pixel-level labeled samples of other medical images and image-level labeled samples of the medical image to be segmented;
step b: training a capsule-network-based generative adversarial network with the pixel-level labeled samples of the other medical images and the image-level labeled samples of the medical image to be segmented, the generative adversarial network comprising a generator and a discriminator;
step c: the generator extracts pixel-level features from the pixel-level labeled samples of the other medical images, processes the image-level labeled samples of the medical image to be segmented with these pixel-level features to generate pixel-level labeled samples of the medical image to be segmented, and generates segmentation prediction samples of the medical image to be segmented based on the pixel-level labeled samples;
step d: inputting the segmentation prediction samples generated by the generator, together with the real labeled samples of the image to be segmented, into the discriminator for generation-adversarial training, judging the authenticity of the segmentation prediction samples, and optimizing the generator and the discriminator according to an error function to obtain a trained generative adversarial network;
step e: inputting the image-level labeled medical image to be segmented into the trained generative adversarial network, and outputting the pixel-level segmented image of the medical image to be segmented through the generative adversarial network.
The above product can execute the method provided by the embodiments of the present application, and possesses the functional modules and beneficial effects corresponding to the method. For technical details not described in detail in this embodiment, reference may be made to the methods provided in the embodiments of the present application.
Embodiments of the present application provide a non-transitory (non-volatile) computer storage medium having stored thereon computer-executable instructions that may perform the following operations:
step a: respectively acquiring pixel-level labeled samples of other medical images and image-level labeled samples of the medical image to be segmented;
step b: training a capsule-network-based generative adversarial network with the pixel-level labeled samples of the other medical images and the image-level labeled samples of the medical image to be segmented, the generative adversarial network comprising a generator and a discriminator;
step c: the generator extracts pixel-level features from the pixel-level labeled samples of the other medical images, processes the image-level labeled samples of the medical image to be segmented with these pixel-level features to generate pixel-level labeled samples of the medical image to be segmented, and generates segmentation prediction samples of the medical image to be segmented based on the pixel-level labeled samples;
step d: inputting the segmentation prediction samples generated by the generator, together with the real labeled samples of the image to be segmented, into the discriminator for generation-adversarial training, judging the authenticity of the segmentation prediction samples, and optimizing the generator and the discriminator according to an error function to obtain a trained generative adversarial network;
step e: inputting the image-level labeled medical image to be segmented into the trained generative adversarial network, and outputting the pixel-level segmented image of the medical image to be segmented through the generative adversarial network.
Embodiments of the present application provide a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform the following:
step a: respectively acquiring pixel-level labeled samples of other medical images and image-level labeled samples of the medical image to be segmented;
step b: training a capsule-network-based generative adversarial network with the pixel-level labeled samples of the other medical images and the image-level labeled samples of the medical image to be segmented, the generative adversarial network comprising a generator and a discriminator;
step c: the generator extracts pixel-level features from the pixel-level labeled samples of the other medical images, processes the image-level labeled samples of the medical image to be segmented with these pixel-level features to generate pixel-level labeled samples of the medical image to be segmented, and generates segmentation prediction samples of the medical image to be segmented based on the pixel-level labeled samples;
step d: inputting the segmentation prediction samples generated by the generator, together with the real labeled samples of the image to be segmented, into the discriminator for generation-adversarial training, judging the authenticity of the segmentation prediction samples, and optimizing the generator and the discriminator according to an error function to obtain a trained generative adversarial network;
step e: inputting the image-level labeled medical image to be segmented into the trained generative adversarial network, and outputting the pixel-level segmented image of the medical image to be segmented through the generative adversarial network.
The medical image segmentation method, system and electronic device based on a generative adversarial network of the present application optimize a deep convolutional neural network by fusing the capsule mechanism, combining the ideas of the capsule network and the cascade. New training image samples are generated even when the number of medical image samples is small, realizing semantic segmentation of low-quality medical image data that carries only image-level labels and transferring the segmentation knowledge learned from fully labeled pixel-level data to weakly labeled image-level data. This improves the feature expression capability of the model, expands the applicability of labeled medical image samples, and effectively reduces the segmentation model's dependence on pixel-level labeled data. The approach features little redundant network information and sufficient feature extraction; with only a small number of pixel-level labeled samples, it improves the efficiency of adversarial training between generated and real samples and can effectively achieve high-precision pixel-level image segmentation.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (11)
1. A medical image segmentation method based on a generative adversarial network, characterized by comprising the following steps:
step a: respectively acquiring pixel-level labeled samples of other medical images and image-level labeled samples of the medical image to be segmented;
step b: training a capsule-network-based generative adversarial network with the pixel-level labeled samples of the other medical images and the image-level labeled samples of the medical image to be segmented, the generative adversarial network comprising a generator and a discriminator;
step c: the generator extracts pixel-level features from the pixel-level labeled samples of the other medical images, processes the image-level labeled samples of the medical image to be segmented with these pixel-level features to generate pixel-level labeled samples of the medical image to be segmented, and generates segmentation prediction samples of the medical image to be segmented based on the pixel-level labeled samples;
step d: inputting the segmentation prediction samples generated by the generator, together with the real labeled samples of the image to be segmented, into the discriminator for generation-adversarial training, judging the authenticity of the segmentation prediction samples, and optimizing the generator and the discriminator according to an error function to obtain a trained generative adversarial network;
step e: inputting the image-level labeled medical image to be segmented into the trained generative adversarial network, and outputting the pixel-level segmented image of the medical image to be segmented through the generative adversarial network.
2. The medical image segmentation method based on a generative adversarial network according to claim 1, wherein in step c the generator comprises a capsule network module and a region positioning network, and the generation of the segmentation prediction samples of the medical image to be segmented by the generator specifically comprises:
step b1: pre-training the capsule network module with the pixel-level labeled samples of other medical images to obtain semantic-free labeled samples, and processing the image-level labeled samples of the image to be segmented with the semantic-free labels to distinguish the background from the effective segmentation region of the image-level labeled samples of the image to be segmented;
step b2: inputting the image-level labeled samples of the image to be segmented into the pre-trained capsule network module, and outputting reconstructed images of the image-level labeled samples of the image to be segmented through the capsule network module;
step b3: the region positioning network uses the feature extraction of the convolutional layers to generate feature maps containing position information for the image-level labeled samples of the image to be segmented, and a global average pooling layer applies weights $(w_1, w_2, \ldots, w_n)$ to compute a weighted average of the feature maps, yielding the region positioning feature map of the image-level labeled samples of the image to be segmented;
step b4: executing a self-diffusion algorithm according to the reconstructed image and the region positioning feature map, determining the dividing lines of the region pixels, and obtaining the segmentation prediction samples of the image-level labeled samples of the image to be segmented.
3. The medical image segmentation method based on a generative adversarial network according to claim 2, wherein in step b2 the capsule network module comprises a convolutional layer, a PrimaryCaps layer, a DigitCaps layer and a decoding layer; the capsule network module records the direction and position information of the edge pixels of the segmentation region of the image-level labeled sample of the image to be segmented by using the output vector of a single capsule neuron, extracts classification probability values with a vector nonlinear activation function, determines the segmentation region and background of the image-level labeled sample of the image to be segmented, calculates the edge loss, and outputs the reconstructed image of the image-level labeled sample of the image to be segmented.
4. The medical image segmentation method based on a generative adversarial network according to claim 2, wherein in step b4 executing the self-diffusion algorithm according to the reconstructed image and the region positioning feature map specifically comprises: diffusing the pixels in the regions with larger activation values of the region positioning feature map with a random-walk self-diffusion algorithm, calculating the Gaussian distance from each pixel to the input points of the region positioning feature map, selecting an optimal path among these distances to obtain the dividing lines of the pixels in the region, and finally generating the segmentation prediction samples.
5. The medical image segmentation method based on a generative adversarial network according to any one of claims 1 to 4, wherein in step d the discriminator comprises a Cascade module, a Capsule network module and a parameter optimization module, and the generation-adversarial training performed by the discriminator specifically comprises:
step d1: extracting, through the Cascade module, the wrongly labeled pixels, the key pixels whose confidence is below a set threshold, and the corresponding ground truth in the segmentation prediction samples, and filtering out the correctly labeled pixels whose confidence is above the set threshold;
step d2: processing the extracted key pixels and the corresponding ground truth through the Capsule network module, and generating the error;
step d3: the parameter optimization module optimizes the network parameters of the generator and the discriminator using the error generated by the Capsule network module; wherein for a given segmentation prediction sample {If, Lf*} and the corresponding real labeled sample {If, Lf}, the overall error function of the network is:

$$\ell(\theta_S, \theta_P) = J_b\big(O_P((I_f, L_f); \theta_P), 1\big) + J_b\big(O_P((I_f, L_f^{*}); \theta_P), 0\big)$$

In the above formula, $\theta_S$ and $\theta_P$ represent the parameters of the generator and the discriminator respectively, $J_b$ represents the binary cross-entropy loss function, and $O_S$ and $O_P$ represent the outputs of the generator and the discriminator respectively; when the input comes from the real labeled sample {If, Lf} or the segmentation prediction sample {If, Lf*}, the targets 1 and 0 respectively mark the authenticity of the pixel classes.
6. A medical image segmentation system based on a generative adversarial network, characterized by comprising a sample acquisition module and a generative adversarial network, wherein:
the sample acquisition module is used for respectively acquiring pixel-level labeled samples of other medical images and image-level labeled samples of the medical image to be segmented;
a capsule-network-based generative adversarial network is trained with the pixel-level labeled samples of the other medical images and the image-level labeled samples of the medical image to be segmented;
the generative adversarial network comprises a generator and a discriminator; the generator performs pixel-level feature extraction on the pixel-level labeled samples of other medical images, processes the image-level labeled samples of the medical image to be segmented with the pixel-level features to generate pixel-level labeled samples of the medical image to be segmented, and generates segmentation prediction samples of the medical image to be segmented based on the pixel-level labeled samples;
the segmentation prediction samples generated by the generator, together with the real labeled samples of the image to be segmented, are input into the discriminator for generation-adversarial training; the authenticity of the segmentation prediction samples is judged, and the generator and the discriminator are optimized according to an error function to obtain a trained generative adversarial network;
the image-level labeled medical image to be segmented is input into the trained generative adversarial network, and the pixel-level segmented image of the medical image to be segmented is output through the generative adversarial network.
7. The medical image segmentation system based on a generative adversarial network according to claim 6, wherein the generator comprises a pre-training module, a capsule network module, a region positioning network module and a sample generation module:
a pre-training module: used for pre-training the capsule network module with pixel-level labeled samples of other medical images to obtain semantic-free labeled samples, and for processing the image-level labeled samples of the image to be segmented with the semantic-free labels to distinguish the background from the effective segmentation region of the image-level labeled samples of the image to be segmented;
a capsule network module: used for receiving the image-level labeled samples of the image to be segmented after pre-training is completed, and outputting reconstructed images of the image-level labeled samples of the image to be segmented through the capsule network module;
a region positioning network: used for generating feature maps containing position information of the image-level labeled samples of the image to be segmented through the feature extraction of the convolutional layers, and applying weights $(w_1, w_2, \ldots, w_n)$ with a global average pooling layer to compute a weighted average of the feature maps, yielding the region positioning feature map of the image-level labeled samples of the image to be segmented;
a sample generation module: used for executing a self-diffusion algorithm according to the reconstructed image and the region positioning feature map, determining the dividing lines of the region pixels, and obtaining the segmentation prediction samples of the image-level labeled samples of the image to be segmented.
8. The medical image segmentation system based on a generative adversarial network according to claim 7, wherein the capsule network module comprises a convolutional layer, a PrimaryCaps layer, a DigitCaps layer and a decoding layer; the capsule network module records the direction and position information of the edge pixels of the segmentation region of the image-level labeled sample of the image to be segmented by using the output vector of a single capsule neuron, extracts classification probability values with a vector nonlinear activation function, determines the segmentation region and background of the image-level labeled sample of the image to be segmented, calculates the edge loss, and outputs the reconstructed image of the image-level labeled sample of the image to be segmented.
9. The medical image segmentation system based on a generative adversarial network according to claim 7, wherein the sample generation module executes the self-diffusion algorithm according to the reconstructed image and the region positioning feature map, specifically comprising: diffusing the pixels in the regions with larger activation values of the region positioning feature map with a random-walk self-diffusion algorithm, calculating the Gaussian distance from each pixel to the input points of the region positioning feature map, selecting an optimal path among these distances to obtain the dividing lines of the pixels in the region, and finally generating the segmentation prediction samples.
10. The medical image segmentation system based on a generative adversarial network according to any one of claims 6 to 9, wherein the discriminator comprises a Cascade module, a Capsule network module and a parameter optimization module:
a Cascade module: used for extracting the wrongly labeled pixels, the key pixels whose confidence is below a set threshold, and the corresponding ground truth in the segmentation prediction samples, and filtering out the correctly labeled pixels whose confidence is above the set threshold;
a Capsule network module: used for processing the extracted key pixels and the corresponding ground truth, and generating the error;
a parameter optimization module: used for optimizing the network parameters of the generator and the discriminator with the error generated by the Capsule network module; wherein for a given segmentation prediction sample {If, Lf*} and the corresponding real labeled sample {If, Lf}, the overall error function of the network is:

$$\ell(\theta_S, \theta_P) = J_b\big(O_P((I_f, L_f); \theta_P), 1\big) + J_b\big(O_P((I_f, L_f^{*}); \theta_P), 0\big)$$

In the above formula, $\theta_S$ and $\theta_P$ represent the parameters of the generator and the discriminator respectively, $J_b$ represents the binary cross-entropy loss function, and $O_S$ and $O_P$ represent the outputs of the generator and the discriminator respectively; when the input comes from the real labeled sample {If, Lf} or the segmentation prediction sample {If, Lf*}, the targets 1 and 0 respectively mark the authenticity of the pixel classes.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor, the instructions enabling the at least one processor to perform the following operations of the medical image segmentation method based on a generative adversarial network according to any one of claims 1 to 5:
step a: respectively acquiring pixel-level labeled samples of other medical images and image-level labeled samples of the medical image to be segmented;
step b: training a capsule-network-based generative adversarial network with the pixel-level labeled samples of the other medical images and the image-level labeled samples of the medical image to be segmented, the generative adversarial network comprising a generator and a discriminator;
step c: the generator extracts pixel-level features from the pixel-level labeled samples of the other medical images, processes the image-level labeled samples of the medical image to be segmented with these pixel-level features to generate pixel-level labeled samples of the medical image to be segmented, and generates segmentation prediction samples of the medical image to be segmented based on the pixel-level labeled samples;
step d: inputting the segmentation prediction samples generated by the generator, together with the real labeled samples of the image to be segmented, into the discriminator for generation-adversarial training, judging the authenticity of the segmentation prediction samples, and optimizing the generator and the discriminator according to an error function to obtain a trained generative adversarial network;
step e: inputting the image-level labeled medical image to be segmented into the trained generative adversarial network, and outputting the pixel-level segmented image of the medical image to be segmented through the generative adversarial network.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910707712.XA CN110503654B (en) | 2019-08-01 | 2019-08-01 | Medical image segmentation method and system based on generation countermeasure network and electronic equipment |
PCT/CN2019/125428 WO2021017372A1 (en) | 2019-08-01 | 2019-12-14 | Medical image segmentation method and system based on generative adversarial network, and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910707712.XA CN110503654B (en) | 2019-08-01 | 2019-08-01 | Medical image segmentation method and system based on generation countermeasure network and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110503654A true CN110503654A (en) | 2019-11-26 |
CN110503654B CN110503654B (en) | 2022-04-26 |
Family
ID=68586980
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910707712.XA Active CN110503654B (en) | 2019-08-01 | 2019-08-01 | Medical image segmentation method and system based on generation countermeasure network and electronic equipment |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110503654B (en) |
WO (1) | WO2021017372A1 (en) |
Cited By (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111160441A (en) * | 2019-12-24 | 2020-05-15 | 上海联影智能医疗科技有限公司 | Classification method, computer device, and storage medium |
CN111275686A (en) * | 2020-01-20 | 2020-06-12 | 中山大学 | Method and device for generating medical image data for artificial neural network training |
CN111383215A (en) * | 2020-03-10 | 2020-07-07 | 图玛深维医疗科技(北京)有限公司 | Focus detection model training method based on generation of confrontation network |
CN111383217A (en) * | 2020-03-11 | 2020-07-07 | 深圳先进技术研究院 | Visualization method, device and medium for evaluation of brain addiction traits |
CN111429464A (en) * | 2020-03-11 | 2020-07-17 | 深圳先进技术研究院 | Medical image segmentation method, medical image segmentation device and terminal equipment |
CN111436936A (en) * | 2020-04-29 | 2020-07-24 | 浙江大学 | CT image reconstruction method based on MRI |
CN111598900A (en) * | 2020-05-18 | 2020-08-28 | 腾讯科技(深圳)有限公司 | Image region segmentation model training method, segmentation method and device |
CN111798471A (en) * | 2020-07-27 | 2020-10-20 | 中科智脑(北京)技术有限公司 | Training method of image semantic segmentation network |
CN111899251A (en) * | 2020-08-06 | 2020-11-06 | 中国科学院深圳先进技术研究院 | Copy-move type forged image detection method for distinguishing forged source and target area |
CN111932555A (en) * | 2020-07-31 | 2020-11-13 | 商汤集团有限公司 | Image processing method and device and computer readable storage medium |
CN111951274A (en) * | 2020-07-24 | 2020-11-17 | 上海联影智能医疗科技有限公司 | Image segmentation method, system, readable storage medium and device |
CN112150478A (en) * | 2020-08-31 | 2020-12-29 | 温州医科大学 | Method and system for constructing semi-supervised image segmentation framework |
WO2021017372A1 (en) * | 2019-08-01 | 2021-02-04 | 中国科学院深圳先进技术研究院 | Medical image segmentation method and system based on generative adversarial network, and electronic equipment |
CN112420205A (en) * | 2020-12-08 | 2021-02-26 | 医惠科技有限公司 | Entity recognition model generation method and device and computer readable storage medium |
CN112507950A (en) * | 2020-12-18 | 2021-03-16 | 中国科学院空天信息创新研究院 | Method and device for generating confrontation type multi-task multi-element sample automatic labeling |
CN112560925A (en) * | 2020-12-10 | 2021-03-26 | 中国科学院深圳先进技术研究院 | Complex scene target detection data set construction method and system |
CN112686906A (en) * | 2020-12-25 | 2021-04-20 | 山东大学 | Image segmentation method and system based on uniform distribution migration guidance |
CN112749791A (en) * | 2021-01-22 | 2021-05-04 | 重庆理工大学 | Link prediction method based on graph neural network and capsule network |
CN112837338A (en) * | 2021-01-12 | 2021-05-25 | 浙江大学 | Semi-supervised medical image segmentation method based on generation countermeasure network |
CN112890766A (en) * | 2020-12-31 | 2021-06-04 | 山东省千佛山医院 | Breast cancer auxiliary treatment equipment |
CN112990044A (en) * | 2021-03-25 | 2021-06-18 | 北京百度网讯科技有限公司 | Method and device for generating image recognition model and image recognition |
WO2021120961A1 (en) * | 2019-12-16 | 2021-06-24 | 中国科学院深圳先进技术研究院 | Brain addiction structure map evaluation method and apparatus |
CN113052840A (en) * | 2021-04-30 | 2021-06-29 | 江苏赛诺格兰医疗科技有限公司 | Processing method based on low signal-to-noise ratio PET image |
CN113160243A (en) * | 2021-03-24 | 2021-07-23 | 联想(北京)有限公司 | Image segmentation method and electronic equipment |
CN113223010A (en) * | 2021-04-22 | 2021-08-06 | 北京大学口腔医学院 | Method and system for fully automatically segmenting multiple tissues of oral cavity image |
WO2021179205A1 (en) * | 2020-03-11 | 2021-09-16 | 深圳先进技术研究院 | Medical image segmentation method, medical image segmentation apparatus and terminal device |
WO2021184799A1 (en) * | 2020-03-19 | 2021-09-23 | 中国科学院深圳先进技术研究院 | Medical image processing method and apparatus, and device and storage medium |
CN113487617A (en) * | 2021-07-26 | 2021-10-08 | 推想医疗科技股份有限公司 | Data processing method, data processing device, electronic equipment and storage medium |
CN113850804A (en) * | 2021-11-29 | 2021-12-28 | 北京鹰瞳科技发展股份有限公司 | Retina image generation system and method based on generation countermeasure network |
CN113902029A (en) * | 2021-10-25 | 2022-01-07 | 北京达佳互联信息技术有限公司 | Image annotation method and device, electronic equipment and storage medium |
CN114021698A (en) * | 2021-10-30 | 2022-02-08 | 河南省鼎信信息安全等级测评有限公司 | Malicious domain name training sample expansion method and device based on capsule generation countermeasure network |
WO2022121213A1 (en) * | 2020-12-10 | 2022-06-16 | 深圳先进技术研究院 | Gan-based contrast-agent-free medical image enhancement modeling method |
CN114898091A (en) * | 2022-04-14 | 2022-08-12 | 南京航空航天大学 | Image countermeasure sample generation method and device based on regional information |
WO2022205657A1 (en) * | 2021-04-02 | 2022-10-06 | 中国科学院深圳先进技术研究院 | Csm image segmentation method and apparatus, terminal device, and storage medium |
CN116168242A (en) * | 2023-02-08 | 2023-05-26 | 阿里巴巴(中国)有限公司 | Pixel-level label generation method, model training method and equipment |
WO2023165033A1 (en) * | 2022-03-02 | 2023-09-07 | 深圳硅基智能科技有限公司 | Method for training model for recognizing target in medical image, method for recognizing target in medical image, and device and medium |
US12093833B2 (en) | 2020-03-11 | 2024-09-17 | Shenzhen Institutes Of Advanced Technology | Visualization method for evaluating brain addiction traits, apparatus, and medium |
Families Citing this family (60)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112950569B (en) * | 2021-02-25 | 2023-07-25 | 平安科技(深圳)有限公司 | Melanoma image recognition method, device, computer equipment and storage medium |
CN113066094B (en) * | 2021-03-09 | 2024-01-30 | 中国地质大学(武汉) | Geographic grid intelligent local desensitization method based on generation countermeasure network |
CN113052369B (en) * | 2021-03-15 | 2024-05-10 | 北京农业智能装备技术研究中心 | Intelligent agricultural machinery operation management method and system |
CN113112454B (en) * | 2021-03-22 | 2024-03-19 | 西北工业大学 | Medical image segmentation method based on task dynamic learning part marks |
CN112991304B (en) * | 2021-03-23 | 2024-06-14 | 湖南珞佳智能科技有限公司 | Molten pool sputtering detection method based on laser directional energy deposition monitoring system |
CN113052171B (en) * | 2021-03-24 | 2024-09-24 | 浙江工业大学 | Medical image augmentation method based on progressive generation network |
CN113171118B (en) * | 2021-04-06 | 2023-07-14 | 上海深至信息科技有限公司 | Ultrasonic inspection operation guiding method based on generation type countermeasure network |
CN113130050B (en) * | 2021-04-20 | 2023-11-24 | 皖南医学院第一附属医院(皖南医学院弋矶山医院) | Medical information display method and display system |
CN113239978B (en) * | 2021-04-22 | 2024-06-04 | 科大讯飞股份有限公司 | Method and device for correlation of medical image preprocessing model and analysis model |
CN113628159A (en) * | 2021-06-16 | 2021-11-09 | 维库(厦门)信息技术有限公司 | Full-automatic training method and device based on deep learning network and storage medium |
CN113470046B (en) * | 2021-06-16 | 2024-04-16 | 浙江工业大学 | Drawing meaning force network segmentation method for medical image super-pixel gray texture sampling characteristics |
CN113378472B (en) * | 2021-06-23 | 2022-09-13 | 合肥工业大学 | Mixed boundary electromagnetic backscattering imaging method based on generation countermeasure network |
CN113469084B (en) * | 2021-07-07 | 2023-06-30 | 西安电子科技大学 | Hyperspectral image classification method based on contrast generation countermeasure network |
CN113553954A (en) * | 2021-07-23 | 2021-10-26 | 上海商汤智能科技有限公司 | Method and apparatus for training behavior recognition model, device, medium, and program product |
CN113705371B (en) * | 2021-08-10 | 2023-12-01 | 武汉理工大学 | Water visual scene segmentation method and device |
CN113706546B (en) * | 2021-08-23 | 2024-03-19 | 浙江工业大学 | Medical image segmentation method and device based on lightweight twin network |
CN113763394B (en) * | 2021-08-24 | 2024-03-29 | 同济大学 | Medical image segmentation control method based on medical risks |
CN113902674B (en) * | 2021-09-02 | 2024-05-24 | 北京邮电大学 | Medical image segmentation method and electronic equipment |
CN113936165B (en) * | 2021-09-07 | 2024-06-07 | 上海商涌科技有限公司 | CT image processing method, terminal and computer storage medium |
CN113962999B (en) * | 2021-10-19 | 2024-06-25 | 浙江大学 | Noise label segmentation method based on Gaussian mixture model and label correction model |
CN113935977A (en) * | 2021-10-22 | 2022-01-14 | 河北工业大学 | Solar cell panel defect generation method based on generation countermeasure network |
CN114022586B (en) * | 2021-10-25 | 2024-07-02 | 华中科技大学 | Defect image generation method based on countermeasure generation network |
CN113920127B (en) * | 2021-10-27 | 2024-04-23 | 华南理工大学 | Training data set independent single-sample image segmentation method and system |
CN114004970A (en) * | 2021-11-09 | 2022-02-01 | 粟海信息科技(苏州)有限公司 | Tooth area detection method, device, equipment and storage medium |
CN114066964B (en) * | 2021-11-17 | 2024-04-05 | 江南大学 | Aquatic product real-time size detection method based on deep learning |
CN114049343A (en) * | 2021-11-23 | 2022-02-15 | 沈阳建筑大学 | Deep learning-based tracing method for complex missing texture of crack propagation process |
CN114240950B (en) * | 2021-11-23 | 2023-04-07 | 电子科技大学 | Brain tumor image generation and segmentation method based on deep neural network |
CN114037644B (en) * | 2021-11-26 | 2024-07-23 | 重庆邮电大学 | Artistic word image synthesis system and method based on generation countermeasure network |
CN116569216A (en) * | 2021-12-03 | 2023-08-08 | 宁德时代新能源科技股份有限公司 | Method and system for generating image samples containing specific features |
CN114140368B (en) * | 2021-12-03 | 2024-04-23 | 天津大学 | Multi-mode medical image synthesis method based on generation type countermeasure network |
CN114331875B (en) * | 2021-12-09 | 2024-06-18 | 上海大学 | Image bleeding position prediction method in printing process based on countermeasure edge learning |
CN114186735B (en) * | 2021-12-10 | 2023-10-20 | 沭阳鸿行照明有限公司 | Fire emergency lighting lamp layout optimization method based on artificial intelligence |
CN114494322B (en) * | 2022-02-11 | 2024-03-01 | 合肥工业大学 | Multi-mode image segmentation method based on image fusion technology |
CN114549554B (en) * | 2022-02-22 | 2024-05-14 | 山东融瓴科技集团有限公司 | Air pollution source segmentation method based on style invariance |
CN114581552A (en) * | 2022-03-15 | 2022-06-03 | 南京邮电大学 | Gray level image colorizing method based on generation countermeasure network |
CN114897782B (en) * | 2022-04-13 | 2024-04-23 | 华南理工大学 | Gastric cancer pathological section image segmentation prediction method based on generation type countermeasure network |
CN114821229B (en) * | 2022-04-14 | 2023-07-28 | 江苏集萃清联智控科技有限公司 | Underwater acoustic data set augmentation method and system based on condition generation countermeasure network |
CN114882047B (en) * | 2022-04-19 | 2024-07-12 | 厦门大学 | Medical image segmentation method and system based on semi-supervision and Transformers |
CN114862978B (en) * | 2022-04-21 | 2024-08-13 | 南通大学 | Resistance tomography method based on self-adaptive neural module network |
CN114549842B (en) * | 2022-04-22 | 2022-08-02 | 山东建筑大学 | Self-adaptive semi-supervised image segmentation method and system based on uncertain knowledge domain |
CN114677515B (en) * | 2022-04-25 | 2023-05-26 | 电子科技大学 | Weak supervision semantic segmentation method based on similarity between classes |
CN114998124B (en) * | 2022-05-23 | 2024-06-18 | 北京航空航天大学 | Image sharpening processing method for target detection |
CN114818734B (en) * | 2022-05-25 | 2023-10-31 | 清华大学 | Method and device for analyzing antagonism scene semantics based on target-attribute-relation |
CN115187467B (en) * | 2022-05-31 | 2024-07-02 | 北京昭衍新药研究中心股份有限公司 | Enhanced virtual image data generation method based on generation countermeasure network |
CN115018787A (en) * | 2022-06-02 | 2022-09-06 | 深圳市华汉伟业科技有限公司 | Anomaly detection method and system based on gradient enhancement |
CN115063384B (en) * | 2022-06-29 | 2024-09-06 | 北京理工大学 | SP-CTA image coronary artery segmentation method and device based on feature alignment domain |
CN115081920B (en) * | 2022-07-08 | 2024-07-12 | 华南农业大学 | Attendance check-in scheduling management method, system, equipment and storage medium |
CN115439846B (en) * | 2022-08-09 | 2023-04-25 | 北京邮电大学 | Image segmentation method and device, electronic equipment and medium |
CN115272136B (en) * | 2022-09-27 | 2023-05-05 | 广州卓腾科技有限公司 | Certificate photo glasses reflection eliminating method, device, medium and equipment based on big data |
CN115546239B (en) * | 2022-11-30 | 2023-04-07 | 珠海横琴圣澳云智科技有限公司 | Target segmentation method and device based on boundary attention and distance transformation |
CN115880440B (en) * | 2023-01-31 | 2023-04-28 | 中国科学院自动化研究所 | Magnetic particle three-dimensional reconstruction imaging method based on generation countermeasure network |
CN117094986B (en) * | 2023-10-13 | 2024-04-05 | 中山大学深圳研究院 | Self-adaptive defect detection method based on small sample and terminal equipment |
CN117093548B (en) * | 2023-10-20 | 2024-01-26 | 公诚管理咨询有限公司 | Bidding management auditing system |
CN117152138B (en) * | 2023-10-30 | 2024-01-16 | 陕西惠宾电子科技有限公司 | Medical image tumor target detection method based on unsupervised learning |
CN117726815B (en) * | 2023-12-19 | 2024-07-02 | 江南大学 | Small sample medical image segmentation method based on anomaly detection |
CN117523318B (en) * | 2023-12-26 | 2024-04-16 | 宁波微科光电股份有限公司 | Anti-light interference subway shielding door foreign matter detection method, device and medium |
CN117994167B (en) * | 2024-01-11 | 2024-06-28 | 太原理工大学 | Diffusion model defogging method integrating parallel multi-convolution attention |
CN117726642B (en) * | 2024-02-07 | 2024-05-31 | 中国科学院宁波材料技术与工程研究所 | High reflection focus segmentation method and device for optical coherence tomography image |
CN118015021B (en) * | 2024-04-07 | 2024-07-09 | 安徽农业大学 | Active domain self-adaptive cross-modal medical image segmentation method based on sliding window |
CN118154906B (en) * | 2024-05-09 | 2024-08-30 | 齐鲁工业大学(山东省科学院) | Image tampering detection method based on feature similarity and multi-scale edge attention |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10643320B2 (en) * | 2017-11-15 | 2020-05-05 | Toyota Research Institute, Inc. | Adversarial learning of photorealistic post-processing of simulation with privileged information |
CN108062753B (en) * | 2017-12-29 | 2020-04-17 | Chongqing University of Technology | Unsupervised domain self-adaptive brain tumor semantic segmentation method based on deep adversarial learning |
CN108198179A (en) * | 2018-01-03 | 2018-06-22 | South China University of Technology | CT medical image pulmonary nodule detection method based on an improved generative adversarial network |
CN108932484A (en) * | 2018-06-20 | 2018-12-04 | South China University of Technology | Facial expression recognition method based on Capsule Net |
CN109242849A (en) * | 2018-09-26 | 2019-01-18 | Shanghai United Imaging Intelligence Co., Ltd. | Medical image processing method, device, system and storage medium |
CN110503654B (en) * | 2019-08-01 | 2022-04-26 | Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences | Medical image segmentation method and system based on generative adversarial network, and electronic equipment |
- 2019-08-01 CN CN201910707712.XA patent/CN110503654B/en active Active
- 2019-12-14 WO PCT/CN2019/125428 patent/WO2021017372A1/en active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190080206A1 (en) * | 2017-09-08 | 2019-03-14 | Ford Global Technologies, Llc | Refining Synthetic Data With A Generative Adversarial Network Using Auxiliary Inputs |
WO2019118613A1 (en) * | 2017-12-12 | 2019-06-20 | Oncoustics Inc. | Machine learning to extract quantitative biomarkers from ultrasound rf spectrums |
CN108961217A (en) * | 2018-06-08 | 2018-12-07 | Nanjing University | Surface defect detection method based on positive-example training |
CN109063724A (en) * | 2018-06-12 | 2018-12-21 | Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences | Enhanced generative adversarial network and target sample recognition method |
CN109344833A (en) * | 2018-09-04 | 2019-02-15 | Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences | Medical image segmentation method, segmentation system and computer-readable storage medium |
CN109584337A (en) * | 2018-11-09 | 2019-04-05 | Jinan University | Image generation method based on conditional capsule generative adversarial network |
Non-Patent Citations (3)
Title |
---|
A. ODENA: "Semi-supervised learning with generative adversarial networks", ARXIV *
FEI YANG ET AL.: "Capsule Based Image Translation Network", IET DOCTORAL FORUM ON BIOMEDICAL ENGINEERING, HEALTHCARE, ROBOTICS AND ARTIFICIAL INTELLIGENCE 2018 (BRAIN 2018) *
CHEN KUN ET AL.: "Applications of generative adversarial networks in medical image processing", LIFE SCIENCE INSTRUMENTS *
Cited By (51)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021017372A1 (en) * | 2019-08-01 | 2021-02-04 | Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences | Medical image segmentation method and system based on generative adversarial network, and electronic equipment |
WO2021120961A1 (en) * | 2019-12-16 | 2021-06-24 | Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences | Brain addiction structure map evaluation method and apparatus |
CN111160441A (en) * | 2019-12-24 | 2020-05-15 | Shanghai United Imaging Intelligence Co., Ltd. | Classification method, computer device, and storage medium |
CN111160441B (en) * | 2019-12-24 | 2024-03-26 | Shanghai United Imaging Intelligence Co., Ltd. | Classification method, computer device, and storage medium |
CN111275686A (en) * | 2020-01-20 | 2020-06-12 | Sun Yat-sen University | Method and device for generating medical image data for artificial neural network training |
CN111275686B (en) * | 2020-01-20 | 2023-05-26 | Sun Yat-sen University | Method and device for generating medical image data for artificial neural network training |
CN111383215A (en) * | 2020-03-10 | 2020-07-07 | Tuma Shenwei Medical Technology (Beijing) Co., Ltd. | Lesion detection model training method based on generative adversarial network |
CN111383217A (en) * | 2020-03-11 | 2020-07-07 | Shenzhen Institutes of Advanced Technology | Visualization method, device and medium for evaluation of brain addiction traits |
US12093833B2 (en) | 2020-03-11 | 2024-09-17 | Shenzhen Institutes Of Advanced Technology | Visualization method for evaluating brain addiction traits, apparatus, and medium |
WO2021179205A1 (en) * | 2020-03-11 | 2021-09-16 | Shenzhen Institutes of Advanced Technology | Medical image segmentation method, medical image segmentation apparatus and terminal device |
CN111383217B (en) * | 2020-03-11 | 2023-08-29 | Shenzhen Institutes of Advanced Technology | Visualization method, device and medium for brain addiction trait evaluation |
CN111429464A (en) * | 2020-03-11 | 2020-07-17 | Shenzhen Institutes of Advanced Technology | Medical image segmentation method, medical image segmentation device and terminal equipment |
WO2021184799A1 (en) * | 2020-03-19 | 2021-09-23 | Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences | Medical image processing method and apparatus, and device and storage medium |
CN111436936A (en) * | 2020-04-29 | 2020-07-24 | Zhejiang University | CT image reconstruction method based on MRI |
CN111598900A (en) * | 2020-05-18 | 2020-08-28 | Tencent Technology (Shenzhen) Co., Ltd. | Image region segmentation model training method, segmentation method and device |
CN111951274A (en) * | 2020-07-24 | 2020-11-17 | Shanghai United Imaging Intelligence Co., Ltd. | Image segmentation method, system, readable storage medium and device |
CN111798471A (en) * | 2020-07-27 | 2020-10-20 | Zhongke Zhinao (Beijing) Technology Co., Ltd. | Training method of image semantic segmentation network |
CN111798471B (en) * | 2020-07-27 | 2024-04-02 | Zhongke Zhinao (Beijing) Technology Co., Ltd. | Training method of image semantic segmentation network |
CN111932555A (en) * | 2020-07-31 | 2020-11-13 | SenseTime Group Limited | Image processing method and device and computer readable storage medium |
US11663293B2 (en) * | 2020-07-31 | 2023-05-30 | Sensetime Group Limited | Image processing method and device, and computer-readable storage medium |
US20220036124A1 (en) * | 2020-07-31 | 2022-02-03 | Sensetime Group Limited | Image processing method and device, and computer-readable storage medium |
CN111899251A (en) * | 2020-08-06 | 2020-11-06 | Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences | Copy-move forged image detection method for distinguishing forged source and target regions |
CN112150478B (en) * | 2020-08-31 | 2021-06-22 | Wenzhou Medical University | Method and system for constructing semi-supervised image segmentation framework |
CN112150478A (en) * | 2020-08-31 | 2020-12-29 | Wenzhou Medical University | Method and system for constructing semi-supervised image segmentation framework |
CN112420205B (en) * | 2020-12-08 | 2024-09-06 | Yihui Technology Co., Ltd. | Entity recognition model generation method, entity recognition model generation device and computer readable storage medium |
CN112420205A (en) * | 2020-12-08 | 2021-02-26 | Yihui Technology Co., Ltd. | Entity recognition model generation method and device and computer readable storage medium |
CN112560925A (en) * | 2020-12-10 | 2021-03-26 | Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences | Complex scene target detection data set construction method and system |
WO2022121213A1 (en) * | 2020-12-10 | 2022-06-16 | Shenzhen Institutes of Advanced Technology | GAN-based contrast-agent-free medical image enhancement modeling method |
CN112507950A (en) * | 2020-12-18 | 2021-03-16 | Aerospace Information Research Institute, Chinese Academy of Sciences | Adversarial multi-task multi-element sample automatic labeling method and device |
CN112686906A (en) * | 2020-12-25 | 2021-04-20 | Shandong University | Image segmentation method and system based on uniform distribution migration guidance |
CN112686906B (en) * | 2020-12-25 | 2022-06-14 | Shandong University | Image segmentation method and system based on uniform distribution migration guidance |
CN112890766A (en) * | 2020-12-31 | 2021-06-04 | Shandong Qianfoshan Hospital | Breast cancer auxiliary treatment equipment |
CN112837338B (en) * | 2021-01-12 | 2022-06-21 | Zhejiang University | Semi-supervised medical image segmentation method based on generative adversarial network |
CN112837338A (en) * | 2021-01-12 | 2021-05-25 | Zhejiang University | Semi-supervised medical image segmentation method based on generative adversarial network |
CN112749791A (en) * | 2021-01-22 | 2021-05-04 | Chongqing University of Technology | Link prediction method based on graph neural network and capsule network |
CN113160243A (en) * | 2021-03-24 | 2021-07-23 | Lenovo (Beijing) Co., Ltd. | Image segmentation method and electronic equipment |
CN112990044A (en) * | 2021-03-25 | 2021-06-18 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Method and device for generating image recognition model and image recognition |
WO2022205657A1 (en) * | 2021-04-02 | 2022-10-06 | Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences | CSM image segmentation method and apparatus, terminal device, and storage medium |
CN113223010A (en) * | 2021-04-22 | 2021-08-06 | Peking University School of Stomatology | Method and system for fully automatically segmenting multiple tissues of oral cavity image |
CN113223010B (en) * | 2021-04-22 | 2024-02-27 | Peking University School of Stomatology | Method and system for fully automatically segmenting multiple tissues of oral cavity image |
CN113052840B (en) * | 2021-04-30 | 2024-02-02 | Jiangsu Sinogram Medical Technology Co., Ltd. | Processing method based on low signal-to-noise ratio PET image |
CN113052840A (en) * | 2021-04-30 | 2021-06-29 | Jiangsu Sinogram Medical Technology Co., Ltd. | Processing method based on low signal-to-noise ratio PET image |
CN113487617A (en) * | 2021-07-26 | 2021-10-08 | Infervision Medical Technology Co., Ltd. | Data processing method, data processing device, electronic equipment and storage medium |
CN113902029A (en) * | 2021-10-25 | 2022-01-07 | Beijing Dajia Internet Information Technology Co., Ltd. | Image annotation method and device, electronic equipment and storage medium |
CN114021698A (en) * | 2021-10-30 | 2022-02-08 | Henan Dingxin Information Security Level Evaluation Co., Ltd. | Malicious domain name training sample expansion method and device based on capsule generative adversarial network |
CN113850804B (en) * | 2021-11-29 | 2022-03-18 | Beijing Airdoc Technology Co., Ltd. | Retina image generation system and method based on generative adversarial network |
CN113850804A (en) * | 2021-11-29 | 2021-12-28 | Beijing Airdoc Technology Co., Ltd. | Retina image generation system and method based on generative adversarial network |
WO2023165033A1 (en) * | 2022-03-02 | 2023-09-07 | Shenzhen Sibionics Intelligent Technology Co., Ltd. | Method for training model for recognizing target in medical image, method for recognizing target in medical image, and device and medium |
CN114898091A (en) * | 2022-04-14 | 2022-08-12 | Nanjing University of Aeronautics and Astronautics | Image adversarial example generation method and device based on region information |
CN116168242A (en) * | 2023-02-08 | 2023-05-26 | Alibaba (China) Co., Ltd. | Pixel-level label generation method, model training method and equipment |
CN116168242B (en) * | 2023-02-08 | 2023-12-01 | Alibaba (China) Co., Ltd. | Pixel-level label generation method, model training method and equipment |
Also Published As
Publication number | Publication date |
---|---|
CN110503654B (en) | 2022-04-26 |
WO2021017372A1 (en) | 2021-02-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110503654B (en) | Medical image segmentation method and system based on generative adversarial network, and electronic equipment | |
Soudani et al. | An image-based segmentation recommender using crowdsourcing and transfer learning for skin lesion extraction | |
EP3432263B1 (en) | Semantic segmentation for cancer detection in digital breast tomosynthesis | |
Kooi et al. | Classifying symmetrical differences and temporal change for the detection of malignant masses in mammography using deep neural networks | |
Yu et al. | Transferring deep neural networks for the differentiation of mammographic breast lesions | |
CN112614119B (en) | Medical image region of interest visualization method, device, storage medium and equipment | |
CN109389129A (en) | Image processing method, electronic equipment and storage medium | |
Niyaz et al. | Advances in deep learning techniques for medical image analysis | |
Liu et al. | A semi-supervised convolutional transfer neural network for 3D pulmonary nodules detection | |
Singh et al. | A study on convolution neural network for breast cancer detection | |
Miller et al. | Self-supervised deep learning to enhance breast cancer detection on screening mammography | |
Singh et al. | An efficient hybrid methodology for an early detection of breast cancer in digital mammograms | |
CN113269799A (en) | Cervical cell segmentation method based on deep learning | |
Udawant et al. | Cotton leaf disease detection using instance segmentation | |
Pavithra et al. | An Overview of Convolutional Neural Network Architecture and Its Variants in Medical Diagnostics of Cancer and Covid-19 | |
Wang et al. | Optic disc detection based on fully convolutional neural network and structured matrix decomposition | |
Korez et al. | Segmentation of pathological spines in CT images using a two-way CNN and a collision-based model | |
CN113486930A (en) | Method and device for establishing a small intestinal lymphoma segmentation model and performing segmentation based on improved RetinaNet | |
CN112508914A (en) | Rib fracture image detection method based on small sample deep learning | |
Gulsoy et al. | FocalNeXt: A ConvNeXt augmented FocalNet architecture for lung cancer classification from CT-scan images | |
Indraswari et al. | Brain tumor detection on magnetic resonance imaging (MRI) images using convolutional neural network (CNN) | |
Ravinder et al. | Effective Multitier Network Model for MRI Brain Disease Prediction using Learning Approaches | |
Ghildiyal et al. | Layer-based deep net models for automated classification of pulmonary tuberculosis from chest radiographs | |
Zhang | Deep learning frameworks for computer aided diagnosis based on medical images | |
Yang et al. | Weakly-Supervised Learning for Attention-Guided Skull Fracture Classification In Computed Tomography Imaging |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |