CN110503654A - Medical image segmentation method and system, and electronic device, based on a generative adversarial network - Google Patents


Info

Publication number
CN110503654A
CN110503654A (application CN201910707712.XA; granted as CN110503654B)
Authority
CN
China
Prior art keywords
image
sample
pixel
level
network
Prior art date
Legal status
Granted
Application number
CN201910707712.XA
Other languages
Chinese (zh)
Other versions
CN110503654B (en)
Inventor
王书强
吴昆
陈卓
Current Assignee
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS
Priority to CN201910707712.XA
Publication of CN110503654A
Priority to PCT/CN2019/125428 (published as WO2021017372A1)
Application granted
Publication of CN110503654B
Status: Active


Classifications

    • G06T7/11 Region-based segmentation (G PHYSICS; G06 COMPUTING; CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; G06T7/00 Image analysis; G06T7/10 Segmentation; Edge detection)
    • G06T2207/20081 Training; Learning (G06T2207/00 Indexing scheme for image analysis or image enhancement; G06T2207/20 Special algorithmic details)
    • G06T2207/20084 Artificial neural networks [ANN] (G06T2207/20 Special algorithmic details)
    • G06T2207/30012 Spine; Backbone (G06T2207/30 Subject of image; Context of image processing; G06T2207/30004 Biomedical image processing; G06T2207/30008 Bone)
    • G06T2207/30204 Marker (G06T2207/30 Subject of image; Context of image processing)

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

This application relates to a medical image segmentation method and system, and an electronic device, based on a generative adversarial network. First, it studies how the generator extracts pixel-level features from high-quality images of different classes and expresses them as structured features with a capsule model, so as to generate pixel-level annotated samples. Second, a suitable discriminator is built to judge whether the generated pixel-level annotated samples are real or fake, and a suitable error optimization function is designed; the discrimination results are fed back into the generator and discriminator models respectively. Through continuous adversarial training, the sample-generation ability of the generator and the discrimination ability of the discriminator are both improved, and finally the trained generator produces pixel-level annotated samples, achieving pixel-level segmentation of medical images that carry only image-level annotations. The application effectively reduces the segmentation model's dependence on pixel-level annotated data, improves the efficiency of adversarial training between generated and real samples, and effectively achieves high-precision pixel-level image segmentation.

Description

Medical image segmentation method and system, and electronic device, based on a generative adversarial network
Technical field
This application belongs to the technical field of medical image processing, and in particular relates to a medical image segmentation method and system, and an electronic device, based on a generative adversarial network.
Background Art
With the rapid development of medical imaging technology, medical images are widely used in clinical care. According to statistics, tens of millions of cases worldwide are diagnosed and treated with the aid of medical imaging every year. In the conventional imaging-based workflow, a doctor reads the medical image data, interprets it, and makes diagnosis and treatment decisions. This mode is inefficient, varies greatly between individuals, depends on personal experience and is prone to missed and false diagnoses, and long reading sessions fatigue doctors and lower reading accuracy. With the rise of artificial intelligence, letting a machine pre-screen the image data and mark key suspicious regions before handing them to the doctor can significantly reduce the doctor's workload while delivering comprehensive, stable and efficient results. Artificial intelligence therefore has important application prospects in medical imaging.
Medical imaging technology comprises two parts: image acquisition and medical image processing. Common acquisition techniques include MRI (Magnetic Resonance Imaging), computed tomography (CT), positron emission tomography (PET), ultrasound (US) and X-ray imaging. Different imaging techniques have different strengths in the diagnosis and treatment of different diseases, and specific clinical practice has gradually settled on the appropriate technique for each diagnostic purpose. For example, magnetic resonance imaging offers excellent soft-tissue resolution and is free of hazards such as ionizing radiation, so it is widely used in the diagnosis and treatment of sites such as the brain and uterus. The basic workflow of applying deep learning to medical image processing is shown in Fig. 1.
In conventional medical image segmentation tasks based on generative adversarial networks, adequately training the neural network to reach high accuracy requires preparing a large amount of relevant medical image data and annotating it manually pixel by pixel. For example, to study tumor region segmentation in the human brain, the corresponding brain tumor images must be annotated by hand. Diseases are diverse and so are the corresponding medical images; segmenting them with deep learning requires manual annotation for each disease's images, which consumes enormous manpower and material resources. Even the largest public datasets only provide pixel-level annotated samples for a limited set of semantic classes. High-quality data are scarce in medical image datasets, which severely limits the accuracy of semantic segmentation models.
A generative adversarial network (GAN, Generative Adversarial Networks) is a deep learning model and one of the most promising methods of recent years for unsupervised learning on complex distributions. Through the mutual game between (at least) two modules in the framework, a generative model (Generative Model) and a discriminative model (Discriminative Model), it produces remarkably good output. Existing GAN-based image segmentation models can be applied to cross-class object segmentation, but in medical imaging they suffer from problems such as insufficient feature extraction and the heavy computational cost of adversarial training.
Summary of the invention
This application provides a medical image segmentation method and system, and an electronic device, based on a generative adversarial network, intended to solve at least one of the above technical problems of the prior art to at least some extent.
To solve the above problems, this application provides the following technical solutions:
A medical image segmentation method based on a generative adversarial network, comprising the following steps:
Step a: separately acquire pixel-level annotated samples of other medical images and image-level annotated samples of the medical images to be segmented;
Step b: train a capsule-network-based generative adversarial network with the pixel-level annotated samples of the other medical images and the image-level annotated samples of the medical images to be segmented; the generative adversarial network comprises a generator and a discriminator;
Step c: the generator performs pixel-level feature extraction on the pixel-level annotated samples of the other medical images, processes the image-level annotated samples of the medical images to be segmented with these pixel-level features, generates pixel-level annotated samples of the medical images to be segmented, and produces segmentation prediction samples of the medical images to be segmented based on the pixel-level annotated samples;
Step d: the segmentation prediction samples produced by the generator and the true annotated samples of the images to be segmented are fed together into the discriminator for "generate-adversarial" training; the discriminator judges whether the segmentation prediction samples are real or fake, and the generator and discriminator are optimized according to the error function, yielding a trained generative adversarial network;
Step e: feed the image-level annotated medical images to be segmented into the trained generative adversarial network, which outputs the pixel-level segmentation of the medical images to be segmented.
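The alternating training of steps b-d can be sketched with toy stand-ins. The linear "generator" and logistic "discriminator" below are illustrative simplifications, not the capsule-based networks of this application, and all names (`real_batch`, `W_g`, `w_d`) are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, t):
    # binary cross-entropy J_b against a constant 0/1 target t
    eps = 1e-9
    return float(-(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps)).mean())

def real_batch(n=32, d=8):
    # toy stand-in for {I_f, L_f}: "images" and their true pixel labels
    img = rng.normal(size=(n, d))
    mask = (img > 0).astype(float)
    return img, mask

W_g = rng.normal(scale=0.1, size=(8, 8))   # generator parameters (theta_S)
w_d = rng.normal(scale=0.1, size=8)        # discriminator parameters (theta_P)
lr = 0.1

for step in range(200):
    img, mask = real_batch()
    fake = sigmoid(img @ W_g)                       # predicted labels L_f*
    # discriminator update: real pairs scored toward 1, generated toward 0
    d_real = sigmoid((img * mask) @ w_d)
    d_fake = sigmoid((img * fake) @ w_d)
    grad_d = ((d_real - 1)[:, None] * (img * mask)
              + d_fake[:, None] * (img * fake)).mean(axis=0)
    w_d -= lr * grad_d
    # generator update: push the discriminator's score on fakes toward 1
    d_fake = sigmoid((img * fake) @ w_d)
    g_err = (d_fake - 1)[:, None] * img * w_d * fake * (1 - fake)
    W_g -= lr * (img.T @ g_err) / len(img)

final_error = bce(d_real, 1.0) + bce(d_fake, 0.0)  # sum of the two J_b terms
assert np.isfinite(final_error) and final_error > 0
```

Feeding the discrimination result back into both models in each iteration is what "continuous adversarial training" refers to; the capsule generator and cascade discriminator of the application replace the toy models here.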
The technical solution taken by the embodiments of this application further includes: in step c, the generator comprises a capsule network module and a local positioning network, and the generator producing the segmentation prediction samples of the medical images to be segmented specifically includes:
Step b1: pre-train the capsule network module with the pixel-level annotated samples of the other medical images to obtain semantic-free annotated samples; process the image-level annotated samples of the images to be segmented with the semantic-free annotated samples to distinguish the background from the effective segmentation regions in the image-level annotated samples of the images to be segmented;
Step b2: feed the image-level annotated samples of the images to be segmented into the pre-trained capsule network module, which outputs reconstructed images of the image-level annotated samples of the images to be segmented;
Step b3: the local positioning network extracts features with convolutional layers to produce feature maps of the image-level annotated samples that contain location information, then applies global average pooling and computes a weighted average of the feature maps with weights (w1, w2, ..., wn), obtaining a region localization feature map of the image-level annotated samples of the images to be segmented;
Step b4: run a self-diffusion algorithm on the reconstructed image and the region localization feature map to determine the dividing line between region pixels, obtaining the segmentation prediction sample of the image-level annotated sample of the image to be segmented.
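Step b3's weighting of convolutional feature maps after global average pooling resembles class activation mapping. A minimal sketch, assuming the weights (w1, ..., wn) are already learned; the maps, weights and sizes below are invented for illustration:

```python
import numpy as np

def localization_map(feature_maps, weights):
    # weighted average of the conv feature maps with weights (w1, ..., wn)
    return np.tensordot(weights, feature_maps, axes=1)

# two toy 4x4 feature maps: one fires on the top-left region, one bottom-right
f = np.zeros((2, 4, 4))
f[0, :2, :2] = 1.0
f[1, 2:, 2:] = 1.0

gap = f.mean(axis=(1, 2))        # global average pooling descriptor per map
assert gap.shape == (2,)

w = np.array([0.9, 0.1])         # hypothetical learned weights (w1, w2)
m = localization_map(f, w)
assert m.shape == (4, 4)
assert m[0, 0] > m[3, 3]         # the top-left region dominates the map
```

Regions where `m` is large are the candidates from which step b4's self-diffusion starts.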
The technical solution taken by the embodiments of this application further includes: in step b2, the capsule network module comprises a convolutional layer, a PrimaryCaps layer, a DigitCaps layer and a decoding layer. The capsule network module uses the output vector of a single capsule neuron to record the direction and location of the pixels on the edges of the segmented regions in the image-level annotated samples of the images to be segmented, extracts class probability values with a vector nonlinear activation function, determines the segmentation regions and background of the image-level annotated samples, computes the edge loss and outputs the reconstructed images of the image-level annotated samples of the images to be segmented.
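The application does not spell out the vector nonlinear activation function; capsule networks conventionally use the "squash" nonlinearity, under which a capsule's output keeps its direction while its length behaves as a probability. A sketch under that assumption:

```python
import numpy as np

def squash(v, eps=1e-9):
    # vector nonlinearity: preserves direction, maps length into [0, 1)
    sq = np.sum(v * v, axis=-1, keepdims=True)
    return (sq / (1.0 + sq)) * v / np.sqrt(sq + eps)

cap = np.array([3.0, 4.0])            # a capsule's raw output vector, length 5
out = squash(cap)
prob = np.linalg.norm(out)            # length acts as an existence probability
assert prob < 1.0
assert np.allclose(out / prob, cap / np.linalg.norm(cap))  # direction kept
```

The preserved direction is what lets the capsule record the position and angle information of region-edge pixels while the length reports how confident the capsule is.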
The technical solution taken by the embodiments of this application further includes: in step b4, running the self-diffusion algorithm on the reconstructed image and the region localization feature map specifically includes: regions with larger activation values in the region localization feature map diffuse pixels with a random-walk self-diffusion algorithm; taking the input points of the region localization feature map, the Gaussian distance from every pixel on the image to each input point is computed and the optimal path selected from them, giving the dividing line between region pixels and ultimately generating the segmentation prediction sample.
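A minimal sketch of the Gaussian-distance step just described, assuming the "input points" are seed coordinates and each pixel is assigned to the seed of highest Gaussian affinity; the random-walk diffusion and optimal-path selection are omitted, and all names are hypothetical:

```python
import numpy as np

def gaussian_distance_labels(shape, seeds, sigma=2.0):
    # assign each pixel to the seed (input point) with highest Gaussian
    # affinity; the boundary between labels is the region dividing line
    h, w = shape
    rr, cc = np.mgrid[0:h, 0:w]
    affinity = np.stack([
        np.exp(-((rr - r) ** 2 + (cc - c) ** 2) / (2 * sigma ** 2))
        for r, c in seeds
    ])
    return affinity.argmax(axis=0)

labels = gaussian_distance_labels((6, 6), seeds=[(1, 1), (4, 4)])
assert labels[0, 0] == 0 and labels[5, 5] == 1
# pixels on either side of the diagonal midline fall into different regions
assert labels[2, 2] == 0 and labels[3, 3] == 1
```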
The technical solution taken by the embodiments of this application further includes: in step d, the discriminator comprises a cascade (Cascade) module, a Capsule network module and a parameter optimization module, and the "generate-adversarial" training performed by the discriminator specifically includes:
Step d1: the cascade module extracts the key pixels in the segmentation prediction sample that are wrongly annotated or whose confidence is below a set threshold, together with the corresponding ground truth, and filters out the pixels that are correctly annotated with confidence above the set threshold;
Step d2: the Capsule network module processes the extracted key pixels and the corresponding ground truth and produces an error;
Step d3: the parameter optimization module uses the error produced by the Capsule network module to optimize the network parameters of the generator and the discriminator; wherein, for a given segmentation prediction sample {I_f, L_f*} and the corresponding true annotated sample {I_f, L_f}, the global error function of the network is:

L(θ_S, θ_P) = J_b(O_P({I_f, L_f}), 1) + J_b(O_P({I_f, L_f*}), 0)
In the formula above, θ_S and θ_P denote the parameters of the generator and the discriminator respectively, J_b denotes the binary cross-entropy loss function, and O_S and O_P denote the outputs of the generator and the discriminator respectively; when the input comes from the true annotated sample {I_f, L_f} or from the segmentation prediction sample {I_f, L_f*}, outputs 1 and 0 respectively mark the pixel classification as real or fake.
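With J_b the binary cross-entropy, the two terms of the global error can be evaluated directly; the discriminator outputs below are illustrative numbers, not values from the described network:

```python
import numpy as np

def j_b(outputs, target):
    # binary cross-entropy loss J_b against a constant 0/1 target
    eps = 1e-9
    o = np.clip(outputs, eps, 1 - eps)
    return float(-(target * np.log(o) + (1 - target) * np.log(1 - o)).mean())

# discriminator outputs O_P on a true sample {I_f, L_f} and on a generated
# prediction {I_f, L_f*} (illustrative values)
o_real = np.array([0.9, 0.8, 0.95])
o_fake = np.array([0.2, 0.1, 0.3])

global_error = j_b(o_real, 1) + j_b(o_fake, 0)   # targets 1 and 0
assert global_error > 0
```

The generator is trained to increase `o_fake` (fooling the discriminator) while the discriminator is trained to decrease this combined error.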
Another technical solution taken by the embodiments of this application: a medical image segmentation system based on a generative adversarial network, comprising a sample acquisition module and a generative adversarial network,
Sample acquisition module: separately acquires pixel-level annotated samples of other medical images and image-level annotated samples of the medical images to be segmented;
The capsule-network-based generative adversarial network is trained with the pixel-level annotated samples of the other medical images and the image-level annotated samples of the medical images to be segmented;
The generative adversarial network comprises a generator and a discriminator. The generator performs pixel-level feature extraction on the pixel-level annotated samples of the other medical images, processes the image-level annotated samples of the medical images to be segmented with these pixel-level features, generates pixel-level annotated samples of the medical images to be segmented, and produces segmentation prediction samples of the medical images to be segmented based on the pixel-level annotated samples;
The segmentation prediction samples produced by the generator and the true annotated samples of the images to be segmented are fed together into the discriminator for "generate-adversarial" training; the discriminator judges whether the segmentation prediction samples are real or fake, and the generator and discriminator are optimized according to the error function, yielding a trained generative adversarial network;
The image-level annotated medical images to be segmented are fed into the trained generative adversarial network, which outputs the pixel-level segmentation of the medical images to be segmented.
The technical solution taken by the embodiments of this application further includes: the generator comprises a pre-training module, a capsule network module, a local positioning network module and a sample generation module:
Pre-training module: pre-trains the capsule network module with the pixel-level annotated samples of the other medical images to obtain semantic-free annotated samples, and processes the image-level annotated samples of the images to be segmented with the semantic-free annotated samples to distinguish the background from the effective segmentation regions in the image-level annotated samples of the images to be segmented;
Capsule network module: receives the image-level annotated samples of the images to be segmented after pre-training is complete and outputs reconstructed images of the image-level annotated samples of the images to be segmented;
Local positioning network: extracts features with convolutional layers to produce feature maps of the image-level annotated samples that contain location information, then applies global average pooling and computes a weighted average of the feature maps with weights (w1, w2, ..., wn), obtaining a region localization feature map of the image-level annotated samples of the images to be segmented;
Sample generation module: runs a self-diffusion algorithm on the reconstructed image and the region localization feature map to determine the dividing line between region pixels, obtaining the segmentation prediction samples of the image-level annotated samples of the images to be segmented.
The technical solution taken by the embodiments of this application further includes: the capsule network module comprises a convolutional layer, a PrimaryCaps layer, a DigitCaps layer and a decoding layer. The capsule network module uses the output vector of a single capsule neuron to record the direction and location of the pixels on the edges of the segmented regions in the image-level annotated samples of the images to be segmented, extracts class probability values with a vector nonlinear activation function, determines the segmentation regions and background of the image-level annotated samples, computes the edge loss and outputs the reconstructed images of the image-level annotated samples of the images to be segmented.
The technical solution taken by the embodiments of this application further includes: the sample generation module running the self-diffusion algorithm on the reconstructed image and the region localization feature map specifically includes: regions with larger activation values in the region localization feature map diffuse pixels with a random-walk self-diffusion algorithm; taking the input points of the region localization feature map, the Gaussian distance from every pixel on the image to each input point is computed and the optimal path selected from them, giving the dividing line between region pixels and ultimately generating the segmentation prediction sample.
The technical solution taken by the embodiments of this application further includes: the discriminator comprises a cascade (Cascade) module, a Capsule network module and a parameter optimization module:
Cascade module: extracts the key pixels in the segmentation prediction sample that are wrongly annotated or whose confidence is below a set threshold, together with the corresponding ground truth, and filters out the pixels that are correctly annotated with confidence above the set threshold;
Capsule network module: processes the extracted key pixels and the corresponding ground truth and produces an error;
Parameter optimization module: uses the error produced by the Capsule network module to optimize the network parameters of the generator and the discriminator; wherein, for a given segmentation prediction sample {I_f, L_f*} and the corresponding true annotated sample {I_f, L_f}, the global error function of the network is:

L(θ_S, θ_P) = J_b(O_P({I_f, L_f}), 1) + J_b(O_P({I_f, L_f*}), 0)
In the formula above, θ_S and θ_P denote the parameters of the generator and the discriminator respectively, J_b denotes the binary cross-entropy loss function, and O_S and O_P denote the outputs of the generator and the discriminator respectively; when the input comes from the true annotated sample {I_f, L_f} or from the segmentation prediction sample {I_f, L_f*}, outputs 1 and 0 respectively mark the pixel classification as real or fake.
A further technical solution taken by the embodiments of this application: an electronic device, comprising:
At least one processor; and
A memory communicatively connected to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can perform the following operations of the above medical image segmentation method based on a generative adversarial network:
Step a: separately acquire pixel-level annotated samples of other medical images and image-level annotated samples of the medical images to be segmented;
Step b: train a capsule-network-based generative adversarial network with the pixel-level annotated samples of the other medical images and the image-level annotated samples of the medical images to be segmented; the generative adversarial network comprises a generator and a discriminator;
Step c: the generator performs pixel-level feature extraction on the pixel-level annotated samples of the other medical images, processes the image-level annotated samples of the medical images to be segmented with these pixel-level features, generates pixel-level annotated samples of the medical images to be segmented, and produces segmentation prediction samples of the medical images to be segmented based on the pixel-level annotated samples;
Step d: the segmentation prediction samples produced by the generator and the true annotated samples of the images to be segmented are fed together into the discriminator for "generate-adversarial" training; the discriminator judges whether the segmentation prediction samples are real or fake, and the generator and discriminator are optimized according to the error function, yielding a trained generative adversarial network;
Step e: feed the image-level annotated medical images to be segmented into the trained generative adversarial network, which outputs the pixel-level segmentation of the medical images to be segmented.
Compared with the prior art, the beneficial effects produced by the embodiments of this application are: the medical image segmentation method and system, and electronic device, based on a generative adversarial network optimize deep convolutional neural networks by fusing the capsule mechanism, and combine the Capsule network with the cascade-waterfall idea to generate new training image samples when the number of medical image samples is small. They achieve semantic segmentation of low-quality medical image data that carries only image-level labels by transferring the learned segmentation knowledge from fully pixel-level annotated data to weakly image-level annotated data, thereby improving the expressive power of the model's features and extending the usability of medical image annotation samples. This effectively reduces the segmentation model's dependence on pixel-level annotated data; with little network information redundancy and rich feature extraction, the efficiency of adversarial training between generated and real samples can be improved given only a small number of pixel-level annotated samples, and high-precision pixel-level image segmentation can be effectively achieved.
Brief Description of the Drawings
Fig. 1 is the basic flow chart of applying deep learning to medical image processing;
Fig. 2 is the flow chart of the medical image segmentation method based on a generative adversarial network of the embodiments of this application;
Fig. 3 is a structural schematic diagram of the generative adversarial network of the embodiments of this application;
Fig. 4 is a structural schematic diagram of the capsule network module;
Fig. 5 is a network structure schematic diagram of the local positioning network;
Fig. 6 is a structural schematic diagram of the medical image segmentation system based on a generative adversarial network of the embodiments of this application;
Fig. 7 is a hardware device structure schematic diagram for the medical image segmentation method based on a generative adversarial network provided by the embodiments of this application.
Detailed Description of the Embodiments
In order to make the objects, technical solutions and advantages of this application clearer, this application is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only serve to explain this application and are not intended to limit it.
To overcome the shortcomings of the prior art, the medical image segmentation method based on a generative adversarial network of the embodiments of this application improves the generative adversarial network by fusing the capsule mechanism. First, it studies how the generator extracts pixel-level features from high-quality images of different classes and expresses them as structured features with a capsule model, so as to generate pixel-level annotated samples. Second, a suitable discriminator is built to judge whether the generated pixel-level annotated samples are real or fake, and a suitable error optimization function is designed; the discrimination results are fed back into the generator and discriminator models respectively. Through continuous adversarial training, the sample-generation ability of the generator and the discrimination ability of the discriminator are both improved, and finally the trained generator produces pixel-level annotated samples, achieving pixel-level segmentation of image-level annotated medical images. In the following embodiments, this application is elaborated only on the medical image segmentation of cervical spondylotic myelopathy (CSM); the etiology is not limited to a single condition, and the method extends to image segmentation scenarios for a variety of conditions, such as brain MRI image segmentation. For the image segmentation of a different condition, one only needs to acquire training samples of that condition in the data acquisition phase and substitute them in the generator of the model.
Referring to Fig. 2, the flow chart of the medical image segmentation method based on a generative adversarial network of the embodiments of this application, the method comprises the following steps:
Step 100: separately acquire pixel-level annotated samples of other medical images and image-level annotated CSM samples;
In step 100, the other medical images include medical images of sites such as the lung. The annotated samples are acquired as follows: acquire a small number of fully annotated CSM image samples {I_f, L_f, T_f}, 500 DTI (diffusion tensor imaging) samples in total, image size 28x28, from a CSM group of 60 subjects (27 male, 33 female, ages 20 to 71, mean age 45), comprising pixel-level annotated samples and image-level annotated samples {L_f, T_f}; and pixel-level annotated samples {I_O, L_O} of other medical images (such as the human lung), 8000 DTI samples, image size 28x28. After distortion correction and threshold selection, the DTI samples of the CSM images and of the other medical images are obtained respectively; the region of interest (ROI) to be segmented is determined on the DTI samples, and the ROI is centered on the spinal cord lesion area, ensuring uniform DTI sample size and avoiding interference from cerebrospinal fluid and artifacts.
Referring to Fig. 3, a structural schematic diagram of the generative adversarial network of the embodiments of this application. The generative adversarial network of the embodiments comprises a generator and a discriminator; the generator is responsible for generating the pixel-level annotated data of the medical images, while the discriminator refines the generated annotations. The generator comprises two parts, a capsule network module and a local positioning network; the pixel-level annotated samples {I_O, L_O} of the other medical images serve as pre-training samples for the capsule network module, and the image-level annotated CSM samples {L_f, T_f} serve as training samples for the local positioning network.
Step 200: pre-train the capsule network module with the pixel-level annotated samples of the other medical images to obtain semantic-free annotated samples, process the image-level annotated CSM samples with the semantic-free annotated samples, and distinguish the background from the effective segmentation regions in the image-level annotated CSM samples;
In step 200, the capsule network module uses a transferable semantic segmentation model, which can transfer the learned segmentation knowledge from the pixel-level annotations of fully annotated data to the image-level annotations of weakly annotated data. In practice, acquiring high-quality medical images is very difficult; this application therefore matches new data to the target-domain data through the trained model and obtains pixel-level segmentation features, so that high-precision image segmentation can equally be achieved with fewer samples. The pre-training of the capsule network module specifically includes:
Step 201: process the pixel-level annotated samples of the other medical images into semantic-free segmentation image annotations;
In step 201, a semantic-free segmentation annotation only distinguishes the effective segmentation regions of a sample from the background, without distinguishing the shape of the sample. A network trained on such data learns knowledge for separating objects from background, a broad high-order feature; since every image contains a distinction between object and background, this broad high-order feature is highly general and migrates easily into other tasks. In this way, the knowledge learned from medical images with high-quality pixel-level annotations can be transferred to medical images with low-quality image-level annotations, without requiring any direct relationship between the high-quality and low-quality images.
Step 202: process the CSM image-level annotated samples according to the semantics-free segmentation image annotations of the other medical images, generate the semantics-free annotation samples {I_O, L_O} of the CSM image-level annotated samples, and obtain the pixel-level semantics-free annotation L_O by filtering out the semantic information.
In step 202, the purpose of obtaining the pixel-level semantics-free annotation is to distinguish the effective segmentation region of the data from the background, so that the knowledge learned from the weakly annotated data is easier to transfer.
Step 300: input the CSM image-level annotated samples {I_f, T_f} into the capsule network module after pre-training is completed; the capsule network module outputs the reconstructed images of the CSM image-level annotated samples.
In step 300, the capsule network module uses the output vectors of single capsule neurons to record the direction and position information of the pixels on the edges of the segmentation regions of the CSM image-level annotated samples, extracts class probability values with a nonlinear vector activation function, determines the segmentation region and background of each CSM image-level annotated sample, computes the edge loss, and outputs the reconstructed image of the sample. The present application uses the capsule network module to instantiate the parameter information of pixels; the output activation vector records the position and angle information of the segmentation region, which can effectively improve the sharpness of the boundary region of the segmentation region.
The structure of the capsule network module is shown in Fig. 4. The model comprises a convolutional layer, a PrimaryCaps layer, a DigitCaps layer and a decoding layer. Each capsule represents one function and outputs an activation vector, the length of which represents the probability that the region segmentation line found by the capsule is correct. The functions of the layers of the capsule network module are respectively as follows:
Convolutional layer: obtains primary features such as the shape of the spinal disc and the position of compression by convolving the CSM image-level annotated samples {I_f, T_f}. For an input CSM image of size 28x28, the convolutional layer has 256 convolution kernels of size 9x9x1 with stride 1, uses the ReLU activation function, and outputs a 20x20x256 feature tensor after feature extraction.
PrimaryCaps layer: comprises 32 primary capsules that receive the primary features obtained by the convolutional layer; each capsule combines the generated features into a vectorized representation, each dimension of which encodes information such as the direction and position of a feature. Each primary capsule applies 8 convolution kernels of size 9x9x256 to the 20x20x256 input tensor; since there are 32 primary capsules, the output is a 6x6x8x32 feature tensor.
DigitCaps layer: each digit capsule corresponds to the vectors output by the PrimaryCaps layer and receives a 6x6x8x32 tensor as input. Dynamic routing maps the nested output variables of each primary capsule into the digit capsules and activates the key features of the vectors; an 8x16 weight matrix maps the 8-dimensional input space to the space of 16-dimensional capsule outputs.
Decoding layer: the decoding layer is the final fully connected layer of the network, comprising ReLU and Sigmoid functions. It receives the correct 16-dimensional vector output by the DigitCaps layer, learns from this input the multiple properties expressed by the capsule output vector, computes the edge loss, and learns to reconstruct a 28x28 image with the same pixels as the input image.
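The tensor shapes quoted for the layers above are mutually consistent; as a check, they can be reproduced from the standard formula for a valid (unpadded) convolution. This is only a shape walkthrough, not the trained network; the strides are inferred from the text (28x28 to 20x20 implies stride 1, 20x20 to 6x6 implies stride 2).

```python
def conv_out(size, kernel, stride):
    """Output side length of a valid (no padding) convolution."""
    return (size - kernel) // stride + 1

# Convolutional layer: 28x28 input, 256 kernels of 9x9, stride 1 -> 20x20x256
side = conv_out(28, 9, 1)
assert side == 20

# PrimaryCaps: 32 capsules, each applying 8 kernels of 9x9x256, stride 2
# -> a 6x6 grid of 8-D capsule vectors for each of the 32 capsules
side = conv_out(side, 9, 2)
assert side == 6
primary_caps_shape = (side, side, 8, 32)   # 6x6x8x32, as stated in the text

# DigitCaps: each 8-D primary capsule is mapped through an 8x16 weight
# matrix, so the DigitCaps output vectors are 16-dimensional.
digit_caps_dim = 16
```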
In the embodiment of the present application, the loss function of the capsule network module is as follows.
For a given CSM image-level annotated sample {I_f, T_f}, the corresponding loss function is given by formula (1).
In formula (1), O_L((I_f, T_f); θ_L) denotes the output of the capsule network module, θ_L denotes the weights and parameters of the network training, and J_b denotes the binary cross-entropy loss function of the elements in the brackets.
Applying the capsule network module to a CSM image-level annotated sample {I_f, T_f} yields the semantics-free coarse segmentation map M = O_L((I_f, T_f); θ_L).
Step 400: perform pixel-level label prediction on the CSM image-level annotated samples {I_f, T_f} with the local positioning network, and output the region localization feature map.
In step 400, the network structure of the local positioning network is shown in Fig. 5. The local positioning network uses the feature extraction of the convolutional layers to generate feature maps containing position information, and uses a global average pooling layer to form the weighted average of the weights (w_1, w_2, ..., w_n) and the feature maps, obtaining the region localization feature map. The regions of the region localization feature map with larger activation values are most likely the positions of the damaged regions of the cervical spinal cord. The local positioning network makes full use of the hot-spot regions of the primary-feature localization map that the training samples yield after the convolution layer operations; since convolutional neural networks have very many parameters to train, reusing the feature maps reduces the parameter count of the network, making model training more efficient.
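The localization step just described resembles a class-activation-map construction: pool each feature map to a scalar, then take the weighted average of the maps. The sketch below illustrates this under stated assumptions; the map shapes are arbitrary and the pooled activations stand in for the learned weights (w_1, ..., w_n), which in the actual network come from training.

```python
import numpy as np

rng = np.random.default_rng(0)
feature_maps = rng.random((64, 14, 14))      # n feature maps from the conv stack

# Global average pooling: one scalar per map. Here these pooled activations
# stand in for the trained weights (w_1 ... w_n) of the text.
weights = feature_maps.mean(axis=(1, 2))     # shape (64,)

# Region localization feature map: weighted average of the maps.
loc_map = np.tensordot(weights, feature_maps, axes=1) / weights.sum()
assert loc_map.shape == (14, 14)

# The highest-activation position is a candidate lesion location and can
# seed the self-diffusion step that follows.
seed = np.unravel_index(np.argmax(loc_map), loc_map.shape)
```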
Step 500: the sample generation module executes a self-diffusion algorithm according to the reconstructed image output by the capsule network module and the region localization feature map output by the local positioning network, determines the pixel segmentation lines of the regions, and obtains the relatively coarse segmentation prediction samples {I_f, L_f*}.
In step 500, in order to export the segmentation map M, which contains semantic information, as pixel-level annotated segmentation samples, the regions with larger activation values are treated with the random-walk idea: pixels are diffused by a random-walk self-diffusion algorithm. Taking the input points of the region localization feature map, the Gaussian distance from every pixel of the image to an input point is computed and the optimal path is selected from them, yielding the pixel segmentation lines of the regions; diffusing outward from each class point with a large activation value finally generates the relatively coarse segmentation prediction samples {I_f, L_f*}.
A given CSM image-level annotated sample {I_f, T_f} is converted into superpixels p = {p1, p2, ..., pN}, and these images are described by an undirected graph model G, in which each node corresponds to one specific superpixel; the self-diffusion algorithm is then executed on the undirected graph model G. Based on the coarse segmentation map M, the objective function of the class self-diffusion process is defined as in formula (2).
In formula (2), q = [q1, q2, ..., qN] denotes the label vector of all superpixels p; if p_i ∈ A, q_i is fixed to 1, otherwise its initial value is 0.

Z_ij = exp(-||F(p_i) - F(p_j)|| / 2σ²)    (3)

In formula (3), Z_ij denotes the Gaussian distance between two neighbouring superpixels.
Through the above operations, semantic segmentation of CSM images that carry only image-level annotations can be achieved with a small number of high-quality images that carry pixel-level annotations.
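As a toy sketch of the self-diffusion step under stated assumptions: superpixels become graph nodes, edge weights are the Gaussian affinities Z_ij of formula (3), and a seed label vector q (from the high-activation points of the localization map) is diffused over the graph while the seeds stay clamped. The features, seed choice, number of iterations, and the 0.5 threshold are all illustrative, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.random((6, 3))                 # 6 superpixels, 3-D features F(p_i)
sigma = 0.5

# Gaussian affinity between every pair of superpixels (formula (3)).
d = np.linalg.norm(F[:, None] - F[None, :], axis=-1)
Z = np.exp(-d / (2 * sigma ** 2))
P = Z / Z.sum(axis=1, keepdims=True)   # row-normalised transition matrix

q = np.zeros(6)
q[0] = 1.0                             # seed: superpixel 0 belongs to region A
for _ in range(50):                    # diffuse, keeping the seed clamped to 1
    q = P @ q
    q[0] = 1.0

labels = q > 0.5                       # thresholded region membership
assert labels[0]
```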
Step 600: input the segmentation prediction samples {I_f, L_f*} output by the generator together with the true annotated samples {I_f, L_f} into the discriminator for "generation-adversarial" training, and optimize the generator.
In step 600, the discriminator uses the direction and position information of the segmentation regions recorded by the capsule network module to improve the sharpness of the boundary regions of the segmentation regions, uses a cascade scheme to extract the key region pixels of the image that are difficult to classify correctly while filtering out the simple, clearly flat-region pixels, and carries out "generation-adversarial" training on the processed images until a Nash equilibrium is formed between the generator and the discriminator, i.e., until the discriminator can no longer distinguish whether an image comes from a segmentation prediction sample {I_f, L_f*} generated by the generator or from a true annotated sample {I_f, L_f}; the training of the generative adversarial network is then complete.
As shown in Fig. 3, in the embodiment of the present application, the discriminator comprises a cascade (Cascade) module, a Capsule network module and a parameter optimization module. The concrete function of each module is as follows:
Cascade module: responsible for extracting the key pixels of the segmentation prediction samples. In the process of image segmentation, the annotation difficulty differs from pixel to pixel: flat background regions can easily be distinguished, but the boundary pixels between object and background regions are difficult to separate. Previous network structures put all of these pixels into the network for processing, causing unnecessary redundancy in the network. The present application instead uses the cascade idea to treat pixels differently, concentrating on the key pixel regions that are difficult to classify. The key pixels of the segmentation prediction samples generated by the generator that are mislabelled, or whose confidence is below a certain threshold, are extracted together with the corresponding ground truth, while the pixels that are labelled correctly with very high confidence are filtered out. In this way, the pixels fed into the next training stage are only the key pixels that are hard to distinguish, which reduces the redundancy in the network and improves its working efficiency.
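The filtering rule of the cascade module can be sketched as a boolean partition of the pixel grid: a pixel is "hard" if it is mislabelled or low-confidence, and only hard pixels reach the next stage. The 0.9 confidence threshold and the random label/confidence arrays are assumptions for illustration; the text only says "a certain threshold".

```python
import numpy as np

rng = np.random.default_rng(2)
pred  = rng.integers(0, 2, size=(8, 8))   # generator's predicted labels
truth = rng.integers(0, 2, size=(8, 8))   # corresponding ground truth
conf  = rng.random((8, 8))                # generator's per-pixel confidence

hard = (pred != truth) | (conf < 0.9)     # mislabelled or low-confidence: keep
easy = (pred == truth) & (conf >= 0.9)    # correct and confident: filter out

assert hard.sum() + easy.sum() == 64      # the two sets partition the grid
key_pixels = pred[hard]                   # only these reach the Capsule module
```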
The Capsule network module is responsible for processing the extracted key pixels and the corresponding ground truth, and for generating the error. Specifically, the functions of the Capsule network module include:
Step 610: local feature extraction. The key pixels and the corresponding ground truth are fed as inputs into their respective convolutional layers; several convolutional layers then convolve the input key pixels and the corresponding ground truth separately, extracting the low-level features of the segmentation prediction samples {I_f, L_f*}. The activation function of the convolutional layers is the ReLU function.
Step 611: high-dimensional feature extraction. A PrimaryCaps layer is constructed and the extracted low-level features are input into it, yielding high-dimensional feature vectors containing spatial position information; a DigitCaps layer is constructed, and dynamic routing maps the nested output variables of the PrimaryCaps layer into the DigitCaps layer, constructing the high-level features that currently best characterize all input features, which are then input into the next layer.
The calculation between the PrimaryCaps layer and the DigitCaps layer involved in step 611 is as follows.
Let u_i be the feature vector extracted from the key pixels and the corresponding ground truth after convolution. The low-level feature vector u_i serves as the input of the PrimaryCaps layer and is multiplied by the weight matrix W_ij to obtain the prediction vector û_j|i:

û_j|i = W_ij u_i    (4)

The weighted sum S_j over the prediction vectors is obtained by linear combination with the weight coefficients c_ij:

S_j = Σ_i c_ij û_j|i    (5)

After the weighted sum S_j is obtained, a compression (squashing) function limits the vector length of S_j, giving the output vector V_j:

V_j = (||S_j||² / (1 + ||S_j||²)) · (S_j / ||S_j||)    (6)

In formula (6), the first factor is the scaling scale of the input vector S_j and the second factor is the unit vector of S_j. During the calculation of S_j the coefficient c_ij is a constant, computed as:

c_ij = exp(b_ij) / Σ_k exp(b_ik)    (7)

In formula (7), b_ij is a constant obtained by summing the value of b_ij from the previous iteration with the product of V_j and û_j|i; that is, b_ij is updated by:

b_ij ← b_ij + V_j · û_j|i    (8)
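The routing relations of formulas (4) through (8) can be exercised end to end in a few lines. This is a minimal sketch with random stand-ins for the trained weights W_ij: 4 primary capsules of dimension 8 route to 2 digit capsules of dimension 16 over three iterations, and the squashing of formula (6) guarantees output lengths below 1, so they can serve as probabilities.

```python
import numpy as np

def squash(s):
    """Formula (6): scale s to length < 1 while keeping its direction."""
    n2 = np.sum(s ** 2, axis=-1, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + 1e-9)

rng = np.random.default_rng(3)
u = rng.standard_normal((4, 8))            # primary capsule outputs u_i
W = rng.standard_normal((4, 2, 8, 16))     # W_ij: one 8x16 matrix per (i, j)
u_hat = np.einsum('id,ijde->ije', u, W)    # formula (4): prediction vectors

b = np.zeros((4, 2))                       # routing logits b_ij
for _ in range(3):                         # a few routing iterations
    c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # formula (7)
    s = np.einsum('ij,ije->je', c, u_hat)                 # formula (5)
    v = squash(s)                                         # formula (6)
    b = b + np.einsum('je,ije->ij', v, u_hat)             # formula (8)

# Output vector lengths are valid probabilities (< 1) by construction.
assert np.all(np.linalg.norm(v, axis=-1) < 1.0)
```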
Step 612: the discriminator feeds the high-level feature vector V output by the DigitCaps layer into the decoding layer and, through several fully connected layers, finally outputs the true/false judgement of the image. Specifically: if the output is 0, the image is judged false, indicating that the input image was recognized as a forged image; if the output is 1, it is judged true, indicating that the input image successfully confused the discriminator.
Parameter optimization module: optimizes the network parameters of the generator and the discriminator with the error generated by the Capsule network module, so that the generator can output better-optimized segmentation results.
For a given segmentation prediction sample {I_f, L_f*} and the corresponding true annotated sample {I_f, L_f}, the overall error function of the network is given by formula (9).
In formula (9), θ_S and θ_p denote the parameters of the generator and the discriminator respectively, J_b denotes the binary cross-entropy loss function, and O_s and O_p denote the outputs of the generator and the discriminator. When the input comes from a true annotated sample {I_f, L_f} or a segmentation prediction sample {I_f, L_f*}, the outputs 1 and 0 respectively mark the pixel classification as true or false.
In the embodiment of the present application, the parameter optimization process consists of two parts:
Step 620: fix the generator parameters θ_S and optimize the discriminator parameters θ_p. During adversarial training the generator parameters θ_S are fixed first; the segmentation prediction samples generated by the generator are sent to the discriminator, the discriminator judges true or false, and the discriminator parameters θ_p are adjusted through the back-propagation algorithm with the discriminator error function, improving the discriminator's own discriminating ability.
During training, the parameters of the discriminator are continuously optimized, its discriminating ability keeps strengthening, and it becomes increasingly easy for it to distinguish the images generated by the generator; training then enters the next stage.
Step 621: fix the discriminator parameters θ_p and optimize the generator parameters θ_S. The network brings the judgement result of the discriminator into the generator error function and adjusts the generator parameters θ_S through the back-propagation algorithm, so that the generator produces higher-quality segmentation results; in this way, the generator generates more accurate results in order to confuse the discriminator.
The above two optimization steps are repeated. Finally, a Nash equilibrium is formed between the generator and the discriminator: the discriminator can no longer distinguish whether an image comes from a segmentation prediction sample {I_f, L_f*} output by the generator or from a true annotated sample {I_f, L_f}, and the training of the generative adversarial network is complete.
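The two alternating objectives can be illustrated numerically: step 620 trains the discriminator to output 1 on true annotated pairs and 0 on generated ones, and step 621 trains the generator to push the discriminator's output on generated pairs toward 1. The toy logistic "discriminator" and its scalar inputs below are purely illustrative stand-ins; only the loss structure follows the text.

```python
import numpy as np

def bce(p, target):
    """Binary cross-entropy J_b for a scalar probability p."""
    return -(target * np.log(p) + (1 - target) * np.log(1 - p))

def discriminator(x, w=1.5, b=-0.5):
    """Toy logistic score in (0, 1); stands in for O_p."""
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

d_real = discriminator(2.0)    # score on a true annotated pair {I_f, L_f}
d_fake = discriminator(-1.0)   # score on a generated pair {I_f, L_f*}

loss_D = bce(d_real, 1.0) + bce(d_fake, 0.0)   # step 620: update θ_p only
loss_G = bce(d_fake, 1.0)                      # step 621: update θ_S only

assert loss_D > 0 and loss_G > 0
# At Nash equilibrium, d_real and d_fake both approach 0.5 and neither
# update can improve its own loss further.
```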
Step 700: input the image-level annotated CSM images into the trained generative adversarial network, which outputs the pixel-level segmented images of the CSM images.
Referring to Fig. 6, which is a structural schematic diagram of the medical image segmentation system based on a generative adversarial network of the embodiment of the present application. The system comprises a sample collection module and a generative adversarial network; the image samples collected by the sample collection module train the generative adversarial network, which comprises a generator and a discriminator. The generator performs structured feature expression with the capsule model and thereby realizes the generation of pixel-level annotated samples; the discriminator judges whether the generated pixel-level annotated samples are true or false. A suitable error optimization function is designed, the judgement results are fed back into the models of the generator and the discriminator respectively, and through continuous adversarial training the sample generation ability of the generator and the discriminating ability of the discriminator are raised in turn. Finally, the trained generator generates pixel-level annotated samples, realizing pixel-level segmentation of image-level annotated medical images. Specifically:
Sample collection module: collects the pixel-level annotated samples of other medical images and the CSM image-level annotated samples respectively. The other medical images include medical images of regions such as the lungs. The collection of the annotated samples is specifically as follows: a small number of fully annotated CSM image samples {I_f, L_f, T_f}, 500 DTI (diffusion tensor imaging) samples in total with image size 28x28, from a CSM group of 60 subjects (27 male, 33 female, aged 20 to 71 years, average 45 years), comprising pixel-level annotated samples and image-level annotated samples {I_f, T_f}; and pixel-level annotated samples {I_O, L_O} of other medical images (such as human lungs), 8000 DTI samples with image size 28x28. After deformation correction and threshold selection, the DTI samples of the CSM images and of the other medical images are respectively obtained, the region of interest (ROI) is determined on the DTI samples, and the ROI is centred on the spinal cord lesion area, guaranteeing the uniformity of the DTI sample size and avoiding the influence of cerebrospinal fluid and artifacts.
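The ROI standardisation described above can be sketched as cropping a fixed 28x28 window centred on the lesion from a larger DTI slice, so every sample fed to the network has the same size. The 64x64 slice, the lesion coordinates, and the clamping-at-the-border behaviour are assumptions for illustration; the text only specifies the 28x28 output size and the lesion-centred placement.

```python
import numpy as np

def crop_roi(img, center, size=28):
    """Cut a size x size patch centred on `center`, clamped to the image."""
    half = size // 2
    r = min(max(center[0] - half, 0), img.shape[0] - size)
    c = min(max(center[1] - half, 0), img.shape[1] - size)
    return img[r:r + size, c:c + size]

slice_ = np.zeros((64, 64))                 # a stand-in DTI slice
roi = crop_roi(slice_, center=(40, 22))     # hypothetical lesion centre
assert roi.shape == (28, 28)

# Near the border, the window is clamped inside the slice rather than
# zero-padded, so the output size is always exactly 28x28.
edge_roi = crop_roi(slice_, center=(2, 62))
assert edge_roi.shape == (28, 28)
```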
The generator comprises a pre-training module, a capsule network module, a local positioning network module and a sample generation module, the functions of which are as follows:
Pre-training module: pre-trains the capsule network module with the pixel-level annotated samples of other medical images to obtain semantics-free annotation samples, and processes the CSM image-level annotated samples with the semantics-free annotation samples, so as to distinguish the background and effective segmentation regions of the CSM image-level annotated samples. The capsule network module uses a transferable semantic segmentation model, which can transfer the segmentation knowledge it has learned from the pixel-level annotations of fully annotated data to the image-level annotations of weakly annotated data; since high-quality medical images are difficult to acquire, matching new data against the target-domain data with the trained model yields pixel-level segmentation features and achieves high-precision segmentation even with few samples. The pre-training process specifically includes:
1. Process the pixel-level annotated samples of other medical images into semantics-free segmentation image annotations. A semantics-free annotation only distinguishes the effective segmentation region of a sample from the background, without distinguishing the sample's shape; a network trained on such data learns the broad high-order knowledge of separating object from background, which, because every image contains an object/background distinction, generalizes well and transfers easily to other tasks. In this way, the knowledge learned from medical images with high-quality pixel-level annotations can be moved into medical images with low-quality image-level annotations, without any direct correlation being required between the high-quality and low-quality images.
2. Process the CSM image-level annotated samples according to the semantics-free segmentation image annotations of the other medical images, generate the semantics-free annotation samples {I_O, L_O} of the CSM image-level annotated samples, and obtain the pixel-level semantics-free annotation L_O by filtering out the semantic information; the purpose is to distinguish the effective segmentation region of the data from the background, making the knowledge learned from the weakly annotated data easier to transfer.
Capsule network module: receives the CSM image-level annotated samples {I_f, T_f} after pre-training is completed and outputs their reconstructed images. The module uses the output vectors of single capsule neurons to record the direction and position information of the pixels on the edges of the segmentation regions, extracts class probability values with a nonlinear vector activation function, determines the segmentation region and background of each sample, computes the edge loss and outputs the reconstructed image. Instantiating the parameter information of pixels with the capsule network module, whose output activation vectors record the position and angle of the segmentation region, can effectively improve the sharpness of the segmentation boundary region.
The capsule network module comprises a convolutional layer, a PrimaryCaps layer, a DigitCaps layer and a decoding layer; each capsule represents one function and outputs an activation vector whose length represents the probability that the region segmentation line found by the capsule is correct. The layers function as follows:
Convolutional layer: obtains primary features such as the shape of the spinal disc and the position of compression by convolving the CSM image-level annotated samples {I_f, T_f}; for a 28x28 input CSM image it has 256 kernels of size 9x9x1 with stride 1, uses the ReLU activation function, and outputs a 20x20x256 feature tensor.
PrimaryCaps layer: comprises 32 primary capsules that receive the primary features of the convolutional layer; each capsule combines its generated features into a vectorized representation whose dimensions encode information such as direction and position. Each primary capsule applies 8 kernels of size 9x9x256 to the 20x20x256 input tensor, and with 32 primary capsules the output is a 6x6x8x32 feature tensor.
DigitCaps layer: each digit capsule corresponds to the vectors output by the PrimaryCaps layer and receives a 6x6x8x32 tensor; dynamic routing maps the nested output variables of each primary capsule into the digit capsules and activates the key features of the vectors, with an 8x16 weight matrix mapping the 8-dimensional input space to the 16-dimensional capsule output space.
Decoding layer: the final fully connected layer of the network, comprising ReLU and Sigmoid functions; it receives the correct 16-dimensional vector output by the DigitCaps layer, learns from this input the multiple properties expressed by the capsule output vector, computes the edge loss, and learns to reconstruct a 28x28 image with the same pixels as the input image.
In the embodiment of the present application, the loss function of the capsule network module is as follows. For a given CSM image-level annotated sample {I_f, T_f}, the corresponding loss function is given by formula (1), in which O_L((I_f, T_f); θ_L) denotes the output of the capsule network module, θ_L denotes the weights and parameters of the network training, and J_b denotes the binary cross-entropy loss function of the elements in the brackets. Applying the capsule network module to a CSM image-level annotated sample {I_f, T_f} yields the semantics-free coarse segmentation map M = O_L((I_f, T_f); θ_L).
Local positioning network module: performs pixel-level label prediction on the CSM image-level annotated samples {I_f, T_f} and outputs the region localization feature map. The local positioning network uses the feature extraction of the convolutional layers to generate feature maps containing position information, and uses a global average pooling layer to form the weighted average of the weights (w_1, w_2, ..., w_n) and the feature maps, obtaining the region localization feature map, in which the regions with larger activation values are most likely the positions of the damaged regions of the cervical spinal cord. The network makes full use of the hot-spot regions of the primary-feature localization map obtained after the convolution layer operations; since convolutional neural networks have very many parameters to train, reusing the feature maps reduces the parameter count of the network and makes model training more efficient.
Sample generation module: executes the self-diffusion algorithm according to the reconstructed image output by the capsule network module and the region localization feature map output by the local positioning network, determines the pixel segmentation lines of the regions, and obtains the relatively coarse segmentation prediction samples {I_f, L_f*}. In order to export the segmentation map M, which contains semantic information, as pixel-level annotated segmentation samples, the regions with larger activation values are treated with the random-walk idea: pixels are diffused by the random-walk self-diffusion algorithm, the Gaussian distance from every pixel of the image to the input points of the region localization feature map is computed, the optimal path is selected to obtain the pixel segmentation lines of the regions, and diffusing outward from each class point with a large activation value finally generates the relatively coarse segmentation prediction samples {I_f, L_f*}.
A given CSM image-level annotated sample {I_f, T_f} is converted into superpixels p = {p1, p2, ..., pN}; these images are described by an undirected graph model G, in which each node corresponds to one specific superpixel, and the self-diffusion algorithm is executed on G. Based on the coarse segmentation map M, the objective function of the class self-diffusion process is defined as in formula (2), in which q = [q1, q2, ..., qN] denotes the label vector of all superpixels p; if p_i ∈ A, q_i is fixed to 1, otherwise its initial value is 0.

Z_ij = exp(-||F(p_i) - F(p_j)|| / 2σ²)    (3)

In formula (3), Z_ij denotes the Gaussian distance between two neighbouring superpixels.
Through the above operations, semantic segmentation of CSM images that carry only image-level annotations can be achieved with a small number of high-quality images that carry pixel-level annotations.
The segmentation prediction samples {I_f, L_f*} generated by the generator and the true annotated samples {I_f, L_f} are input into the discriminator for adversarial training. The discriminator uses the direction and position information of the segmentation regions recorded by the capsule network to improve the sharpness of the segmentation boundary regions, extracts in cascade fashion the key region pixels of the image that are difficult to classify correctly while filtering out the simple, clearly flat-region pixels, and carries out "generation-adversarial" training on the processed images until a Nash equilibrium is formed between the generator and the discriminator and the discriminator can no longer distinguish whether an image comes from a generated segmentation prediction sample {I_f, L_f*} or a true annotated sample {I_f, L_f}; the training of the generative adversarial network is then complete.
Specifically, the discriminator comprises a cascade (Cascade) module, a Capsule network module and a parameter optimization module, whose concrete functions are as follows:
Cascade module: responsible for extracting the key pixels of the segmentation prediction samples. In image segmentation the annotation difficulty differs from pixel to pixel: flat background regions are easily distinguished, but the boundary pixels between object and background are hard to separate. Previous network structures put all of these pixels into the network for processing, causing unnecessary redundancy; the present application uses the cascade idea to treat pixels differently, concentrating on the key pixel regions that are difficult to classify. The pixels of the generated segmentation prediction samples that are mislabelled, or whose confidence is below a certain threshold, are extracted, while the correctly labelled, high-confidence pixels are filtered out; the pixels fed into the next training stage are then only the key pixels that are hard to distinguish, which reduces the redundancy in the network and improves its working efficiency.
The Capsule network module is responsible for processing the extracted key pixels and for generating the error. Specifically, its functions include:
1. Local feature extraction. The segmentation prediction samples {I_f, L_f*} output by the generator are filtered by the cascade module; the extracted key pixels and the corresponding ground truth are fed as inputs into their respective convolutional layers, several convolutional layers convolve the input key pixels and the corresponding ground truth separately, and the low-level features of the segmentation prediction samples {I_f, L_f*} are extracted. The activation function of the convolutional layers is the ReLU function.
2. High-dimensional feature extraction. A PrimaryCaps layer is constructed and the extracted low-level features are input into it, yielding high-dimensional feature vectors containing spatial position information; a DigitCaps layer is constructed, and dynamic routing maps the nested output variables of the PrimaryCaps layer into the DigitCaps layer, constructing the high-level features that currently best characterize all input features, which are then input into the next layer.
The calculation between the PrimaryCaps layer and the DigitCaps layer is as described above: the low-level feature vector u_i extracted after convolution is multiplied by the weight matrix W_ij to give the prediction vector û_j|i (formula (4)); the weighted sum S_j is obtained by linear combination of the prediction vectors with the weight coefficients c_ij (formula (5)); a compression function limits the vector length of S_j to give the output vector V_j (formula (6)), whose first factor is the scaling scale of S_j and whose second factor is the unit vector of S_j; during the calculation of S_j the coefficient c_ij is a constant computed from b_ij (formula (7)); and b_ij is obtained by summing its value from the previous iteration with the product of V_j and û_j|i (formula (8)).
3. The discriminator feeds the high-level feature vector V output by the DigitCaps layer into the decoding layer and, through several fully connected layers, finally outputs the true/false judgement of the image. Specifically: if the output is 0, the image is judged false, the input image having been recognized as a forged image; if the output is 1, it is judged true, the input image having successfully confused the discriminator.
Parameter optimization module: optimizes the network parameters of the generator and the discriminator with the error generated by the Capsule network module, so that the generator can output better-optimized segmentation results.
For a given segmentation prediction sample {I_f, L_f*} and the corresponding true annotated sample {I_f, L_f}, the overall error function of the network is given by formula (9), in which θ_S and θ_p denote the parameters of the generator and the discriminator respectively, J_b denotes the binary cross-entropy loss function, and O_s and O_p denote the outputs of the generator and the discriminator; when the input comes from a true annotated sample {I_f, L_f} or a segmentation prediction sample {I_f, L_f*}, the outputs 1 and 0 respectively mark the pixel classification as true or false.
In the embodiment of the present application, the process of parameter optimization includes two parts:
1, fixed generator parameter θS, optimize arbiter parameter θp;During dual training, generator ginseng fixed first Number θS, it is sent in arbiter using the segmentation forecast sample that generator generates, the true and false is judged by arbiter, and utilize arbiter Error function adjusts arbiter parameter θ by back-propagation algorithmp, improve itself distinguishing ability.And the corresponding error of arbiter Function are as follows:
As training proceeds, the parameters of the discriminator are continuously optimized and its discriminative power keeps increasing, so it becomes progressively easier for it to distinguish the images produced by the generator, at which point training enters the next stage.
2. Fix the discriminator parameters θP and optimize the generator parameters θS. The network substitutes the discriminator's decision into the generator's error function and adjusts the generator parameters θS through the back-propagation algorithm, so that the generator produces higher-quality segmentation results; in this way the generator generates more accurate results in order to confuse the discriminator. The error function corresponding to the generator is:
The two optimization steps above are repeated alternately until a Nash equilibrium forms between the generator and the discriminator: the discriminator can no longer distinguish whether an image comes from the segmentation prediction sample {If, Lf*} output by the generator or from the ground-truth annotated sample {If, Lf}. At that point, training of the generative adversarial network is complete.
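The alternating scheme above can be illustrated with a deliberately simplified toy stand-in (scalar "generator" and "discriminator" parameters with hand-written updates instead of back-propagation); it only demonstrates the fix-one-optimize-the-other ordering, not the actual networks:

```python
import numpy as np

def train_step(theta_s, theta_p, lr=0.1, target=1.0):
    # Phase 1: generator parameters theta_s fixed, improve discriminator.
    # Here theta_p simply tracks how wrong the generator still is.
    gap = abs(theta_s - target)
    theta_p = theta_p + lr * (gap - theta_p)
    # Phase 2: discriminator parameters theta_p fixed, improve generator
    # against the (now stronger) discriminator signal.
    theta_s = theta_s - lr * np.sign(theta_s - target) * max(theta_p, 1e-3)
    return theta_s, theta_p

theta_s, theta_p = 0.0, 0.5           # initial "network" parameters
for _ in range(200):                  # repeat the two steps alternately
    theta_s, theta_p = train_step(theta_s, theta_p)
# Near "equilibrium", the generator parameter has converged to the target
# and the discriminator signal has decayed toward zero.
```

In the real method, each phase is a back-propagation pass against the corresponding error function rather than these closed-form updates.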
The image-level annotated CSM image to be segmented is input into the trained generative adversarial network, which outputs the pixel-level segmented image of the CSM image to be segmented.
Fig. 7 is a schematic diagram of the hardware device structure of the medical image segmentation method based on a generative adversarial network provided by an embodiment of the present application. As shown in Fig. 7, the device includes one or more processors and a memory. Taking one processor as an example, the device may further include an input system and an output system.
The processor, memory, input system and output system may be connected by a bus or in other ways; connection by a bus is taken as the example in Fig. 7.
As a non-transitory computer-readable storage medium, the memory can be used to store non-transitory software programs and non-transitory computer-executable programs and modules. By running the non-transitory software programs, instructions and modules stored in the memory, the processor executes the various functional applications and data processing of the electronic device, thereby implementing the processing method of the above method embodiments.
The memory may include a program storage area and a data storage area, where the program storage area can store the operating system and the application programs required by at least one function, and the data storage area can store data, etc. In addition, the memory may include high-speed random access memory and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor; such remote memories can be connected to the processing system through a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The input system can receive input numeric or character information and generate signal input. The output system may include a display device such as a display screen.
The one or more modules are stored in the memory and, when executed by the one or more processors, perform the following operations of any of the above method embodiments:
Step a: collect pixel-level annotated samples of other medical images and image-level annotated samples of the medical image to be segmented, respectively;
Step b: train a generative adversarial network based on a capsule network with the pixel-level annotated samples of the other medical images and the image-level annotated samples of the medical image to be segmented, the generative adversarial network comprising a generator and a discriminator;
Step c: the generator performs pixel-level feature extraction on the pixel-level annotated samples of the other medical images, processes the image-level annotated sample of the medical image to be segmented using the pixel-level features, generates a pixel-level annotated sample of the medical image to be segmented, and generates a segmentation prediction sample of the medical image to be segmented based on the pixel-level annotated sample;
Step d: the segmentation prediction sample generated by the generator and the ground-truth annotated sample of the image to be segmented are input together into the discriminator for "generation-adversarial" training; the discriminator judges whether the segmentation prediction sample is real or fake, and the generator and the discriminator are optimized according to the error function, yielding a trained generative adversarial network;
Step e: input the image-level annotated medical image to be segmented into the trained generative adversarial network, and output the pixel-level segmented image of the medical image to be segmented through the generative adversarial network.
The above product can execute the method provided by the embodiments of the present application, and has the corresponding functional modules and beneficial effects for executing the method. For technical details not described in detail in this embodiment, refer to the method provided by the embodiments of the present application.
An embodiment of the present application provides a non-transitory (non-volatile) computer storage medium storing computer-executable instructions, which can perform the following operations:
Step a: collect pixel-level annotated samples of other medical images and image-level annotated samples of the medical image to be segmented, respectively;
Step b: train a generative adversarial network based on a capsule network with the pixel-level annotated samples of the other medical images and the image-level annotated samples of the medical image to be segmented, the generative adversarial network comprising a generator and a discriminator;
Step c: the generator performs pixel-level feature extraction on the pixel-level annotated samples of the other medical images, processes the image-level annotated sample of the medical image to be segmented using the pixel-level features, generates a pixel-level annotated sample of the medical image to be segmented, and generates a segmentation prediction sample of the medical image to be segmented based on the pixel-level annotated sample;
Step d: the segmentation prediction sample generated by the generator and the ground-truth annotated sample of the image to be segmented are input together into the discriminator for "generation-adversarial" training; the discriminator judges whether the segmentation prediction sample is real or fake, and the generator and the discriminator are optimized according to the error function, yielding a trained generative adversarial network;
Step e: input the image-level annotated medical image to be segmented into the trained generative adversarial network, and output the pixel-level segmented image of the medical image to be segmented through the generative adversarial network.
An embodiment of the present application provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the following operations:
Step a: collect pixel-level annotated samples of other medical images and image-level annotated samples of the medical image to be segmented, respectively;
Step b: train a generative adversarial network based on a capsule network with the pixel-level annotated samples of the other medical images and the image-level annotated samples of the medical image to be segmented, the generative adversarial network comprising a generator and a discriminator;
Step c: the generator performs pixel-level feature extraction on the pixel-level annotated samples of the other medical images, processes the image-level annotated sample of the medical image to be segmented using the pixel-level features, generates a pixel-level annotated sample of the medical image to be segmented, and generates a segmentation prediction sample of the medical image to be segmented based on the pixel-level annotated sample;
Step d: the segmentation prediction sample generated by the generator and the ground-truth annotated sample of the image to be segmented are input together into the discriminator for "generation-adversarial" training; the discriminator judges whether the segmentation prediction sample is real or fake, and the generator and the discriminator are optimized according to the error function, yielding a trained generative adversarial network;
Step e: input the image-level annotated medical image to be segmented into the trained generative adversarial network, and output the pixel-level segmented image of the medical image to be segmented through the generative adversarial network.
The medical image segmentation method, system and electronic device based on a generative adversarial network of the embodiments of the present application optimize a deep convolutional neural network by fusing the capsule mechanism, combining the ideas of the Capsule network and the cascade waterfall. When medical image samples are scarce, new training image samples are generated, realizing semantic segmentation of low-quality medical image data carrying only image-level labels: the segmentation knowledge learned from fully pixel-level annotated data is transferred to weakly image-level annotated data. This improves the expressive power of the model features, extends the usability of annotated medical image samples, and effectively reduces the dependence of the segmentation model on pixel-level annotated data. The network has little information redundancy and rich feature extraction; under the premise of only a small number of pixel-level annotated samples, it can improve the efficiency of adversarial training between generated samples and authentic samples and effectively realize high-precision pixel-level image segmentation.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present application. Therefore, the present application is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (11)

1. A medical image segmentation method based on a generative adversarial network, characterized by comprising the following steps:
Step a: collect pixel-level annotated samples of other medical images and image-level annotated samples of the medical image to be segmented, respectively;
Step b: train a generative adversarial network based on a capsule network with the pixel-level annotated samples of the other medical images and the image-level annotated samples of the medical image to be segmented, the generative adversarial network comprising a generator and a discriminator;
Step c: the generator performs pixel-level feature extraction on the pixel-level annotated samples of the other medical images, processes the image-level annotated sample of the medical image to be segmented using the pixel-level features, generates a pixel-level annotated sample of the medical image to be segmented, and generates a segmentation prediction sample of the medical image to be segmented based on the pixel-level annotated sample;
Step d: the segmentation prediction sample generated by the generator and the ground-truth annotated sample of the image to be segmented are input together into the discriminator for "generation-adversarial" training; the discriminator judges whether the segmentation prediction sample is real or fake, and the generator and the discriminator are optimized according to the error function, yielding a trained generative adversarial network;
Step e: input the image-level annotated medical image to be segmented into the trained generative adversarial network, and output the pixel-level segmented image of the medical image to be segmented through the generative adversarial network.
2. The medical image segmentation method based on a generative adversarial network according to claim 1, characterized in that in step c, the generator comprises a capsule network module and a local positioning network, and the generation of the segmentation prediction sample of the medical image to be segmented by the generator specifically comprises:
Step b1: pre-train the capsule network module with the pixel-level annotated samples of the other medical images to obtain non-semantic-label samples, and process the image-level annotated sample of the image to be segmented with the non-semantic-label samples, distinguishing the background and the effective segmentation region of the image-level annotated sample of the image to be segmented;
Step b2: input the image-level annotated sample of the image to be segmented into the pre-trained capsule network module, and output the reconstructed image of the image-level annotated sample of the image to be segmented through the capsule network module;
Step b3: the local positioning network uses convolutional-layer feature extraction to generate feature maps, containing location information, of the image-level annotated sample of the image to be segmented, and uses a global average pooling layer to compute a weighted average of the feature maps with weights (w1, w2, …, wn), obtaining the region localization feature map of the image-level annotated sample of the image to be segmented;
Step b4: execute the self-diffusion algorithm on the reconstructed image and the region localization feature map, determine the region pixel dividing lines, and obtain the segmentation prediction sample of the image-level annotated sample of the image to be segmented.
3. The medical image segmentation method based on a generative adversarial network according to claim 2, characterized in that in step b2, the capsule network module comprises a convolutional layer, a PrimaryCaps layer, a DigitCaps layer and a decoding layer; the capsule network module uses the output vectors of single capsule neurons to record the direction and location information of the edge pixels of the decomposed regions of the image-level annotated sample of the image to be segmented, extracts the class probability values using a vector nonlinear activation function, determines the segmentation region and the background of the image-level annotated sample of the image to be segmented, computes the edge loss, and outputs the reconstructed image of the image-level annotated sample of the image to be segmented.
4. The medical image segmentation method based on a generative adversarial network according to claim 2, characterized in that in step b4, executing the self-diffusion algorithm on the reconstructed image and the region localization feature map specifically comprises: diffusing pixels in the regions of the region localization feature map with larger activation values using a random-walk self-diffusion algorithm, computing, from the input points of the region localization feature map, the Gaussian distance from each pixel of the image to the input points, selecting the optimal path among them, obtaining the dividing lines of the region pixels, and finally generating the segmentation prediction sample.
5. The medical image segmentation method based on a generative adversarial network according to any one of claims 1 to 4, characterized in that in step d, the discriminator comprises a cascade (Cascade) module, a Capsule network module and a parameter optimization module, and the "generation-adversarial" training performed by the discriminator specifically comprises:
Step d1: the cascade module extracts, from the segmentation prediction sample, the mislabeled pixels and the key pixels whose confidence is below a set threshold, together with the corresponding ground truth, and filters out the pixels that are correctly labeled and whose confidence is above the set threshold;
Step d2: the Capsule network module processes the extracted key pixels and the corresponding ground truth, and produces an error;
Step d3: the parameter optimization module uses the error produced by the Capsule network module to optimize the network parameters of the generator and the discriminator; wherein, for a given segmentation prediction sample {If, Lf*} and the corresponding ground-truth annotated sample {If, Lf}, the global error function of the network is:
In the above formula, θS and θP respectively denote the parameters of the generator and the discriminator, Jb denotes the binary cross-entropy loss function, and Os and Op respectively denote the outputs of the generator and the discriminator; when the input comes from the ground-truth annotated sample {If, Lf} or from the segmentation prediction sample {If, Lf*}, the output 1 or 0 marks the pixel classification as real or fake.
6. A medical image segmentation system based on a generative adversarial network, characterized by comprising a sample collection module and a generative adversarial network, wherein:
the sample collection module is configured to collect pixel-level annotated samples of other medical images and image-level annotated samples of the medical image to be segmented, respectively;
the generative adversarial network based on a capsule network is trained with the pixel-level annotated samples of the other medical images and the image-level annotated samples of the medical image to be segmented;
the generative adversarial network comprises a generator and a discriminator; the generator performs pixel-level feature extraction on the pixel-level annotated samples of the other medical images, processes the image-level annotated sample of the medical image to be segmented using the pixel-level features, generates a pixel-level annotated sample of the medical image to be segmented, and generates a segmentation prediction sample of the medical image to be segmented based on the pixel-level annotated sample;
the segmentation prediction sample generated by the generator and the ground-truth annotated sample of the image to be segmented are input together into the discriminator for "generation-adversarial" training; the discriminator judges whether the segmentation prediction sample is real or fake, and the generator and the discriminator are optimized according to the error function, yielding a trained generative adversarial network;
the image-level annotated medical image to be segmented is input into the trained generative adversarial network, which outputs the pixel-level segmented image of the medical image to be segmented.
7. The medical image segmentation system based on a generative adversarial network according to claim 6, characterized in that the generator comprises a pre-training module, a capsule network module, a local positioning network module and a sample generation module:
the pre-training module is configured to pre-train the capsule network module with the pixel-level annotated samples of the other medical images to obtain non-semantic-label samples, and to process the image-level annotated sample of the image to be segmented with the non-semantic-label samples, distinguishing the background and the effective segmentation region of the image-level annotated sample of the image to be segmented;
the capsule network module is configured to receive the image-level annotated sample of the image to be segmented after pre-training is completed, and to output the reconstructed image of the image-level annotated sample of the image to be segmented;
the local positioning network is configured to generate, using convolutional-layer feature extraction, feature maps containing location information of the image-level annotated sample of the image to be segmented, and to compute, using a global average pooling layer, a weighted average of the feature maps with weights (w1, w2, …, wn), obtaining the region localization feature map of the image-level annotated sample of the image to be segmented;
the sample generation module is configured to execute the self-diffusion algorithm on the reconstructed image and the region localization feature map, determine the region pixel dividing lines, and obtain the segmentation prediction sample of the image-level annotated sample of the image to be segmented.
8. The medical image segmentation system based on a generative adversarial network according to claim 7, characterized in that the capsule network module comprises a convolutional layer, a PrimaryCaps layer, a DigitCaps layer and a decoding layer; the capsule network module uses the output vectors of single capsule neurons to record the direction and location information of the edge pixels of the decomposed regions of the image-level annotated sample of the image to be segmented, extracts the class probability values using a vector nonlinear activation function, determines the segmentation region and the background of the image-level annotated sample of the image to be segmented, computes the edge loss, and outputs the reconstructed image of the image-level annotated sample of the image to be segmented.
9. The medical image segmentation system based on a generative adversarial network according to claim 7, characterized in that the execution of the self-diffusion algorithm on the reconstructed image and the region localization feature map by the sample generation module specifically comprises: diffusing pixels in the regions of the region localization feature map with larger activation values using a random-walk self-diffusion algorithm, computing, from the input points of the region localization feature map, the Gaussian distance from each pixel of the image to the input points, selecting the optimal path among them, obtaining the dividing lines of the region pixels, and finally generating the segmentation prediction sample.
10. The medical image segmentation system based on a generative adversarial network according to any one of claims 6 to 9, characterized in that the discriminator comprises a cascade (Cascade) module, a Capsule network module and a parameter optimization module:
the cascade module is configured to extract, from the segmentation prediction sample, the mislabeled pixels and the key pixels whose confidence is below a set threshold, together with the corresponding ground truth, and to filter out the pixels that are correctly labeled and whose confidence is above the set threshold;
the Capsule network module is configured to process the extracted key pixels and the corresponding ground truth, and to produce an error;
the parameter optimization module is configured to use the error produced by the Capsule network module to optimize the network parameters of the generator and the discriminator; wherein, for a given segmentation prediction sample {If, Lf*} and the corresponding ground-truth annotated sample {If, Lf}, the global error function of the network is:
In the above formula, θS and θP respectively denote the parameters of the generator and the discriminator, Jb denotes the binary cross-entropy loss function, and Os and Op respectively denote the outputs of the generator and the discriminator; when the input comes from the ground-truth annotated sample {If, Lf} or from the segmentation prediction sample {If, Lf*}, the output 1 or 0 marks the pixel classification as real or fake.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor so that the at least one processor can perform the following operations of the medical image segmentation method based on a generative adversarial network according to any one of claims 1 to 5:
Step a: collect pixel-level annotated samples of other medical images and image-level annotated samples of the medical image to be segmented, respectively;
Step b: train a generative adversarial network based on a capsule network with the pixel-level annotated samples of the other medical images and the image-level annotated samples of the medical image to be segmented, the generative adversarial network comprising a generator and a discriminator;
Step c: the generator performs pixel-level feature extraction on the pixel-level annotated samples of the other medical images, processes the image-level annotated sample of the medical image to be segmented using the pixel-level features, generates a pixel-level annotated sample of the medical image to be segmented, and generates a segmentation prediction sample of the medical image to be segmented based on the pixel-level annotated sample;
Step d: the segmentation prediction sample generated by the generator and the ground-truth annotated sample of the image to be segmented are input together into the discriminator for "generation-adversarial" training; the discriminator judges whether the segmentation prediction sample is real or fake, and the generator and the discriminator are optimized according to the error function, yielding a trained generative adversarial network;
Step e: input the image-level annotated medical image to be segmented into the trained generative adversarial network, and output the pixel-level segmented image of the medical image to be segmented through the generative adversarial network.
CN201910707712.XA 2019-08-01 2019-08-01 Medical image segmentation method and system based on generation countermeasure network and electronic equipment Active CN110503654B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910707712.XA CN110503654B (en) 2019-08-01 2019-08-01 Medical image segmentation method and system based on generation countermeasure network and electronic equipment
PCT/CN2019/125428 WO2021017372A1 (en) 2019-08-01 2019-12-14 Medical image segmentation method and system based on generative adversarial network, and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910707712.XA CN110503654B (en) 2019-08-01 2019-08-01 Medical image segmentation method and system based on generation countermeasure network and electronic equipment

Publications (2)

Publication Number Publication Date
CN110503654A true CN110503654A (en) 2019-11-26
CN110503654B CN110503654B (en) 2022-04-26

Family

ID=68586980

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910707712.XA Active CN110503654B (en) 2019-08-01 2019-08-01 Medical image segmentation method and system based on generation countermeasure network and electronic equipment

Country Status (2)

Country Link
CN (1) CN110503654B (en)
WO (1) WO2021017372A1 (en)

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111160441A (en) * 2019-12-24 2020-05-15 上海联影智能医疗科技有限公司 Classification method, computer device, and storage medium
CN111275686A (en) * 2020-01-20 2020-06-12 中山大学 Method and device for generating medical image data for artificial neural network training
CN111383217A (en) * 2020-03-11 2020-07-07 深圳先进技术研究院 Visualization method, device and medium for evaluation of brain addiction traits
CN111383215A (en) * 2020-03-10 2020-07-07 图玛深维医疗科技(北京)有限公司 Focus detection model training method based on generation of confrontation network
CN111429464A (en) * 2020-03-11 2020-07-17 深圳先进技术研究院 Medical image segmentation method, medical image segmentation device and terminal equipment
CN111436936A (en) * 2020-04-29 2020-07-24 浙江大学 CT image reconstruction method based on MRI
CN111598900A (en) * 2020-05-18 2020-08-28 腾讯科技(深圳)有限公司 Image region segmentation model training method, segmentation method and device
CN111798471A (en) * 2020-07-27 2020-10-20 中科智脑(北京)技术有限公司 Training method of image semantic segmentation network
CN111899251A (en) * 2020-08-06 2020-11-06 中国科学院深圳先进技术研究院 Copy-move type forged image detection method for distinguishing forged source and target area
CN111932555A (en) * 2020-07-31 2020-11-13 商汤集团有限公司 Image processing method and device and computer readable storage medium
CN112150478A (en) * 2020-08-31 2020-12-29 温州医科大学 Method and system for constructing semi-supervised image segmentation framework
WO2021017372A1 (en) * 2019-08-01 2021-02-04 中国科学院深圳先进技术研究院 Medical image segmentation method and system based on generative adversarial network, and electronic equipment
CN112420205A (en) * 2020-12-08 2021-02-26 医惠科技有限公司 Entity recognition model generation method and device and computer readable storage medium
CN112507950A (en) * 2020-12-18 2021-03-16 中国科学院空天信息创新研究院 Method and device for generating confrontation type multi-task multi-element sample automatic labeling
CN112560925A (en) * 2020-12-10 2021-03-26 中国科学院深圳先进技术研究院 Complex scene target detection data set construction method and system
CN112686906A (en) * 2020-12-25 2021-04-20 山东大学 Image segmentation method and system based on uniform distribution migration guidance
CN112749791A (en) * 2021-01-22 2021-05-04 重庆理工大学 Link prediction method based on graph neural network and capsule network
CN112837338A (en) * 2021-01-12 2021-05-25 浙江大学 Semi-supervised medical image segmentation method based on generation countermeasure network
CN112890766A (en) * 2020-12-31 2021-06-04 山东省千佛山医院 Breast cancer auxiliary treatment equipment
WO2021120961A1 (en) * 2019-12-16 2021-06-24 中国科学院深圳先进技术研究院 Brain addiction structure map evaluation method and apparatus
CN113052840A (en) * 2021-04-30 2021-06-29 江苏赛诺格兰医疗科技有限公司 Processing method based on low signal-to-noise ratio PET image
CN113160243A (en) * 2021-03-24 2021-07-23 联想(北京)有限公司 Image segmentation method and electronic equipment
CN113223010A (en) * 2021-04-22 2021-08-06 北京大学口腔医学院 Method and system for fully automatically segmenting multiple tissues of oral cavity image
WO2021179205A1 (en) * 2020-03-11 2021-09-16 深圳先进技术研究院 Medical image segmentation method, medical image segmentation apparatus and terminal device
WO2021184799A1 (en) * 2020-03-19 2021-09-23 中国科学院深圳先进技术研究院 Medical image processing method and apparatus, and device and storage medium
CN113487617A (en) * 2021-07-26 2021-10-08 推想医疗科技股份有限公司 Data processing method, data processing device, electronic equipment and storage medium
CN113850804A (en) * 2021-11-29 2021-12-28 北京鹰瞳科技发展股份有限公司 Retina image generation system and method based on generation countermeasure network
CN114021698A (en) * 2021-10-30 2022-02-08 河南省鼎信信息安全等级测评有限公司 Malicious domain name training sample expansion method and device based on capsule generation countermeasure network
WO2022121213A1 (en) * 2020-12-10 2022-06-16 深圳先进技术研究院 Gan-based contrast-agent-free medical image enhancement modeling method
WO2022205657A1 (en) * 2021-04-02 2022-10-06 中国科学院深圳先进技术研究院 Csm image segmentation method and apparatus, terminal device, and storage medium
CN116168242A (en) * 2023-02-08 2023-05-26 阿里巴巴(中国)有限公司 Pixel-level label generation method, model training method and equipment
WO2023165033A1 (en) * 2022-03-02 2023-09-07 深圳硅基智能科技有限公司 Method for training model for recognizing target in medical image, method for recognizing target in medical image, and device and medium

Families Citing this family (40)

Publication number Priority date Publication date Assignee Title
CN112950569B (en) * 2021-02-25 2023-07-25 平安科技(深圳)有限公司 Melanoma image recognition method, device, computer equipment and storage medium
CN113066094B (en) * 2021-03-09 2024-01-30 中国地质大学(武汉) Geographic grid intelligent local desensitization method based on generation countermeasure network
CN113052369B (en) * 2021-03-15 2024-05-10 北京农业智能装备技术研究中心 Intelligent agricultural machinery operation management method and system
CN113112454B (en) * 2021-03-22 2024-03-19 西北工业大学 Medical image segmentation method based on task dynamic learning part marks
CN112991304A (en) * 2021-03-23 2021-06-18 武汉大学 Molten pool sputtering detection method based on laser directional energy deposition monitoring system
CN113171118B (en) * 2021-04-06 2023-07-14 上海深至信息科技有限公司 Ultrasonic inspection operation guiding method based on generation type countermeasure network
CN113130050B (en) * 2021-04-20 2023-11-24 皖南医学院第一附属医院(皖南医学院弋矶山医院) Medical information display method and display system
CN113239978A (en) * 2021-04-22 2021-08-10 科大讯飞股份有限公司 Method and device for correlating medical image preprocessing model and analysis model
CN113628159A (en) * 2021-06-16 2021-11-09 维库(厦门)信息技术有限公司 Full-automatic training method and device based on deep learning network and storage medium
CN113470046B (en) * 2021-06-16 2024-04-16 浙江工业大学 Graph attention network segmentation method for superpixel gray-texture sampling features of medical images
CN113378472B (en) * 2021-06-23 2022-09-13 合肥工业大学 Mixed-boundary electromagnetic backscattering imaging method based on generative adversarial network
CN113469084B (en) * 2021-07-07 2023-06-30 西安电子科技大学 Hyperspectral image classification method based on contrastive generative adversarial network
CN113553954A (en) * 2021-07-23 2021-10-26 上海商汤智能科技有限公司 Method and apparatus for training behavior recognition model, device, medium, and program product
CN113705371B (en) * 2021-08-10 2023-12-01 武汉理工大学 Water visual scene segmentation method and device
CN113706546B (en) * 2021-08-23 2024-03-19 浙江工业大学 Medical image segmentation method and device based on lightweight twin network
CN113763394B (en) * 2021-08-24 2024-03-29 同济大学 Medical image segmentation control method based on medical risks
CN113935977A (en) * 2021-10-22 2022-01-14 河北工业大学 Solar panel defect generation method based on generative adversarial network
CN113920127B (en) * 2021-10-27 2024-04-23 华南理工大学 Training data set independent single-sample image segmentation method and system
CN114066964B (en) * 2021-11-17 2024-04-05 江南大学 Aquatic product real-time size detection method based on deep learning
CN114240950B (en) * 2021-11-23 2023-04-07 电子科技大学 Brain tumor image generation and segmentation method based on deep neural network
CN114140368B (en) * 2021-12-03 2024-04-23 天津大学 Multi-modal medical image synthesis method based on generative adversarial network
CN116569216A (en) 2021-12-03 2023-08-08 宁德时代新能源科技股份有限公司 Method and system for generating image samples containing specific features
CN114331875A (en) * 2021-12-09 2022-04-12 上海大学 Image bleeding position prediction method in printing process based on adversarial edge learning
CN114186735B (en) * 2021-12-10 2023-10-20 沭阳鸿行照明有限公司 Fire emergency lighting lamp layout optimization method based on artificial intelligence
CN114494322B (en) * 2022-02-11 2024-03-01 合肥工业大学 Multi-mode image segmentation method based on image fusion technology
CN114549554B (en) * 2022-02-22 2024-05-14 山东融瓴科技集团有限公司 Air pollution source segmentation method based on style invariance
CN114897782B (en) * 2022-04-13 2024-04-23 华南理工大学 Gastric cancer pathological slice image segmentation prediction method based on generative adversarial network
CN114821229B (en) * 2022-04-14 2023-07-28 江苏集萃清联智控科技有限公司 Underwater acoustic data set augmentation method and system based on conditional generative adversarial network
CN114882047A (en) * 2022-04-19 2022-08-09 厦门大学 Medical image segmentation method and system based on semi-supervised learning and Transformers
CN114549842B (en) * 2022-04-22 2022-08-02 山东建筑大学 Self-adaptive semi-supervised image segmentation method and system based on uncertain knowledge domain
CN114677515B (en) * 2022-04-25 2023-05-26 电子科技大学 Weakly supervised semantic segmentation method based on inter-class similarity
CN114818734B (en) * 2022-05-25 2023-10-31 清华大学 Method and device for analyzing adversarial scene semantics based on target-attribute-relation
CN115439846B (en) * 2022-08-09 2023-04-25 北京邮电大学 Image segmentation method and device, electronic equipment and medium
CN115272136B (en) * 2022-09-27 2023-05-05 广州卓腾科技有限公司 Certificate photo glasses reflection eliminating method, device, medium and equipment based on big data
CN115546239B (en) * 2022-11-30 2023-04-07 珠海横琴圣澳云智科技有限公司 Target segmentation method and device based on boundary attention and distance transformation
CN115880440B (en) * 2023-01-31 2023-04-28 中国科学院自动化研究所 Magnetic particle three-dimensional reconstruction imaging method based on generative adversarial network
CN117094986B (en) * 2023-10-13 2024-04-05 中山大学深圳研究院 Self-adaptive defect detection method based on small sample and terminal equipment
CN117093548B (en) * 2023-10-20 2024-01-26 公诚管理咨询有限公司 Bidding management auditing system
CN117152138B (en) * 2023-10-30 2024-01-16 陕西惠宾电子科技有限公司 Medical image tumor target detection method based on unsupervised learning
CN117523318B (en) * 2023-12-26 2024-04-16 宁波微科光电股份有限公司 Anti-light interference subway shielding door foreign matter detection method, device and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108961217A (en) * 2018-06-08 2018-12-07 南京大学 Surface defect detection method based on positive-sample training
CN109063724A (en) * 2018-06-12 2018-12-21 中国科学院深圳先进技术研究院 Enhanced generative adversarial network and target sample recognition method
CN109344833A (en) * 2018-09-04 2019-02-15 中国科学院深圳先进技术研究院 Medical image segmentation method, segmentation system and computer-readable storage medium
US20190080206A1 (en) * 2017-09-08 2019-03-14 Ford Global Technologies, Llc Refining Synthetic Data With A Generative Adversarial Network Using Auxiliary Inputs
CN109584337A (en) * 2018-11-09 2019-04-05 暨南大学 Image generation method based on conditional capsule generative adversarial network
WO2019118613A1 (en) * 2017-12-12 2019-06-20 Oncoustics Inc. Machine learning to extract quantitative biomarkers from ultrasound rf spectrums

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10643320B2 (en) * 2017-11-15 2020-05-05 Toyota Research Institute, Inc. Adversarial learning of photorealistic post-processing of simulation with privileged information
CN108062753B (en) * 2017-12-29 2020-04-17 重庆理工大学 Unsupervised domain-adaptive brain tumor semantic segmentation method based on deep adversarial learning
CN108198179A (en) * 2018-01-03 2018-06-22 华南理工大学 Pulmonary nodule detection method for CT medical images based on an improved generative adversarial network
CN108932484A (en) * 2018-06-20 2018-12-04 华南理工大学 Facial expression recognition method based on Capsule Net
CN109242849A (en) * 2018-09-26 2019-01-18 上海联影智能医疗科技有限公司 Medical image processing method, device, system and storage medium
CN110503654B (en) * 2019-08-01 2022-04-26 中国科学院深圳先进技术研究院 Medical image segmentation method and system based on generative adversarial network, and electronic equipment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190080206A1 (en) * 2017-09-08 2019-03-14 Ford Global Technologies, Llc Refining Synthetic Data With A Generative Adversarial Network Using Auxiliary Inputs
WO2019118613A1 (en) * 2017-12-12 2019-06-20 Oncoustics Inc. Machine learning to extract quantitative biomarkers from ultrasound rf spectrums
CN108961217A (en) * 2018-06-08 2018-12-07 南京大学 Surface defect detection method based on positive-sample training
CN109063724A (en) * 2018-06-12 2018-12-21 中国科学院深圳先进技术研究院 Enhanced generative adversarial network and target sample recognition method
CN109344833A (en) * 2018-09-04 2019-02-15 中国科学院深圳先进技术研究院 Medical image segmentation method, segmentation system and computer-readable storage medium
CN109584337A (en) * 2018-11-09 2019-04-05 暨南大学 Image generation method based on conditional capsule generative adversarial network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A. Odena: "Semi-supervised learning with generative adversarial networks", arXiv *
Fei Yang et al.: "Capsule Based Image Translation Network", IET Doctoral Forum on Biomedical Engineering, Healthcare, Robotics and Artificial Intelligence 2018 (BRAIN 2018) *
Chen Kun et al.: "Applications of generative adversarial networks in medical image processing", Life Science Instruments *

Cited By (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021017372A1 (en) * 2019-08-01 2021-02-04 中国科学院深圳先进技术研究院 Medical image segmentation method and system based on generative adversarial network, and electronic equipment
WO2021120961A1 (en) * 2019-12-16 2021-06-24 中国科学院深圳先进技术研究院 Brain addiction structure map evaluation method and apparatus
CN111160441A (en) * 2019-12-24 2020-05-15 上海联影智能医疗科技有限公司 Classification method, computer device, and storage medium
CN111160441B (en) * 2019-12-24 2024-03-26 上海联影智能医疗科技有限公司 Classification method, computer device, and storage medium
CN111275686A (en) * 2020-01-20 2020-06-12 中山大学 Method and device for generating medical image data for artificial neural network training
CN111275686B (en) * 2020-01-20 2023-05-26 中山大学 Method and device for generating medical image data for artificial neural network training
CN111383215A (en) * 2020-03-10 2020-07-07 图玛深维医疗科技(北京)有限公司 Lesion detection model training method based on generative adversarial network
WO2021179205A1 (en) * 2020-03-11 2021-09-16 深圳先进技术研究院 Medical image segmentation method, medical image segmentation apparatus and terminal device
CN111429464A (en) * 2020-03-11 2020-07-17 深圳先进技术研究院 Medical image segmentation method, medical image segmentation device and terminal equipment
CN111383217B (en) * 2020-03-11 2023-08-29 深圳先进技术研究院 Visualization method, device and medium for evaluation of brain addiction traits
CN111383217A (en) * 2020-03-11 2020-07-07 深圳先进技术研究院 Visualization method, device and medium for evaluation of brain addiction traits
WO2021184799A1 (en) * 2020-03-19 2021-09-23 中国科学院深圳先进技术研究院 Medical image processing method and apparatus, and device and storage medium
CN111436936A (en) * 2020-04-29 2020-07-24 浙江大学 CT image reconstruction method based on MRI
CN111598900A (en) * 2020-05-18 2020-08-28 腾讯科技(深圳)有限公司 Image region segmentation model training method, segmentation method and device
CN111798471A (en) * 2020-07-27 2020-10-20 中科智脑(北京)技术有限公司 Training method of image semantic segmentation network
CN111798471B (en) * 2020-07-27 2024-04-02 中科智脑(北京)技术有限公司 Training method of image semantic segmentation network
US11663293B2 (en) * 2020-07-31 2023-05-30 Sensetime Group Limited Image processing method and device, and computer-readable storage medium
US20220036124A1 (en) * 2020-07-31 2022-02-03 Sensetime Group Limited Image processing method and device, and computer-readable storage medium
CN111932555A (en) * 2020-07-31 2020-11-13 商汤集团有限公司 Image processing method and device and computer readable storage medium
CN111899251A (en) * 2020-08-06 2020-11-06 中国科学院深圳先进技术研究院 Copy-move type forged image detection method for distinguishing forged source and target area
CN112150478B (en) * 2020-08-31 2021-06-22 温州医科大学 Method and system for constructing semi-supervised image segmentation framework
CN112150478A (en) * 2020-08-31 2020-12-29 温州医科大学 Method and system for constructing semi-supervised image segmentation framework
CN112420205A (en) * 2020-12-08 2021-02-26 医惠科技有限公司 Entity recognition model generation method and device and computer readable storage medium
WO2022121213A1 (en) * 2020-12-10 2022-06-16 深圳先进技术研究院 Gan-based contrast-agent-free medical image enhancement modeling method
CN112560925A (en) * 2020-12-10 2021-03-26 中国科学院深圳先进技术研究院 Complex scene target detection data set construction method and system
CN112507950A (en) * 2020-12-18 2021-03-16 中国科学院空天信息创新研究院 Method and device for generating confrontation type multi-task multi-element sample automatic labeling
CN112686906A (en) * 2020-12-25 2021-04-20 山东大学 Image segmentation method and system based on uniform distribution migration guidance
CN112686906B (en) * 2020-12-25 2022-06-14 山东大学 Image segmentation method and system based on uniform distribution migration guidance
CN112890766A (en) * 2020-12-31 2021-06-04 山东省千佛山医院 Breast cancer auxiliary treatment equipment
CN112837338B (en) * 2021-01-12 2022-06-21 浙江大学 Semi-supervised medical image segmentation method based on generative adversarial network
CN112837338A (en) * 2021-01-12 2021-05-25 浙江大学 Semi-supervised medical image segmentation method based on generative adversarial network
CN112749791A (en) * 2021-01-22 2021-05-04 重庆理工大学 Link prediction method based on graph neural network and capsule network
CN113160243A (en) * 2021-03-24 2021-07-23 联想(北京)有限公司 Image segmentation method and electronic equipment
WO2022205657A1 (en) * 2021-04-02 2022-10-06 中国科学院深圳先进技术研究院 Csm image segmentation method and apparatus, terminal device, and storage medium
CN113223010B (en) * 2021-04-22 2024-02-27 北京大学口腔医学院 Method and system for multi-tissue full-automatic segmentation of oral cavity image
CN113223010A (en) * 2021-04-22 2021-08-06 北京大学口腔医学院 Method and system for fully automatically segmenting multiple tissues of oral cavity image
CN113052840A (en) * 2021-04-30 2021-06-29 江苏赛诺格兰医疗科技有限公司 Processing method based on low signal-to-noise ratio PET image
CN113052840B (en) * 2021-04-30 2024-02-02 江苏赛诺格兰医疗科技有限公司 Processing method based on low signal-to-noise ratio PET image
CN113487617A (en) * 2021-07-26 2021-10-08 推想医疗科技股份有限公司 Data processing method, data processing device, electronic equipment and storage medium
CN114021698A (en) * 2021-10-30 2022-02-08 河南省鼎信信息安全等级测评有限公司 Malicious domain name training sample expansion method and device based on capsule generative adversarial network
CN113850804A (en) * 2021-11-29 2021-12-28 北京鹰瞳科技发展股份有限公司 Retina image generation system and method based on generative adversarial network
CN113850804B (en) * 2021-11-29 2022-03-18 北京鹰瞳科技发展股份有限公司 Retina image generation system and method based on generative adversarial network
WO2023165033A1 (en) * 2022-03-02 2023-09-07 深圳硅基智能科技有限公司 Method for training model for recognizing target in medical image, method for recognizing target in medical image, and device and medium
CN116168242B (en) * 2023-02-08 2023-12-01 阿里巴巴(中国)有限公司 Pixel-level label generation method, model training method and equipment
CN116168242A (en) * 2023-02-08 2023-05-26 阿里巴巴(中国)有限公司 Pixel-level label generation method, model training method and equipment

Also Published As

Publication number Publication date
CN110503654B (en) 2022-04-26
WO2021017372A1 (en) 2021-02-04

Similar Documents

Publication Publication Date Title
CN110503654A (en) Medical image segmentation method, system and electronic equipment based on generative adversarial network
Zhang et al. ME‐Net: multi‐encoder net framework for brain tumor segmentation
Jiang et al. Ahcnet: An application of attention mechanism and hybrid connection for liver tumor segmentation in ct volumes
Liao et al. Evaluate the malignancy of pulmonary nodules using the 3-d deep leaky noisy-or network
Al-Antari et al. Deep learning computer-aided diagnosis for breast lesion in digital mammogram
Kitrungrotsakul et al. VesselNet: A deep convolutional neural network with multi pathways for robust hepatic vessel segmentation
Ni et al. GC-Net: Global context network for medical image segmentation
Qureshi et al. Medical image segmentation using deep semantic-based methods: A review of techniques, applications and emerging trends
CN109215033A (en) Method and system for image segmentation
CN109754361A (en) 3D anisotropic hybrid network: transferring convolutional features from 2D images to 3D anisotropic volumes
CN109583440A (en) Medical image aided diagnosis method and system combining image recognition and report editing
Perre et al. Lesion classification in mammograms using convolutional neural networks and transfer learning
Liu et al. A semi-supervised convolutional transfer neural network for 3D pulmonary nodules detection
Gonçalves et al. Carcass image segmentation using CNN-based methods
Jia et al. Two-branch network for brain tumor segmentation using attention mechanism and super-resolution reconstruction
Liang et al. Residual convolutional neural networks with global and local pathways for classification of focal liver lesions
Gao et al. Multi-label deep regression and unordered pooling for holistic interstitial lung disease pattern detection
Pradhan et al. Lung cancer detection using 3D convolutional neural networks
Singh et al. A study on convolution neural network for breast cancer detection
Özcan et al. Fully automatic liver and tumor segmentation from CT image using an AIM-Unet
Ott et al. Detecting pulmonary Coccidioidomycosis with deep convolutional neural networks
Valizadeh et al. The Progress of Medical Image Semantic Segmentation Methods for Application in COVID-19 Detection
Wang et al. Multi-view fusion segmentation for brain glioma on CT images
Chatterjee et al. A survey on techniques used in medical imaging processing
Wang et al. Deep learning based nodule detection from pulmonary CT images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant