CN107169527A - Medical image classification method based on collaborative deep learning - Google Patents

Medical image classification method based on collaborative deep learning

Info

Publication number
CN107169527A
CN107169527A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710417724.XA
Other languages
Chinese (zh)
Other versions
CN107169527B (en)
Inventor
夏勇
张建鹏
谢雨彤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201710417724.XA priority Critical patent/CN107169527B/en
Publication of CN107169527A publication Critical patent/CN107169527A/en
Application granted granted Critical
Publication of CN107169527B publication Critical patent/CN107169527B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images

Abstract

The invention discloses a medical image classification method based on collaborative deep learning, solving the technical problem of the poor classification accuracy of existing medical image classification methods. The technical scheme applies collaborative learning between two deep convolutional neural networks, trained in a pairwise fashion: each model receives an image pair as input, and the two images are fed into the corresponding deep convolutional neural networks respectively. These deep convolutional networks are initialized and trained by fine-tuning pre-trained models, and a collaborative learning system is designed so that the two deep networks assist each other's learning. The collaborative system supervises the same/different attribute of each image pair, i.e., whether the two images belong to the same class, and back-propagates in real time the collaboration error produced by the two deep convolutional networks to correct the network weights, further strengthening the networks' feature representation ability so that easily confused samples can be discriminated more effectively and accurately.

Description

Medical image classification method based on collaborative deep learning
Technical field
The present invention relates to medical image classification methods, and in particular to a medical image classification method based on collaborative deep learning.
Background art
Medical image classification plays an extremely important role in medical retrieval, literature review and medical research, and has long been a hot research problem in computer-aided diagnosis and medical research. Over decades of study, researchers have formed a complete set of traditional image classification techniques, whose key elements are hand-crafted feature extraction and classifier design. Despite this well-developed theoretical system, traditional image classification methods find it difficult to achieve a seamless combination of optimal features and an optimal classifier, which greatly limits their performance. In recent years, the emergence of deep learning has brought new breakthroughs to image classification; its self-learned features and end-to-end paradigm provide very powerful image representation ability. Convolutional neural network models have been successfully applied to medical image classification and have achieved huge improvements over traditional image classification methods. However, unlike natural scene image classification, which enjoys massive data, medical images generally require annotation by professional domain experts, which is very expensive; labeled data are therefore very scarce in the medical domain. In addition, the significant intra-class variance and inter-class similarity in medical image classification greatly complicate the problem: class judgments must be made according to the imaging modality and the imaged body part, and differences in anatomical structure and position within the images easily cause a model to become seriously confused during classification.
The document "Kumar A, Kim J, Lyndon D, et al. An Ensemble of Fine-Tuned Convolutional Neural Networks for Medical Image Classification [J]. IEEE Journal of Biomedical & Health Informatics, 2016, PP(99):1-1." discloses a medical image classification method based on an ensemble of multiple pre-trained networks. The method trains multiple convolutional neural networks on the large-scale image classification database ImageNet, then fine-tunes these pre-trained networks on small-sample medical image data so that the network parameters adapt to the medical image classification task. Extensive experiments have proven that deep neural networks have very strong feature transfer ability, which solves, to some extent, the small-sample learning problem in medical image classification. The decision probabilities of these pre-trained networks are then averaged to obtain the final class probabilities, and this ensembling can further lift the model's classification performance. The described method obtains its final classification result by averaging the prediction probabilities output by multiple pre-trained models; these networks are mutually independent during training and prediction, so for samples that are hard to separate, simple ensembling cannot improve the final result. The method therefore cannot well solve the problem of intra-class variance and inter-class similarity in medical image classification.
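For contrast with the collaborative scheme introduced below, a minimal sketch of this prior-art ensemble averaging (the function name and array layout are illustrative assumptions, not from the cited paper):

```python
import numpy as np

def ensemble_predict(probs_per_model: np.ndarray) -> int:
    """Prior-art ensembling: the fine-tuned networks are mutually independent,
    and the final class probability is simply the mean of their decision
    probabilities. probs_per_model has shape (n_models, n_classes)."""
    return int(np.argmax(probs_per_model.mean(axis=0)))
```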
Summary of the invention
To overcome the poor classification accuracy of existing medical image classification methods, the present invention provides a medical image classification method based on collaborative deep learning. The method applies collaborative learning between two deep convolutional neural networks, trained in a pairwise fashion: each model receives an image pair as input, and the two images are fed into the corresponding deep convolutional neural networks respectively. Because data volumes in medical image classification are small, these deep convolutional networks are initialized and trained by fine-tuning pre-trained models. To strengthen the networks' feature learning ability, a collaborative learning system is designed that makes the two deep networks assist each other's learning. The collaborative system supervises the same/different attribute of each image pair, i.e., whether the two images belong to the same class, and back-propagates in real time the collaboration error produced by the two deep convolutional networks to correct the network weights, further strengthening the networks' feature representation ability so that easily confused samples can be discriminated more effectively and accurately.
The technical solution adopted by the present invention to solve the technical problem is a medical image classification method based on collaborative deep learning, characterized by comprising the following steps:
Step 1: Initialize the parameters θ^A and θ^B of the two convolutional neural networks with the parameters of a pre-trained residual deep convolutional neural network, initialize the collaborative learning system parameters θ^{VS}, and initialize the learning rate η(t) and the hyperparameter λ.
Step 2: Train the model in a pairwise fashion. Each time an image pair is input, the two pre-trained deep neural networks each produce a deep feature at the penultimate fully connected layer, denoted x_A^T and x_B^T; concatenating these two deep features yields a combined feature, denoted (x_A^T, x_B^T). The three supervision signals of the model are y_A, y_B and y_{VS}.
Step 3: Compute the loss values l^A(θ^A), l^B(θ^B) and l^{VS}(θ^{VS}) produced by the two pre-trained convolutional networks and the collaborative learning system respectively:

l^A(\theta^A) = \frac{1}{M}\sum_{i=1}^{M}\left[\log\left(\sum_{j=1}^{K} e^{\theta_j^{A\top} x_A^{(i)}}\right) - \theta_{y_A^{(i)}}^{A\top} x_A^{(i)}\right]   (1)

l^B(\theta^B) = \frac{1}{M}\sum_{i=1}^{M}\left[\log\left(\sum_{j=1}^{K} e^{\theta_j^{B\top} x_B^{(i)}}\right) - \theta_{y_B^{(i)}}^{B\top} x_B^{(i)}\right]   (2)

l^{VS}(\theta^{VS}) = \frac{1}{M}\sum_{i=1}^{M}\left[\log\left(\sum_{j=1}^{K'} e^{\theta_j^{VS\top}(x_A^{(i)},\,x_B^{(i)})}\right) - \theta_{y_{VS}^{(i)}}^{VS\top}(x_A^{(i)},\,x_B^{(i)})\right]   (3)

where M is the number of training samples, K is the number of classes, and K' is 2.
Step 4: Compute the gradients:

\frac{\partial l^A(\theta^A)}{\partial \theta_k^A} = \frac{1}{M}\sum_{i=1}^{M}\left\{x_A^{(i)}\left[\frac{e^{\theta_k^{A\top} x_A^{(i)}}}{\sum_{j=1}^{K} e^{\theta_j^{A\top} x_A^{(i)}}} - \delta_{k,\,y_A^{(i)}}\right]\right\}   (4)

\frac{\partial l^B(\theta^B)}{\partial \theta_k^B} = \frac{1}{M}\sum_{i=1}^{M}\left\{x_B^{(i)}\left[\frac{e^{\theta_k^{B\top} x_B^{(i)}}}{\sum_{j=1}^{K} e^{\theta_j^{B\top} x_B^{(i)}}} - \delta_{k,\,y_B^{(i)}}\right]\right\}   (5)

\frac{\partial l^{VS}(\theta^{VS})}{\partial \theta_{k'}^{VS}} = \frac{1}{M}\sum_{i=1}^{M}\left\{(x_A^{(i)},\,x_B^{(i)})\left[\frac{e^{\theta_{k'}^{VS\top}(x_A^{(i)},\,x_B^{(i)})}}{\sum_{j=1}^{K'} e^{\theta_j^{VS\top}(x_A^{(i)},\,x_B^{(i)})}} - \delta_{k',\,y_{VS}^{(i)}}\right]\right\}   (6)

\Delta^A = \frac{\partial l^A(\theta^A)}{\partial \theta_k^A} + \lambda\,\frac{\partial l^{VS}(\theta^{VS})}{\partial \theta_{k'}^{VS}},\qquad \Delta^B = \frac{\partial l^B(\theta^B)}{\partial \theta_k^B} + \lambda\,\frac{\partial l^{VS}(\theta^{VS})}{\partial \theta_{k'}^{VS}}   (7)

where λ is the weight factor of the synergic signal. Finally, update the model parameters:

θ^A = θ^A - η(t)·Δ^A,  θ^B = θ^B - η(t)·Δ^B
The beneficial effects of the invention are as follows: the method applies collaborative learning between two deep convolutional neural networks, trained in a pairwise fashion; each model receives an image pair as input, and the two images are fed into the corresponding deep convolutional neural networks respectively. Because data volumes in medical image classification are small, these deep convolutional networks are initialized and trained by fine-tuning pre-trained models. To strengthen the networks' feature learning ability, a collaborative learning system is designed that makes the two deep networks assist each other's learning. The collaborative system supervises the same/different attribute of each image pair, i.e., whether the two images belong to the same class, and back-propagates in real time the collaboration error produced by the two deep convolutional networks to correct the network weights, further strengthening the networks' feature representation ability so that easily confused samples can be discriminated more effectively and accurately.
Thanks to the collaborative learning mechanism, the feature learning ability of the deep convolutional neural networks is strengthened, overcoming the poor performance of deep-learning-based classification algorithms caused by the significant intra-class variance and inter-class similarity present in medical images. The strategy of mutual learning and joint improvement between the pre-trained networks gives the model good discrimination of easily misclassified samples and substantially increases the accuracy of medical image classification.
The present invention is described in detail below with reference to an embodiment.
Embodiment
The medical image classification method based on collaborative deep learning of the present invention comprises the following steps:
1. Image pair input.
The model uses an image-pair input mode: two images randomly sampled from the training set are separately fed into the two corresponding convolutional neural networks for training. Each image pair carries three supervision signals: the class labels of the two images, and whether the pair belongs to the same class. These three signals jointly supervise the training of the model.
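A minimal sketch of this image-pair input mode (the dataset layout and helper name are illustrative assumptions; the same-class flag y_vs is the third supervision signal described above):

```python
import random

def sample_image_pair(dataset):
    """Randomly draw two training images; each pair carries three supervision
    signals: the two class labels and whether the pair shares one class."""
    (img_a, y_a), (img_b, y_b) = random.sample(dataset, 2)
    y_vs = 1 if y_a == y_b else 0  # 1: positive pair (same class), 0: negative pair
    return (img_a, y_a), (img_b, y_b), y_vs
```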
2. Training the dual deep convolutional neural networks.
The dual deep convolutional neural network module is the basic component of this algorithm; it contains two complete convolutional neural networks A and B with independent parameters. In principle, a convolutional neural network of any architecture can serve as the deep feature extractor of this module. However, considering the limited amount of medical image data and the powerful image representation ability of residual networks, the technique of fine-tuning a pre-trained network is adopted, and a residual network is used as the initial model of both convolutional networks. The pre-trained network contains 50 learnable layers whose parameters were all learned on the large-scale ImageNet image dataset. The learnable parameters of convolutional neural networks A and B are denoted θ^A and θ^B respectively; θ^A and θ^B are not shared. So that the pre-trained residual network parameters can adapt to and fit the medical image classification data, for a K-class problem all fully connected layers of the original residual network are removed and replaced with a fully connected layer of 1024 neurons followed by a fully connected layer of K neurons. The parameters of these newly added layers are initialized from the uniform distribution U(-0.05, 0.05), and the loss function of each deep network is set to the cross-entropy function

l(\theta) = \frac{1}{M}\sum_{i=1}^{M}\left[\log\left(\sum_{j=1}^{K} e^{\theta_j^{\top} x^{(i)}}\right) - \theta_{y^{(i)}}^{\top} x^{(i)}\right]   (1)
Here M is the sample size of the whole training set, and the parameters θ of the convolutional neural networks are optimized with the stochastic gradient descent algorithm. The two deep convolutional neural networks each receive one image of the input pair, and each has its own ground-truth label to supervise its classification learning.
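A minimal PyTorch-style sketch of this construction, assuming torchvision's ResNet-50 as the pre-trained 50-layer residual network; the 1024-neuron layer, K-neuron output layer and U(-0.05, 0.05) initialization follow the description, while the ReLU between the two new layers is an added assumption:

```python
import torch.nn as nn
from torchvision import models

def build_branch(num_classes: int) -> nn.Module:
    """One branch (network A or B): a pre-trained ResNet-50 whose original
    fully connected layer is replaced by FC(1024) -> FC(num_classes)."""
    net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    in_features = net.fc.in_features  # 2048 for ResNet-50
    net.fc = nn.Sequential(
        nn.Linear(in_features, 1024),
        nn.ReLU(inplace=True),  # assumption: nonlinearity between the new layers
        nn.Linear(1024, num_classes),
    )
    for m in net.fc.modules():  # new layers initialized from U(-0.05, 0.05)
        if isinstance(m, nn.Linear):
            nn.init.uniform_(m.weight, -0.05, 0.05)
            nn.init.uniform_(m.bias, -0.05, 0.05)
    return net

# Two complete branches with independent (non-shared) parameters theta_A, theta_B.
net_a, net_b = build_branch(num_classes=5), build_branch(num_classes=5)
```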
3. Collaborative learning system.
So that the two neural networks learn from each other and jointly improve their feature representation ability, a collaborative learning system is specially designed. The system supervises whether the input image pair comes from the same class; its supervision signal is the same/different attribute of the image pair. Image pairs are selected at random from the training data, and the attribute value is defined as

y_{VS} = \begin{cases} 1, & y_A = y_B \\ 0, & y_A \neq y_B \end{cases}
Here x_A and x_B are the deep feature representations of the image pair learned by convolutional neural networks A and B, and they form the input of the collaborative system. y_A and y_B are the true labels of the input image pair; y_{VS} = 1 denotes a positive image pair and y_{VS} = 0 a negative image pair. The network parameters are learned with the gradient descent algorithm, drawing image pairs from each batch of image data. To avoid an imbalance between positive and negative pairs, the proportion of positive pairs is artificially kept within 45%-55%. x_A and x_B are joined in a concatenation layer, which is followed by a fully connected layer with 2 neurons. For the three supervision signals to work properly, an additional softmax layer is added, and the following cross-entropy loss function supervises the synergic signal:

l^{VS}(\theta^{VS}) = \frac{1}{M}\sum_{i=1}^{M}\left[\log\left(\sum_{j=1}^{K'} e^{\theta_j^{VS\top}(x_A^{(i)},\,x_B^{(i)})}\right) - \theta_{y_{VS}^{(i)}}^{VS\top}(x_A^{(i)},\,x_B^{(i)})\right]   (3)
Here θ^{VS} is the network parameter of the collaborative learning system.
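A sketch of the collaborative head under the same assumptions (the concatenation layer and 2-neuron fully connected layer follow the description; nn.CrossEntropyLoss plays the role of the added softmax layer plus the cross-entropy of equation (3)):

```python
import torch
import torch.nn as nn

class SynergicHead(nn.Module):
    """Collaborative learning system: concatenate the two branch features and
    predict whether the pair is positive (same class) or negative."""
    def __init__(self, feature_dim: int = 1024):
        super().__init__()
        self.fc = nn.Linear(2 * feature_dim, 2)  # concatenation -> FC with 2 neurons

    def forward(self, x_a: torch.Tensor, x_b: torch.Tensor) -> torch.Tensor:
        return self.fc(torch.cat([x_a, x_b], dim=1))

head = SynergicHead()
criterion_vs = nn.CrossEntropyLoss()  # softmax + cross-entropy of equation (3)
```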
4. Training process.
1. Initialize the network parameters θ^A, θ^B and θ^{VS} of the two convolutional neural networks and the collaborative learning system, and set the learning rate η(t) and the hyperparameter λ.
2. Input an image pair; the two deep convolutional neural networks produce the deep feature representations x_A and x_B of the two input images.
3. Concatenate the image pair's feature representations from the two convolutional neural networks, denoted (x_A^T, x_B^T); the three supervision signals are y_A, y_B and y_{VS}.
4. Compute the loss values l^A(θ^A), l^B(θ^B) and l^{VS}(θ^{VS}) produced by the two deep convolutional neural networks and the collaborative learning system according to formulas (1) and (3).
5. Compute the gradients of the losses with respect to the network parameters and form the synergic updates:

\Delta^A = \frac{\partial l^A(\theta^A)}{\partial \theta_k^A} + \lambda\,\frac{\partial l^{VS}(\theta^{VS})}{\partial \theta_{k'}^{VS}},\qquad \Delta^B = \frac{\partial l^B(\theta^B)}{\partial \theta_k^B} + \lambda\,\frac{\partial l^{VS}(\theta^{VS})}{\partial \theta_{k'}^{VS}}

Here λ is the weight factor of the synergic signal.
6. Update the model parameters: θ^A = θ^A - η(t)·Δ^A,  θ^B = θ^B - η(t)·Δ^B.
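Putting the pieces together, a sketch of one training update under the assumptions above; autograd stands in for the hand-derived gradients (4)-(7), lambda_ is the synergic weight factor λ, and each branch is assumed to return both its logits and its penultimate 1024-dimensional feature:

```python
import torch
import torch.nn as nn

def train_step(net_a, net_b, head, optimizer, batch, lambda_=1.0):
    """One pairwise update: branch losses l^A and l^B plus the lambda-weighted
    synergic loss l^VS are back-propagated through both networks and the head."""
    (img_a, y_a), (img_b, y_b), y_vs = batch  # tensors from the pair sampler
    criterion = nn.CrossEntropyLoss()
    logits_a, feat_a = net_a(img_a)  # assumes the branch returns (logits, feature)
    logits_b, feat_b = net_b(img_b)
    logits_vs = head(feat_a, feat_b)
    loss = (criterion(logits_a, y_a) + criterion(logits_b, y_b)
            + lambda_ * criterion(logits_vs, y_vs))
    optimizer.zero_grad()
    loss.backward()  # the synergic error is back-propagated in real time
    optimizer.step()
    return loss.item()
```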
5. Test process.
At test time, for a test image x, deep convolutional neural networks A and B each produce a prediction, P_A(x) and P_B(x), namely the activation values of their last fully connected layers. The additional collaborative learning system is dropped from the final classification prediction; the predicted label of the input image x is

\hat{y} = \arg\max_j \frac{1}{2}\left(P_A^{(j)}(x) + P_B^{(j)}(x)\right)

i.e., the class whose averaged activation over the two networks is largest.
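A corresponding inference sketch, assuming as in the training sketch that each branch returns (logits, feature); the collaborative head is discarded and the label is the class with the largest averaged final-layer activation:

```python
import torch

@torch.no_grad()
def predict(net_a, net_b, image):
    """Test-time prediction: average the two branches' last fully connected
    layer activations and take the highest-scoring class."""
    logits_a, _ = net_a(image)
    logits_b, _ = net_b(image)
    return torch.argmax((logits_a + logits_b) / 2, dim=1)
```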

Claims (1)

1. A medical image classification method based on collaborative deep learning, characterized by comprising the following steps:
Step 1: Initialize the parameters θ^A and θ^B of the two convolutional neural networks with the parameters of a pre-trained residual deep convolutional neural network, initialize the collaborative learning system parameters θ^{VS}, and initialize the learning rate η(t) and the hyperparameter λ;
Step 2: Train the model in a pairwise fashion; each time an image pair is input, the two pre-trained deep neural networks each produce a deep feature at the penultimate fully connected layer, denoted x_A^T and x_B^T; concatenating these two deep features yields a combined feature, denoted (x_A^T, x_B^T); the three supervision signals of the model are y_A, y_B and y_{VS};
Step 3: Compute the loss values l^A(θ^A), l^B(θ^B) and l^{VS}(θ^{VS}) produced by the two pre-trained convolutional networks and the collaborative learning system respectively;
l^A(\theta^A) = \frac{1}{M}\sum_{i=1}^{M}\left[\log\left(\sum_{j=1}^{K} e^{\theta_j^{A\top} x_A^{(i)}}\right) - \theta_{y_A^{(i)}}^{A\top} x_A^{(i)}\right]   (1)

l^B(\theta^B) = \frac{1}{M}\sum_{i=1}^{M}\left[\log\left(\sum_{j=1}^{K} e^{\theta_j^{B\top} x_B^{(i)}}\right) - \theta_{y_B^{(i)}}^{B\top} x_B^{(i)}\right]   (2)

l^{VS}(\theta^{VS}) = \frac{1}{M}\sum_{i=1}^{M}\left[\log\left(\sum_{j=1}^{K'} e^{\theta_j^{VS\top}(x_A^{(i)},\,x_B^{(i)})}\right) - \theta_{y_{VS}^{(i)}}^{VS\top}(x_A^{(i)},\,x_B^{(i)})\right]   (3)
where M is the number of training samples, K is the number of classes, and K' is 2;
Step 4: Compute the gradients:
\frac{\partial l^A(\theta^A)}{\partial \theta_k^A} = \frac{1}{M}\sum_{i=1}^{M}\left\{x_A^{(i)}\left[\frac{e^{\theta_k^{A\top} x_A^{(i)}}}{\sum_{j=1}^{K} e^{\theta_j^{A\top} x_A^{(i)}}} - \delta_{k,\,y_A^{(i)}}\right]\right\}   (4)

\frac{\partial l^B(\theta^B)}{\partial \theta_k^B} = \frac{1}{M}\sum_{i=1}^{M}\left\{x_B^{(i)}\left[\frac{e^{\theta_k^{B\top} x_B^{(i)}}}{\sum_{j=1}^{K} e^{\theta_j^{B\top} x_B^{(i)}}} - \delta_{k,\,y_B^{(i)}}\right]\right\}   (5)

\frac{\partial l^{VS}(\theta^{VS})}{\partial \theta_{k'}^{VS}} = \frac{1}{M}\sum_{i=1}^{M}\left\{(x_A^{(i)},\,x_B^{(i)})\left[\frac{e^{\theta_{k'}^{VS\top}(x_A^{(i)},\,x_B^{(i)})}}{\sum_{j=1}^{K'} e^{\theta_j^{VS\top}(x_A^{(i)},\,x_B^{(i)})}} - \delta_{k',\,y_{VS}^{(i)}}\right]\right\}   (6)

\Delta^A = \frac{\partial l^A(\theta^A)}{\partial \theta_k^A} + \lambda\,\frac{\partial l^{VS}(\theta^{VS})}{\partial \theta_{k'}^{VS}},\qquad \Delta^B = \frac{\partial l^B(\theta^B)}{\partial \theta_k^B} + \lambda\,\frac{\partial l^{VS}(\theta^{VS})}{\partial \theta_{k'}^{VS}}   (7)
where λ is the weight factor of the synergic signal; finally, update the model parameters:
θ^A = θ^A - η(t)·Δ^A,  θ^B = θ^B - η(t)·Δ^B
CN201710417724.XA 2017-06-06 2017-06-06 Medical image classification method based on collaborative deep learning Active CN107169527B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710417724.XA CN107169527B (en) 2017-06-06 2017-06-06 Medical image classification method based on collaborative deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710417724.XA CN107169527B (en) 2017-06-06 2017-06-06 Medical image classification method based on collaborative deep learning

Publications (2)

Publication Number Publication Date
CN107169527A true CN107169527A (en) 2017-09-15
CN107169527B CN107169527B (en) 2020-04-03

Family

ID=59825096

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710417724.XA Active CN107169527B (en) 2017-06-06 2017-06-06 Medical image classification method based on collaborative deep learning

Country Status (1)

Country Link
CN (1) CN107169527B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102662931A (en) * 2012-04-13 2012-09-12 厦门大学 Semantic role labeling method based on a synergetic neural network
CN106650806A (en) * 2016-12-16 2017-05-10 北京大学深圳研究生院 Collaborative deep network model method for pedestrian detection
CN106780482A (en) * 2017-01-08 2017-05-31 广东工业大学 Medical image classification method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hao Wang et al.: "Collaborative Deep Learning for Recommender Systems", arXiv *
Tan Juan et al.: "Research on traffic congestion prediction models based on deep learning", Application Research of Computers *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108109152A (en) * 2018-01-03 2018-06-01 深圳北航新兴产业技术研究院 Medical image classification and segmentation method and device
CN108416535A (en) * 2018-03-27 2018-08-17 中国科学技术大学 Patent value evaluation method based on deep learning
CN108416535B (en) * 2018-03-27 2021-08-13 中国科学技术大学 Deep learning-based patent value evaluation method
CN108734138A (en) * 2018-05-24 2018-11-02 浙江工业大学 Melanoma skin disease image classification method based on ensemble learning
US20210019551A1 (en) * 2018-08-23 2021-01-21 Tencent Technology (Shenzhen) Company Limited Image processing method and apparatus, computer-readable storage medium, and computer device
CN110008971A (en) * 2018-08-23 2019-07-12 腾讯科技(深圳)有限公司 Image processing method and device, computer-readable storage medium and computer equipment
US11604949B2 (en) * 2018-08-23 2023-03-14 Tencent Technology (Shenzhen) Company Limited Image processing method and apparatus, computer-readable storage medium, and computer device
CN110472663A (en) * 2019-07-15 2019-11-19 西北工业大学 Remote sensing image classification method based on introspective learning
CN110688888B (en) * 2019-08-02 2022-08-05 杭州未名信科科技有限公司 Pedestrian attribute identification method and system based on deep learning
CN110688888A (en) * 2019-08-02 2020-01-14 浙江省北大信息技术高等研究院 Pedestrian attribute identification method and system based on deep learning
CN110705516A (en) * 2019-10-18 2020-01-17 大连海事大学 Sole pattern image clustering method based on a collaborative network structure
CN110705516B (en) * 2019-10-18 2022-10-25 大连海事大学 Sole pattern image clustering method based on a collaborative network structure
CN111310578A (en) * 2020-01-17 2020-06-19 上海优加利健康管理有限公司 Method and device for generating a heartbeat data sample classification network
CN111310578B (en) * 2020-01-17 2023-05-02 上海乐普云智科技股份有限公司 Method and device for generating a heartbeat data sample classification network
CN112906796A (en) * 2021-02-23 2021-06-04 西北工业大学深圳研究院 Medical image classification method for data with uncertain labels
CN113284136A (en) * 2021-06-22 2021-08-20 南京信息工程大学 Medical image classification method using a residual network and XGBoost trained with a dual loss function
CN114120289A (en) * 2022-01-25 2022-03-01 中科视语(北京)科技有限公司 Method and system for identifying drivable areas and lane lines
CN114120289B (en) * 2022-01-25 2022-05-03 中科视语(北京)科技有限公司 Method and system for identifying drivable areas and lane lines
CN114580571A (en) * 2022-04-01 2022-06-03 南通大学 Few-shot power equipment image classification method based on transfer mutual learning

Also Published As

Publication number Publication date
CN107169527B (en) 2020-04-03

Similar Documents

Publication Publication Date Title
CN107169527A Medical image classification method based on collaborative deep learning
CN110210486A Generative adversarial transfer learning method based on sketch annotation information
CN110097185A Optimization model method based on generative adversarial networks and application thereof
CN106067042B Polarimetric SAR classification method based on a semi-supervised deep sparse filtering network
CN106874956A Construction method of a convolutional neural network structure for image classification
CN108416755A Image denoising method and system based on deep learning
CN110222792A Label defect detection algorithm based on a Siamese network
CN107563439A Model for recognizing food material cleaning pictures and method for recognizing food material classes
CN107545245A Age estimation method and device
CN108182450A Airborne ground penetrating radar target recognition method based on a deep convolutional network
CN109241491A Structural missing-data filling method for tensors based on joint low-rank and sparse representation
CN109492765A Image incremental learning method based on transfer models
CN106446942A Crop disease recognition method based on incremental learning
CN106778921A Person re-identification method based on a deep learning encoding model
CN108846426A Polarimetric SAR classification method based on a deep bidirectional LSTM Siamese network
CN107132516A Radar range profile target recognition method based on a deep belief network
CN107122798A Pull-up count detection method and device based on a deep convolutional network
CN109063724A Enhanced generative adversarial network and target sample recognition method
CN111860267B Multichannel fitness exercise recognition method based on human skeleton joint positions
CN109544518A Method and system for skeletal maturity assessment
CN104849650B Improved analog circuit fault diagnosis method
CN106203625A Deep neural network training method based on multiple pre-training
CN105046268B Polarimetric SAR image classification method based on Wishart deep networks
CN108509939A Bird recognition method based on deep learning
CN113807215B Tea tender shoot grading method combining an improved attention mechanism and knowledge distillation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant