CN108805208A - A co-training method based on unlabeled-sample consistency checking - Google Patents

A co-training method based on unlabeled-sample consistency checking

Info

Publication number
CN108805208A
Authority
CN
China
Prior art keywords
sample
unlabeled sample
view
label
USC
Prior art date
Legal status: Granted
Application number
CN201810609674.XA
Other languages
Chinese (zh)
Other versions
CN108805208B (en)
Inventor
谷延锋 (Gu Yanfeng)
李天帅 (Li Tianshuai)
Current Assignee
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN201810609674.XA priority Critical patent/CN108805208B/en
Publication of CN108805208A publication Critical patent/CN108805208A/en
Application granted granted Critical
Publication of CN108805208B publication Critical patent/CN108805208B/en
Legal status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/2155 - Generating training patterns characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques


Abstract

A co-training method based on unlabeled-sample consistency checking. The invention relates to co-training and multi-angle image classification. It aims to solve the problem that existing multi-angle imaging increases the difficulty of jointly analysing ground objects in the same area, especially the difficulty of change analysis, and thereby lowers multi-angle image classification accuracy. The process is: Step 1: perform initial classification; Step 2: select the credible unlabeled samples in the captured images; Step 3: obtain each view's retrained classifier, continuing until the classifiers of all views have been retrained; Step 4: obtain each view's classification results, continuing until the classifiers of all views have reclassified; Step 5: repeat steps 2, 3 and 4 until the iteration stopping criterion is met, obtain the classification results of each view, and vote, taking the label with the highest vote rate as the label of each unlabeled sample in the captured images. The invention is applicable to the field of digital image processing.

Description

A co-training method based on unlabeled-sample consistency checking
Technical field
The invention belongs to the field of digital image processing and relates to co-training and multi-angle image classification; it is a multi-view learning method.
Background technology
Features of the same object obtained at different levels or from different sources are commonly referred to as multi-view data. Multi-view learning usually follows two principles: the consistency principle and the complementarity principle. The consistency principle states that the different views of the same object are related to each other; the complementarity principle states that the different views of the same object differ and can therefore serve as mutually complementary features. Existing multi-view learning algorithms fall broadly into three categories: co-training, subspace learning and multiple kernel learning. Co-training algorithms jointly train on two or more different views of the data so as to minimise the inconsistency between them.
Resolution is an important indicator for assessing the value of a remote sensing satellite. A satellite's resolution has three components: spatial resolution, temporal resolution and spectral resolution. Temporal and spatial resolution, however, conflict with each other: raising the satellite's orbit improves temporal resolution but simultaneously reduces spatial resolution, and vice versa. Before the leap in sensor technology, a remote sensing satellite aiming at rapid global scan coverage could not observe only the area directly beneath it; it also had to widen the observed area during a single revisit flight. Widening the observed area relies mainly on satellite side-swing (roll), and larger roll angles cause large differences in the imaging angle; this is how high-resolution multi-angle images arise.
Multi-angle imaging increases the difficulty of jointly analysing ground objects in the same area, especially the difficulty of change analysis, and lowers multi-angle image classification accuracy.
Summary of the invention
The invention aims to solve the problem that existing multi-angle imaging increases the difficulty of jointly analysing ground objects in the same area, especially the difficulty of change analysis, and lowers multi-angle image classification accuracy, and proposes a co-training method in which the classifiers perform consistency checking on unlabeled samples (Co-training with Unlabeled Sample's Consistency, abbreviated CO-USC).
The detailed process of the co-training method based on unlabeled-sample consistency checking is:
Step 1: Perform initial classification of the unlabeled samples in the images of the N views:
For the N views captured by the satellite at the same time, select the labeled samples in each view's image to train a classifier, and use the trained classifier to perform an initial classification of the unlabeled samples in that view's image;
There are N classifiers, in one-to-one correspondence with the N views; N is a positive integer;
Step 2: Determine the confidence and pseudo-label of each unlabeled sample in the captured images, and select the credible unlabeled samples according to those confidences and pseudo-labels;
Step 3: For each view in turn, add the credible samples obtained for that view in step 2, together with their pseudo-labels, to the classifier's training set and retrain, obtaining that view's retrained classifier;
Continue until the classifiers of all views have been retrained, obtaining each view's retrained classifier;
Step 4: For each view in turn, feed the unlabeled samples of that view deemed not credible in step 2 to that view's retrained classifier from step 3 for classification, obtaining that view's classification results;
Continue until the classifiers of all views have reclassified, obtaining each view's classification results;
Step 5: Repeat steps 2, 3 and 4 until the iteration stopping criterion is met and the classification results of each view are obtained. Using the classification results of all views and the pseudo-labels of the credible samples determined in step 2, let all views vote on each unlabeled sample in the captured images of step 1, and take the label with the highest vote rate as that sample's label.
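The five steps above can be sketched end to end. The nearest-mean classifier, the two toy views and all names below are illustrative stand-ins, not the classifiers or data of the invention; the confidence selection of steps 2 to 4 is omitted so that only the initial per-view classification (step 1) and the final cross-view vote (step 5) are shown.

```python
class NearestMeanClassifier:
    """Minimal per-view classifier: predicts the label of the nearest class mean."""
    def fit(self, X, y):
        labels = sorted(set(y))
        self.means_ = {c: sum(x for x, t in zip(X, y) if t == c) /
                          sum(1 for t in y if t == c) for c in labels}
        return self
    def predict(self, X):
        return [min(self.means_, key=lambda c: abs(x - self.means_[c])) for x in X]

def majority_vote(label_lists):
    """Step 5: per-sample majority vote over the views' final predictions."""
    votes = []
    for labels in zip(*label_lists):
        votes.append(max(set(labels), key=labels.count))
    return votes

# Two views of the same 6 samples (1-D features per view); samples 0 and 1 are labeled.
views = [([0.0, 1.0, 0.1, 0.2, 0.9, 1.1], [0, 1]),   # (view features, labeled indices)
         ([0.0, 2.0, 0.3, 0.1, 1.8, 2.2], [0, 1])]
labels = [0, 1]                                       # labels of the labeled samples

preds_per_view = []
for feats, lab_idx in views:                          # step 1: initial classification
    clf = NearestMeanClassifier().fit([feats[i] for i in lab_idx], labels)
    preds_per_view.append(clf.predict(feats))

final = majority_vote(preds_per_view)                 # step 5: cross-view vote
```

Each view trains only on its own features, which is what makes the later cross-view consistency checks informative.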
The beneficial effects of the invention are:
The invention judges the confidence of an unlabeled sample, and determines its pseudo-label, from the change in classifier performance before and after the sample and its label are added in the different views, and then retrains the classifiers with the credible unlabeled samples to improve classification. This co-training method improves the classification accuracy of multi-angle high-resolution remote sensing images and of similar multi-view learning problems, and is particularly suitable when labeled samples are few. It thereby solves the problem that existing multi-angle imaging increases the difficulty of jointly analysing ground objects in the same area, especially the difficulty of change analysis, and lowers multi-angle image classification accuracy. Across the 48 test configurations the overall accuracy rose on average from 80.1% to 82.5%, and in the 4 configurations where labeled samples make up the smallest share of the total the improvement is larger, up to 4%.
Description of the drawings:
Fig. 1 is the CO-USC method flowchart;
Fig. 2 is the relation between pseudo-label classification accuracy and USC value;
Fig. 3 compares the pseudo-label accuracy obtained by the USC pseudo-label formula with the accuracy of direct voting by the four classifiers;
Fig. 4a is the full-image experimental image;
Fig. 4b is the ground-truth map;
Fig. 5a is the full-image classification result using KNN directly;
Fig. 5b is the full-image classification result using the CO-USC method of the invention.
Specific embodiments:
Embodiment 1: The detailed process of the co-training method based on unlabeled-sample consistency checking of this embodiment is:
The consistency check is: compare the classifier's classification performance before and after an unlabeled sample is added, and determine the sample's confidence from how consistently the classifier classifies the unlabeled samples before and after the addition. The underlying idea is simple: if a sample and a label are added to a classifier's training set and the classifier's classification behaviour stays completely unchanged, the sample can be considered to correspond completely to that label; more generally, the closer the classification performance after a new training sample and label are added, the higher the confidence of that sample and label pair. For an ordinary single-view sample this idea is of limited value: although it can establish that the sample corresponds to the chosen label, adding the sample does not change the classifier's performance and does not help label the remaining unmarked samples. For multi-view samples, however, it is highly effective: although the sample's features and label in the current view will not improve that view's classifier, the other views have different classifiers and different features, so a label determined in this way will improve the classifiers of the other views. Concretely, the USC judgment is simply a comparison of the classifier's classification performance consistency after the unlabeled sample is added.
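A minimal sketch of this consistency idea, under the assumption of a toy one-dimensional nearest-mean classifier (all names illustrative): retrain after adding one candidate (sample, pseudo-label) pair and count how many predictions on the unlabeled pool flip. Fewer flips means a more consistent, and hence more credible, pair.

```python
def nearest_mean_predict(train, pool):
    """train: list of (feature, label) pairs; pool: features to classify."""
    means = {}
    for c in {lab for _, lab in train}:
        vals = [f for f, lab in train if lab == c]
        means[c] = sum(vals) / len(vals)
    return [min(means, key=lambda c: abs(x - means[c])) for x in pool]

def prediction_changes(train, pool, candidate):
    """Count pool predictions that flip after adding `candidate` = (x_n, label)."""
    before = nearest_mean_predict(train, pool)
    after = nearest_mean_predict(train + [candidate], pool)
    return sum(b != a for b, a in zip(before, after))

train = [(0.0, 0), (1.0, 1)]       # two labeled samples
pool = [0.1, 0.4, 0.6, 0.9]        # unlabeled pool

consistent = prediction_changes(train, pool, (0.05, 0))    # plausible pair: no flips
inconsistent = prediction_changes(train, pool, (0.05, 1))  # implausible pair: flips occur
```

The consistent pair leaves the decision boundary essentially unchanged, while the implausible pair drags a class mean and flips a pool prediction, which is exactly the signal the USC judgment exploits.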
Unlabeled-sample consistency checking co-training (co-training with unlabeled sample's consistency, abbreviated CO-USC);
USC stands for unlabeled-sample consistency checking;
Step 1: Perform initial classification of the unlabeled samples in the images of the N views:
For the N views captured by the satellite at the same time, select the labeled samples in each view's image to train a classifier, and use the trained classifier to perform an initial classification of the unlabeled samples in that view's image;
There are N classifiers, in one-to-one correspondence with the N views; N is a positive integer;
Step 2: Determine the confidence and pseudo-label of each unlabeled sample in the captured images, and select the credible unlabeled samples according to those confidences and pseudo-labels; the detailed process is given below;
Step 3: For each view in turn, add the credible samples obtained for that view in step 2, together with their pseudo-labels, to the classifier (the classifier corresponding to that view) and retrain, obtaining that view's retrained classifier;
Continue until the classifiers of all views have been retrained, obtaining each view's retrained classifier;
Step 4: For each view in turn, feed the unlabeled samples of that view deemed not credible in step 2 to that view's retrained classifier from step 3 (the classifier corresponding to that view) for classification, obtaining that view's classification results;
Continue until the classifiers of all views have reclassified, obtaining each view's classification results;
Step 5: Repeat steps 2, 3 and 4 until the iteration stopping criterion is met and the classification results of each view are obtained. Using the classification results of all views and the pseudo-labels of the credible samples determined in step 2 (the credible-sample pseudo-labels are computed in every iteration), let all views vote on each unlabeled sample in the captured images of step 1, and take the label with the highest vote rate as that sample's label.
Embodiment 2: This embodiment differs from Embodiment 1 in that the classifier in step 1 is a supervised or semi-supervised classifier.
The other steps and parameters are the same as in Embodiment 1.
Embodiment 3: This embodiment differs from Embodiments 1 and 2 in that the determination in step 2 of the confidence and pseudo-label of the unlabeled samples in the captured images, and the selection of the credible unlabeled samples according to them, proceeds as follows:
Step 2-1:
Take the first view as the main view and the remaining views as non-main views.
Perform the USC judgment in the non-main views to obtain a USC ordering per non-main view; there are N-1 such orderings. Superpose the USC orderings of the non-main views and obtain the USC confidence of the main view's unlabeled samples from the superposed ordering (the lower the superposed rank value, the higher the USC confidence). (Although the USC judgment is computed in the non-main views, the samples being judged are the main view's unlabeled samples; that is, the confidence of a main-view sample is determined using the information of the other views.) Let all classifiers other than the current main view's cast pseudo-label votes on the unlabeled samples in the captured images, and take each sample's vote rate as the vote confidence of the main view's unlabeled sample (an ordering by vote rate; the higher the vote rate, the higher the confidence);
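The superposition of the non-main views' USC orderings described in step 2-1 can be sketched as follows; the scores and the rank-sum aggregation are illustrative (ties and the exact rank convention are not specified in the text).

```python
def ranks(scores):
    """Rank positions of the scores (0 = lowest score = most consistent)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    r = [0] * len(scores)
    for pos, i in enumerate(order):
        r[i] = pos
    return r

# USC scores for 4 main-view unlabeled samples, from 2 non-main views (N - 1 = 2).
view_scores = [[1, 5, 2, 7],
               [1, 6, 3, 4]]

# Superpose: convert each view's scores to ranks and sum the ranks per sample.
summed = [sum(col) for col in zip(*(ranks(s) for s in view_scores))]
# Lower summed rank = higher USC confidence.
most_confident = min(range(len(summed)), key=lambda i: summed[i])
```

Summing ranks rather than raw scores keeps one view's score scale from dominating the superposition.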
Step 2-2: In the non-main views, use the USC pseudo-label formula to compute, for each unlabeled sample in the captured images, the weight of the pseudo-label of each non-main-view sample; superpose the weights of the pseudo-labels of all non-main views, vote, and take the label with the highest weight as the pseudo-label used by the main classifier (the classifier corresponding to the main view);
There are N-1 pseudo-label orderings corresponding to the weight maxima;
Step 2-3: Set a USC confidence threshold (between 0 and the total number of unlabeled samples) and a vote confidence threshold (between 0 and 100%);
Compare the main-view unlabeled samples' USC confidences obtained in step 2-1 with the USC confidence threshold, and their vote confidences obtained in step 2-1 with the vote confidence threshold. When a sample's USC value is below the threshold and its vote confidence is greater than or equal to the vote confidence threshold, the sample is considered credible; when the sample's USC value is greater than or equal to the threshold, the sample is considered not credible; when the sample's vote confidence is below the vote confidence threshold, the sample is considered not credible;
Credibility here refers to whether the sample's computed non-main-view label is credible; the USC pseudo-label itself already combines the USC judgment and the vote judgment, so the label is considered valid only when the confidences from both judgments are valid;
Repeat steps 2-1 to 2-3, taking each view in turn as the main view and the remaining views as non-main views, until all views have been traversed.
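The two-threshold credibility rule of step 2-3 reduces to a small predicate: a sample is trusted only when its USC score is below the USC threshold and its vote rate reaches the vote threshold; failing either test marks it untrusted. The threshold values below are illustrative, not values fixed by the text.

```python
def is_credible(usc_score, vote_rate, usc_threshold, vote_threshold):
    """Lower USC score = more consistent; vote_rate is in [0, 1]."""
    return usc_score < usc_threshold and vote_rate >= vote_threshold

samples = [                     # (usc_score, vote_rate) per unlabeled sample
    (3, 1.00),                  # consistent and unanimous vote -> credible
    (12, 1.00),                 # inconsistent                  -> not credible
    (3, 0.75),                  # consistent but split vote     -> not credible
]
credible_mask = [is_credible(u, v, usc_threshold=10, vote_threshold=1.0)
                 for u, v in samples]
```

Requiring both conditions mirrors the text's point that the USC judgment and the vote judgment must agree before a pseudo-label is accepted.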
The other steps and parameters are the same as in Embodiment 1 or 2.
Embodiment 4: This embodiment differs from Embodiments 1 to 3 in the USC judgment performed in the non-main views in step 2-1. Consistent with the consistency check described above and the definitions below, the USC judgment counts how many predictions change after the candidate sample is added:
USC(x_n) = Σ_{x_i ∈ Ω} I[ h(x_i) ≠ h'(x_i) ]
where Ω is the set of unlabeled samples of the main view in the captured images; x_i is a sample of Ω whose confidence is to be determined (represented by its corresponding non-main-view features, the features being the information used for classification), x_i ∈ Ω; x_n is the unlabeled sample of Ω whose confidence is to be determined (likewise represented by its non-main-view features), x_n ∈ Ω; h(x_i) is the prediction of the non-main view's classifier for sample x_i; h'(x_i) is the classifier's prediction for x_i after the unlabeled sample x_n has been added; I[·] is the indicator function.
The other steps and parameters are the same as in one of Embodiments 1 to 3.
Embodiment 5: This embodiment differs from Embodiments 1 to 4 in the computation, in step 2-2, of the weight of each view sample's pseudo-label for the unlabeled samples in the captured images using the USC pseudo-label formula (the formula is given as an image in the published patent and is not reproduced here), where:
h(x_i) == n denotes the indicator that takes the value 1 when h(x_i) = n and the value 0 when h(x_i) ≠ n;
k is a coefficient taking positive integer values.
The pseudo-label assigned to the sample is the label whose superposed weight over the view samples is maximal, i.e. the argmax of the weights over the candidate labels n.
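Since the USC pseudo-label weight formula itself is not reproduced in this text, the sketch below uses an assumed inverse-USC weighting for the per-view indicator terms; only the structure (indicator weights summed over views, then an argmax over labels) follows the surrounding description. All names are illustrative.

```python
def pseudo_label(view_preds, view_usc, labels):
    """view_preds[v]: label predicted by non-main view v for the sample;
    view_usc[v]: that view's USC score (lower = more consistent)."""
    weight = {n: 0.0 for n in labels}
    for pred, usc in zip(view_preds, view_usc):
        # indicator [h(x) == n] times an assumed inverse-USC weight
        weight[pred] += 1.0 / (1.0 + usc)
    return max(labels, key=lambda n: weight[n])   # argmax over candidate labels

# Three non-main views vote 1, 1, 0; the dissenting view is the least consistent,
# so its vote carries the smallest weight.
lab = pseudo_label(view_preds=[1, 1, 0], view_usc=[2, 3, 9], labels=[0, 1])
```

Down-weighting inconsistent views is the point of weighting the vote at all: a plain majority vote would treat the unreliable view's ballot the same as the others.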
The other steps and parameters are the same as in one of Embodiments 1 to 4.
Embodiment 6: This embodiment differs from Embodiments 1 to 5 in that the iteration stopping criterion in step 5 is that any one of the following is met:
1) all unlabeled samples have been added to the classifiers (no unlabeled samples remain);
2) the classification results have not changed for M consecutive iterations;
3) the set iteration limit has been reached.
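The three criteria can be combined into one predicate; the names and the history representation (one list of predictions per iteration) are illustrative.

```python
def should_stop(remaining_unlabeled, history, m, max_iters):
    """history: list of per-iteration classification results (one list each)."""
    if not remaining_unlabeled:                          # 1) pool exhausted
        return True
    if len(history) >= m and len(set(map(tuple, history[-m:]))) == 1:
        return True                                      # 2) unchanged for M iterations
    return len(history) >= max_iters                     # 3) iteration cap reached

stop_a = should_stop([], [[0, 1]], m=3, max_iters=10)                    # pool empty
stop_b = should_stop([5], [[0, 1], [0, 1], [0, 1]], m=3, max_iters=10)   # stable for M
stop_c = should_stop([5], [[0, 1], [1, 1]], m=3, max_iters=10)           # keep going
```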
The other steps and parameters are the same as in one of Embodiments 1 to 5.
Embodiment 7: This embodiment differs from Embodiments 1 to 6 in that M ranges from 3 to 5.
The other steps and parameters are the same as in one of Embodiments 1 to 6.
The beneficial effects of the invention are verified with the following example:
Example 1:
The co-training method based on unlabeled-sample consistency checking of this example is prepared according to the following steps:
Step 1: Select the unlabeled sample set to be classified and the initial labeled training sample set. The training samples and the samples to be classified must have the same spatial and spectral dimensions; a sample's label is the label of the centre pixel of the selected region.
Step 2: Select the internal parameters of the CO-USC algorithm, including the base classifier used for classification, the coefficient k in the USC pseudo-label formula (default k = 30), the USC confidence threshold (default USC = total number of samples / 10), the vote confidence threshold (default vote rate 100%), and the division of the confirmation samples into different views (the feature forms of a sample may differ between views).
Step 3: Classify the samples with the CO-USC algorithm and output the label of each sample.
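The default parameter choices of step 2, collected into one helper under assumed names; the default values (k = 30, USC threshold = total samples / 10, vote rate 100%) are those stated in the text.

```python
def default_cousc_params(n_total_samples):
    """Illustrative bundle of the CO-USC defaults named in step 2."""
    return {
        "k": 30,                                 # USC pseudo-label coefficient
        "usc_threshold": n_total_samples / 10,   # USC confidence threshold
        "vote_threshold": 1.0,                   # 100% vote rate required
    }

params = default_cousc_params(n_total_samples=3500)
```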
The effect of the invention can be further illustrated by the following experiments:
To verify the performance of the proposed method, a group of high-resolution multi-angle images was used for validation. The data are five images of Rio de Janeiro shot at different angles at the same time by the WorldView-2 satellite on January 19th, 2010. The angles of the five images are -38.79°, -29.16°, 6.09°, 26.76° and 39.5°, numbered angle 1 to angle 5 respectively. The WV2 satellite has 8 bands spanning 400 nm to 1040 nm, with a multispectral resolution of 1.85 metres. The data contain seven sample classes: red buildings, black buildings, white buildings, grass, trees, ocean and roads.
Verification of the USC judgment:
A total sample of 7 classes × 500 is selected, of which the labeled (source) sample is 7 classes × 5, and the data of the first iteration are used (because the first iteration has the most unmarked samples and is therefore the most accurate). The result is the average over the five angles and is shown in Fig. 2, where the vertical axis is the probability that the pseudo-label is identical to the true label and the horizontal axis is the USC judgment value.
It can be seen that the smaller the USC judgment value, the higher the probability that the pseudo-label matches the true label, i.e. the higher the confidence; the proposed USC judgment is therefore reasonable.
Verification of the USC pseudo-label formula:
Using the same data as above, the accuracy of the USC pseudo-label formula is compared with that of direct voting by the classifiers. The result is shown in Fig. 3: the thin line is the classification accuracy of the USC pseudo-label formula as a function of k, and the thick line is the result of direct voting by the four classifiers (which can also be regarded as the result when k tends to positive infinity).
It can be seen that the pseudo-label accuracy obtained by the USC pseudo-label formula first rises and then falls as k varies (the falling threshold being the accuracy of direct voting by the four classifiers), and that over a range of k values (which depends on the actual data) the accuracy exceeds that of direct voting by the four classifiers. In practice, although the optimal k (the k that maximises the accuracy of the pseudo-labels produced by the USC pseudo-label formula) cannot be found exactly, it is easy to select a k whose effect is better than direct voting by the four classifiers. The USC pseudo-label formula is therefore highly effective.
Verification of CO-USC:
Balanced samples, imbalanced fixed-ratio samples, imbalanced random-ratio samples and full-image samples are tested respectively.
Balanced samples: the per-class test sample counts are equal; the total sample is 7×50, 7×100 or 7×200 and the source sample is 7×5, 7×10, 7×15 or 7×20, giving 12 configurations in total. Each configuration is randomly sampled and run 20 times, and the average is taken as the final result; the experimental results are shown in Table 1.
Table 1: Kappa coefficients and overall accuracy before and after CO-USC processing, balanced samples
Imbalanced fixed-ratio samples: the per-class test sample counts are unequal but the class ratios are fixed. The two cases selected are type A, with a seven-class ratio of 0.8:1:0.7:0.5:0.9:0.4:0.7, and type B, with a seven-class ratio of 0.4:0.5:0.8:0.6:0.7:1:0.8. The sample base is 50, 100 or 200 and the source sample is 7×5, 7×10, 7×15 or 7×20, giving 12 configurations in total. Each configuration is randomly sampled and run 20 times, and the average is taken as the final result; the experimental results are shown in the type-A and type-B parts of Table 2.
Table 2: overall accuracy before and after CO-USC processing, imbalanced samples
Imbalanced random-ratio samples: the per-class test sample counts are unequal and the class ratios are not fixed; each class's ratio is drawn as a random number between 0.5 and 1. The sample base is 50, 100 or 200 and the source sample is 7×5, 7×10, 7×15 or 7×20, giving 12 configurations in total. Each configuration is randomly sampled and run 20 times, and the average is taken as the final result; the experimental results are shown in the type-C part of Table 2.
Full-image samples: the total sample is all labeled pixels of the full image whose labels match at the same geographic locations, 165000 in total. The source sample is 7×20 = 140. The original image and labels are shown in Figs. 4a and 4b; the experimental results are shown in Figs. 5a and 5b and Table 3.
Table 3: per-class accuracy before and after CO-USC processing, full-image classification
The experiments show that CO-USC exhibits good classification performance whether on balanced samples, imbalanced samples or full-image classification.

Claims (7)

1. A co-training method based on unlabeled-sample consistency checking, characterised in that the detailed process of the method is:
Step 1: perform initial classification of the unlabeled samples in the images of the N views:
for the N views captured by the satellite at the same time, select the labeled samples in each view's image to train a classifier, and use the trained classifier to perform an initial classification of the unlabeled samples in that view's image;
there are N classifiers, in one-to-one correspondence with the N views; N is a positive integer;
Step 2: determine the confidence and pseudo-label of each unlabeled sample in the captured images, and select the credible unlabeled samples according to those confidences and pseudo-labels;
Step 3: for each view in turn, add the credible samples obtained for that view in step 2, together with their pseudo-labels, to the classifier and retrain, obtaining that view's retrained classifier;
continue until the classifiers of all views have been retrained, obtaining each view's retrained classifier;
Step 4: for each view in turn, feed the unlabeled samples of that view deemed not credible in step 2 to that view's retrained classifier from step 3 for classification, obtaining that view's classification results;
continue until the classifiers of all views have reclassified, obtaining each view's classification results;
Step 5: repeat steps 2, 3 and 4 until the iteration stopping criterion is met and the classification results of each view are obtained; using the classification results of all views and the pseudo-labels of the credible samples determined in step 2, let all views vote on each unlabeled sample in the captured images of step 1, and take the label with the highest vote rate as that sample's label.
2. The co-training method based on unlabeled-sample consistency checking according to claim 1, characterised in that the classifier in step 1 is a supervised or semi-supervised classifier.
3. The collaborative training method based on unlabeled-sample consistency checking according to claim 2, characterized in that: in step 2, the confidence and pseudo label of each unlabeled sample in the captured image are determined, and credible unlabeled samples in the captured image are selected according to those confidences and pseudo labels; the specific procedure is:
Step 2-1:
Take the first view as the main view and the remaining views as non-main views;
Perform the USC judgment in each non-main view to obtain that view's USC sequence; superimpose the USC sequences of the non-main views, and obtain the USC confidence of the main-view unlabeled samples from the superimposed sequence;
Have the classifiers of all views other than the current main view vote on the pseudo labels of the unlabeled samples in the captured image, and take the vote count of each sample as the vote confidence of the main-view unlabeled sample;
USC denotes unlabeled-sample consistency checking;
Step 2-2: In each non-main view, compute the pseudo-label weight of each non-main-view sample for the unlabeled samples in the captured image using the USC pseudo-label formula; superimpose the pseudo-label weights of all non-main-view samples and vote, taking the label with the largest weight as the pseudo label used by the main classifier;
Step 2-3: Set a USC confidence threshold and a vote confidence threshold;
Compare the main-view unlabeled-sample USC confidence obtained in step 2-1 with the USC confidence threshold, and compare the vote confidence of the main-view unlabeled samples obtained in step 2-1 with the vote confidence threshold. A sample is considered credible when its USC confidence is below the USC confidence threshold and its vote confidence is greater than or equal to the vote confidence threshold; a sample is considered not credible when its USC confidence is greater than or equal to the USC confidence threshold, or when its vote confidence is below the vote confidence threshold;
Repeat steps 2-1 through 2-3, selecting each view in turn as the main view with the remaining views as non-main views, until all views have been traversed.
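The two-threshold rule of step 2-3 reduces to a single predicate; the sketch below is an illustration with hypothetical names, not the patent's code:

```python
def is_credible(usc_confidence, vote_confidence,
                usc_threshold, vote_threshold):
    """Step 2-3: a sample is credible only if its USC confidence stays
    below the USC threshold AND its vote confidence reaches the vote
    threshold; either violation rejects the sample."""
    return usc_confidence < usc_threshold and vote_confidence >= vote_threshold

print(is_credible(0.1, 0.9, 0.3, 0.6))  # True: low USC, high vote
print(is_credible(0.5, 0.9, 0.3, 0.6))  # False: USC confidence too high
print(is_credible(0.1, 0.4, 0.3, 0.6))  # False: vote confidence too low
```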
4. The collaborative training method based on unlabeled-sample consistency checking according to claim 3, characterized in that: the USC judgment performed in the non-main views in step 2-1 uses the following formula:
where Ω is the set of unlabeled samples corresponding to the main view in the captured image; x_i is a sample in Ω whose confidence is to be determined, x_i ∈ Ω; x_n is an unlabeled sample in Ω whose confidence is to be determined, x_n ∈ Ω; h(x_i) is the prediction of the non-main-view classifier for sample x_i; h'(x_i) is the prediction of the classifier for sample x_i after the unlabeled sample x_n has been added.
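The USC formula itself appears in the published patent only as an image and does not survive in this text. One plausible reading, consistent with the symbol definitions above, scores x_n by how strongly its addition perturbs the classifier's predictions on Ω; the sketch below is that assumed reading, not the patent's exact formula:

```python
def usc_confidence(h_before, h_after, omega):
    """Assumed USC reading: fraction of samples x_i in Omega whose
    predicted label changes once the candidate x_n joins the training
    set. h_before and h_after are the classifier's predict functions
    before and after adding x_n (h and h' in the claim)."""
    changed = sum(1 for x_i in omega if h_before(x_i) != h_after(x_i))
    return changed / len(omega)

# Toy predictors: adding x_n flips the prediction on odd-valued samples
print(usc_confidence(lambda x: 0, lambda x: x % 2, [0, 1, 2, 3]))  # 0.5
```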
5. The collaborative training method based on unlabeled-sample consistency checking according to claim 4, characterized in that: in step 2-2, the pseudo-label weight of each view's samples is computed for the unlabeled samples in the captured image at each view using the USC pseudo-label formula:
where [h(x_i) = n] is an indicator that equals 1 when h(x_i) = n and 0 when h(x_i) ≠ n;
k is a coefficient whose value is a positive integer;
the pseudo label corresponding to the maximum weight among the samples of each view is obtained by the formula:
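Both formulas of claim 5 (the per-view weight and the weight-maximizing label) appear only as images in the published text. Assuming each non-main view contributes a precomputed weight vector over the candidate labels, the superposition-and-argmax of step 2-2 can be sketched as follows; the function name and inputs are illustrative assumptions:

```python
def pseudo_label_by_weight(weights_per_view, n_labels):
    """Sum each view's per-label pseudo-label weights and return the
    label whose total weight is largest (step 2-2 / claim 5)."""
    totals = [0.0] * n_labels
    for view_weights in weights_per_view:
        for label in range(n_labels):
            totals[label] += view_weights[label]
    return max(range(n_labels), key=lambda label: totals[label])

# Two non-main views, three candidate labels
print(pseudo_label_by_weight([[0.2, 0.5, 0.3], [0.1, 0.6, 0.3]], 3))  # 1
```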
6. The collaborative training method based on unlabeled-sample consistency checking according to claim 5, characterized in that: the iteration stopping criterion in step 5 is met when any one of the following holds:
1) all unlabeled samples have been added to the classifier;
2) the classification result has not changed for M consecutive iterations;
3) the set iteration limit has been reached.
7. The collaborative training method based on unlabeled-sample consistency checking according to claim 6, characterized in that: M takes values in the range 3-5.
CN201810609674.XA 2018-06-13 2018-06-13 Collaborative training method based on consistency judgment of label-free samples Active CN108805208B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810609674.XA CN108805208B (en) 2018-06-13 2018-06-13 Collaborative training method based on consistency judgment of label-free samples

Publications (2)

Publication Number Publication Date
CN108805208A true CN108805208A (en) 2018-11-13
CN108805208B CN108805208B (en) 2021-12-31

Family

ID=64086946

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810609674.XA Active CN108805208B (en) 2018-06-13 2018-06-13 Collaborative training method based on consistency judgment of label-free samples

Country Status (1)

Country Link
CN (1) CN108805208B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109697469A (en) * 2018-12-26 2019-04-30 西北工业大学 A self-learning few-shot remote sensing image classification method based on consistency constraints
CN110738237A (en) * 2019-09-16 2020-01-31 深圳新视智科技术有限公司 Defect classification method and device, computer equipment and storage medium
CN110909820A (en) * 2019-12-02 2020-03-24 齐鲁工业大学 Image classification method and system based on self-supervision learning
CN112115829A (en) * 2020-09-09 2020-12-22 贵州大学 Expression recognition method based on classifier selective integration

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060222239A1 (en) * 2005-03-31 2006-10-05 Bargeron David M Systems and methods for detecting text
CN102208037A (en) * 2011-06-10 2011-10-05 西安电子科技大学 Hyper-spectral image classification method based on Gaussian process classifier collaborative training algorithm
CN105279519A (en) * 2015-09-24 2016-01-27 四川航天系统工程研究所 Remote sensing image water body extraction method and system based on cooperative training semi-supervised learning
CN107463996A (en) * 2017-06-05 2017-12-12 西安交通大学 Self-paced co-training learning method
CN107846392A (en) * 2017-08-25 2018-03-27 西北大学 An intrusion detection algorithm based on improved co-training ADBN
CN107992887A (en) * 2017-11-28 2018-05-04 东软集团股份有限公司 Classifier generation method, sorting technique, device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FANGZHAO WU et al.: "Collaboratively Training Sentiment Classifiers for Multiple Domains", IEEE Transactions on Knowledge and Data Engineering *
YUAN KAI: "Research on Multi-View Co-Training Algorithms", China Master's Theses Full-Text Database, Information Science and Technology Series *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109697469A (en) * 2018-12-26 2019-04-30 西北工业大学 A kind of self study small sample Classifying Method in Remote Sensing Image based on consistency constraint
CN110738237A (en) * 2019-09-16 2020-01-31 深圳新视智科技术有限公司 Defect classification method and device, computer equipment and storage medium
CN110909820A (en) * 2019-12-02 2020-03-24 齐鲁工业大学 Image classification method and system based on self-supervision learning
CN112115829A (en) * 2020-09-09 2020-12-22 贵州大学 Expression recognition method based on classifier selective integration
CN112115829B (en) * 2020-09-09 2023-02-28 贵州大学 Expression recognition method based on classifier selective integration

Also Published As

Publication number Publication date
CN108805208B (en) 2021-12-31

Similar Documents

Publication Publication Date Title
CN108805208A Collaborative training method based on consistency judgment of label-free samples
CN109284786B (en) SAR image terrain classification method for generating countermeasure network based on distribution and structure matching
CN103810699B (en) SAR (synthetic aperture radar) image change detection method based on non-supervision depth nerve network
Williams et al. Mine classification with imbalanced data
CN102842032B (en) Method for recognizing pornography images on mobile Internet based on multi-mode combinational strategy
CN107563428A (en) Classification of Polarimetric SAR Image method based on generation confrontation network
CN109902715B (en) Infrared dim target detection method based on context aggregation network
CN110766058B (en) Battlefield target detection method based on optimized RPN (resilient packet network)
CN100595782C (en) Classification method for syncretizing optical spectrum information and multi-point simulation space information
CN113050042A (en) Radar signal modulation type identification method based on improved UNet3+ network
CN108564115A (en) Semi-supervised polarization SAR terrain classification method based on full convolution GAN
CN106886760B A hyperspectral ship detection method based on combined spatial-spectral information
CN110414538A (en) Defect classification method, defect classification based training method and device thereof
CN109446894A (en) The multispectral image change detecting method clustered based on probabilistic segmentation and Gaussian Mixture
CN110245711A (en) The SAR target identification method for generating network is rotated based on angle
CN108492298A (en) Based on the multispectral image change detecting method for generating confrontation network
CN109379153A (en) A kind of frequency spectrum sensing method
CN108960142B (en) Pedestrian re-identification method based on global feature loss function
CN105374047B (en) SAR image change detection based on improved bilateral filtering with cluster
CN108764298A (en) Electric power image-context based on single classifier influences recognition methods
CN113657491A (en) Neural network design method for signal modulation type recognition
CN109785359B (en) Video target detection method based on depth feature pyramid and tracking loss
CN112699717A (en) SAR image generation method and generation device based on GAN network
CN108345898A (en) A kind of novel line insulator Condition assessment of insulation method
CN105205807B (en) Method for detecting change of remote sensing image based on sparse automatic coding machine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant