CN110046666A - Mass picture labeling method - Google Patents

Mass picture labeling method

Info

Publication number
CN110046666A
CN110046666A (application CN201910312598.0A; granted publication CN110046666B)
Authority
CN
China
Prior art keywords
model
prediction
labeling method
initial model
labeling information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910312598.0A
Other languages
Chinese (zh)
Other versions
CN110046666B (en)
Inventor
何志权
许琦
何志海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Deep View Creative Technology Ltd
Original Assignee
Shenzhen Deep View Creative Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Deep View Creative Technology Ltd filed Critical Shenzhen Deep View Creative Technology Ltd
Priority to CN201910312598.0A priority Critical patent/CN110046666B/en
Publication of CN110046666A publication Critical patent/CN110046666A/en
Application granted granted Critical
Publication of CN110046666B publication Critical patent/CN110046666B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147 Distances to closest patterns, e.g. nearest neighbour classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a mass picture labeling method, comprising: step 1, establishing an initial model according to labeling information for a plurality of pictures; step 2, predicting unlabeled pictures with the initial model; step 3, performing confidence analysis on the prediction results of step 2 to pick out reliable predictions; step 4, converting the prediction results of the reliable predictions into labeling information; step 5, updating the initial model with the labeling information obtained in step 4 and returning to step 1 for iteration. The invention can label mass pictures quickly and effectively, thereby solving the real bottleneck problem of labeled data in deep learning.

Description

Mass picture labeling method
Technical field
The present invention relates to the field of picture labeling technology, and in particular to a mass picture labeling method.
Background art
At present, deep learning is increasingly widely applied in academia and industry. However, deep learning models face resource constraints in three respects: the amount of training data is large, so if the data rely on network transmission, network bandwidth becomes a bottleneck; a large amount of computing resources is required, because deep learning models have many parameters and complex computation, which leads to a huge amount of calculation; and a large amount of labeled data is needed to train a complex model.
Labeling data is a tedious task: the target is usually traced point by point with a brush, as shown in Fig. 1, where (a) is the original picture and (b) is the labeled picture. As can be seen, the existing approach is time-consuming and laborious, and during labeling, human factors and individual differences often make the labeling result unsatisfactory.
Labeling mass pictures consumes even more manpower and material resources, yet there is currently no method for labeling mass pictures quickly and effectively.
Summary of the invention
The present invention provides a mass picture labeling method to solve at least one of the above technical problems.
To solve the above problems, one aspect of the present invention provides a mass picture labeling method, comprising: step 1, establishing an initial model according to labeling information for a plurality of pictures; step 2, predicting unlabeled pictures with the initial model; step 3, performing confidence analysis on the prediction results of step 2 to pick out reliable predictions; step 4, converting the prediction results of the reliable predictions into labeling information; step 5, updating the initial model with the labeling information obtained in step 4 and returning to step 1 for iteration.
Preferably, the labeling information in step 1 is obtained by manual labeling.
Preferably, step 3 comprises: setting a confidence function for assessing prediction reliability, the function evaluating the prediction results using the original image information and the output vector of the softmax layer of the model.
Preferably, step 3 comprises: for each point in the prediction result, calculating information such as the variance and gradient of its contrast from the pixels in the 4x4 neighborhood around it, and judging the degree of joint matching between this information and the output vector of the softmax layer of the model at the same position; if they jointly match according to a preset confidence threshold, the prediction is judged to be reliable.
Preferably, the joint matching is characterized by a neural network or a support vector machine.
Preferably, updating the initial model with the labeling information obtained in step 4 comprises: letting the model trained with the new labeling information be θ_t and the model used last time be θ'_{t-1}; the new model is then θ = α·θ'_{t-1} + (1 - α)·θ_t, where α is an exponential smoothing coefficient.
By adopting the above technical solution, the present invention can label mass pictures quickly and effectively, thereby solving the real bottleneck problem of labeled data in deep learning: a small number of pictures are labeled first and a deep learning model is trained; the model is then used to predict the remaining unlabeled pictures; the predictions are assessed, accurate predictions are converted into labels, and the model is optimized again. Through such loop iteration, the accuracy of the model becomes higher and higher, and the accuracy of the labeling becomes higher and higher as well.
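Purely for illustration, the workflow above can be sketched in Python roughly as follows; train_model, predict, confidence and the fixed number of rounds are hypothetical placeholders for the user's own training, inference and confidence routines, not an API defined by the invention.

# Sketch of the iterative labeling loop (illustrative only; the helper functions are hypothetical).
def label_mass_pictures(labeled, unlabeled, conf_threshold, max_rounds=10):
    model = train_model(labeled)                       # step 1: initial model from manually labeled pictures
    for _ in range(max_rounds):
        still_unlabeled = []
        for picture in unlabeled:
            prediction = predict(model, picture)       # step 2: predict an unlabeled picture
            if confidence(picture, prediction) > conf_threshold:   # step 3: confidence analysis
                labeled.append((picture, prediction))  # step 4: promote the reliable prediction to a label
            else:
                still_unlabeled.append(picture)
        unlabeled = still_unlabeled
        model = train_model(labeled)                   # step 5: retrain with the enlarged label set
    return model, labeled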
Brief description of the drawings
Fig. 1 schematically shows how a picture is labeled in the prior art;
Fig. 2 schematically shows a flow chart of the present invention;
Fig. 3 schematically shows how reliable prediction results are selected according to confidence.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below with reference to the accompanying drawings, but the present invention can be implemented in many different ways defined and covered by the claims.
Assume that an initial model exists; if not, a small number of pictures can be labeled manually to train an initial model. For convenience of the following explanation, we take defect detection as an example, as shown in Fig. 1.
Step 1: predict the unlabeled data with the initial model.
Step 2: perform confidence analysis on the prediction results. Because the performance of the model in step 1 is not yet satisfactory, the quality of its predictions varies; the purpose of the confidence analysis of the prediction results is therefore to pick out the reliable predictions.
A confidence function is used to assess the reliability of the model's predictions. The function evaluates the curve portion in Fig. 1(b) using the original image information I and the output vector of the softmax layer of the model. Specifically, for each point on the curve, we calculate information such as the variance and gradient of its contrast from the pixels in the 4x4 neighborhood around it; at the same position, the softmax-layer output of the model represents how strong the predicted defect is. If these two kinds of information match each other, the prediction is reliable. This mapping relationship can be characterized by a simple neural network or a support vector machine.
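As a minimal sketch only, assuming grayscale NumPy images and patches clipped at the image border, the local statistics and the joint feature described above might be computed as follows; the exact feature set and the suggestion of a scikit-learn classifier are assumptions of this sketch rather than requirements of the method.

import numpy as np

def point_features(image, y, x):
    # Variance and mean gradient magnitude of the pixels in the 4x4 neighborhood around (y, x).
    patch = image[max(y - 2, 0):y + 2, max(x - 2, 0):x + 2].astype(np.float32)
    gy, gx = np.gradient(patch)
    return np.array([patch.var(), np.abs(gy).mean(), np.abs(gx).mean()])

def joint_feature(image, y, x, softmax_vec):
    # Concatenate the local image statistics with the softmax output of the model at this point.
    # A small classifier (for example sklearn.svm.SVC(probability=True)) trained on points with
    # known ground truth can then map this joint feature to a confidence score.
    return np.concatenate([point_features(image, y, x), np.asarray(softmax_vec)])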
Step 3: key sample selection. Step 2 has performed confidence analysis on all the predictions, so reliable predictions can now be selected. We can set a confidence threshold T: when the confidence exceeds T, the prediction is considered reliable and is retained; otherwise it is discarded. As shown in Fig. 3, the prediction in the upper box of (b) has low confidence and is rejected, while the prediction in the lower box is retained.
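Continuing the sketch, and again only as an illustration, the key-sample selection with threshold T reduces to a simple filter; confidence_of stands for a hypothetical scoring function such as the classifier probability outlined above.

def select_reliable(point_predictions, confidence_of, T):
    # Retain predicted points whose confidence exceeds the threshold T; discard the others.
    reliable, rejected = [], []
    for point, prediction in point_predictions:
        target = reliable if confidence_of(point, prediction) > T else rejected
        target.append((point, prediction))
    return reliable, rejected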
Step 4: convert the prediction results into labeling information. With high probability, prediction results with high confidence agree with manual labeling, and they can therefore be converted into labeling information.
Step 5: update the model. With more labeling information, we can train a model θ_t; the model used last time is θ'_{t-1}, and the new model is θ = α·θ'_{t-1} + (1 - α)·θ_t, where α is an exponential smoothing coefficient. This model has better prediction performance than the previous one.
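As a minimal sketch of this update rule, assuming each model is represented as a dictionary of parameter arrays, the exponential smoothing θ = α·θ'_{t-1} + (1 - α)·θ_t can be applied parameter by parameter:

def smooth_update(prev_params, new_params, alpha):
    # theta = alpha * theta'_{t-1} + (1 - alpha) * theta_t, applied weight by weight.
    return {name: alpha * prev_params[name] + (1.0 - alpha) * new_params[name]
            for name in new_params}

# With alpha = 0.7, for example, 70% of each weight comes from the previous model, which damps
# oscillations that noisy pseudo-labels could otherwise introduce into the retrained model theta_t.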
With the updated model, we return to step 1, predict the part that has not yet been labeled, and at the same time update the existing predictions. Through such iteration, the performance of the model becomes better and better, and the quality of the labels also improves steadily.
The above is only a preferred embodiment of the present invention and is not intended to limit the invention; those skilled in the art may make various modifications and variations to the invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (6)

1. A mass picture labeling method, characterized by comprising:
Step 1, establishing an initial model according to labeling information for a plurality of pictures;
Step 2, predicting unlabeled pictures with the initial model;
Step 3, performing confidence analysis on the prediction results of step 2 to pick out reliable predictions;
Step 4, converting the prediction results of the reliable predictions into labeling information;
Step 5, updating the initial model with the labeling information obtained in step 4 and returning to step 1 for iteration.
2. The mass picture labeling method according to claim 1, characterized in that the labeling information in step 1 is obtained by manual labeling.
3. The mass picture labeling method according to claim 1 or 2, characterized in that step 3 comprises: setting a confidence function for assessing prediction reliability, the function evaluating the prediction results using the original image information and the output vector of the softmax layer of the model.
4. The mass picture labeling method according to claim 3, characterized in that step 3 comprises: for each point in the prediction result, calculating information such as the variance and gradient of its contrast from the pixels in the 4x4 neighborhood around it, and judging the degree of joint matching between this information and the output vector of the softmax layer of the model at the same position; if they jointly match according to a preset confidence threshold, the prediction is judged to be reliable.
5. The mass picture labeling method according to claim 4, characterized in that the joint matching is characterized by a neural network or a support vector machine.
6. The mass picture labeling method according to claim 4, characterized in that updating the initial model with the labeling information obtained in step 4 comprises:
if the initial model is θ_t and the model used last time is θ'_{t-1}, the new model is θ = α·θ'_{t-1} + (1 - α)·θ_t, where α is an exponential smoothing coefficient.
CN201910312598.0A 2019-04-18 2019-04-18 Mass picture labeling method Active CN110046666B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910312598.0A CN110046666B (en) 2019-04-18 2019-04-18 Mass picture labeling method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910312598.0A CN110046666B (en) 2019-04-18 2019-04-18 Mass picture labeling method

Publications (2)

Publication Number Publication Date
CN110046666A true CN110046666A (en) 2019-07-23
CN110046666B CN110046666B (en) 2022-12-02

Family

ID=67277763

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910312598.0A Active CN110046666B (en) 2019-04-18 2019-04-18 Mass picture labeling method

Country Status (1)

Country Link
CN (1) CN110046666B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112101156A (en) * 2020-09-02 2020-12-18 杭州海康威视数字技术股份有限公司 Target identification method and device and electronic equipment
CN112699908A (en) * 2019-10-23 2021-04-23 武汉斗鱼鱼乐网络科技有限公司 Method for labeling picture, electronic terminal, computer readable storage medium and equipment


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060171680A1 (en) * 2005-02-02 2006-08-03 Jun Makino Image processing apparatus and method
CN106096627A (en) * 2016-05-31 2016-11-09 河海大学 The Polarimetric SAR Image semisupervised classification method that considering feature optimizes
CN107977667A (en) * 2016-10-21 2018-05-01 西安电子科技大学 SAR target discrimination methods based on semi-supervised coorinated training
CN106897738A (en) * 2017-01-22 2017-06-27 华南理工大学 A kind of pedestrian detection method based on semi-supervised learning
US20180307917A1 (en) * 2017-04-24 2018-10-25 Here Global B.V. Method and apparatus for pixel based lane prediction
CN108764281A (en) * 2018-04-18 2018-11-06 华南理工大学 A kind of image classification method learning across task depth network based on semi-supervised step certainly

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MIN QI ET AL.: "Restoration algorithm of damaged mark image based on texture synthesis", 《2015 INTERNATIONAL CONFERENCE ON COMPUTER AND COMPUTATIONAL SCIENCES (ICCCS)》 *
YING LONG ET AL.: "A Confidence-Based Edge Detection Method", 《COMPUTER SIMULATION》 *


Also Published As

Publication number Publication date
CN110046666B (en) 2022-12-02

Similar Documents

Publication Publication Date Title
CN107134144B (en) A kind of vehicle checking method for traffic monitoring
CN110276264B (en) Crowd density estimation method based on foreground segmentation graph
CN106156781B (en) Sort convolutional neural networks construction method and its image processing method and device
CN105608456B (en) A kind of multi-direction Method for text detection based on full convolutional network
CN109670060A (en) A kind of remote sensing image semi-automation mask method based on deep learning
CN105069779B (en) A kind of architectural pottery surface detail pattern quality detection method
CN108460787A (en) Method for tracking target and device, electronic equipment, program, storage medium
CN102799669B (en) Automatic grading method for commodity image vision quality
CN109285179A (en) A kind of motion target tracking method based on multi-feature fusion
CN109767422A (en) Pipe detection recognition methods, storage medium and robot based on deep learning
CN104658002B (en) Non-reference image objective quality evaluation method
CN110570435B (en) Method and device for carrying out damage segmentation on vehicle damage image
CN107066628A (en) Wear the clothes recommendation method and device
CN108898085A (en) Intelligent road disease detection method based on mobile phone video
CN108491766B (en) End-to-end crowd counting method based on depth decision forest
CN108074243A (en) A kind of cellular localization method and cell segmentation method
CN109191255B (en) Commodity alignment method based on unsupervised feature point detection
CN110046666A (en) Mass picture mask method
CN112861959B (en) Automatic labeling method for target detection image
CN116863274A (en) Semi-supervised learning-based steel plate surface defect detection method and system
CN106612457B (en) Video sequence alignment schemes and system
CN115937626B (en) Automatic generation method of paravirtual data set based on instance segmentation
CN110321870A (en) A kind of vena metacarpea recognition methods based on LSTM
CN107563299A (en) A kind of pedestrian detection method using ReCNN integrating context informations
CN112637550B (en) PTZ moving target tracking method for multi-path 4K quasi-real-time spliced video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant