CN108805172A - A kind of blind evaluation method of image efficiency of object-oriented - Google Patents


Info

Publication number
CN108805172A
CN108805172A (application CN201810432104.8A)
Authority
CN
China
Prior art keywords
image
efficiency
block
feature
lossless
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810432104.8A
Other languages
Chinese (zh)
Inventor
孙斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Rui Jing Mdt Infotech Ltd
Original Assignee
Chongqing Rui Jing Mdt Infotech Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Rui Jing Mdt Infotech Ltd filed Critical Chongqing Rui Jing Mdt Infotech Ltd
Priority to CN201810432104.8A priority Critical patent/CN108805172A/en
Publication of CN108805172A publication Critical patent/CN108805172A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an object-oriented blind image efficiency evaluation method, which includes: Step S1, collecting several image samples to form an image sample set I and annotating the efficiency of the image samples; Step S2, training an image efficiency inference model using a machine learning algorithm; Step S3, using a test-set image as the input of the image efficiency inference model and predicting the efficiency of the test image. The present invention relates to a method for describing object-oriented efficiency: through semantic analysis, descriptive models of specific objects are learned from holistic descriptions of image efficiency, and the established object models are then used to evaluate the overall efficiency of unknown images.

Description

An object-oriented blind image efficiency evaluation method
Technical field
The present invention relates to the field of image processing, and in particular to an object-oriented blind image efficiency evaluation method.
Background technology
As an important multimedia carrier, images and video are widely used in everyday life. However, images inevitably suffer distortions during sampling, compression, and transmission, and these distortions can significantly reduce image quality. Early work measured image quality with the signal-to-noise ratio, but research has shown that certain properties of human vision make the signal-to-noise ratio only partially consistent with how people perceive image quality. Over the past two decades, researchers have studied image quality assessment extensively and, drawing on properties of human vision, proposed a series of evaluation methods. Depending on whether a non-degraded reference image is required, image quality assessment is divided into full-reference, reduced-reference, and no-reference methods; these algorithms run under different conditions and therefore suit different applications.
Most of the above research on image quality assessment is limited to generic images, applied mainly to images and video in consumer electronics. The research goal of image quality assessment is to make the score an algorithm assigns to an image highly correlated with people's subjective assessment of that image's quality. For common applications such as watching television or browsing pictures, what the user pursues is a good visual experience, so image quality is the appropriate evaluation target.
However, images and video are used not only in consumer electronics; they are also widely applied in security surveillance, automatic control, and other fields. Taking security as an example, mandatory standards such as the national standard of the People's Republic of China GB/T 21741-2008, General technical requirements for security systems of residential areas, and the public-security industry standard GA 38-2004, Regulations on the risk grades and protection levels of banking business premises, impose requirements on the subjective quality of images. During acceptance inspection, security inspectors divide the monitored targets into "point" scenes such as entrances and exits, "line" scenes such as passageways, and "area" scenes such as squares. A "point" scene requires a sharp image of the object of interest along the depth-of-field direction; a "line" scene requires a clear image of the object of interest within a specific cross-section; an "area" scene often does not require the object to be sharp, only that the object's posture be distinguishable. Traditional image quality assessment methods cannot distinguish scene types and cannot reflect the particular requirements of different scenes, so they are unsuitable for scenarios with special requirements such as surveillance. In addition, in automatic control, cameras are a common signal acquisition tool. When existing computer vision systems analyze acquired images, they attend to the image quality of specific objects rather than the quality of the entire image, so traditional image quality assessment is likewise unsuitable for these applications.
Invention content
In view of the above shortcomings of the prior art, the purpose of the present invention is to provide an object-oriented blind image efficiency evaluation method which, through semantic analysis, learns descriptive models of specific objects from holistic descriptions of image efficiency and then uses the established object models to evaluate the overall efficiency of unknown images.
To achieve the above and other related objects, the present invention provides an object-oriented blind image efficiency evaluation method, which specifically includes:
Step S1. Collect several image samples to form an image sample set I, and annotate the efficiency of the image samples;
Step S2. Train an image efficiency inference model using a machine learning algorithm;
Step S3. Use a test-set image as the input of the image efficiency inference model, and predict the efficiency of the test image.
Preferably, training the image efficiency inference model with a machine learning algorithm specifically includes the following sub-steps:
Step S21. Obtain m foreground object blocks F_m and n background object blocks B_n from the image sample set I;
Step S22. Classify all image samples in the image sample set I and extract features, obtaining feature vectors; for each image block, according to its feature value, find the nearest group of features in the lossless feature set of the corresponding class, and compute the feature difference between the image block's feature value and that nearest group of features found in the corresponding lossless feature set;
Step S23. Build word sets; reconstruct the image according to the word sets; compute the latent semantic distribution of each object in the image;
Step S24. Merge the latent semantic distributions of the different objects in the image into a new vector, which serves as the feature vector required by the machine learning algorithm; then use the machine learning algorithm to train the image efficiency inference model.
Preferably, step S21 specifically includes the following sub-steps:
Step S211. Detect the objects of interest in the image sample set I and regard them as foreground; the remainder is regarded as background;
Step S212. Extract the foreground objects in the image sample set I, identifying each foreground object with a foreground object block F_m that tightly encloses it, where m is the number of foreground objects; split the background into several regions of the same size as the foreground object blocks, each such region being denoted a background object block B_n, where n is the number of background object blocks.
Preferably, step S22 specifically includes the following sub-steps:
Step S221. Screen the undistorted images from the image sample set I to build a lossless image set;
Step S222. Manually classify the foreground object blocks F_m in the lossless image set by object type, obtaining several foreground classes F_1, ..., F_n;
Step S223. Extract the feature vector of each image block: normalize each image to obtain its mean-subtracted contrast-normalized (MSCN) coefficients, fit the distribution of the MSCN coefficients, and take the fitting parameters and degree of fit as features;
Step S224. Generate the feature sets of the corresponding lossless image blocks by object clustering: cluster the feature vectors of the image blocks within each object class F_i to obtain the lossless feature set describing each object, namely one lossless feature set per foreground object class and one lossless feature set for the background object blocks B_n;
Step S225. Classify all image blocks in the sample set I according to step S222, and extract their features according to step S223;
Step S226. Compute the feature difference of each image block: for each image block in the sample set I, according to its feature value, find the nearest feature vector in the lossless feature set of the corresponding class, then subtract the found lossless feature value from the image block's feature value to obtain the feature difference.
Preferably, step S23 specifically includes the following sub-steps:
Step S231. Build word sets: cluster the feature differences of the image blocks in the image sample set I according to their different classes, obtaining several sets W_1, ..., W_i, W_B, which respectively represent the word sets of the individual objects (W_B being the background word set);
Step S232. Re-characterize the images in the image sample set I: reconstruct each object according to the word set W_1, ..., W_i, W_B corresponding to its object class; for each image block, find the nearest word in the word set of the corresponding class and substitute it for the block. Each object type is finally re-characterized with the word set of its class, yielding the corresponding image document;
Step S233. Compute the latent semantic distribution of each object in the image: build PLSA topic models, feed the reconstruction result of each object in the image into its PLSA topic model, and solve for the latent semantic distribution of each object class by the EM algorithm.
Preferably, step S24 specifically includes the following sub-steps:
Step S241. Merge the latent semantic distributions of the different objects in the image into a new vector, which serves as the feature vector required by the machine learning algorithm;
Step S242. Use the machine learning algorithm to train the image efficiency inference model.
With the above technical scheme adopted, the present invention has the following beneficial effects:
The invention discloses a method for evaluating image efficiency. An image often contains objects of different natures, and users' requirements for the amount and quality of information carried by these objects differ. Unlike generic image quality assessment, image efficiency evaluation emphasizes a comprehensive description of the effective information an image contains. The present invention relates to a method for describing object-oriented efficiency: through semantic analysis, descriptive models of specific objects are learned from holistic descriptions of image efficiency, and the established object models are then used to evaluate the overall efficiency of unknown images.
In testing, the method proposed by the present invention performs very well on an image test set containing multiple different foreground objects; the agreement between the test-set image scores predicted by the SVM model and people's subjective evaluations of image efficiency reaches a comparatively high level.
Description of the drawings
Fig. 1 is the flow chart of the evaluation method of the present invention;
Fig. 2 is the overall flow chart of the training process;
Fig. 3 is the flow chart of training the image efficiency inference model with a machine learning algorithm.
Detailed description of the embodiments
The following illustrates the embodiments of the present invention through specific examples; those skilled in the art can readily understand other advantages and effects of the present invention from the content disclosed in this specification. The present invention can also be implemented or applied through other, different specific embodiments, and the details in this specification can likewise be modified or changed from different viewpoints and for different applications without departing from the spirit of the present invention. It should be noted that, where there is no conflict, the following embodiments and the features within them may be combined with one another.
It should be noted that the drawings provided with the following embodiments only schematically illustrate the basic concept of the present invention: they show only the components related to the present invention rather than the actual number, shape, and size of components in an implementation. In an actual implementation the form, quantity, and proportion of each component may vary arbitrarily, and the component layout may be considerably more complex.
The present invention mainly concerns the effective evaluation of the overall efficiency of an image when the scene contains objects of different importance. Image efficiency is a description of the overall performance of an image, not separate descriptions of the performance of each object in it; therefore, in building the efficiency evaluation model, the evaluation models of the objects are decomposed out of the overall efficiency of the image.
As shown in Fig. 1, this embodiment provides an object-oriented blind image efficiency evaluation method; the implementation of the present invention is divided into two parts, training and testing.
Training process:
Step S1. First collect several image samples and annotate the efficiency of these samples; the evaluation model is then trained on the annotated samples. As shown in Fig. 3, the specific implementation steps are as follows:
Step S2. Train the image efficiency inference model using a machine learning algorithm. Specifically, step S2 includes the following sub-steps:
Step S21. Obtain m foreground object blocks F_m and n background object blocks B_n from the image sample set I.
More specifically, detect the objects of interest in the image sample set I and regard them as foreground; the remaining part of the image is regarded as background. Each object of interest is tightly enclosed with a rectangular box, and the boxed region is labeled as a foreground object block F_m, where m is the number of detected foreground object blocks. The background is split into several regions of the same size as the foreground object blocks, and each such region is labeled as a background object block B_n, where n is the number of background object blocks.
In this embodiment, the foreground detector is built with the AdaBoost machine learning algorithm and cascade detectors, in three main steps: feature definition, strong classifier selection, and construction of the cascade detector. Specifically,
1) First, define features according to the specific foreground object types. Once the image sample set I is fixed, first determine the foreground object types in the images, then define the feature vectors from the specific information of each object, as the basis for building the classifiers. Usable features include Haar-like statistical features, SURF scale-invariant features, color features, and so on.
2) Then, extract the corresponding feature vectors according to the definitions and construct strong classifiers with good classification performance via the AdaBoost algorithm, which can effectively combine multiple different weak classifiers into one strong classifier. The main problems to solve are adjusting the weight of each sample and assigning the weight of each weak classifier. Suppose the weight of a weak classifier is α and its classification error rate on the sample set is ε; then the final expression is:

α = (1/2) · ln((1 − ε) / ε)

The sample weights are then adjusted according to the classification results of the classifier obtained with the solved weight. The following update increases the weights of misclassified samples and decreases the weights of correctly classified samples, where D denotes a sample's weight in the classifier and Z_t is a normalization factor:

D_i^(t+1) = D_i^(t) · e^(−α) / Z_t   if sample i is classified correctly
D_i^(t+1) = D_i^(t) · e^(+α) / Z_t   if sample i is misclassified

For each object, many weak classifiers can be trained from the feature values extracted according to the definitions; it suffices to select those with the lowest error rates and combine them into several strong classifiers through the AdaBoost framework.
3) Finally, the several strong classifiers obtained above are combined by a cascade detector, forming a cascade classifier of higher precision. A detector combined in a cascade structure can effectively locate the coordinates of the foreground in an image and tightly enclose the region with a rectangular box.
Following the above steps, different object types yield different classes of detectors; for example, Haar-like features can generate a face detector, while SURF and Sobel features can form a license plate detector.
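As a minimal illustration of the weak-classifier weighting and sample re-weighting described above, the following NumPy sketch performs one AdaBoost round on toy inputs; it is not the patent's cascade detector implementation, and the sample data are assumptions:

```python
import numpy as np

def adaboost_round(sample_weights, predictions, labels):
    """One AdaBoost round: compute the weak classifier's weight alpha
    from its weighted error rate, then re-weight the samples so that
    misclassified samples gain weight and correct ones lose weight."""
    miss = predictions != labels
    eps = np.sum(sample_weights[miss])           # weighted error rate ε
    alpha = 0.5 * np.log((1.0 - eps) / eps)      # weak classifier weight α
    new_w = sample_weights * np.exp(np.where(miss, alpha, -alpha))
    new_w /= new_w.sum()                         # normalization factor Z
    return alpha, new_w

# toy example: 4 samples with uniform initial weights, one misclassified
w = np.full(4, 0.25)
pred = np.array([1, 1, -1, -1])
y = np.array([1, 1, -1, 1])                      # last sample is wrong
alpha, w2 = adaboost_round(w, pred, y)
```

After the round the single misclassified sample carries half of the total weight, which is what drives the next weak classifier to focus on it.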
Step S22. Image degradation feature extraction: classify all image samples in the image sample set I and extract features, obtaining feature vectors; for each image block, according to its feature value, find the nearest group of features in the lossless feature set of the corresponding class, and compute the feature difference between the image block's feature value and that nearest group of features found in the corresponding lossless feature set. Specifically, step S22 includes the following sub-steps:
Step S221. Screen the image sample set I to build the undistorted image set. Considering that completely undistorted images are difficult to obtain, a subset of images whose quality is close to undistorted is randomly selected as a substitute, according to the subjective efficiency measurement results on the sample set, to constitute the undistorted image set. For example, images annotated 4.5 or above under the ITU standard are regarded as undistorted.
Step S222. According to the image segmentation results of step S21, scale the foreground object blocks F_m in the lossless image set, using the size of the smallest object block as the scaling reference, so that all object blocks are of comparable size; then manually sort all blocks by class, obtaining the classes F_1, ..., F_i of the i objects.
Step S223. Extract the spatial features of each image block; in this embodiment the spatial features are extracted based on the natural scene statistics model of the image, as follows:
First, compute the mean-subtracted contrast-normalized (MSCN) coefficients of the image. Let I(i, j) denote the original gray value of the image; the corresponding MSCN coefficient Î(i, j) is:

Î(i, j) = (I(i, j) − μ(i, j)) / (σ(i, j) + C)

which involves the two quantities σ and μ:

μ(i, j) = Σ_{k=−K}^{K} Σ_{l=−L}^{L} w_{k,l} · I(i + k, j + l)
σ(i, j) = sqrt( Σ_{k=−K}^{K} Σ_{l=−L}^{L} w_{k,l} · (I(i + k, j + l) − μ(i, j))² )

The above normalization removes the correlation between adjacent pixels of the image. Here i ∈ {1, 2, ..., M} and j ∈ {1, 2, ..., N} index the image plane, M and N being the height and width of the image; w is a Gaussian weighting function; K and L, usually both 3, give the size of the pixel neighborhood; and C is a constant term, usually 1, which keeps the computation stable when the denominator approaches zero. Two parameters, a shape parameter and a variance, can then characterize the distribution of the MSCN coefficients to some extent.
To compute these two parameters of the MSCN distribution, the MSCN distribution is fitted with a generalized Gaussian distribution (GGD) and the relevant parameter values are computed. The expression of the generalized Gaussian distribution is:

f(x; α₁, σ²) = (α₁ / (2 β Γ(1/α₁))) · exp(−(|x| / β)^α₁),  β = σ · sqrt(Γ(1/α₁) / Γ(3/α₁))

α₁ and β are intermediate variables of the generalized Gaussian distribution, and Γ is the Euler gamma integral:

Γ(a) = ∫₀^∞ t^(a−1) e^(−t) dt,  a > 0

After fitting the MSCN distribution with the GGD, σ and μ are extracted by the method of moments to characterize the distribution of the MSCN coefficients.
Then, exploiting the correlation between the MSCN coefficients of adjacent pixels, joint distribution models are built along four directions (main diagonal, secondary diagonal, horizontal, and vertical), and the asymmetric generalized Gaussian distribution (AGGD) model is used to fit the joint distributions.
The products of adjacent MSCN coefficients along the four directions are modeled as:

H(i, j) = Î(i, j) · Î(i, j + 1)
V(i, j) = Î(i, j) · Î(i + 1, j)
D1(i, j) = Î(i, j) · Î(i + 1, j + 1)
D2(i, j) = Î(i, j) · Î(i + 1, j − 1)

The distribution they obey is as follows, where ρ is the correlation between the two adjacent MSCN coefficients and K₀ is the modified Bessel function of the second kind:

f(x; ρ) = (1 / (π · sqrt(1 − ρ²))) · exp(ρ x / (1 − ρ²)) · K₀(|x| / (1 − ρ²))

This distribution can be fitted with the asymmetric generalized Gaussian distribution (AGGD), whose expression is:

f(x; ν, σ_l², σ_r²) = (ν / ((β_l + β_r) Γ(1/ν))) · exp(−(−x / β_l)^ν),  x < 0
f(x; ν, σ_l², σ_r²) = (ν / ((β_l + β_r) Γ(1/ν))) · exp(−(x / β_r)^ν),   x ≥ 0

where the parameters are:

β_l = σ_l · sqrt(Γ(1/ν) / Γ(3/ν)),  β_r = σ_r · sqrt(Γ(1/ν) / Γ(3/ν)),  η = (β_r − β_l) · Γ(2/ν) / Γ(1/ν)

The AGGD distribution effectively fits the joint distribution of the image's MSCN coefficients, and four new feature parameters are computed by the method of moments: the shape parameter ν, the left variance σ_l, the right variance σ_r, and the distribution mean η.
Since there are joint distribution models along four directions, 4 × 4 = 16 new feature parameters are obtained in total.
Finally, to describe the accuracy of the AGGD fit, one more feature parameter, the degree of fit, is introduced. It is obtained by computing the agreement between the MSCN distribution of an image block and the corresponding fitted distribution, as follows:

R(u, v) = corr(D(u, v), M(u, v))

R(·) is the degree-of-fit parameter of the corresponding image block; corr is the function computing correlation; D(·) is the MSCN distribution model of the block; and M(·) is the fitted generalized Gaussian distribution model. Since fitting is performed five times in total (the GGD fit plus the AGGD fits along the four directions), five new degree-of-fit feature values are obtained.
The feature vector finally extracted for each image block thus contains 2 + 16 + 5 = 23 feature parameters in total.
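The MSCN normalization at the heart of the spatial feature extraction above can be sketched as follows; the window size and Gaussian width are assumptions, and the deliberately simple loop-based filtering stands in for a proper convolution:

```python
import numpy as np

def mscn(img, k=3, sigma=7 / 6, C=1.0):
    """Mean-subtracted contrast-normalized coefficients of a gray image,
    using a (2k+1)x(2k+1) Gaussian weighting window."""
    ax = np.arange(-k, k + 1)
    g = np.exp(-(ax**2) / (2 * sigma**2))
    w = np.outer(g, g)
    w /= w.sum()                       # weights sum to 1

    def local_filter(x):
        # plain 2-D weighted sum with symmetric padding (slow but clear)
        p = np.pad(x, k, mode="symmetric")
        out = np.zeros_like(x, dtype=float)
        for dy in range(-k, k + 1):
            for dx in range(-k, k + 1):
                out += w[dy + k, dx + k] * p[k + dy:k + dy + x.shape[0],
                                             k + dx:k + dx + x.shape[1]]
        return out

    img = img.astype(float)
    mu = local_filter(img)                                   # local mean μ
    var = np.maximum(local_filter(img**2) - mu**2, 0.0)      # local variance
    return (img - mu) / (np.sqrt(var) + C)                   # MSCN coefficients

# sanity check: a perfectly flat image has zero MSCN everywhere
flat = np.full((16, 16), 128.0)
coeffs = mscn(flat)
```

The GGD/AGGD parameters would then be fitted to the histogram of these coefficients and of their directional products.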
Step S224. Generate the feature sets of the corresponding lossless image blocks by object clustering: cluster the feature vectors of the image blocks within each object class F_i, obtaining the lossless feature set describing each object, with one lossless feature set per foreground object class F_m and one lossless feature set for the background object blocks B_n.
Step S225. Classify all image blocks in the sample set I according to step S222, and extract their features according to step S223.
Step S226. Once the feature vector of each image block in the image sample set I has been obtained, proceed by Euclidean distance: for a foreground object block, find the nearest group of features in the lossless feature set of the corresponding class; for a background object block B, find the nearest group of features in the background lossless feature set. Then subtract the found lossless feature from the image block's feature to obtain the feature difference.
For example, the feature difference FV_l of a background object block is computed by subtracting the nearest feature vector found in the background lossless feature set from the feature vector of the block:

FV_l = FV(B_n) − FV_nearest
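The nearest-lossless-feature lookup and subtraction just described can be sketched as a small NumPy routine; the toy feature vectors below are hypothetical:

```python
import numpy as np

def feature_difference(block_features, lossless_set):
    """For each image-block feature vector, find the nearest vector in the
    lossless feature set (Euclidean distance) and return the difference."""
    # pairwise distance matrix of shape (n_blocks, n_lossless)
    d = np.linalg.norm(block_features[:, None, :] - lossless_set[None, :, :],
                       axis=2)
    nearest = lossless_set[np.argmin(d, axis=1)]   # closest lossless feature
    return block_features - nearest                # feature difference

blocks = np.array([[1.0, 1.0], [5.0, 5.0]])        # degraded-block features
lossless = np.array([[0.0, 0.0], [4.0, 4.0]])      # lossless feature set
diffs = feature_difference(blocks, lossless)
```

The differences, rather than the raw features, are what get clustered into visual words in the next step, so that the representation captures deviation from the lossless statistics.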
Step S23. Build word sets; reconstruct the image according to the word sets; compute the latent semantic distribution of each object in the image. Specifically, step S23 includes the following sub-steps:
Step S231. Latent semantic analysis first requires building word sets. An image is likened to a document composed of a group of words, a word being the feature vector of one small image block. Word sets are first built separately according to the classes of the image blocks: for each class, the feature differences of that object's image blocks are clustered by k-means into a word set of a certain size. The size of each foreground object word set is chosen according to the specific object; the background word set size is chosen as 600.
Step S232. Reconstruct the image according to the word sets: each image block is replaced by the closest word found in the corresponding word set. Each object class is thus re-characterized with the word set of its class, yielding the corresponding image document.
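The word-set reconstruction step (replacing each block by its nearest word, which turns one object's blocks into a "document") might be sketched as follows; the word set here is a hypothetical stand-in for the k-means centroids:

```python
import numpy as np

def to_document(block_features, word_set):
    """Quantize each block feature to the index of its nearest word and
    count word occurrences, producing one object's 'document'."""
    d = np.linalg.norm(block_features[:, None, :] - word_set[None, :, :],
                       axis=2)
    word_ids = np.argmin(d, axis=1)                    # nearest-word indices
    counts = np.bincount(word_ids, minlength=len(word_set))
    return word_ids, counts

words = np.array([[0.0, 0.0], [10.0, 10.0], [20.0, 20.0]])   # toy centroids
blocks = np.array([[1.0, 0.0], [9.0, 11.0], [19.0, 20.0], [0.5, 0.5]])
ids, counts = to_document(blocks, words)
```

The resulting word-count vector is exactly the document representation that the PLSA topic model consumes in the next step.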
Step S233. Project the different object classes of the image onto the latent semantic layer. The image documents obtained by reconstruction are processed with the PLSA topic model, a decomposition algorithm from text processing that resolves polysemy and near-synonymy by introducing a latent semantic layer. Its main generative process: first select a document with a certain probability P(d); then select the topics composing the document with probability P(z_k | d_i); finally select words according to the topics with probability P(W_j | z_k), generating the whole document. The main task of PLSA is to compute the two parameters P(z_k | d_i) and P(W_j | z_k). The computation uses the concept of joint distribution:

P(d_i, W_j) = P(W_j | d_i) · P(d_i)

and the decomposition of the word-given-document probability over the latent layer:

P(W_j | d_i) = Σ_k P(W_j | z_k) · P(z_k | d_i)

From these two formulas, the EM algorithm iterates continuously to obtain the optimal parameter values. The EM algorithm has two parts:
E step (expectation step): compute the posterior probability of the hidden variable:

P(z_k | d_i, W_j) = P(W_j | z_k) P(z_k | d_i) / Σ_l P(W_j | z_l) P(z_l | d_i)

M step (maximization step): find the optimal solution by maximizing the expectation:

P(W_j | z_k) ∝ Σ_i n(d_i, W_j) P(z_k | d_i, W_j)
P(z_k | d_i) ∝ Σ_j n(d_i, W_j) P(z_k | d_i, W_j)

where n(d_i, W_j) is the number of occurrences of word W_j in document d_i.
Through the above steps, the topic distribution of a document is finally obtained.
Given the commonality between image processing and text processing, PLSA is introduced into the present invention: the reconstruction result of each object class in the image from step S232 is treated as a document, a topic layer of size S is introduced, and the word set of the class is denoted W, yielding the corresponding PLSA model.
For each of the different object types in the image, the above formulation is applied class by class, and the latent semantic distribution of each class is solved by the EM algorithm.
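A minimal EM solver for the PLSA model above can be sketched in NumPy as follows; it operates on a document-word count matrix and returns the two parameter sets P(z|d) and P(w|z). The toy documents and topic count are assumptions:

```python
import numpy as np

def plsa(counts, n_topics, n_iter=50, seed=0):
    """Minimal PLSA via EM on a document-word count matrix.
    Returns P(z|d), shape (docs, topics), and P(w|z), shape (topics, words)."""
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_z_d = rng.random((n_docs, n_topics))
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z = rng.random((n_topics, n_words))
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E step: posterior P(z | d, w) for every (d, w) pair
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]          # d x z x w
        post = joint / np.maximum(joint.sum(axis=1, keepdims=True), 1e-12)
        # M step: re-estimate both parameter sets from expected counts
        weighted = counts[:, None, :] * post                    # d x z x w
        p_w_z = weighted.sum(axis=0)
        p_w_z /= np.maximum(p_w_z.sum(axis=1, keepdims=True), 1e-12)
        p_z_d = weighted.sum(axis=2)
        p_z_d /= np.maximum(p_z_d.sum(axis=1, keepdims=True), 1e-12)
    return p_z_d, p_w_z

# two tiny "image documents" over a 4-word vocabulary
docs = np.array([[5, 4, 0, 0],
                 [0, 0, 6, 3]], dtype=float)
p_z_d, p_w_z = plsa(docs, n_topics=2)
```

The per-document topic distribution P(z|d) is the latent semantic distribution that later gets merged into the SVM feature vector.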
Step S24. Build the image efficiency inference model.
Specifically, step S24 includes the following sub-steps:
Step S241. Merge the latent semantic distributions of the object classes of the image: the latent semantic distributions of the different objects, computed by the PLSA algorithm, are merged into one new feature vector.
Step S242. With the image sample set I as the training set, a support vector machine (SVM) is used to model the relationship between the feature vectors of the training-set images and their subjective annotations (scores). Since the feature vector contains the latent semantic distributions of the different objects in the image, the SVM's ability to adjust parameter weights automatically is used to tune, within the model, the influence of the different objects on image efficiency. The overall flow of the training process is shown in Fig. 2.
In a specific application the model is non-real-time and built completely offline; it only needs to be trained once to be usable for a long time.
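Under the assumption of scikit-learn's SVR as the support vector regressor (the patent names no library), the modeling step might look like the sketch below; the feature vectors, scores, and hyperparameters are all hypothetical:

```python
import numpy as np
from sklearn.svm import SVR

# hypothetical merged latent-semantic feature vectors (one row per image)
# and their annotated efficiency scores on a 1-5 scale
X_train = np.array([[0.9, 0.1, 0.8, 0.2],
                    [0.2, 0.8, 0.3, 0.7],
                    [0.5, 0.5, 0.5, 0.5],
                    [0.7, 0.3, 0.9, 0.1]])
y_train = np.array([4.5, 2.0, 3.0, 4.0])

# kernel, C and epsilon are assumed values, not from the patent
model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
model.fit(X_train, y_train)

# two unseen "test images": one resembling a high-scoring sample,
# one resembling a low-scoring sample
X_test = np.array([[0.85, 0.15, 0.8, 0.2],
                   [0.25, 0.75, 0.3, 0.7]])
scores = model.predict(X_test)     # predicted image efficiency
```

Because the regression is trained once offline, only the fitted model needs to be kept for the test phase.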
Test process:
Step S3. derives the input of model using test set image as image efficiency, predicts efficiency of the test as image. Step S3 specifically includes following steps:
Step S31. image characteristics extractions
Test image is split by step S311..Test image is extracted according to the grader obtained in training process Foreground is to as block Fm', and all according to foreground to as block Fm' in the sizes of smallest blocks zoomed in and out for standard, and according to difference Foreground category classification.Background image is pressed into the uniform piecemeal of standard size, obtains background to as block Bn'。
The spatial feature of step S312. extraction image blocks, the spatial feature of image is extracted according to the method in training process Vector.
Step S313. seeks new feature of the feature difference as image block.If in the lossless spy of Ganlei that training process obtains It during collection is closed, is found according to corresponding classification immediate with training set image block characteristics, calculates their difference as test Collect the new feature of image block.
Step S32. potential applications extract
Step S321. test images reconstruct.According to the set of words obtained in training to test set image reconstruction, different pairs As the set of words that classification corresponds to class characterizes and obtains corresponding image document again.
Step S322. calculates the potential applications distribution of different objects in test set image.According to what is obtained in training process PLSA model parameters and EM algorithms can find out the potential applications regularity of distribution of the image document of new incoming.
Step S33. Test model performance
Step S331. Merge the latent semantic distributions of the different objects to form a new feature vector.
Step S332. Using the SVM prediction model obtained during training, pass in the test image's feature vector obtained in step S331 and predict the test image's efficiency.
Testing shows that the method proposed by the present invention performs very well on image test sets containing multiple different foreground objects: the test-set image scores predicted by the SVM model fit people's subjective evaluations of image efficiency to a high degree.
The present invention relates to a method for assessing the efficiency of images. The method is suitable for situations in which specific functional requirements are placed on image properties; it yields more accurate results than image quality evaluation, and compared with traditional quality evaluation methods it is better suited to object-oriented image efficiency measurement.
The above-described embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Anyone familiar with this technology may modify or change the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications or changes completed by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of the present invention.

Claims (6)

1. An object-oriented blind evaluation method for image efficiency, characterized in that the method specifically includes:
Step S1. Collect several image samples to form an image sample set I, and annotate the efficiency of each image sample;
Step S2. Train an image-efficiency prediction model using a machine learning algorithm;
Step S3. Take the test-set images as input to the image-efficiency prediction model and predict the efficiency of each test image.
2. The object-oriented blind evaluation method for image efficiency according to claim 1, characterized in that training the image-efficiency prediction model using a machine learning algorithm specifically includes the following sub-steps:
Step S21. Obtain, from the image sample set I, m foreground object blocks Fm and n background object blocks Bn;
Step S22. Classify the image blocks of all images in the image sample set I and extract their features, obtaining feature vectors; according to each image block's feature values, find the closest group of features in the lossless feature set of the corresponding class, and compute the difference between the image block's feature values and the feature values of that closest group;
Step S23. Establish the word sets; reconstruct the images according to the word sets; compute the latent semantic distribution of each object in the images;
Step S24. Merge the latent semantic distributions of the different objects in an image into a new vector, which serves as the feature vector required by the machine learning algorithm; using the machine learning algorithm, train to obtain the image-efficiency prediction model.
3. The object-oriented blind evaluation method for image efficiency according to claim 2, characterized in that step S21 specifically includes the following sub-steps:
Step S211. Detect the objects of interest in the image sample set I, regarding them as the foreground and the remainder as the background;
Step S212. Extract the foreground objects in the image sample set I, and identify each foreground object by the foreground object block Fm that tightly encloses it, where m is the number of foreground objects; cut the background into several regions of the same size as the foreground object blocks, each denoted a background object block Bn, where n is the number of background blocks.
4. The object-oriented blind evaluation method for image efficiency according to claim 3, characterized in that step S22 specifically includes the following sub-steps:
Step S221. Screen the undistorted images from the image sample set I to construct a lossless image set;
Step S222. Manually classify the foreground object blocks Fm in the lossless image set by object type, obtaining several foreground classes F1, ..., Fn;
Step S223. Extract the feature vector of each image block: normalize the image to obtain the mean-subtracted contrast-normalized (MSCN) coefficients, fit the distribution of the MSCN coefficients, and take the fitting parameters and goodness of fit as features;
Step S224. Generate the lossless feature set of the corresponding image blocks by object clustering: cluster the feature vectors of the image blocks in each foreground object class Fi and in the background B separately, obtaining the lossless feature sets describing each object, i ∈ [1, n]: one lossless feature set for the i-th class of foreground object blocks Fm, and one lossless feature set for the background object blocks Bn;
Step S225. Classify all image blocks in the sample set I according to step S222, and extract their features according to step S223;
Step S226. Compute the feature difference of each image block: for each image block in the sample set I, according to its feature values, find the feature vector closest to it in the lossless feature set of the corresponding class, then subtract the feature values of the found lossless feature from the image block's feature values to obtain the feature difference.
5. The object-oriented blind evaluation method for image efficiency according to claim 4, characterized in that step S23 specifically includes the following sub-steps:
Step S231. Establish the word sets: cluster the feature differences of the image blocks in the image sample set I by class, obtaining several sets W1, ..., Wi, WB, which respectively represent the word sets of the individual objects;
Step S232. Re-characterize the images in the image sample set I: reconstruct each object according to the word set W1, ..., Wi, WB corresponding to its object class; for each image block, find the word closest to it in the word set of the corresponding class and replace the block with that word; each object type is thus finally re-characterized by the word set of its class, yielding the corresponding image document;
Step S233. Compute the latent semantic distribution of each object in the image: establish a PLSA topic model, feed the reconstruction result of each object in the image into the PLSA topic model separately, and solve for the latent semantic distribution of each type of object with the EM algorithm.
6. The object-oriented blind evaluation method for image efficiency according to claim 5, characterized in that step S24 specifically includes the following sub-steps:
Step S241. Merge the latent semantic distributions of the different objects in an image into a new vector, which serves as the feature vector required by the machine learning algorithm;
Step S244. Using the machine learning algorithm, train to obtain the image-efficiency prediction model.
CN201810432104.8A 2018-05-08 2018-05-08 A kind of blind evaluation method of image efficiency of object-oriented Pending CN108805172A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810432104.8A CN108805172A (en) 2018-05-08 2018-05-08 A kind of blind evaluation method of image efficiency of object-oriented

Publications (1)

Publication Number Publication Date
CN108805172A true CN108805172A (en) 2018-11-13

Family

ID=64091964

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810432104.8A Pending CN108805172A (en) 2018-05-08 2018-05-08 A kind of blind evaluation method of image efficiency of object-oriented

Country Status (1)

Country Link
CN (1) CN108805172A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113313215A (en) * 2021-07-30 2021-08-27 腾讯科技(深圳)有限公司 Image data processing method, image data processing device, computer equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102542302A (en) * 2010-12-21 2012-07-04 中国科学院电子学研究所 Automatic complicated target identification method based on hierarchical object semantic graph
CN106815839A (en) * 2017-01-18 2017-06-09 中国科学院上海高等研究院 A kind of image quality blind evaluation method
CN107220663A (en) * 2017-05-17 2017-09-29 大连理工大学 A kind of image automatic annotation method classified based on semantic scene
CN107481238A (en) * 2017-09-20 2017-12-15 众安信息技术服务有限公司 Image quality measure method and device


Similar Documents

Publication Publication Date Title
Narihira et al. Learning lightness from human judgement on relative reflectance
CN104866829B (en) A kind of across age face verification method based on feature learning
CN104657718B (en) A kind of face identification method based on facial image feature extreme learning machine
CN101236608B (en) Human face detection method based on picture geometry
CN102629328B (en) Probabilistic latent semantic model object image recognition method with fusion of significant characteristic of color
CN107133955B (en) A kind of collaboration conspicuousness detection method combined at many levels
CN105787472B (en) A kind of anomaly detection method based on the study of space-time laplacian eigenmaps
CN106408030B (en) SAR image classification method based on middle layer semantic attribute and convolutional neural networks
CN107085716A (en) Across the visual angle gait recognition method of confrontation network is generated based on multitask
CN105630901A (en) Knowledge graph representation learning method
WO2023159909A1 (en) Nutritional management method and system using deep learning-based food image recognition model
CN109816625A (en) A kind of video quality score implementation method
CN106228137A (en) A kind of ATM abnormal human face detection based on key point location
CN102422324B (en) Age estimation device and method
CN101833664A (en) Video image character detecting method based on sparse expression
CN106096551A (en) The method and apparatus of face part Identification
Shuai et al. Object detection system based on SSD algorithm
CN106203256A (en) A kind of low resolution face identification method based on sparse holding canonical correlation analysis
CN104008375A (en) Integrated human face recognition mehtod based on feature fusion
CN103971106A (en) Multi-view human facial image gender identification method and device
CN108229571A (en) Apple surface lesion image-recognizing method based on KPCA algorithms Yu depth belief network
CN105528620A (en) Joint robustness principal component feature learning and visual classification method and system
CN104778466A (en) Detection method combining various context clues for image focus region
CN106127234A (en) The non-reference picture quality appraisement method of feature based dictionary
CN109754390A (en) A kind of non-reference picture quality appraisement method based on mixing visual signature

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181113