CN109829924A - An image quality evaluation method based on subject feature analysis - Google Patents

An image quality evaluation method based on subject feature analysis

Info

Publication number
CN109829924A
CN109829924A (application CN201910046769.XA)
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910046769.XA
Other languages
Chinese (zh)
Other versions
CN109829924B
Inventor
田昕
严吕
章浩然
李松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Shenzhen Research Institute of Wuhan University
Original Assignee
Wuhan University WHU
Shenzhen Research Institute of Wuhan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU, Shenzhen Research Institute of Wuhan University
Priority to CN201910046769.XA
Publication of CN109829924A
Application granted
Publication of CN109829924B
Legal status: Expired - Fee Related


Abstract

The invention discloses an image quality evaluation method based on subject feature analysis. It addresses the low accuracy of conventional image quality evaluation methods, which give insufficient consideration to the features of the image subject. The method evaluates image quality mainly from subject features such as depth of field, spatial composition, background color complexity, and the subject's Lab features, together with a global feature, image blur, and performs better than general-purpose methods. In addition, the invention adopts a sparse-representation-based classifier, which outperforms traditional classifiers.

Description

An image quality evaluation method based on subject feature analysis
Technical field
The invention belongs to the field of image aesthetic quality evaluation, and specifically concerns an image quality evaluation method based on subject feature analysis.
Background technique
Image quality evaluation is the predecessor of image aesthetic quality evaluation: it simulates the human visual system to evaluate image quality objectively. Traditional image quality evaluation mainly detects whether an image has been distorted during transmission, e.g. by blur, distortion, or noise. Image aesthetic quality evaluation, by contrast, judges how pleasing an image is to the senses. It lies at the intersection of computer science with psychology, design, and related disciplines, and involves more abstract indices such as image depth of field, subject composition, color harmony, scene, and lighting.
In 2005 Datta published an article on computational aesthetics at ECCV, drawing wide attention in the field. Over the following decade, more and more researchers turned to image aesthetic quality evaluation, and such articles took a growing share of major conferences. Because aesthetic-evaluation databases were lacking, early work only performed binary classification of images; after the AVA database was created, multi-class classification and regression of image aesthetics were gradually studied as well. By 2015 image aesthetic quality evaluation had hit a bottleneck, with accuracy no longer improving noticeably; with the rise of deep learning over the following two years, it became a popular field again.
Recently, with the rapid development of artificial intelligence, image aesthetic evaluation has become increasingly important in fields such as image retrieval, street-view navigation, beautification, advertising design, and media, and many applications require image aesthetic evaluation as a core algorithm. The application of image aesthetic quality evaluation is still in its infancy, but there are already some success stories. For example, the online video site iQiyi uses image aesthetic quality evaluation algorithms to automatically extract the frame best suited as a video cover; once the technology matures and accuracy improves substantially, a great deal of manual labor can be saved. Apple has added automatic cropping to its phone's image editor, drawing on research in composition and image aesthetics to automatically crop to the highest-scoring image.
Summary of the invention
The problem to be solved by the invention is this: conventional image quality evaluation methods give insufficient consideration to the features of the image subject, which leads to low image quality evaluation accuracy. The invention therefore proposes an image quality evaluation method based on subject feature analysis, which evaluates image quality mainly from subject features such as depth of field, spatial composition, background color complexity, and the subject's Lab features, together with a global feature, image blur.
The technical scheme of the invention is an image quality evaluation method based on subject feature analysis, comprising the following steps:
Step S1: train the image quality assessment model on an image aesthetic quality assessment database, comprising the following sub-steps:
Step S11: perform subject detection on every training image in the image aesthetic quality assessment database, determining the subject region(s) and the background region;
Step S12: extract subject features from the detected subject and background regions, where the subject features comprise: depth of field, spatial composition, background color complexity, and the subject's Lab features;
Step S13: compute the global feature, image blur;
Step S14: generate the sparse-representation-based image quality classification dictionary D;
Step S2: classify the quality of a test image, comprising the following sub-steps:
Step S21: extract the subject features and global feature of the input test image and concatenate all features into a total feature vector y;
Step S22: determine the quality class of the test image. The optimization objective is

x̂ = argmin_x ||y − Dx||₂² + λ||x||₁

where x is the variable, x̂ is the result of the optimization, λ is a regularization parameter (a constant), and D is the image quality classification dictionary; the equation is solved with the Lasso method.
Then compute the error r_i(y) with which the test image is reconstructed as class i:

r_i(y) = ||y − Dδ_i(x̂)||₂

where δ_i(x̂) is a column vector generated from x̂ by keeping the entries corresponding to class i and setting all other entries to 0. The test image is assigned the quality class c = argmin_i r_i(y).
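The classification in step S22 can be sketched as follows. This is not the patent's exact implementation: ISTA stands in for the cited Lasso solver, and `class_sizes` is a helper name introduced here to record how many dictionary columns belong to each class.

```python
import numpy as np

def src_classify(y, D, class_sizes, lam=8e-5, n_iter=500):
    """Sparse-representation classification: approximately solve the Lasso
    min_x ||y - D x||_2^2 + lam*||x||_1 with ISTA (a simple stand-in for
    the Lasso solver the patent cites), then assign the class with the
    smallest reconstruction error r_i(y) = ||y - D delta_i(x)||_2."""
    x = np.zeros(D.shape[1])
    L = 2.0 * np.linalg.norm(D, 2) ** 2 + 1e-12    # Lipschitz constant of the gradient
    for _ in range(n_iter):                        # ISTA: gradient step + soft threshold
        g = x - (2.0 / L) * (D.T @ (D @ x - y))
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    errors, start = [], 0
    for m in class_sizes:                          # residual per class
        delta = np.zeros_like(x)
        delta[start:start + m] = x[start:start + m]  # keep only class-i coefficients
        errors.append(float(np.linalg.norm(y - D @ delta)))
        start += m
    return int(np.argmin(errors)), errors
```

With two classes, as in the embodiment, `class_sizes` would be `[M, M]` for M training images per class.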
Further, in step S11 the subject and background regions are determined with Bogdan Alexe's salient-region detection method, implemented as follows:
(1.1) Image preprocessing: extract the R, G, B channel images of the original image and apply multi-scale transforms to each channel; divide the image f under each scale transform into several candidate boxes of equal size, and fix the number U of salient regions;
(1.2) Salient-region detection on each candidate box, using a spectral saliency method. Specifically: first apply the Fourier transform to the image f under each scale transform to obtain its amplitude spectrum A(f) and phase spectrum P(f); define the log-amplitude spectrum L(f) = log(A(f)); let h be an n×n mean-filter convolution kernel and g a Gaussian blur convolution kernel. The salient region S(f) is then computed as:
R(f) = L(f) − h ∗ L(f)
S(f) = g ∗ |F⁻¹[exp(R(f) + iP(f))]|²
where i is the imaginary unit, exp the exponential function, ∗ convolution, and F⁻¹ the inverse Fourier transform;
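A minimal NumPy sketch of the spectral-residual computation above (the σ of the Gaussian kernel and the edge padding are choices made here; the patent only fixes the kernel sizes in its embodiment):

```python
import numpy as np

def gaussian_blur(img, size, sigma):
    """Separable Gaussian blur with a 1-D kernel applied along both axes."""
    ax = np.arange(size) - (size - 1) / 2.0
    g1 = np.exp(-ax**2 / (2 * sigma**2))
    g1 /= g1.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, g1, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, g1, mode='same'), 0, out)

def spectral_residual_saliency(channel, n=3, gauss_size=10, sigma=2.5):
    """Spectral-residual saliency map (Hou & Zhang) for one image channel.
    n: side of the mean filter applied to the log-amplitude spectrum."""
    F = np.fft.fft2(channel.astype(np.float64))
    A = np.abs(F)                      # amplitude spectrum A(f)
    P = np.angle(F)                    # phase spectrum P(f)
    L = np.log(A + 1e-12)              # log-amplitude spectrum L(f)
    pad = n // 2                       # h * L(f): n x n mean filter
    Lp = np.pad(L, pad, mode='edge')
    avg = np.zeros_like(L)
    for dy in range(n):
        for dx in range(n):
            avg += Lp[dy:dy + L.shape[0], dx:dx + L.shape[1]]
    avg /= n * n
    R = L - avg                        # spectral residual R(f)
    S = np.abs(np.fft.ifft2(np.exp(R + 1j * P))) ** 2
    return gaussian_blur(S, gauss_size, sigma)   # g * |F^-1[...]|^2
```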
(1.3) For each candidate box under each scale transform, compute the corresponding salient region S(f); combining with a threshold, compute the significance value of each candidate box's salient region as described below, take the U candidate boxes with the largest significance values as subject regions, and the other regions as background. In the significance formula, S denotes the scale after transformation, ω a candidate box, p a pixel inside the box, I_S(p) the value of p in the saliency map, and θ_S the threshold at scale S.
Further, the specific implementation of step S12 is as follows,
(2.1) Spatial composition feature: divide the original image into 9 equal rectangles forming a '#'-shaped (3×3) grid; the 4 intersection points are the golden-section points. Let the subject center have coordinates (x1, y1) in the original image, and let the 4 golden-section points be (x2, y2), (x3, y3), (x4, y4), (x5, y5). The steps are as follows:
2.1a: compute the distances L1, L2, L3, L4 from the subject center to the 4 golden-section points, where ω and h are the width and height of the original image;
2.1b: compute the angles α1, α2, α3, α4 between the horizontal axis and the lines joining the subject center to the 4 golden-section points:
α_{m−1} = arctan[(y_m − y1)/(x_m − x1)], m = 2, 3, 4, 5
2.1c: compute the overlap-area ratios P_{ij} between the different subject regions and the rectangles at the four corners of the grid, where ω_j^i and h_j^i are the width and height of the j-th subject region inside the i-th corner rectangle, and ω, h are the width and height of the image;
2.1d: the spatial composition feature is the column vector formed by concatenating L1, L2, L3, L4, α1, α2, α3, α4, and the ratios P_{ij};
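Steps 2.1a and 2.1b can be sketched as below. This is a hedged sketch: the patent omits the distance formula, so placing the golden-section points at 1/3 and 2/3 of each side (the grid intersections) and using plain Euclidean distances are assumptions made here; `atan2` replaces the patent's arctan of the slope to avoid division by zero.

```python
import math

def spatial_composition(cx, cy, w, h):
    """Distances from the subject centre (cx, cy) to the four intersection
    points of the '#'-shaped 3x3 grid of a w x h image, plus the angles of
    the connecting lines (alpha_1..alpha_4)."""
    points = [(w / 3, h / 3), (2 * w / 3, h / 3),
              (w / 3, 2 * h / 3), (2 * w / 3, 2 * h / 3)]
    dists = [math.hypot(px - cx, py - cy) for px, py in points]
    # patent writes arctan[(y_m - y1)/(x_m - x1)]; atan2 is the safe variant
    angles = [math.atan2(py - cy, px - cx) for px, py in points]
    return dists, angles
```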
(2.2) Depth of field: computed for the subject region with the largest significance value from step (1.3);
2.2a: compute image blur: apply the fast Fourier transform to each of the R, G, B channels of the image, take the maximum resulting value, and set one fifth of that maximum as the threshold for judging blur; then compute the proportion, within the whole image, of blurred pixels (those below the threshold);
2.2b: compute the depth of field: using step 2.2a, compute the blur F of the subject region with the largest significance value from step (1.3) and the blur B of the background region, and take the ratio F/B as the depth-of-field feature value;
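Steps 2.2a and 2.2b might be sketched as follows. The patent's phrase "blurred pixels (below the threshold)" is ambiguous, so counting spectral coefficients below the threshold is one reading, not the patent's confirmed implementation.

```python
import numpy as np

def blur_ratio(img_rgb):
    """Blur measure per our reading of step 2.2a: FFT each RGB channel,
    set the threshold to 1/5 of the overall maximum magnitude, and return
    the fraction of spectral coefficients below that threshold."""
    mags = [np.abs(np.fft.fft2(img_rgb[..., c].astype(float))) for c in range(3)]
    thr = max(m.max() for m in mags) / 5.0
    total = sum(m.size for m in mags)
    below = sum(int((m < thr).sum()) for m in mags)
    return below / total

def depth_of_field(subject_rgb, background_rgb):
    """Depth-of-field feature: ratio of subject blur F to background blur B."""
    return blur_ratio(subject_rgb) / blur_ratio(background_rgb)
```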
(2.3) Background color complexity: divide each of the R, G, B channels of the original image into N equal intervals, giving N³ cells in the three-dimensional RGB color space. Traverse every pixel of the background corresponding to the largest significance value in step (1.3), count the number ni of color-space cells occupied by background pixels, and take ni*100 as the background color complexity feature value;
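A sketch of the background color complexity computation. Note that the description reads ni as a count of occupied cells, while the embodiment's example value (3.1738 with 4096 cells) suggests a fraction; the count reading is implemented here, so switch to `ni / n_bins**3 * 100` for the other reading.

```python
import numpy as np

def background_color_complexity(background_rgb, n_bins=16):
    """Quantise each RGB channel into n_bins equal intervals (n_bins^3
    cells in total), count how many distinct cells the background pixels
    occupy (ni), and return ni * 100 as the feature value."""
    q = (background_rgb.astype(int) * n_bins) // 256   # 0..n_bins-1 per channel
    q = np.clip(q, 0, n_bins - 1)
    flat = q.reshape(-1, 3)
    cells = flat[:, 0] * n_bins * n_bins + flat[:, 1] * n_bins + flat[:, 2]
    ni = len(np.unique(cells))                         # distinct occupied cells
    return ni * 100
```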
(2.4) Subject Lab features: first convert the original image from RGB space to Lab space, then extract features in Lab space. The RGB-to-Lab conversion is as follows:
First convert the R, G, B values of the original image into the X, Y, Z values underlying Lab space, and then convert X, Y, Z into L*, a*, b*. Here L*, a*, b* represent brightness, tone, and color temperature respectively, and the usual default white point is Xn, Yn, Zn = 95.047, 100.0, 108.883. Then compute the following feature values:
L (brightness feature), where L1 is the mean brightness in Lab space of the subject with the largest significance value from step (1.3), and L2 is the mean brightness of the whole image in Lab space;
A (tone feature), where a1 is the mean tone in Lab space of the subject with the largest significance value from step (1.3), and a2 is the mean tone of the whole image in Lab space;
B (color-temperature feature), where b1 is the mean color temperature in Lab space of the subject with the largest significance value from step (1.3), and b2 is the mean color temperature of the whole image in Lab space.
The features L, A, and B are concatenated into a column vector, forming the subject's Lab feature.
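Since the patent's RGB-to-Lab formulas are omitted from this text, here is a sketch using the standard sRGB → XYZ → Lab conversion with the stated D65 white point; treating the L, A, B features as subject-to-whole-image mean ratios is an assumption, as the patent does not spell out how L1/L2 etc. are combined.

```python
import numpy as np

def rgb_to_lab(rgb):
    """sRGB (0..255) -> CIE Lab under D65 white (Xn, Yn, Zn =
    95.047, 100.0, 108.883), the standard conversion the patent's
    omitted formulas presumably follow."""
    c = rgb.astype(np.float64) / 255.0
    c = np.where(c > 0.04045, ((c + 0.055) / 1.055) ** 2.4, c / 12.92)  # linearise
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = c @ M.T * 100.0
    xyz = xyz / np.array([95.047, 100.0, 108.883])        # normalise by white point
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def lab_features(subject_rgb, image_rgb):
    """Subject/whole-image mean ratios in L, a, b: one plausible reading of
    the patent's L, A, B feature definitions (an assumption)."""
    s, g = rgb_to_lab(subject_rgb), rgb_to_lab(image_rgb)
    return [s[..., k].mean() / (g[..., k].mean() + 1e-12) for k in range(3)]
```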
Further, in step S13 the global feature, image blur, is computed with the no-reference blur assessment method for natural images based on the Fourier transform and spatial pyramids, as follows:
S13a: cut the original image into 9 equal-sized blocks, denoted f1, ..., f9;
S13b: compute a power image P for the original image and each of the nine blocks:
P = 10·log(|G|² + 1)
where G is the Fourier transform of the corresponding image or block;
S13c: step S13b yields 10 power images. Divide each power image again into 9 equal-sized blocks and, together with the image itself, obtain 10 pieces per power image. Divide the value range of the power images into K intervals by magnitude, count the number of power values falling in each interval, and take the statistics as the feature vector of the global blur feature, whose length is 100K.
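A sketch of S13a–S13c. Reading the power-image formula as a log power spectrum is an assumption (the "log" appears to have been lost in the source text), and per-piece histogram ranges are a choice made here.

```python
import numpy as np

def split9(img):
    """Cut an image into a 3x3 grid of equal-sized blocks
    (edge pixels beyond a multiple of 3 are dropped)."""
    h, w = img.shape[0] // 3, img.shape[1] // 3
    return [img[i*h:(i+1)*h, j*w:(j+1)*w] for i in range(3) for j in range(3)]

def power_image(img):
    """Power image P = 10*log(|G|^2 + 1), with G the 2-D Fourier transform."""
    G = np.fft.fft2(img.astype(np.float64))
    return 10.0 * np.log(np.abs(G) ** 2 + 1.0)

def global_blur_feature(gray, k_bins=5):
    """Global blur feature: power images of the image and its nine blocks,
    each again split into itself + nine blocks, value histogram with
    k_bins bins each -> a vector of length 10 * 10 * k_bins (500 for k=5)."""
    feats = []
    for piece in [gray] + split9(gray):
        P = power_image(piece)
        for sub in [P] + split9(P):
            hist, _ = np.histogram(sub, bins=k_bins)
            feats.extend(hist)
    return np.asarray(feats)
```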
Further, the image quality classification dictionary D in step S14 is obtained as follows. Suppose the image aesthetic quality assessment database contains N image quality classes. For class i, build the corresponding class dictionary D_i as follows: for the j-th training image of class i, generate its subject features (depth of field, spatial composition, background color complexity, and the subject's Lab features) and global feature (image blur) according to steps S11, S12 and S13, and concatenate all features into one column vector d_j; then D_i = [d_1, ..., d_M], where M is the number of training images in class i. The total image quality classification dictionary can then be expressed as D = [D_1, ..., D_N].
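The dictionary construction in step S14 amounts to stacking feature columns per class; a minimal sketch (`features_per_class` is a helper name introduced here):

```python
import numpy as np

def build_dictionary(features_per_class):
    """Stack each class's training feature vectors as columns D_i and
    concatenate the class dictionaries into D = [D_1, ..., D_N].
    features_per_class: list (one entry per class) of lists of 1-D
    feature vectors d_j."""
    class_dicts = [np.column_stack(feats) for feats in features_per_class]
    sizes = [Di.shape[1] for Di in class_dicts]     # M_i per class
    return np.hstack(class_dicts), sizes
```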
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. For pictures containing a subject, the invention evaluates image quality with subject features such as depth of field, spatial composition, background color complexity, and the subject's Lab features, and performs better than general-purpose methods.
2. The invention adopts a sparse-representation-based classifier, which works better than traditional classifiers.
Detailed description of the invention
Fig. 1 is the flow chart of the portrait aesthetic quality evaluation based on subject feature analysis according to the invention.
Fig. 2 is an image with the subject marked after subject detection.
Fig. 3 is a schematic diagram of the spatial composition feature.
Fig. 4 is the corresponding image after the Fourier transform.
Fig. 5 is test image.
Specific embodiment
An implementation of the invention is illustrated below with a specific example:
The database used by the invention is AADB, which consists of 10,000 images selected from Flickr.com; the score of each picture is obtained from 11 aesthetic evaluation indices. We select portraits as the study subject and pick 200 high-quality and 200 low-quality portrait photos from the data set; in both classes, 150 images serve as the training set and 50 as the test set. The procedure is as follows.
S11: perform subject detection on every training image, here using Bogdan Alexe's salient-region detection method ("What is an object?", Bogdan Alexe, et al.) to determine the subject position. The specific steps are:
(1.1) Image preprocessing: extract the R, G, B channel images of the original portrait image and apply 5 scale transforms to each channel, i.e. scale the original image to size S×S with S = 16, 24, 32, 48, 64. The thresholds θ_S at these scales are 0.43, 0.32, 0.34, 0.35, 0.26, and the number of subject regions is fixed at 8. Each image f under these scale transforms is further divided into several candidate boxes of equal size;
(1.2) Salient-region detection on each candidate box, using the spectral-residual saliency method ("Saliency Detection: A Spectral Residual Approach", Xiaodi Hou). In this method, first apply the Fourier transform to the image f under each scale transform to obtain its amplitude spectrum A(f) and phase spectrum P(f), as shown in Fig. 4. Define the log-amplitude spectrum L(f) = log(A(f)); h is an n×n mean-filter kernel with n = 3, and g is a Gaussian blur kernel of size 10×10. The salient region S(f) is then computed as:
R(f) = L(f) − h ∗ L(f)
S(f) = g ∗ |F⁻¹[exp(R(f) + iP(f))]|²
where i is the imaginary unit and exp the exponential function.
(1.3) For each candidate box compute the corresponding salient region S(f) and, combining with the threshold, compute the significance value of each candidate box's salient region as described below; take the candidate boxes with the largest significance values as subject regions and the other regions as background. In the significance formula, S denotes the scale after rescaling (here S = 16, 24, 32, 48, 64), ω a candidate box, p a pixel inside it, I_S(p) the value of p in the saliency map, and θ_S the threshold at scale S (0.43, 0.32, 0.34, 0.35, 0.26 respectively). The subject detection result for one picture is shown in Fig. 2.
S12: further, extract the subject features from the detected subject and background regions, where the subject features comprise: spatial composition, depth of field, background color complexity, and the subject's Lab features.
(2.1) Spatial composition feature: as shown in Fig. 3, we divide the original image into nine equal rectangles forming a '#'-shaped grid; the four intersection points are the golden-section points. Let the subject center have coordinates (x1, y1) in the original image, and the four golden-section points be (x2, y2), (x3, y3), (x4, y4), (x5, y5). The steps are as follows:
2.1a: compute the distances L1, L2, L3, L4 from the subject center to the four golden-section points, where ω and h are the width and height of the original image.
2.1b: compute the angles α1, α2, α3, α4 between the horizontal axis and the lines joining the subject center to the four golden-section points:
α_{m−1} = arctan[(y_m − y1)/(x_m − x1)], m = 2, 3, 4, 5
2.1c: compute the overlap-area ratios P_{ij} of the different subjects (there are 8 in Fig. 2, so j = 1, ..., 8) with the rectangles at the four corners of the grid, where ω_j^i and h_j^i are the width and height of the j-th subject region inside the i-th rectangle, and ω, h are the width and height of the original image.
The spatial composition feature is the column vector formed by concatenating L1, L2, L3, L4, α1, α2, α3, α4, and P_{ij} (j = 1, ..., 8).
(2.2) Depth of field: in the following three subject features, we mainly analyze the region corresponding to the largest significance value in step (1.3). We split the depth-of-field computation into two steps: the first describes how blur is computed, and the second computes the blur of the subject and of the background separately, finally yielding the depth-of-field feature value we need.
2.2a: compute image blur: apply the fast Fourier transform to each of the R, G, B channels of the image, take the maximum resulting value, and set one fifth of it as the threshold for judging blur; then compute the proportion of blurred pixels (those below the threshold) in the whole image. For Fig. 2 the proportion is 0.0021.
2.2b: compute the depth of field: via step 2.2a, we compute the blur F of the subject region and the blur B of the overall region separately and take the ratio F/B as the depth-of-field feature value; for Fig. 2 it is 1.0476.
(2.3) Background color complexity: we divide each of the R, G, B channels of the original image into 16 equal intervals, giving 16×16×16 = 4096 cells in the three-dimensional RGB color space. We traverse every pixel of the background, count the fraction ni of the 4096 cells occupied by background pixels, and take ni*100 as the background color complexity feature value; in the example ni*100 is 3.1738.
(2.4) Subject Lab features: we first convert the original image from RGB space to Lab space and then extract features in Lab space. The conversion first maps the R, G, B values of the original image to the X, Y, Z values underlying Lab space, and then converts X, Y, Z into L*, a*, b*, where L*, a*, b* represent brightness, tone, and color temperature respectively and the usual default white point is Xn, Yn, Zn = 95.047, 100.0, 108.883. We compute the following feature values:
L (brightness feature), where L1 is the subject's mean brightness in Lab space and L2 is the whole image's mean brightness in Lab space.
A (tone feature), where a1 is the subject's mean tone in Lab space and a2 is the whole image's mean tone in Lab space.
B (color-temperature feature), where b1 is the subject's mean color temperature in Lab space and b2 is the whole image's mean color temperature in Lab space. In this embodiment L, A and B are 101.3082, 132.9119 and 133.2750 respectively.
The features L, A, and B are concatenated into a column vector, forming the subject's Lab feature.
S13: further, compute the global feature, image blur, with the no-reference blur assessment method for natural images based on the Fourier transform and spatial pyramids ("No-reference blur assessment in natural images using Fourier transform and spatial pyramids", Eftichia Mavridaki, et al.). The computation is as follows:
S13a: cut the original image into nine equal-sized blocks, denoted f1, ..., f9;
S13b: compute a power image P for the original image and each of the nine blocks:
P = 10·log(|G|² + 1)
where G is the Fourier transform of the corresponding image or block.
S13c: step S13b gives us ten power images. Each power image is again divided into nine equal-sized blocks which, together with the image itself, give ten pieces per power image. The values of the power images are divided into 5 intervals by magnitude, the power values in each interval are counted, and the statistics serve as the feature vector of the global blur feature, whose length is 500.
S14: further, we generate the sparse-representation-based image quality assessment model. Suppose the training database contains N image quality classes. For class i, build the corresponding class dictionary D_i as follows: for the j-th training image of class i, generate its subject features (depth of field, spatial composition, background color complexity, subject Lab features) and global feature (image blur) according to steps S11, S12 and S13, concatenate all features into one column vector d_j, and set D_i = [d_1, ..., d_M], where M is the number of training images in class i. The total classification dictionary is then D = [D_1, ..., D_N].
In this embodiment the training database contains 2 image quality classes, so the total classification dictionary is D = [D_1, D_2].
Step S2 proceeds as follows:
S21: extract the subject features and global feature of the input test image, exactly as in steps S11, S12 and S13. We select a test image, shown in Fig. 5, and compute its feature vector y by the above steps;
S22: determine the image's quality. The optimization objective is

x̂ = argmin_x ||y − Dx||₂² + λ||x||₁

where x is the variable, x̂ is the result of the optimization, and λ is a regularization constant, here λ = 8×10⁻⁵; D is the total classification dictionary, the cascade of the single-class dictionaries D_i, where i takes 1 and 2. The equation is solved with the Lasso (Least Absolute Shrinkage and Selection Operator) method. Then compute the errors r_i(y) with which the test image is reconstructed as class 1 and class 2:

r_i(y) = ||y − Dδ_i(x̂)||₂

where δ_i(x̂) is the column vector generated from x̂ by keeping the entries corresponding to class i and setting all others to 0. The test image is assigned the quality class c = argmin_i r_i(y).
In this embodiment the image is finally classified as a high-quality image.
Finally, we compare our method with that of Kuo-Yen Lo ("Assessment of Photo Aesthetics with Efficiency"); in the experiment the Kuo-Yen Lo method, using an SVM as its classifier, reaches an accuracy of 73%, while our method reaches a classification accuracy of 82%, demonstrating the validity of the proposed method.
Table 1. Experimental comparison data

Method                        Classification accuracy
Kuo-Yen Lo                    73%
Dictionary learning (ours)    82%
The specific embodiment described here is only one illustration of the spirit of the invention. Those skilled in the art may make various modifications, additions, or similar substitutions to the described embodiment without departing from the spirit of the invention or exceeding the scope of the appended claims.

Claims (5)

1. An image quality evaluation method based on subject feature analysis, characterized by comprising the following steps:
Step S1: train the image quality assessment model on an image aesthetic quality assessment database, comprising the following sub-steps:
Step S11: perform subject detection on every training image in the image aesthetic quality assessment database, determining the subject region(s) and the background region;
Step S12: extract subject features from the detected subject and background regions, where the subject features comprise: depth of field, spatial composition, background color complexity, and the subject's Lab features;
Step S13: compute the global feature, image blur;
Step S14: generate the sparse-representation-based image quality classification dictionary D;
Step S2: classify the quality of a test image, comprising the following sub-steps:
Step S21: extract the subject features and global feature of the input test image and concatenate all features into a total feature vector, expressed as y;
Step S22: determine the quality class of the test image. The optimization objective is

x̂ = argmin_x ||y − Dx||₂² + λ||x||₁

where x is the variable, x̂ is the result of the optimization, λ is a regularization parameter (a constant), and D is the image quality classification dictionary; the equation is solved with the Lasso method.
Compute the error r_i(y) with which the test image is reconstructed as class i:

r_i(y) = ||y − Dδ_i(x̂)||₂

where δ_i(x̂) is the column vector generated from x̂ by keeping the entries corresponding to class i and setting all others to 0; the test image is assigned the quality class c = argmin_i r_i(y).
2. The image quality evaluation method based on subject feature analysis of claim 1, characterized in that: in step S11 the subject and background regions are determined with Bogdan Alexe's salient-region detection method, implemented as follows:
(1.1) Image preprocessing: extract the R, G, B channel images of the original image, apply multi-scale transforms to each channel, further divide the image f under each scale transform into several candidate boxes of equal size, and fix the number U of salient regions;
(1.2) Salient-region detection on each candidate box, using a spectral saliency method. Specifically: first apply the Fourier transform to the image f under each scale transform to obtain its amplitude spectrum A(f) and phase spectrum P(f); define the log-amplitude spectrum L(f) = log(A(f)); h is an n×n mean-filter convolution kernel and g a Gaussian blur convolution kernel; the salient region S(f) is then computed as:
R(f) = L(f) − h ∗ L(f)
S(f) = g ∗ |F⁻¹[exp(R(f) + iP(f))]|²
where i is the imaginary unit and exp the exponential function;
(1.3) For each candidate box under each scale transform, compute the corresponding salient region S(f); combining with the threshold, compute the significance value of each candidate box's salient region as described below, take the U candidate boxes with the largest significance values as subject regions, and the other regions as background; in the significance formula, S denotes the scale after transformation, ω a candidate box, p a pixel inside it, I_S(p) the value of p in the saliency map, and θ_S the threshold at scale S.
3. a kind of image quality evaluating method based on body feature analysis as described in claim 1, it is characterised in that: step The specific implementation of S12 is as follows,
(2.1) space configuration feature: being divided into 9 small cuboids for original image, makes it in groined type, 4 points of intersection That is golden section point determines that coordinate of main center's point in original image is (x1,y1), 4 golden section point coordinates are respectively (x2,y2)、(x3,y3)、(x4,y4)、(x5,y5);Specific step is as follows:
2.1a. Compute the distances L1, L2, L3, L4 from the subject centre point to each of the 4 golden-section points, where ω and h are the width and height of the original image, respectively;
2.1b. Compute the angles α1, α2, α3, α4 between the horizontal axis and the lines joining the subject centre point to the 4 golden-section points:
αm-1 = arctan[(ym - y1)/(xm - x1)], m = 2, 3, 4, 5
2.1c. Compute, for each subject region, its area ratio of overlap with the rectangles at the four corners of the grid, where ωij and hij are the width and height of the j-th subject region inside the i-th corner rectangle, and ω and h are the width and height of the image;
2.1d. The spatial configuration feature is formed by concatenating L1, L2, L3, L4, α1, α2, α3, α4 and the area ratios of step 2.1c into a single column vector;
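Steps 2.1a and 2.1b can be sketched as follows (Python; the Euclidean form of the distance and the atan2 variant of the arctangent are assumptions, since the claim's distance formula is given only as an image):

```python
import math

def spatial_configuration(cx, cy, w, h):
    """Distances L1..L4 and angles α1..α4 from the subject centre (cx, cy)
    to the four golden-section (rule-of-thirds) points of a w x h image."""
    # The four intersection points of the '#'-shaped grid.
    pts = [(w / 3, h / 3), (2 * w / 3, h / 3),
           (w / 3, 2 * h / 3), (2 * w / 3, 2 * h / 3)]
    dists = [math.hypot(x - cx, y - cy) for x, y in pts]   # step 2.1a
    angles = [math.atan2(y - cy, x - cx) for x, y in pts]  # step 2.1b
    return dists + angles
```

For a subject centred in a 300 x 300 image, all four distances are equal by symmetry, which is a quick sanity check on the grid coordinates.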
(2.2) Depth of field: computed for the subject region corresponding to the largest significance value in step (1.3);
2.2a. Compute the image blurriness: apply the fast Fourier transform to the R, G and B channels of the image separately, select the maximum value among the results, and set one fifth of that maximum as the threshold for judging blurriness; then compute the proportion of blurred pixels (those below the threshold) in the whole image;
2.2b. Compute the depth of field: using step 2.2a, compute separately the blurriness F of the subject region corresponding to the largest significance value in step (1.3) and the blurriness B of the background region, and take the ratio of F to B as the depth-of-field feature value;
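A sketch of steps 2.2a and 2.2b (Python with NumPy; applying the one-fifth threshold to the per-channel FFT magnitudes is an interpretation of the claim's wording, and the function names are illustrative):

```python
import numpy as np

def blur_degree(rgb):
    """Step 2.2a: FFT each of the R, G, B channels, take 1/5 of the
    overall maximum magnitude as the threshold, and return the fraction
    of coefficients falling below it (larger means blurrier)."""
    mags = [np.abs(np.fft.fft2(rgb[..., c].astype(np.float64)))
            for c in range(3)]
    thresh = max(m.max() for m in mags) / 5.0
    below = sum((m < thresh).sum() for m in mags)
    total = sum(m.size for m in mags)
    return below / total

def depth_of_field(subject_rgb, background_rgb):
    """Step 2.2b: ratio of subject blurriness F to background blurriness B."""
    return blur_degree(subject_rgb) / blur_degree(background_rgb)
```

A ratio near 1 indicates subject and background are equally sharp; a small ratio indicates a sharp subject against a blurred background (a shallow depth of field).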
(2.3) Background color complexity: divide each of the R, G and B channels of the original image into N equal intervals, giving N^3 cells in the three-dimensional RGB color space. Traverse every pixel of the background corresponding to the largest significance value in step (1.3), count the number ni of color-space cells occupied by the background pixels, and take ni*100 as the background color complexity feature value;
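Step (2.3) amounts to counting occupied cells of a quantised RGB cube; a sketch (NumPy; n_bins plays the role of the claim's N, and the feature is taken literally as ni*100):

```python
import numpy as np

def background_color_complexity(background_rgb, n_bins=8):
    """Quantise each 8-bit RGB channel into n_bins equal intervals
    (n_bins**3 cells in colour space), count the number of cells ni
    occupied by the background pixels, and return ni * 100."""
    q = (background_rgb.astype(np.int64) * n_bins) // 256   # bin index 0..n_bins-1
    # Flatten the (r, g, b) bin triple into one cell index.
    cells = q[..., 0] * n_bins * n_bins + q[..., 1] * n_bins + q[..., 2]
    ni = np.unique(cells).size
    return ni * 100
```

A uniform background occupies a single cell and scores 100; a varied background occupies many cells and scores proportionally higher.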
(2.4) Lab feature of the subject: the original image is first converted from RGB space to Lab space, and feature extraction is then performed in Lab space. The RGB-to-Lab conversion proceeds as follows:
First the R, G and B values of the original image are converted to values on the X, Y and Z axes corresponding to Lab space. A standard form of this conversion is:
X = 0.4124*R + 0.3576*G + 0.1805*B
Y = 0.2126*R + 0.7152*G + 0.0722*B
Z = 0.0193*R + 0.1192*G + 0.9505*B
The values on the X, Y and Z axes are then converted to L*, a*, b*:
L* = 116*f(Y/Yn) - 16
a* = 500*[f(X/Xn) - f(Y/Yn)]
b* = 200*[f(Y/Yn) - f(Z/Zn)]
where f(t) = t^(1/3) when t > (6/29)^3, and f(t) = t/(3*(6/29)^2) + 4/29 otherwise. L*, a* and b* represent lightness, hue and color temperature respectively, and the usual default white point is Xn = 95.047, Yn = 100.0, Zn = 108.883. The following feature values are then computed separately:
L (brightness feature): computed from L1 and L2, where L1 is the average lightness in Lab space of the subject corresponding to the largest significance value in step (1.3), and L2 is the average lightness of the whole image in Lab space;
A (hue feature): computed from a1 and a2, where a1 is the average hue value in Lab space of the subject corresponding to the largest significance value in step (1.3), and a2 is the average hue value of the whole image in Lab space;
B (color temperature feature): computed from b1 and b2, where b1 is the average color temperature value in Lab space of the subject corresponding to the largest significance value in step (1.3), and b2 is the average color temperature value of the whole image in Lab space;
The Lab feature of the subject is formed by concatenating L, A and B into a column vector.
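A sketch of the Lab feature extraction of step (2.4) (Python with NumPy, using the standard CIE conversion and the D65 white point 95.047/100.0/108.883 given in the claim; the conversion matrix is an assumption, and the subject and whole-image statistics are returned separately rather than combined, since the exact combination follows the claim's per-feature formulas):

```python
import numpy as np

# White point stated in the claim (D65): Xn, Yn, Zn.
WHITE = np.array([95.047, 100.0, 108.883])

def rgb_to_lab(rgb):
    """RGB -> XYZ -> L*a*b* with the standard CIE formulas."""
    rgb = rgb.astype(np.float64) / 255.0
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = rgb @ M.T * 100.0
    t = xyz / WHITE
    # Piecewise cube-root function f(t) of the CIE definition.
    f = np.where(t > (6 / 29) ** 3,
                 np.cbrt(t),
                 t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def lab_feature_stats(image_rgb, subject_mask):
    """Mean (L1, a1, b1) over the subject and (L2, a2, b2) over the image."""
    lab = rgb_to_lab(image_rgb)
    subj_mean = lab[subject_mask].mean(axis=0)
    img_mean = lab.reshape(-1, 3).mean(axis=0)
    return subj_mean, img_mean
```

As a sanity check, a pure white pixel should map to L* near 100 with a* and b* near 0 under the D65 white point.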
4. An image quality evaluation method based on subject feature analysis as described in claim 1, characterized in that in step S13 the global feature, the image blurriness, is computed by a no-reference blur assessment method for natural images based on a Fourier-transform and spatial-domain pyramid. The computation is as follows:
S13a. The original image is split into 9 equal-sized blocks, denoted f1, ..., f9;
S13b. For the original image and each of the nine blocks, compute the power image P according to the following formula:
P = log10(|G|^2 + 1)
where G is the Fourier transform of the corresponding image or block;
S13c. Step S13b yields 10 power images. Each power image is again divided into 9 equal-sized blocks which, together with the power image itself, give 10 sub-images per power image (100 sub-images in total). The values of each sub-image are divided into K intervals by magnitude, and the number of values falling in each interval is counted; these statistics form the feature vector of the global image-blurriness feature, whose length is 100K.
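Steps S13a to S13c can be sketched as follows (NumPy; the histogram range and the assumption that the image dimensions are divisible by 3 twice are illustrative choices):

```python
import numpy as np

def power_image(block):
    """Step S13b: P = log10(|G|^2 + 1), G the Fourier transform of the block."""
    G = np.fft.fft2(block.astype(np.float64))
    return np.log10(np.abs(G) ** 2 + 1.0)

def global_blur_feature(gray, K=10):
    """Steps S13a-S13c: 10 power images (full image + 9 blocks), each split
    into itself + 9 blocks (100 sub-images), K-bin histogram of each."""
    def blocks(img):
        h3, w3 = img.shape[0] // 3, img.shape[1] // 3
        return [img[i * h3:(i + 1) * h3, j * w3:(j + 1) * w3]
                for i in range(3) for j in range(3)]

    powers = [power_image(gray)] + [power_image(b) for b in blocks(gray)]
    feat = []
    for P in powers:
        for sub in [P] + blocks(P):
            hist, _ = np.histogram(sub, bins=K, range=(0.0, P.max() + 1e-9))
            feat.extend(hist)
    return np.array(feat)
```

With 10 power images and 10 sub-images each, the resulting vector has 100K entries, matching the length stated in the claim.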
5. An image quality evaluation method based on subject feature analysis as described in claim 1, characterized in that the picture quality classification dictionary D in step S14 is obtained as follows:
Assume the image aesthetic quality assessment database contains N image-quality classes. For the i-th class, the corresponding classification dictionary Di is constructed as follows: for the j-th training image of class i, generate the subject features (the depth of field, the spatial configuration feature, the background color complexity, and the Lab feature of the subject) and the global feature (the image blurriness) according to steps S11, S12 and S13, and concatenate all of these features into one column vector dj; then Di = [d1, ..., dM], where M is the number of training images in class i. The overall picture quality classification dictionary is therefore D = [D1, ..., DN].
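The dictionary construction of step S14 reduces to stacking per-class feature columns; a sketch (NumPy; the helper name and the per-class input layout are assumptions):

```python
import numpy as np

def build_dictionary(features_by_class):
    """Step S14: each element of features_by_class is an (n_features, M_i)
    array whose columns are the concatenated feature vectors d_j of the
    training images of class i. The per-class dictionaries D_i are
    concatenated column-wise into D = [D_1, ..., D_N]."""
    per_class = [np.asarray(Di, dtype=np.float64) for Di in features_by_class]
    return np.hstack(per_class)
```

At test time, a sparse-representation classifier would code a query feature vector over D and assign the class whose columns yield the smallest reconstruction residual.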
CN201910046769.XA 2019-01-18 2019-01-18 Image quality evaluation method based on principal feature analysis Expired - Fee Related CN109829924B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910046769.XA CN109829924B (en) 2019-01-18 2019-01-18 Image quality evaluation method based on principal feature analysis


Publications (2)

Publication Number Publication Date
CN109829924A true CN109829924A (en) 2019-05-31
CN109829924B CN109829924B (en) 2020-09-08

Family

ID=66860861

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910046769.XA Expired - Fee Related CN109829924B (en) 2019-01-18 2019-01-18 Image quality evaluation method based on principal feature analysis

Country Status (1)

Country Link
CN (1) CN109829924B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127744A (en) * 2016-06-17 2016-11-16 广州市幸福网络技术有限公司 Display foreground and background border Salience estimation and system
CN106485259A (en) * 2015-08-26 2017-03-08 华东师范大学 A kind of image classification method based on high constraint high dispersive principal component analysiss network
CN106570862A (en) * 2016-10-25 2017-04-19 中国人民解放军信息工程大学 Super-resolution reconstruction quality evaluation method and apparatus thereof
CN106778788A (en) * 2017-01-13 2017-05-31 河北工业大学 The multiple features fusion method of aesthetic evaluation is carried out to image


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHOU Zheng et al.: "Video image quality evaluation method based on corner feature detection", Computer Engineering *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110580678A (en) * 2019-09-10 2019-12-17 北京百度网讯科技有限公司 image processing method and device
CN110807759A (en) * 2019-09-16 2020-02-18 幻想动力(上海)文化传播有限公司 Method and device for evaluating photo quality, electronic equipment and readable storage medium
CN110807759B (en) * 2019-09-16 2022-09-06 上海甜里智能科技有限公司 Method and device for evaluating photo quality, electronic equipment and readable storage medium
CN110889718A (en) * 2019-11-15 2020-03-17 腾讯科技(深圳)有限公司 Method and apparatus for screening program, medium, and electronic device
CN110853032A (en) * 2019-11-21 2020-02-28 北京航空航天大学 Unmanned aerial vehicle video aesthetic quality evaluation method based on multi-mode deep learning
CN110853032B (en) * 2019-11-21 2022-11-01 北京航空航天大学 Unmanned aerial vehicle video tag acquisition method based on multi-mode deep learning
CN111507941A (en) * 2020-03-24 2020-08-07 杭州电子科技大学 Composition characterization learning method for aesthetic quality evaluation
CN111784702A (en) * 2020-06-16 2020-10-16 南京理工大学 Grading method for image segmentation quality
CN111784702B (en) * 2020-06-16 2022-09-27 南京理工大学 Grading method for image segmentation quality
CN112991308A (en) * 2021-03-25 2021-06-18 北京百度网讯科技有限公司 Image quality determination method and device, electronic equipment and medium
CN112991308B (en) * 2021-03-25 2023-11-24 北京百度网讯科技有限公司 Image quality determining method and device, electronic equipment and medium

Also Published As

Publication number Publication date
CN109829924B (en) 2020-09-08

Similar Documents

Publication Publication Date Title
CN109829924A (en) A kind of image quality evaluating method based on body feature analysis
CN106778788B (en) The multiple features fusion method of aesthetic evaluation is carried out to image
CN106529447B (en) Method for identifying face of thumbnail
CN108629338B (en) Face beauty prediction method based on LBP and convolutional neural network
CN104050471B (en) Natural scene character detection method and system
CN111753828B (en) Natural scene horizontal character detection method based on deep convolutional neural network
CN109684922B (en) Multi-model finished dish identification method based on convolutional neural network
CN104008375B (en) The integrated face identification method of feature based fusion
CN109767422A (en) Pipe detection recognition methods, storage medium and robot based on deep learning
CN106960176B (en) Pedestrian gender identification method based on transfinite learning machine and color feature fusion
CN101789005A (en) Image searching method based on region of interest (ROI)
CN104268590B (en) The blind image quality evaluating method returned based on complementary combination feature and multiphase
CN107092884B (en) Rapid coarse-fine cascade pedestrian detection method
CN108960404B (en) Image-based crowd counting method and device
CN110400293B (en) No-reference image quality evaluation method based on deep forest classification
CN103336835B (en) Image retrieval method based on weight color-sift characteristic dictionary
CN108829711B (en) Image retrieval method based on multi-feature fusion
CN109344818B (en) Light field significant target detection method based on deep convolutional network
CN112906550B (en) Static gesture recognition method based on watershed transformation
CN110796022B (en) Low-resolution face recognition method based on multi-manifold coupling mapping
CN105956570B (en) Smiling face's recognition methods based on lip feature and deep learning
CN108615229B (en) Collision detection optimization method based on curvature point clustering and decision tree
CN103473545A (en) Text-image similarity-degree measurement method based on multiple features
CN107169508A (en) A kind of cheongsam Image emotional semantic method for recognizing semantics based on fusion feature
CN109918542A (en) A kind of convolution classification method and system for relationship diagram data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200908