CN101609500B - Quality estimation method of exit-entry digital portrait photos - Google Patents

Quality estimation method of exit-entry digital portrait photos

Info

Publication number
CN101609500B
CN101609500B CN2008102279269A
Authority
CN
China
Prior art keywords
image
face
detection
exit
entry
Prior art date
Legal status
Expired - Fee Related
Application number
CN2008102279269A
Other languages
Chinese (zh)
Other versions
CN101609500A (en)
Inventor
宛根训
田青
Current Assignee
Vimicro Corp
First Research Institute of Ministry of Public Security
Original Assignee
Vimicro Corp
First Research Institute of Ministry of Public Security
Priority date
Filing date
Publication date
Application filed by Vimicro Corp, First Research Institute of Ministry of Public Security filed Critical Vimicro Corp
Priority to CN2008102279269A
Publication of CN101609500A
Application granted
Publication of CN101609500B

Abstract

The invention provides a quality assessment method for exit-entry digital portrait photos. The method comprises the following steps: detection of the format, image dimensions and file size of each digital portrait photo; face detection and eye positioning; detection of whether the face is horizontally centered; exposure detection based on global and local luminance deviation; sharpness detection based on contrast evaluation and blur evaluation; color detection, which measures skin-tone naturalness in the Lab color space and judges whether the colors of the face region in the image are acceptable; and background detection. Unlike prior quality assessment of digital portrait photos, which relied on subjective judgment, the method establishes a quality assessment model for digital portrait photos together with specific quality inspection items, inspection flows, evaluation indexes and evaluation methods. The invention judges the quality of each digital portrait photo without relying on a standard reference image, and produces results that agree both with human subjective perception and with the requirements of face recognition systems, so it satisfies the business requirements of exit-entry digital portrait photo management well.

Description

Quality estimation method of exit-entry digital portrait photos
Technical field
The present invention relates to the field of computer software applications, and in particular to a quality assessment method for exit-entry digital portrait photos.
Background technology
With the development of information, multimedia and communication technology, digital portrait photos appear in more and more areas of exit-entry administration. Biometric identity authentication based on the human face has become an important means of identity verification, and quality assessment of the digital portrait photos used in exit-entry applications has become an indispensable step in exit-entry photo management. During accreditation, low-quality digital portrait photos on the one hand spoil the appearance of the issued certificate and enlarge the difference between the identity photo and the holder's true appearance, making certificate verification harder; on the other hand, non-compliant images may cause the face biometric authentication system to fail, reducing the reliability of the face authentication system.
Image quality assessment is usually divided into subjective evaluation and objective evaluation. Subjective evaluation relies mainly on human inspection of the image. In traditional exit-entry administration, the quality of a digital portrait photo is judged mainly by subjective evaluation: acceptance staff at the counter, or experienced photo-quality examiners, consider aspects such as the proportion of the head, whether the face is centered, whether the image is clear, whether the photo has tonal depth, and whether the color and expression of the photo are natural, and then decide whether the photo can be accepted into the database. Because subjective evaluation lacks an objective, scientific and unified basis for judgment, on the one hand the quality of accepted photos varies greatly, and on the other hand, with the large-scale issuance of the various exit-entry certificates, it cannot keep up with the needs of the business in terms of efficiency. Objective evaluation usually judges image quality by comparison with a reference image; commonly used analytical methods such as mean squared error (MSE) and peak signal-to-noise ratio (PSNR) evaluate the "fidelity" of an image, that is, how closely a restored image approximates the original. In the quality assessment of digital portrait photos for exit-entry administration, however, the photos to be assessed are submitted one by one, no "standard image" is available for reference, and the assessment result is closely tied to the exit-entry business: the assessment result should agree with most people's subjective perception, satisfy the operational requirements of certificate making, guarantee that the produced certificate looks good, and meet the requirements of the face authentication system.
Face recognition has been a focus of research in recent years. Face recognition is a computer technique that identifies a person by analyzing and comparing visual features of the face: a computer automatically locates the face in an input image, extracts the identity features it contains, and then determines the identity of the person by comparison with known faces. Face detection and localization are important technologies within face recognition. A face recognition system first determines the position of the face in the image through face detection, and then locates facial features such as the eyes, nose (nostrils), eyebrows, mouth (lips) and ears within the detected face. Patent CN03147472.1, "Human eye localization method based on the GaborEye model", and patent 200510089006, "A face recognition method using grouped feature-based combination of multiple classifiers", provide related techniques.
As for background detection techniques, the traditional method first applies the Sobel operator to the original image to obtain a gradient map, binarizes the gradient map to preserve edge information, then uses a seed-filling algorithm to obtain the background region, and finally uses the background region as a mask to extract the background image from the original image. Related literature includes:
"Sobel operator": Ruan Qiuqi. Digital Image Processing. Beijing: Publishing House of Electronics Industry, 2003.
"Binarization": Otsu N. A threshold selection method from gray-level histograms [J]. IEEE Trans SMC, 1979, 9: 62-66.
"Seed filling": Sun Jiaguang, Yang Changgui. Computer Graphics (3rd edition) [M]. Beijing: Tsinghua University Press, 1999.
In the field of automatic focusing, various commonly used sharpness evaluation functions assess the quality of a sequence of images. In the quality assessment of digital portrait photos for exit-entry administration, however, the photos to be assessed are submitted one by one and no "standard image" is available for reference, while the assessment result must be closely tied to the exit-entry business: on the one hand the result of image quality assessment should agree with most people's subjective perception, and on the other hand it should satisfy the operational requirements of certificate making, guarantee that the produced certificate looks good, and meet the requirements of the face authentication system. At present, therefore, no quality assessment method for exit-entry digital portrait photos satisfies the requirements of exit-entry digital portrait management well.
Summary of the invention
The objective of the present invention is to overcome the inconsistency and inefficiency of judging photo quality from personal experience, and to provide a quality assessment method for exit-entry digital portrait photos that does not rely on a "standard image", quantifies the quality judgment of digital photos, produces results that both agree with human subjective perception and fully meet the requirements of face authentication systems, and satisfies the requirements of exit-entry digital portrait management well.
In the quality assessment method for exit-entry digital portrait photos provided by the present invention, a computerized quality assessment model for exit-entry digital portrait photos judges whether a digital portrait input into the system satisfies the exit-entry business standard. The method comprises the following steps:
A step of detecting the photo format, image dimensions and file size;
A step of face detection and eye positioning;
A step of detecting whether the portrait is horizontally centered; and
A step of background detection;
The method further comprises the following steps, carried out between the horizontal-centering detection step and the background-detection step:
1) A step of exposure detection, used to detect whether the overall exposure of the face image is acceptable:
(1) A step of global luminance deviation detection, used to detect whether the overall exposure of the face is excessive or insufficient:
a) converting the located face region from the RGB color space to the Lab color space;
b) computing the mean luminance P of the face region in the Lab color space;
c) if the mean luminance P is too low, i.e. P < L1, the image is judged too dark and under-exposed; if P is too high, i.e. P > L2, the image is judged too bright and over-exposed;
(2) A step of local luminance deviation detection, used to detect local over-exposure:
The face region is divided into 8 × 8 sub-blocks, the mean luminance P(i,j) of each sub-block is computed, and the symmetric luminance deviations are computed as
∇P(i,j) = |P(i,j) − P(i,9−j)|, where i = 1…8, j = 1…4
The maximum symmetric deviation, the mean symmetric deviation and the variance are then computed:
maximum symmetric deviation: δ2 = 1 − max(∇P(i,j)) / 255
mean symmetric deviation: m = (Σ_{i=1..8} Σ_{j=1..4} ∇P(i,j)) / 32
variance: s = [Σ_{i=1..8} Σ_{j=1..4} (∇P(i,j) − m)² / 32]^(1/2)
Exploiting the left-right symmetry of the face region, the uniformity of exposure is detected through the maximum symmetric deviation, the mean symmetric deviation and the variance; in particular, the maximum symmetric deviation δ2 reflects local over-exposure that causes loss of image detail;
2) A step of sharpness detection, used to detect the diffusion around each pixel and the degree of blur of the image:
(1) Contrast evaluation, used to detect the diffusion around each pixel:
a) scaling the located face region to a standard face of identical area;
b) converting the standard face from a color image to a gray-level image;
c) defining the contrast of each pixel as the sum of absolute differences with its eight neighbors:
c(i,j) = |f(i,j) − f(i−1,j−1)| + |f(i,j) − f(i−1,j)| + |f(i,j) − f(i−1,j+1)| + |f(i,j) − f(i,j−1)| + |f(i,j) − f(i,j+1)| + |f(i,j) − f(i+1,j−1)| + |f(i,j) − f(i+1,j)| + |f(i,j) − f(i+1,j+1)|
d) setting a lower and an upper contrast threshold, used respectively to exclude non-edge information and noise while preserving edge information:
c′(i,j) = 0 if c(i,j) ≤ c1; c′(i,j) = c(i,j) if c1 < c(i,j) < c2; c′(i,j) = 0 if c(i,j) ≥ c2
then computing the mean and variance of the contrast over the edge information;
e) judging whether the image is clear: if the contrast mean and variance are both large, the image is clear; if the contrast mean and variance are both small, the image is blurred;
(2) Blur evaluation, which determines the degree of blur of the image using the Roberts gradient under a dynamic threshold:
a) converting the color image of the standard face region, whose area is 28960 pixels, to a gray-level image;
b) defining the Roberts gradient of each pixel:
g(x,y) = [(f(x,y) − f(x+1,y+1))² + (f(x+1,y) − f(x,y+1))²]^(1/2);
c) defining a dynamic threshold c: a gradient value exceeding the threshold c keeps its initial value, indicating an edge; otherwise it is set to 0:
g(x,y) = 0 if g(x,y) < c; g(x,y) keeps its value if g(x,y) ≥ c
d) computing the edge gradient histogram and normalizing it:
hist(i) = count(g(x,y) = i)
p(i) = hist(i) / Σ_{i=1..255} hist(i)
where i = 1…255 and g(x,y) ranges over all edge points;
e) computing the blur value of the image:
blur = Σ_{i=1..T} p(i)
where T is an empirical value; the blur value of the image lies between 0 and 1, and the larger the blur value, the more blurred the image;
3) A step of color detection, used to detect skin-tone naturalness in the Lab color space and to judge whether the colors of the face region are acceptable. In the Lab color space, the gradual transition from green to red is represented by the a value, and the gradual transition from blue to yellow by the b value. The mean a and mean b of the face region of each image are computed and plotted in the two-dimensional (a, b) space; the image is then judged: if
(a − 7.5)² / 7.5² + (b − 5)² / 10² > 1
the image fails the color check.
In the method of the present invention, the format and file-size detection proceeds as follows: the computer reads the input digital image data and its attributes, including the image format, the image width and height, and the file size, and judges whether the exit-entry business standard is satisfied, namely that the image format is JPG, the image width is 354 pixels, the height is 472 pixels, and the file size is in the range 18-25 KB.
In the method of the present invention, face detection and eye positioning proceed as follows: face detection and feature localization techniques from the face recognition field are first used to find the positions of the face region and the eyes in the digital photo. The face region covers the area between the forehead, the chin and the left and right ears; eye positioning locates the centers of the pupils. If no face is detected in the image or the eyes cannot be located, the image is judged to contain no face or to be of poor quality.
In the method of the present invention, the horizontal-centering detection proceeds as follows:
1) Define the positions of the eyes in the image. Let the image width be W and the height H, let the left eye be at (X_L, Y_L) and the right eye at (X_R, Y_R), both measured from the upper-left corner of the image, and define:
d1 = |X_L + X_R − W| × 0.5
d2 = |Y_L + Y_R| × 0.5
d3 = |X_R − X_L|
Angle = atan(|Y_L − Y_R| / d3)
where d1 is the offset of the portrait from the horizontal center line, d2 is the distance of the eye region from the top of the image, d3 is the distance between the two eye centers, and Angle is the rotation angle of the image;
2) Judge:
when d1 exceeds 0.03 times the image width, the image is not centered;
in a certificate photo, the suitable range of d2 is 0.3H to 0.5H;
the range of d3 is W/4 to W/3;
the rotation angle must not exceed 3 degrees.
In the method of the present invention, the background detection proceeds as follows: the Sobel operator is first applied to the original image to obtain a gradient map; the gradient map is binarized to preserve edge information; a seed-filling algorithm is then used to obtain the background region; and finally the background region is used as a mask to extract the background image from the original image.
The advantages of the quality assessment method for exit-entry digital portrait photos of the present invention are: the position and proportion of the face are estimated from the actual positions of, and the distance between, the two eye centers; exposure detection and sharpness detection on the face region judge whether the image is clear and has tonal depth; and skin-color detection on the face judges whether the photo colors are natural. The method quantifies the quality judgment of digital photos. Unlike the past, when the quality of digital portrait photos was evaluated mainly by subjective assessment, it establishes a unified objective criterion for the quality of digital portrait photos in exit-entry administration. The exposure, sharpness and color evaluation methods provided by the invention can judge the quality of every digital portrait photo without relying on a "standard image"; the result agrees with most people's subjective perception, fully meets the requirements of face authentication systems, and satisfies the requirements of exit-entry digital portrait management well. The invention establishes for the first time a quality assessment model for digital portrait photos in exit-entry administration, with specific quality inspection items, inspection flows, evaluation indexes and evaluation methods.
Description of drawings
Below, the present invention is further described with reference to the accompanying drawings.
Fig. 1 is the flow chart of the quality assessment method for exit-entry digital portrait photos of the present invention;
Fig. 2 is the table of inspection items of the method;
Fig. 3 is a schematic diagram of face detection and eye positioning in the method;
Fig. 4a) to Fig. 4c) show exposure detection results obtained with the method;
Fig. 5a) and Fig. 5b) show contrast detection results obtained with the method;
Fig. 6 is the distribution of contrast mean and variance computed over the whole image;
Fig. 7 is the distribution of contrast mean and variance obtained with the method of the present invention;
Fig. 8 shows the normalized gradient histograms of different images;
Fig. 9A to Fig. 9D are digital portrait photos with different degrees of blur;
Figure 10 is the distribution of color images of different quality in the ab plane of the Lab color space.
Embodiment
According to the business needs of exit-entry administration, the quality assessment model for exit-entry digital portrait photos of the present invention requires that the portrait in a digital portrait photo be clear and rich in tonal levels, that the expression be natural, and that the portrait be horizontally centered in the photo; any image that violates one of these criteria is rejected as unqualified and is not accepted into the database.
According to the exit-entry digital portrait standard and the attributes of the digital portrait photo itself, the evaluation method adopted by the present invention defines the quality inspection items and the concrete inspection flow for exit-entry digital portrait photos. Unlike general digital photo assessment, the invention first locates the face region and the eyes in the digital photo, and focuses the assessment on the quality of the face region and the eye region. The inspection items are shown in Fig. 2; the inspection flow is shown in Fig. 1.
Referring to Fig. 1 and Fig. 2, the quality assessment method for exit-entry digital portrait photos of the present invention establishes an automatic quality assessment model that uses a computer to automatically assess a digital image input into the system and judge whether it satisfies the relevant exit-entry business standard. The method comprises the following steps:
The step of detecting the photo format, image dimensions and file size:
Referring to Fig. 2, the exit-entry business stipulates that the image format is JPG, the image width is 354 pixels, the height is 472 pixels, and the file size is in the range 18-25 KB. The computer first reads the input digital image data and its attributes, including the image format, width and height, and the file size, and judges whether the exit-entry business standard is satisfied.
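As a minimal sketch of this attribute check, the following function tests the four stipulated conditions. The function name, the dict of pass/fail flags, and the convention KB = 1024 bytes are our own illustration; actually reading the attributes from a file (for example with an image library) is left out.

```python
def check_photo_attributes(fmt: str, width: int, height: int, size_bytes: int) -> dict:
    """Check photo attributes against the exit-entry standard described above:
    JPG format, 354 x 472 pixels, file size between 18 KB and 25 KB.
    KB = 1024 bytes is an assumption; the patent does not specify it."""
    return {
        "format_ok": fmt.upper() in ("JPG", "JPEG"),
        "width_ok": width == 354,
        "height_ok": height == 472,
        "size_ok": 18 * 1024 <= size_bytes <= 25 * 1024,
    }

def photo_passes(fmt: str, width: int, height: int, size_bytes: int) -> bool:
    """True only if every attribute satisfies the standard."""
    return all(check_photo_attributes(fmt, width, height, size_bytes).values())
```

A photo failing any single attribute is rejected, matching the all-or-nothing acceptance described in the text.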
The step of face detection and eye positioning:
The present invention introduces related techniques from face recognition, using face detection and feature localization techniques from the face recognition field to accurately find the positions of the face region and the eyes in the digital photo; for concrete implementations of face detection and localization, see the relevant face recognition literature. Referring to Fig. 3, the face region covers the area between the forehead, the chin and the left and right ears, and eye positioning can accurately locate the centers of the pupils. If no face is detected in the image or the eyes cannot be located, the image contains no face or its quality is poor. The assessment focuses on the face region of the portrait, ensuring that the portrait is evenly exposed and that the facial features are clear and rich in tonal levels.
The step of detecting whether the portrait is horizontally centered:
To detect whether the portrait in an image is horizontally centered, the distance of the portrait region from the image boundary is commonly used; however, hair style and similar factors disturb this result considerably, and fluffy hair can make the face region occupy too small a part of the image. In the present invention, the horizontal center line of the image should bisect the line connecting the two eye centers; the ratio of the distance between the two eye centers to the whole image reflects the proportion of the head (if the eye distance is too small, the head image is too small, and if too large, the head image is too large); the distance of the eyes from the bottom of the image reflects how well the portrait is proportioned; and, in addition, the positions of the eyes reflect the rotation of the image.
Let the image width be W and the height H, let the left eye be at (X_L, Y_L) and the right eye at (X_R, Y_R), both measured from the upper-left corner of the image, and define the following distances:
d1 = |X_L + X_R − W| × 0.5
d2 = |Y_L + Y_R| × 0.5
d3 = |X_R − X_L|
Angle = atan(|Y_L − Y_R| / d3)
Here d1 is the offset of the portrait from the horizontal center line; when d1 exceeds 0.03 times the image width, the human eye clearly perceives the image as off-center. d2 is the distance of the eye region from the top of the image; in a certificate photo the eyes lie in the upper part of the image, and large-scale statistics show that the suitable range of d2 is 0.3H to 0.5H. d3 is the distance between the two eye centers, whose range is W/4 to W/3. Angle is the rotation angle of the image; a rotation of more than 3 degrees is usually perceptible to the human eye.
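The centering measures and their acceptance thresholds above can be sketched directly; the function names are our own, and the thresholds (0.03W, 0.3H-0.5H, W/4-W/3, 3 degrees) are taken from the text.

```python
import math

def centering_measures(W, H, eye_left, eye_right):
    """Compute d1, d2, d3 and the rotation angle (degrees) from the two eye
    centers, as defined in the text.  eye_left = (XL, YL) and
    eye_right = (XR, YR) are measured from the image's upper-left corner."""
    XL, YL = eye_left
    XR, YR = eye_right
    d1 = abs(XL + XR - W) * 0.5   # offset from the horizontal center line
    d2 = abs(YL + YR) * 0.5       # distance of the eye region from the top
    d3 = abs(XR - XL)             # distance between the two eye centers
    angle = math.degrees(math.atan(abs(YL - YR) / d3))
    return d1, d2, d3, angle

def centering_ok(W, H, eye_left, eye_right):
    """Apply the four acceptance conditions from the text."""
    d1, d2, d3, angle = centering_measures(W, H, eye_left, eye_right)
    return (d1 <= 0.03 * W
            and 0.3 * H <= d2 <= 0.5 * H
            and W / 4 <= d3 <= W / 3
            and angle <= 3.0)
```

For the stipulated 354 x 472 photo, eyes at (132, 170) and (222, 170) give d1 = 0, d2 = 170, d3 = 90 and zero rotation, all inside the acceptable ranges.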
The step of exposure detection:
Exposure detection uses the global luminance deviation and the local luminance deviation to detect whether the exposure of the face image is acceptable, as follows:
(1) Global luminance deviation
a) Convert the located face region from the RGB color space to the Lab color space;
b) compute the mean luminance P of the face region in the Lab color space;
c) if the mean luminance P is too low, i.e. P < L1, the image is judged too dark and under-exposed; if P is too high, i.e. P > L2, the image is judged too bright and over-exposed.
In an embodiment of the present invention, Fig. 4a) to Fig. 4c) show the results of the exposure check. In Fig. 4a), the mean luminance P = 167.35 and the exposure is suitable; in Fig. 4b), P = 212.52, the image is too bright and over-exposed; in Fig. 4c), P = 75.90, the image is too dark and under-exposed.
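Steps a) to c) can be sketched as follows, under stated assumptions: the sRGB-to-Lab conversion uses the standard CIE formulas with a D65 white point, the L* channel (0..100) is rescaled to 0..255 to match the P values quoted above (the patent does not say how P is scaled), and the thresholds L1 = 100 and L2 = 200 are illustrative, since the patent leaves their values open.

```python
import numpy as np

def mean_face_luminance(rgb):
    """Mean CIE L* of an RGB face region, rescaled to 0..255, approximating
    the mean luminance P described above.  `rgb` is an (h, w, 3) uint8 array."""
    c = rgb.astype(np.float64) / 255.0
    # undo the sRGB gamma to get linear channel values
    lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    y = lin @ np.array([0.2126, 0.7152, 0.0722])  # relative luminance Y (D65)
    f = np.where(y > 0.008856, np.cbrt(y), 7.787 * y + 16.0 / 116.0)
    L = 116.0 * f - 16.0                          # CIE L*, range 0..100
    return float(np.mean(L) * 255.0 / 100.0)      # rescale to 0..255

def exposure_verdict(rgb, L1=100.0, L2=200.0):
    """Apply the two-sided threshold test; L1 and L2 are assumed values."""
    P = mean_face_luminance(rgb)
    if P < L1:
        return "under-exposed"
    if P > L2:
        return "over-exposed"
    return "ok"
```

A pure-white region gives P = 255 (over-exposed), a mid-gray region about 137 (acceptable), and a near-black region about 29 (under-exposed), consistent with the two-sided test.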
(2) Local luminance deviation
The face region is divided into 8 × 8 sub-blocks, the mean luminance P(i,j) of each sub-block is computed, and the symmetric luminance deviations are computed as
∇P(i,j) = |P(i,j) − P(i,9−j)|, where i = 1…8, j = 1…4
The maximum symmetric deviation, the mean symmetric deviation and the variance are then computed:
maximum symmetric deviation: δ2 = 1 − max(∇P(i,j)) / 255
mean symmetric deviation: m = (Σ_{i=1..8} Σ_{j=1..4} ∇P(i,j)) / 32
variance: s = [Σ_{i=1..8} Σ_{j=1..4} (∇P(i,j) − m)² / 32]^(1/2)
Exploiting the left-right symmetry of the face region, the uniformity of exposure is detected through the maximum symmetric deviation, the mean symmetric deviation and the variance; in particular, the maximum symmetric deviation δ2 reflects local over-exposure that causes loss of image detail, and such an image is unqualified.
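The block statistics above can be sketched with numpy; the function name is our own, and for simplicity the sketch assumes the face-region luminance map has sides that are multiples of 8.

```python
import numpy as np

def symmetric_deviation_stats(lum):
    """Split a face-region luminance map into 8 x 8 sub-blocks, compute the
    per-block means P(i,j) and the symmetric deviations |P(i,j) - P(i,9-j)|,
    and return (delta2, m, s) as defined in the text.  `lum` is a 2-D array
    whose height and width are multiples of 8."""
    h, w = lum.shape
    bh, bw = h // 8, w // 8
    # per-block luminance means; P[i, j] with i, j = 0..7 (0-based)
    P = lum[:bh * 8, :bw * 8].reshape(8, bh, 8, bw).mean(axis=(1, 3))
    # columns j = 1..4 paired with their mirror columns 9-j = 8..5 (1-based)
    dP = np.abs(P[:, :4] - P[:, 7:3:-1])
    delta2 = 1.0 - dP.max() / 255.0   # maximum symmetric deviation
    m = dP.sum() / 32.0               # mean symmetric deviation
    s = np.sqrt(((dP - m) ** 2).sum() / 32.0)
    return float(delta2), float(m), float(s)
```

A perfectly left-right symmetric region gives delta2 = 1 and m = s = 0; a region whose halves differ by the full 255 range gives delta2 = 0 and m = 255, the extreme of local exposure asymmetry.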
Sharpness detection:
2) Sharpness detection comprises contrast evaluation and blur evaluation, as follows:
(1) Contrast evaluation:
a) Referring to Fig. 3, scale the located face region to a standard face of identical area;
b) convert the standard face from a color image to a gray-level image;
c) define the contrast of each pixel as the sum of absolute differences with its eight neighbors:
c(i,j) = |f(i,j) − f(i−1,j−1)| + |f(i,j) − f(i−1,j)| + |f(i,j) − f(i−1,j+1)| + |f(i,j) − f(i,j−1)| + |f(i,j) − f(i,j+1)| + |f(i,j) − f(i+1,j−1)| + |f(i,j) − f(i+1,j)| + |f(i,j) − f(i+1,j+1)|
d) set a lower and an upper contrast threshold, used respectively to exclude non-edge information and noise while preserving edge information:
c′(i,j) = 0 if c(i,j) ≤ c1; c′(i,j) = c(i,j) if c1 < c(i,j) < c2; c′(i,j) = 0 if c(i,j) ≥ c2
then compute the mean and variance of the contrast over the edge information;
e) judge whether the image is clear: if the contrast mean and variance are both large, the image is clear, as in Fig. 5a) with contrast mean P = 25.64 and variance S = 30.34; if the contrast mean and variance are both small, the image is blurred, as in Fig. 5b) with contrast mean P = 16.79 and variance S = 21.65.
In an embodiment of the present invention, the actual test photos were divided in advance into two groups, one of blurred images and one of clear images. For each photo, the contrast mean and variance of the entire photo were first computed, and then the contrast mean and variance were computed with the method defined by the invention; referring to Fig. 6 and Fig. 7, the resulting distributions are plotted separately.
In Fig. 6 and Fig. 7, the horizontal axis is the contrast mean and the vertical axis the contrast variance; the lighter "+" marks denote blurred images and the darker "." marks denote clear images. Fig. 7 shows the distribution of contrast mean and variance obtained with the evaluation method of this paper: the contrast mean and variance of clear images are large, while for blurred images the contrast concentrates in the low-contrast part, so the mean and variance are small, and the two groups of images overlap only slightly. The contrast evaluation method defined by the present invention therefore separates blurred images from clear images well. Fig. 6 shows the traditional distribution computed over the entire image, in which the two groups of images overlap heavily and are hard to distinguish.
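Steps c) and d) of the contrast evaluation can be sketched as follows. The function name and the default thresholds c1 and c2 are our own illustration; the patent treats c1 and c2 as tunable, and the mean/variance returned are over the kept edge contrasts only, as the text describes.

```python
import numpy as np

def edge_contrast_stats(gray, c1=10.0, c2=200.0):
    """Per-pixel contrast c(i,j) = sum of absolute differences with the 8
    neighbours, computed on the interior of the image; contrasts outside
    (c1, c2) are dropped as non-edge information or noise, and the mean and
    variance of the kept contrasts are returned."""
    g = gray.astype(np.float64)
    center = g[1:-1, 1:-1]
    c = np.zeros_like(center)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
            c += np.abs(center - shifted)
    kept = c[(c > c1) & (c < c2)]   # edge information only
    if kept.size == 0:
        return 0.0, 0.0
    return float(kept.mean()), float(kept.var())
```

On a synthetic example, a hard step edge yields a larger edge-contrast mean than a smoothed version of the same step, matching the intended behaviour of the measure.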
(2) Blur evaluation:
Referring to Fig. 3, scale the located face region to a standard face of identical area;
a) convert the standard face from a color image to a gray-level image;
b) define the Roberts gradient of each pixel:
g(x,y) = [(f(x,y) − f(x+1,y+1))² + (f(x+1,y) − f(x,y+1))²]^(1/2);
c) define a dynamic threshold c: a gradient value exceeding the threshold c keeps its initial value, indicating an edge; otherwise it is set to 0. The threshold is chosen so that the edge information accounts for 0.05 of the standard face area:
g(x,y) = 0 if g(x,y) < c; g(x,y) keeps its value if g(x,y) ≥ c
d) compute the edge gradient histogram and normalize it:
hist(i) = count(g(x,y) = i)
p(i) = hist(i) / Σ_{i=1..255} hist(i)
where i = 1…255 and g(x,y) ranges over all edge points;
e) compute the blur value of the image:
blur = Σ_{i=1..T} p(i)
where T is an empirical value; the blur value of the image lies between 0 and 1, and the larger the blur value, the more blurred the image.
In an embodiment of the present invention, referring to Fig. 8, the normalized gradient histograms of a clear image and a blurred image are plotted. The histogram of the blurred image concentrates mostly in the low-gradient part, so its blur value is large; the histogram of the clear image is spread more widely, with only a small part in the low-gradient region, so its blur value is small.
In an embodiment of the present invention, referring to Fig. 9A to Fig. 9D, the blur values of digital portrait photos with different degrees of blur are shown. In Fig. 9A the blur value is A = 0.495156, the smallest, and the image is the clearest; in Fig. 9B to Fig. 9D the blur values are B = 0.822690, C = 0.907083 and D = 0.951220 respectively: as the blur value grows, the images become more and more blurred. The results show that the blur value correctly reflects the degree of blur of an image.
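Steps b) to e) of the blur evaluation can be sketched as follows, under stated assumptions: the dynamic threshold is taken as the gradient quantile that keeps roughly 5% of pixels as edges (the 0.05 fraction from the text), gradients are binned into unit-wide histogram bins over 1..255 rather than counted as exact integers, and T = 20 is an illustrative choice, since the patent only says T is empirical.

```python
import numpy as np

def blur_value(gray, edge_fraction=0.05, T=20):
    """Roberts-gradient blur measure: threshold the gradient map so that about
    `edge_fraction` of pixels count as edges, build a normalised edge-gradient
    histogram over 1..255, and return the mass below bin T (weak edges)."""
    f = gray.astype(np.float64)
    # Roberts cross gradient magnitude
    g = np.sqrt((f[:-1, :-1] - f[1:, 1:]) ** 2 +
                (f[1:, :-1] - f[:-1, 1:]) ** 2)
    c = np.quantile(g, 1.0 - edge_fraction)   # dynamic threshold
    edges = g[g >= c]                         # gradients kept as edges
    hist, _ = np.histogram(np.clip(edges, 1, 255), bins=255, range=(1, 256))
    p = hist / hist.sum()                     # normalised histogram p(i)
    return float(p[:T].sum())                 # blur = sum of p(i), i <= T
```

A gentle intensity ramp (only weak gradients) scores near 1, while a hard step edge (strong gradients) scores near 0, matching the rule that a larger blur value means a more blurred image.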
Color detection:
Whether color detection is used in Lab color space detection colour of skin naturalness and process decision chart qualified as the color of face area.In the Lab color space, use a value representation by green to red progressively transition, by indigo plant to Huang progressively transition use the b value representation, calculate a average and the b average of each image face area respectively, and draw the areal map of a, b at two-dimensional space, judge then: if
\frac{(a - 7.5)^2}{7.5^2} + \frac{(b - 5)^2}{10^2} > 1
then the image is unacceptable in color.
In an embodiment of the present invention, to determine the distribution of skin tone in the Lab color space, a set of digital portrait photos was first graded by color naturalness into 5 levels according to the certificate-making effect: severe color cast, color cast, hard to determine, normal skin tone, and very natural. Of these 5 levels, severe color cast and color cast count as color-unacceptable; "hard to determine" may be judged either way; normal skin tone and very natural count as color-acceptable. Then, with reference to Figure 10, the mean a and mean b of the face region of each image were computed and projected class by class into the ab plane of the Lab color space. It can be seen that the severe-color-cast and color-cast images are distributed in the left half of the plot and are color-unacceptable; the other three classes are distributed in the elliptical region of the right half, with a clear separation between them, so color-unacceptable images can be distinguished.
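The ellipse test above reduces to a one-line predicate. A minimal sketch, assuming the mean a and b of the face region have already been computed from an RGB-to-Lab conversion performed elsewhere:

```python
def skin_color_acceptable(a_mean, b_mean):
    """Ellipse test from the text: the face color is acceptable when the
    mean (a, b) of the face region lies inside the ellipse centered at
    (7.5, 5) with semi-axes 7.5 (along a) and 10 (along b)."""
    return ((a_mean - 7.5) ** 2 / 7.5 ** 2 +
            (b_mean - 5.0) ** 2 / 10.0 ** 2) <= 1.0
```

Points in the left half of the ab plane (negative a, i.e. greenish casts) fall outside the ellipse and are rejected, matching the distribution described for Figure 10.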
The background detection step:
We implement background detection by a conventional method. First, edge detection is performed on the input digital photo: the Sobel operator is applied to the original image to obtain a gradient map. The gradient map is then binarized to preserve edge information. Next, a seed-fill algorithm is run on the edge map to obtain the background region, and the background region is used as a mask to extract the background image from the original image. Finally, the homogeneity of the background image is checked. With this method, the background region can be extracted fairly accurately from a portrait photo even when the color variation of the background is large.
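A rough sketch of this flow (Sobel gradient, binarization, seed fill) in Python/NumPy. The binarization threshold is an assumed value, and the fill is seeded at the top-left corner; the original implementation's threshold and seed choice are not specified:

```python
import numpy as np
from collections import deque

def background_mask(gray, edge_thresh=60.0):
    """Sobel gradient -> binarize -> seed fill from the border.

    Returns a boolean mask that is True where the pixel is background.
    edge_thresh is an assumed value; the text does not specify one.
    """
    f = np.pad(gray.astype(np.float64), 1, mode='edge')
    # 3x3 Sobel gradients in x and y
    gx = (f[:-2, 2:] + 2 * f[1:-1, 2:] + f[2:, 2:]
          - f[:-2, :-2] - 2 * f[1:-1, :-2] - f[2:, :-2])
    gy = (f[2:, :-2] + 2 * f[2:, 1:-1] + f[2:, 2:]
          - f[:-2, :-2] - 2 * f[:-2, 1:-1] - f[:-2, 2:])
    edges = np.hypot(gx, gy) >= edge_thresh      # binarized gradient map
    # Seed fill: grow the background from the top-left corner,
    # stopping at edge pixels (the subject's outline).
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    q = deque([(0, 0)])
    mask[0, 0] = True
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w
                    and not mask[nr, nc] and not edges[nr, nc]):
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask
```

On a synthetic image with a dark square "subject" on a uniform background, the fill marks the surrounding region as background but cannot cross the subject's edge ring, so the subject's interior stays unmarked.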
Aimed at the requirements of exit-entry digital portrait management, and combined with the certificate quality standard recently proposed by the International Civil Aviation Organization, the present invention establishes a quality evaluation method for exit-entry digital portrait photographs. By selecting qualified portrait photos through evaluation, the requirements of exit-entry business are well satisfied: the evaluation result agrees with most people's subjective perception on the one hand, and meets the operational requirements of certificate making on the other, ensuring that the certificates produced are visually satisfactory while also meeting the requirements of subsequent face authentication systems.
The embodiments described above merely describe preferred implementations of the present invention and do not limit its scope. Without departing from the design spirit of the present invention, all variations and improvements made to the technical scheme of the present invention by ordinary engineers in this field shall fall within the protection scope determined by the claims of the present invention.

Claims (4)

1. A quality evaluation method for exit-entry digital portrait photos, in which a computer exit-entry digital portrait photo quality evaluation model is set up to judge whether a digital portrait input into the system satisfies the exit-entry business standard, comprising:
a step of detecting the format size and file size of the digital portrait photo;
a step of face detection and eye location;
a step of portrait horizontal centering detection; and
a step of background detection;
characterized in that the portrait horizontal centering detection performs the following steps:
1) Define the positions of the eyes in the image: let the width of the image be W and the height H, let the position of the left eye relative to the upper-left corner of the image be (XL, YL) and that of the right eye be (XR, YR):
d1=|XL+XR-W|×0.5
d2=|YL+YR|×0.5
d3=|XR-XL|
Angle=atan(|YL-YR|/d3)
where d1 denotes the horizontal offset of the portrait from the image centerline, d2 the distance of the eye region from the top of the image, d3 the distance between the two eye centers, and Angle the rotation angle of the image;
2) Judge:
the image is not centered when d1 exceeds 0.03 times the image width;
in a certificate photo, a suitable d2 lies between 0.3H and 0.5H;
the range of d3 is W/4 to W/3;
the rotation angle must not exceed 3 degrees;
The method further comprises performing the following steps between the portrait horizontal centering detection step and the background detection step:
1) An exposure detection step, used to detect the reliability of the overall exposure of the face image:
(1) A global luminance deviation detection step, used to detect whether the overall exposure of the face is excessive or insufficient:
A) Convert the located face region from the RGB color space to the Lab color space;
B) Compute the luminance mean P of the face region in the Lab color space;
C) If the luminance mean P is low, i.e. P < L1, the image is determined to be dark and under-exposed; if P is high, i.e. P > L2, the image is too bright and over-exposed; where L1 is the luminance mean at under-exposure and L2 the luminance mean at over-exposure;
(2) A local luminance deviation detection step, used to detect excessive local exposure:
Divide the face region into 8 × 8 sub-blocks, compute the luminance mean P_{i,j} of each sub-block, and compute the symmetric luminance deviations
\nabla P_{i,j} = \left| P_{i,j} - P_{i,\,9-j} \right|, \quad i = 1 \ldots 8, \; j = 1 \ldots 4
Then compute the maximum symmetric deviation, the symmetric deviation mean, and the variance respectively:
Maximum symmetric deviation: \delta_2 = 1 - \frac{\max_{i,j} \nabla P_{i,j}}{255}
Symmetric deviation mean: m = \frac{1}{32} \sum_{i=1}^{8} \sum_{j=1}^{4} \nabla P_{i,j}
Variance: s = \left( \frac{1}{32} \sum_{i=1}^{8} \sum_{j=1}^{4} \left( \nabla P_{i,j} - m \right)^2 \right)^{1/2},
Using the symmetry of the face region, the uniformity of exposure is detected through the maximum symmetric deviation, the symmetric deviation mean, and the variance, where the maximum symmetric deviation δ2 reflects loss of image detail caused by excessive local exposure;
2) A sharpness detection step, used to detect the diffusion around each pixel and the degree of blur of the image:
(1) Contrast evaluation, used to detect the diffusion around each pixel:
A) Scale the located face region to a standard face of identical area;
B) Convert the standard face from a color image to a grayscale image;
C) Define the contrast of each pixel:
c(i,j) = |f(i,j)-f(i-1,j-1)| + |f(i,j)-f(i-1,j)| + |f(i,j)-f(i-1,j+1)| + |f(i,j)-f(i,j-1)| + |f(i,j)-f(i,j+1)| + |f(i,j)-f(i+1,j-1)| + |f(i,j)-f(i+1,j)| + |f(i,j)-f(i+1,j+1)|
D) Set a lower limit and an upper limit for the contrast threshold, used respectively to exclude non-edge information and noise interference while preserving edge information:
c'(i,j) = \begin{cases} 0, & c(i,j) \le c_1 \\ c(i,j), & c_1 < c(i,j) < c_2 \\ 0, & c(i,j) \ge c_2 \end{cases}
Compute the mean and variance of the contrast over the edge information;
E) Judge whether the image is sharp: if the contrast mean is large and its variance is large, the image is sharp; if the contrast mean is small and its variance is small, the image is blurred;
(2) Blur evaluation, which determines the degree of blur of the image using the Robert gradient under different dynamic thresholds:
A) Convert the color image of the face region, scaled to a standard face area of 28960 pixels, to a grayscale image;
B) Define the Robert gradient of each pixel:
g(x,y) = \left[ \left( f(x,y) - f(x+1,y+1) \right)^2 + \left( f(x+1,y) - f(x,y+1) \right)^2 \right]^{1/2}
C) Define a dynamic threshold c: when the gradient value exceeds c, the pixel keeps its value and is treated as an edge point; otherwise it is set to 0:
g(x,y) = \begin{cases} 0, & g(x,y) < c \\ g(x,y), & g(x,y) \ge c \end{cases}
D) Compute the edge gradient histogram and normalize it:
hist(i)=count(g(x,y)=i)
p(i) = \frac{hist(i)}{\sum_{j=1}^{255} hist(j)}
where i takes the values 1, ..., 255, and g(x, y) ranges over all edge points;
E) Compute and determine the blur measure of the image:
blur = \sum_{i=1}^{T} p(i)
where T is an empirical value; the blur value of an image lies between 0 and 1, and the larger the value, the blurrier the image;
3) A color detection step, used to detect skin-tone naturalness in the Lab color space and judge whether the color of the face region of an image is acceptable: in the Lab color space, the a value represents the gradual transition from green to red and the b value represents the gradual transition from blue to yellow; compute the mean a value and mean b value of the face region of each image, plot the distribution of (a, b) in the two-dimensional plane, and then judge: if
\frac{(a - 7.5)^2}{7.5^2} + \frac{(b - 5)^2}{10^2} > 1
then the image is unacceptable in color.
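The centering test of claim 1, steps 1)-2), can be transcribed directly into code. A sketch assuming pixel coordinates and that the 3-degree limit applies to the angle expressed in degrees (the claim's atan output unit is implicit):

```python
import math

def centering_check(W, H, XL, YL, XR, YR):
    """Portrait horizontal centering test from claim 1.

    W, H: image width and height; (XL, YL), (XR, YR): left- and
    right-eye positions relative to the upper-left corner, in pixels.
    Returns (ok, (d1, d2, d3, angle)).
    """
    d1 = abs(XL + XR - W) * 0.5      # horizontal offset from centerline
    d2 = abs(YL + YR) * 0.5          # eye-region distance from image top
    d3 = abs(XR - XL)                # distance between eye centers
    angle = math.degrees(math.atan(abs(YL - YR) / d3))  # rotation angle
    ok = (d1 <= 0.03 * W                 # centered
          and 0.3 * H <= d2 <= 0.5 * H   # eyes at a suitable height
          and W / 4 <= d3 <= W / 3       # plausible inter-eye distance
          and angle <= 3.0)              # rotation within 3 degrees
    return ok, (d1, d2, d3, angle)
```

For the standard 354 × 472 photo of claim 2, symmetric eyes at the same height pass, while a pair of eyes 30 pixels apart vertically fails the rotation test.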
2. The quality evaluation method for exit-entry digital portrait photos according to claim 1, characterized in that the format size and file size detection performs the following steps: the computer reads the input digital image data and its attributes, including the format of the image, the width and height of the image, and the size of the file, and judges whether the exit-entry business standard is satisfied, namely that the image format is JPG, the image width is 354 pixels, the height is 472 pixels, and the file size is in the range 18-25 KB.
3. The quality evaluation method for exit-entry digital portrait photos according to claim 1 or 2, characterized in that the face detection and eye location perform the following steps: first, face detection and feature location techniques from the face recognition field are used to find the positions of the face region and the eyes in the digital photo, the face region comprising the area between the forehead, the chin, and the left and right auricles; eye location includes locating the positions of the pupil centers; if no face is detected in the image or the eyes cannot be located, it is determined that the image contains no face or that the image quality is poor.
4. The quality evaluation method for exit-entry digital portrait photos according to claim 3, characterized in that the background detection performs the following steps: first, the Sobel operator is applied to the original image to obtain a gradient map; the gradient map is binarized to preserve edge information; a seed-fill algorithm is then used to obtain the background region; finally, the background region is used as a mask to extract the background image from the original image.
CN2008102279269A 2008-12-01 2008-12-01 Quality estimation method of exit-entry digital portrait photos Expired - Fee Related CN101609500B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008102279269A CN101609500B (en) 2008-12-01 2008-12-01 Quality estimation method of exit-entry digital portrait photos


Publications (2)

Publication Number Publication Date
CN101609500A CN101609500A (en) 2009-12-23
CN101609500B (en) 2012-07-25

Family

ID=41483252

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008102279269A Expired - Fee Related CN101609500B (en) 2008-12-01 2008-12-01 Quality estimation method of exit-entry digital portrait photos

Country Status (1)

Country Link
CN (1) CN101609500B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104458597A (en) * 2014-12-03 2015-03-25 东莞市神州视觉科技有限公司 Camera-based method, device and system for detecting product color
CN105187721A (en) * 2015-08-31 2015-12-23 广州市幸福网络技术有限公司 An identification camera and method for rapidly extracting portrait features

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102137271A (en) * 2010-11-04 2011-07-27 华为软件技术有限公司 Method and device for evaluating image quality
FR2968811A1 (en) * 2010-12-13 2012-06-15 Univ Paris Diderot Paris 7 METHOD OF DETECTING AND QUANTIFYING FLOUD IN A DIGITAL IMAGE
CN102063660B (en) * 2010-12-28 2012-09-05 广州商景网络科技有限公司 Acquisition method for electronic photograph, client, server and system
CN102063659B (en) * 2010-12-28 2012-09-05 广州商景网络科技有限公司 Method, server and system for collecting and making electronic photo
CN102622741B (en) * 2011-01-30 2015-04-29 联想(北京)有限公司 Method for detecting image file and apparatus thereof
CN103988207B (en) * 2011-12-14 2018-03-13 英特尔公司 Methods, devices and systems for colour of skin activation
CN103455994A (en) * 2012-05-28 2013-12-18 佳能株式会社 Method and equipment for determining image blurriness
CN102867179A (en) * 2012-08-29 2013-01-09 广东铂亚信息技术股份有限公司 Method for detecting acquisition quality of digital certificate photo
CN103927753B (en) * 2014-04-21 2016-03-02 中国人民解放军国防科学技术大学 The absolute blur level method of estimation of a kind of image based on multiple dimensioned restructuring DCT coefficient
CN105654451A (en) * 2014-11-10 2016-06-08 中兴通讯股份有限公司 Image processing method and device
CN105139404B (en) * 2015-08-31 2018-12-21 广州市幸福网络技术有限公司 A kind of the license camera and shooting quality detection method of detectable shooting quality
CN105120167B (en) * 2015-08-31 2018-11-06 广州市幸福网络技术有限公司 A kind of license camera and license image pickup method
CN106874306B (en) * 2015-12-14 2020-10-09 公安部户政管理研究中心 Method for evaluating key performance index of population information portrait comparison system
CN105915791B (en) * 2016-05-03 2019-02-05 Oppo广东移动通信有限公司 Electronic apparatus control method and device, electronic device
CN106372651B (en) * 2016-08-22 2018-03-06 平安科技(深圳)有限公司 The detection method and device of picture quality
CN106446851A (en) * 2016-09-30 2017-02-22 厦门大图智能科技有限公司 Visible light based human face optimal selection method and system
CN107169507A (en) * 2017-01-06 2017-09-15 华南理工大学 A kind of certificate photo exposure directions detection algorithm extracted based on grid search-engine
EP3629286A4 (en) * 2017-05-01 2021-01-13 Kowa Company, Ltd. Image analysis evaluation method, computer program, and image analysis evaluation device
JP6879841B2 (en) * 2017-06-28 2021-06-02 株式会社 東京ウエルズ Image processing method and defect inspection method
CN109215010B (en) * 2017-06-29 2021-08-31 沈阳新松机器人自动化股份有限公司 Image quality judgment method and robot face recognition system
CN109584198B (en) * 2017-09-26 2022-12-23 浙江宇视科技有限公司 Method and device for evaluating quality of face image and computer readable storage medium
CN107977648B (en) * 2017-12-20 2020-05-12 武汉大学 Identification card definition distinguishing method and system based on face recognition
CN110363753B (en) * 2019-07-11 2021-06-22 北京字节跳动网络技术有限公司 Image quality evaluation method and device and electronic equipment
CN110363180A (en) * 2019-07-24 2019-10-22 厦门云上未来人工智能研究院有限公司 A kind of method and apparatus and equipment that statistics stranger's face repeats
CN110738656B (en) * 2019-10-28 2022-05-31 公安部交通管理科学研究所 Definition evaluation method of certificate photo, storage medium and processor
CN111199527B (en) * 2020-01-04 2021-02-02 圣点世纪科技股份有限公司 Finger vein image noise detection method based on multi-direction self-adaptive threshold
CN115115504A (en) * 2021-03-23 2022-09-27 深圳市腾讯计算机系统有限公司 Certificate photo generation method and device, computer equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1567369A (en) * 2003-06-18 2005-01-19 佳能株式会社 Image processing method and device
CN1841405A (en) * 2005-04-01 2006-10-04 上海银晨智能识别科技有限公司 Three-dimensional portrait imaging device and distinguishing method for three-dimensional human face


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Shen Fengyi et al., "Face image detection based on skin color and facial features and its application", Microcomputer Information, 2007, No. 31, pp. 203-205. *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104458597A (en) * 2014-12-03 2015-03-25 东莞市神州视觉科技有限公司 Camera-based method, device and system for detecting product color
CN104458597B (en) * 2014-12-03 2017-09-22 东莞市神州视觉科技有限公司 A kind of product colour detection method, device and system based on camera
CN105187721A (en) * 2015-08-31 2015-12-23 广州市幸福网络技术有限公司 An identification camera and method for rapidly extracting portrait features
CN105187721B (en) * 2015-08-31 2018-09-21 广州市幸福网络技术有限公司 A kind of the license camera and method of rapid extraction portrait feature

Also Published As

Publication number Publication date
CN101609500A (en) 2009-12-23

Similar Documents

Publication Publication Date Title
CN101609500B (en) Quality estimation method of exit-entry digital portrait photos
CN108038456B (en) Anti-deception method in face recognition system
CN104834898B A kind of quality classification method of personage's photographs
CN105046246B (en) It can carry out the license camera and portrait pose detection method of the shooting prompt of portrait posture
CN105359162B (en) For the pattern mask of the selection and processing related with face in image
US7715596B2 (en) Method for controlling photographs of people
CN105139404B (en) A kind of the license camera and shooting quality detection method of detectable shooting quality
CN108010024B (en) Blind reference tone mapping image quality evaluation method
CN107862299A (en) A kind of living body faces detection method based on near-infrared Yu visible ray binocular camera
CN101390128B (en) Detecting method and detecting system for positions of face parts
US20050117802A1 (en) Image processing method, apparatus, and program
US20070122034A1 (en) Face detection in digital images
KR100857463B1 (en) Face Region Detection Device and Correction Method for Photo Printing
CN101655981A (en) Method for detecting and adjusting inversion of certificate image
Ananto et al. Color transformation for color blind compensation on augmented reality system
CN111079688A (en) Living body detection method based on infrared image in face recognition
CN112528939A (en) Quality evaluation method and device for face image
CN111222433A (en) Automatic face auditing method, system, equipment and readable storage medium
JP4619762B2 (en) Image processing method, apparatus, and program
CN116704579A (en) Student welcome new photo analysis system and method based on image processing
US20190347469A1 (en) Method of improving image analysis
RU2365995C2 (en) System and method of recording two-dimensional images
CN114155569B (en) Cosmetic progress detection method, device, equipment and storage medium
CN113221606B (en) Face recognition method based on IMS video conference login
KR102634477B1 (en) Diagnosis system for using machine learning-based 2d skin image information and method thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120725

Termination date: 20121201