CN107590807A - Method and apparatus for detecting image quality - Google Patents
- Publication number: CN107590807A (application CN201710911138.0A)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Landscapes: Image Analysis; Image Processing
Abstract
Embodiments of the present application disclose a method and apparatus for detecting image quality. One embodiment of the method includes: obtaining an image to be detected, together with face region information that was generated in advance by performing face detection on the image to be detected and that indicates the face region in the image to be detected; extracting a face image from the image to be detected based on the face region information; inputting the face image into a pre-trained convolutional neural network to obtain image feature information, where the convolutional neural network is used to extract image features; parsing the image feature information to determine face key point information; determining, based on the face key point information, the probability that the quality of the face image is acceptable; and determining, based on that probability, whether the quality of the face image is acceptable. This embodiment improves the efficiency of image quality detection.
Description
Technical field
The present application relates to the field of computer technology, specifically to the field of Internet technology, and more particularly to a method and apparatus for detecting image quality.
Background
Face recognition has many application scenarios, such as face-based payment, face authentication, and face beautification. During face recognition, if the quality of an image containing a face is unacceptable (for example, multiple parts of the face are occluded or blurred), recognition errors or abnormal conditions such as system crashes may result. If, before face recognition is performed, the quality of the image containing the face is first checked, images of unacceptable quality can be kept out of the subsequent face recognition flow, which improves face recognition efficiency. It is therefore particularly important to perform quality detection on images containing faces.
Summary of the invention
The purpose of the embodiments of the present application is to propose a method and apparatus for detecting image quality.
In a first aspect, an embodiment of the present application provides a method for detecting image quality, the method including: obtaining an image to be detected, together with face region information that was generated in advance by performing face detection on the image to be detected and that indicates the face region in the image to be detected; extracting a face image from the image to be detected based on the face region information; inputting the face image into a pre-trained convolutional neural network to obtain image feature information, where the convolutional neural network is used to extract image features; parsing the image feature information to determine face key point information; determining, based on the face key point information, the probability that the quality of the face image is acceptable; and determining, based on that probability, whether the quality of the face image is acceptable.
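The overall flow of the first aspect can be sketched end to end. The following is a minimal illustrative sketch only, assuming NumPy arrays for images and stand-in functions in place of the trained models; every name and rule below is hypothetical, not taken from the patent:

```python
import numpy as np

def extract_face_image(image, region):
    """Crop the face region (x, y, width, height) from the image array."""
    x, y, w, h = region
    return image[y:y + h, x:x + w]

def extract_features(face_image):
    """Stand-in for the pre-trained CNN: here just a flattened, normalized crop."""
    return face_image.astype(float).ravel() / 255.0

def keypoint_info_from_features(features):
    """Stand-in for parsing features into per-key-point occlusion probabilities."""
    # Illustrative only: derive a fixed-length probability vector from the features.
    return np.clip(features[:72], 0.0, 1.0)

def quality_probability(keypoint_probs):
    """Combine per-key-point occlusion probabilities into one quality probability."""
    return float(1.0 - keypoint_probs.mean())

def is_quality_acceptable(image, region, threshold=0.5):
    face = extract_face_image(image, region)
    feats = extract_features(face)
    kp = keypoint_info_from_features(feats)
    p = quality_probability(kp)
    return p, p >= threshold

image = np.zeros((100, 100), dtype=np.uint8)  # toy all-black image
p, ok = is_quality_acceptable(image, (10, 10, 80, 80))
```

The stand-in functions mark where the pre-trained convolutional neural network and the probability calculation models of the later embodiments would plug in.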
In some embodiments, the face key point information includes the probability that each face key point in the face image is occluded; and parsing the image feature information to determine the face key point information includes: inputting the image feature information into a pre-trained first probability calculation model to obtain the probability that each face key point in the face image is occluded, where the first probability calculation model is used to characterize the correspondence between the image feature information of an image containing a face and the occlusion probabilities of its face key points.
In some embodiments, the face key point information includes the coordinates of each face key point in the face image; and parsing the image feature information to determine the face key point information includes: inputting the image feature information into a pre-trained face key point localization model to obtain the coordinates of each face key point in the face image, where the face key point localization model is used to characterize the correspondence between the image feature information of an image containing a face and the coordinates of its face key points.
In some embodiments, determining, based on the face key point information, the probability that the quality of the face image is acceptable includes: inputting the occlusion probabilities of the face key points in the face image into a pre-trained second probability calculation model to obtain the probability that the quality of the face image is acceptable, where the second probability calculation model is used to characterize the correspondence between the occlusion probabilities of the face key points in an image containing a face and the probability that the image's quality is acceptable.
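As an illustrative sketch of what such a second probability calculation model could look like, the following assumes a simple logistic mapping from per-key-point occlusion probabilities to a single quality probability; the logistic form and the toy weights are assumptions for illustration, not the patent's actual model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def second_probability_model(occlusion_probs, weights, bias):
    """Map per-key-point occlusion probabilities to one quality probability."""
    return float(sigmoid(bias + np.dot(weights, occlusion_probs)))

# Toy weights: every occluded key point lowers the quality probability equally.
n_keypoints = 72
weights = np.full(n_keypoints, -8.0 / n_keypoints)
bias = 4.0

clear_face = np.zeros(n_keypoints)      # nothing occluded
occluded_face = np.ones(n_keypoints)    # everything occluded
p_clear = second_probability_model(clear_face, weights, bias)
p_occluded = second_probability_model(occluded_face, weights, bias)
```

With these weights, a face with no occluded key points maps to a high quality probability and a fully occluded face maps to a low one, which is the direction of correspondence the embodiment describes.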
In some embodiments, determining, based on the face key point information, the probability that the quality of the face image is acceptable includes: extracting from the face image, centered on each face key point, an image patch with a preset height and width; and inputting the extracted image patches into a pre-trained third probability calculation model to obtain the probability that the quality of the face image is acceptable, where the third probability calculation model is used to characterize the correspondence between the image patches extracted from an image containing a face (each centered on a face key point and having the preset height and width) and the probability that the image's quality is acceptable.
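Extracting a patch of preset height and width centered on a key point can be sketched as follows; clamping the patch to the image border is an assumption made here so the example stays self-contained:

```python
import numpy as np

def extract_patch(image, center, height, width):
    """Extract a patch of the given size centered on a key point (row, col),
    shifting the window as needed so it stays inside the image."""
    rows, cols = image.shape[:2]
    r, c = center
    top = min(max(r - height // 2, 0), rows - height)
    left = min(max(c - width // 2, 0), cols - width)
    return image[top:top + height, left:left + width]

face = np.arange(100).reshape(10, 10)   # toy 10x10 face image
patch = extract_patch(face, center=(5, 5), height=4, width=4)
```

In the embodiment, one such patch would be cut out per face key point and the set of patches fed to the third probability calculation model.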
In some embodiments, the second probability calculation model is obtained through the following training steps: extracting a preset first training sample that includes the occlusion probabilities of the face key points in a sample image showing a face, together with a first data label indicating whether the quality of the sample image is acceptable; and training the second probability calculation model using a machine learning method, based on the first training sample, a preset first classification loss function, and a back-propagation algorithm, where the first classification loss function is used to characterize the degree of difference between the probability output by the second probability calculation model and the first data label.
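The training step described above (a classification loss minimized via back-propagated gradients) can be sketched with a one-layer logistic model; the toy data, the label rule, and the hyperparameters below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: occlusion-probability vectors with a 0/1 quality label.
# Assumed label rule: quality is acceptable when mean occlusion is low.
X = rng.random((200, 8))
y = (X.mean(axis=1) < 0.5).astype(float)

w = np.zeros(8)
b = 0.0
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    p = sigmoid(X @ w + b)             # forward pass
    grad = p - y                       # d(cross-entropy loss)/d(logit)
    w -= lr * (X.T @ grad) / len(y)    # back-propagated gradient step
    b -= lr * grad.mean()

train_accuracy = ((sigmoid(X @ w + b) >= 0.5) == (y == 1)).mean()
```

The cross-entropy term plays the role of the first classification loss function: it measures the difference between the model's output probability and the 0/1 data label, and its gradient drives the weight updates.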
In some embodiments, the third probability calculation model is obtained through the following training steps: extracting a preset second training sample that includes the image patches extracted from a sample image showing a face (each centered on a face key point and having the preset height and width), together with a second data label indicating whether the quality of the sample image is acceptable; and training the third probability calculation model using a machine learning method, based on the second training sample, a preset second classification loss function, and a back-propagation algorithm, where the second classification loss function is used to characterize the degree of difference between the probability output by the third probability calculation model and the second data label.
In some embodiments, extracting the face image from the image to be detected based on the face region information includes: enlarging the extent of the face region indicated by the face region information to obtain a first face region; and cropping the first face region to obtain the face image.
In some embodiments, the face region is a rectangular region; and enlarging the extent of the face region indicated by the face region information includes: enlarging the height and width of the face region indicated by the face region information by a preset multiple, or increasing them by a preset value.
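The enlargement of a rectangular region by a preset multiple or a preset value can be sketched as follows; enlarging about the region's center is an assumption made for illustration, since the patent does not fix the anchor point:

```python
def enlarge_region(region, multiple=None, increment=None):
    """Enlarge a rectangular face region (x, y, width, height) about its
    center, either by a preset multiple or by a preset increment."""
    x, y, w, h = region
    if multiple is not None:
        new_w, new_h = w * multiple, h * multiple
    elif increment is not None:
        new_w, new_h = w + increment, h + increment
    else:
        return region
    cx, cy = x + w / 2, y + h / 2
    return (cx - new_w / 2, cy - new_h / 2, new_w, new_h)

doubled = enlarge_region((10, 10, 20, 20), multiple=2)
padded = enlarge_region((10, 10, 20, 20), increment=10)
```

The enlarged "first face region" would then be cropped to obtain the face image, giving the later steps some context around the detected face.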
In a second aspect, an embodiment of the present application provides an apparatus for detecting image quality, the apparatus including: an obtaining unit configured to obtain an image to be detected, together with face region information that was generated in advance by performing face detection on the image to be detected and that indicates the face region in the image to be detected; an extraction unit configured to extract a face image from the image to be detected based on the face region information; an input unit configured to input the face image into a pre-trained convolutional neural network to obtain image feature information, where the convolutional neural network is used to extract image features; a first determining unit configured to parse the image feature information to determine face key point information; a second determining unit configured to determine, based on the face key point information, the probability that the quality of the face image is acceptable; and a third determining unit configured to determine, based on that probability, whether the quality of the face image is acceptable.
In some embodiments, the face key point information includes the probability that each face key point in the face image is occluded; and the first determining unit includes: a first input subunit configured to input the image feature information into a pre-trained first probability calculation model to obtain the probability that each face key point in the face image is occluded, where the first probability calculation model is used to characterize the correspondence between the image feature information of an image containing a face and the occlusion probabilities of its face key points.
In some embodiments, the face key point information includes the coordinates of each face key point in the face image; and the first determining unit includes: a second input subunit configured to input the image feature information into a pre-trained face key point localization model to obtain the coordinates of each face key point in the face image, where the face key point localization model is used to characterize the correspondence between the image feature information of an image containing a face and the coordinates of its face key points.
In some embodiments, the second determining unit includes: a third input subunit configured to input the occlusion probabilities of the face key points in the face image into a pre-trained second probability calculation model to obtain the probability that the quality of the face image is acceptable, where the second probability calculation model is used to characterize the correspondence between the occlusion probabilities of the face key points in an image containing a face and the probability that the image's quality is acceptable.
In some embodiments, the second determining unit includes: an extraction subunit configured to extract from the face image, centered on each face key point, an image patch with a preset height and width; and a fourth input subunit configured to input the extracted image patches into a pre-trained third probability calculation model to obtain the probability that the quality of the face image is acceptable, where the third probability calculation model is used to characterize the correspondence between the image patches extracted from an image containing a face (each centered on a face key point and having the preset height and width) and the probability that the image's quality is acceptable.
In some embodiments, the second probability calculation model is obtained through the following training steps: extracting a preset first training sample that includes the occlusion probabilities of the face key points in a sample image showing a face, together with a first data label indicating whether the quality of the sample image is acceptable; and training the second probability calculation model using a machine learning method, based on the first training sample, a preset first classification loss function, and a back-propagation algorithm, where the first classification loss function is used to characterize the degree of difference between the probability output by the second probability calculation model and the first data label.
In some embodiments, the third probability calculation model is obtained through the following training steps: extracting a preset second training sample that includes the image patches extracted from a sample image showing a face (each centered on a face key point and having the preset height and width), together with a second data label indicating whether the quality of the sample image is acceptable; and training the third probability calculation model using a machine learning method, based on the second training sample, a preset second classification loss function, and a back-propagation algorithm, where the second classification loss function is used to characterize the degree of difference between the probability output by the third probability calculation model and the second data label.
In some embodiments, the extraction unit includes: an enlarging subunit configured to enlarge the extent of the face region indicated by the face region information to obtain a first face region; and a cropping subunit configured to crop the first face region to obtain the face image.
In some embodiments, the face region is a rectangular region; and the enlarging subunit includes: an enlarging module configured to enlarge the height and width of the face region indicated by the face region information by a preset multiple, or to increase them by a preset value.
In a third aspect, an embodiment of the present application provides an electronic device, the electronic device including: one or more processors; and a storage device for storing one or more programs, which, when executed by the one or more processors, cause the one or more processors to implement the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the method described in any implementation of the first aspect.
The method and apparatus for detecting image quality provided by the embodiments of the present application obtain an image to be detected, together with face region information that was generated in advance by performing face detection on the image to be detected and that indicates the face region in the image, so that a face image can be extracted from the image to be detected based on the face region information. The face image is then input into a pre-trained convolutional neural network to obtain image feature information. By parsing the image feature information, face key point information can be determined. Finally, the probability that the quality of the face image is acceptable is determined based on the face key point information, and whether the quality of the face image is acceptable is determined based on that probability. By making effective use of face image extraction, the detection range can be reduced and the efficiency of image quality detection improved.
Moreover, a convolutional neural network can generally extract multi-dimensional image feature information from an image. Determining the face key point information from the image feature information of the face image extracted by the pre-trained convolutional neural network, and determining the quality-acceptable probability of the face image from that key point information, can improve the accuracy of the probability and thus the accuracy of the image quality detection result.
Brief description of the drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the method for detecting image quality according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the method for detecting image quality according to the present application;
Fig. 4 is a flowchart of another embodiment of the method for detecting image quality according to the present application;
Fig. 5 is a structural schematic diagram of one embodiment of the apparatus for detecting image quality according to the present application;
Fig. 6 is a structural schematic diagram of a computer system suitable for implementing the electronic device of the embodiments of the present application.
Detailed description of embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the related invention and do not limit that invention. It should also be noted that, for ease of description, only the parts related to the invention are shown in the drawings.
It should be noted that, provided they do not conflict, the embodiments in the present application and the features in those embodiments may be combined with one another. The present application is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 in which embodiments of the method for detecting image quality or of the apparatus for detecting image quality of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include a data storage server 101, a network 102, and an image processing server 103. The network 102 serves as the medium providing a communication link between the data storage server 101 and the image processing server 103. The network 102 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
The data storage server 101 may be a server providing various services, for example a server that stores images containing faces together with the face region information indicating the face region in each image. Optionally, the data storage server 101 may also have a face detection function, and the face region information may be information generated after the data storage server 101 performs face detection on the image.
The image processing server 103 may be a server providing various services, for example a server that obtains, from the data storage server 101, an image to be detected and the face region information indicating the face region in that image, performs the corresponding detection operation based on the image to be detected and the face region information, and obtains a detection result.
It should be noted that the method for detecting image quality provided by the embodiments of the present application is generally performed by the image processing server 103; accordingly, the apparatus for detecting image quality is generally disposed in the image processing server 103.
It should be pointed out that if the image to be detected and the face region information indicating the face region in the image to be detected are stored locally in the image processing server 103 in advance, the system architecture 100 need not include the data storage server 101.
It should be understood that the numbers of data storage servers, networks, and image processing servers in Fig. 1 are merely illustrative. Any number of data storage servers, networks, and image processing servers may be provided according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for detecting image quality according to the present application is shown. The flow 200 of the method for detecting image quality includes the following steps:
Step 201: obtain an image to be detected, together with face region information that was generated in advance by performing face detection on the image to be detected and that indicates the face region in the image to be detected.
In this embodiment, the electronic device on which the method for detecting image quality runs (for example, the image processing server 103 shown in Fig. 1) may obtain, through a wired or wireless connection from a connected data storage server (for example, the data storage server 101 shown in Fig. 1), the image to be detected and the face region information that was generated in advance by performing face detection on that image and that indicates the face region in the image. Of course, if the image to be detected and the face region information are stored locally in the electronic device in advance, the electronic device may obtain them locally.
It should be noted that the face region may have any shape (for example, circular or rectangular). Here, when the face region in the image to be detected is a circular region, the face region information may include, for example, the coordinates of the center point of the face region and the radius of the face region. When the face region in the image to be detected is a rectangular region, the face region information may include, for example, the coordinates of at least one vertex of the face region, together with its height and width.
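The two forms of face region information described above can be sketched as simple data containers; the field names are illustrative, not the patent's data format:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class CircularFaceRegion:
    center: Tuple[float, float]  # (x, y) of the center point
    radius: float

@dataclass
class RectangularFaceRegion:
    vertex: Tuple[float, float]  # (x, y) of the top-left vertex (assumed)
    height: float
    width: float

    def contains(self, x: float, y: float) -> bool:
        """Whether a pixel coordinate falls inside the region."""
        vx, vy = self.vertex
        return vx <= x < vx + self.width and vy <= y < vy + self.height

region = RectangularFaceRegion(vertex=(10, 20), height=50, width=40)
```

Either container carries exactly the fields the paragraph above lists, which is all the extraction step in Step 202 needs.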
It should be pointed out that the image to be detected and the face region information may be obtained actively by the electronic device or received passively by it (for example, sent by the data storage server to the electronic device); this embodiment places no restriction in this regard.
In some optional implementations of this embodiment, the electronic device may also obtain the image to be detected and the face region information from a connected terminal device. It should be noted that this embodiment places no restriction on the source of the image to be detected and the face region information.
Step 202: extract a face image from the image to be detected based on the face region information.
In this embodiment, after obtaining the image to be detected and the face region information, the electronic device may extract a face image from the image to be detected based on the face region information. As an example, the electronic device may crop, out of the image to be detected, the face region indicated by the face region information to obtain the face image.
Step 203: input the face image into the pre-trained convolutional neural network to obtain image feature information.
In this embodiment, after obtaining the face image, the electronic device may input the face image into the pre-trained convolutional neural network to obtain image feature information, where the convolutional neural network may be used to extract image features. Here, the image feature information may be information characterizing the features of the image, and the features of the image may be various basic elements of the image (for example, color, lines, texture, and so on). In practice, a convolutional neural network (CNN) is a feedforward neural network whose artificial neurons respond to surrounding units within part of their coverage area, and it performs outstandingly in image processing. A convolutional neural network can therefore be used to extract image feature information.
It should be noted that the convolutional neural network may be obtained by performing supervised training on an existing deep convolutional neural network (for example, DenseBox, VGGNet, ResNet, SegNet, and so on) using a machine learning method and training samples. It should be pointed out that the convolutional neural network may include at least one convolutional layer and at least one pooling layer, where a convolutional layer may be used to extract image features and a pooling layer may be used to downsample the input information. In addition, the convolutional neural network may also apply nonlinear computation to the information using various nonlinear activation functions (for example, the ReLU (Rectified Linear Units) function, the Sigmoid function, and so on).
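The three building blocks named above (convolutional layer, pooling layer, nonlinear activation) can each be sketched in a few lines of NumPy; this is a minimal illustrative implementation, not the structure of the pre-trained network itself:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation form) of one channel."""
    kh, kw = kernel.shape
    rows = image.shape[0] - kh + 1
    cols = image.shape[1] - kw + 1
    out = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling, discarding any ragged border."""
    rows, cols = x.shape[0] // size, x.shape[1] // size
    return x[:rows * size, :cols * size].reshape(
        rows, size, cols, size).max(axis=(1, 3))

def relu(x):
    return np.maximum(x, 0.0)

image = np.eye(6)                    # toy 6x6 input with a diagonal line
kernel = np.array([[1.0, -1.0],      # toy 2x2 edge-like filter
                   [-1.0, 1.0]])
features = max_pool(relu(conv2d(image, kernel)))
```

The convolution responds to the diagonal structure, ReLU discards the negative responses, and pooling downsamples the resulting feature map, mirroring the conv / activation / pooling pipeline described in the paragraph above.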
Step 204: parse the image feature information to determine face key point information.
In this embodiment, after obtaining the image feature information of the face image, the electronic device may parse the image feature information to determine face key point information. Here, the face key point information may include, for example, the probability that each face key point in the face image is occluded, or the coordinates of each face key point.
As an example, when the face key point information to be determined by the electronic device includes the probability that each face key point in the face image is occluded, a first correspondence table may be stored in advance locally on the electronic device or on a server in telecommunication connection with the electronic device. The first correspondence table may include a large amount of image feature information and, for each piece of image feature information, the occlusion probability of each corresponding face key point. The electronic device may look up, in the first correspondence table, the target image feature information matching the image feature information of the face image, and determine the occlusion probabilities of the face key points corresponding to the target image feature information as the occlusion probabilities of the face key points in the face image.
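The first correspondence table can be sketched as follows, assuming (purely as an illustration) that "matching" means nearest-neighbor distance between feature vectors; the table contents are made up:

```python
import numpy as np

# Two table entries: a feature vector and the per-key-point occlusion
# probabilities stored against it (three key points in this toy example).
table_features = np.array([[0.1, 0.9],    # entry 0
                           [0.8, 0.2]])   # entry 1
table_occlusion_probs = np.array([[0.05, 0.10, 0.90],   # probs for entry 0
                                  [0.70, 0.60, 0.20]])  # probs for entry 1

def lookup_occlusion_probs(features):
    """Return the stored probabilities of the closest table entry."""
    distances = np.linalg.norm(table_features - features, axis=1)
    best = int(np.argmin(distances))
    return table_occlusion_probs[best]

probs = lookup_occlusion_probs(np.array([0.15, 0.85]))  # closest to entry 0
```

The second correspondence table described next works identically, except that key point coordinates rather than occlusion probabilities are stored against each feature entry.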
As another example, when the face key point information to be determined by the electronic device includes the coordinates of each face key point in the face image, a second correspondence table may be stored in advance locally on the electronic device or on a server in telecommunication connection with the electronic device. The second correspondence table may include a large amount of image feature information and the coordinates of each face key point corresponding to each piece of image feature information. The electronic device may look up, in the second correspondence table, the target image feature information matching the image feature information of the face image, and determine the coordinates of the face key points corresponding to the target image feature information as the coordinates of the face key points in the face image.
It should be noted that face key points may be pre-specified points in the face carrying strong semantic information (for example, points at the corners of the eyes, the corners of the mouth, the wings of the nose, along the facial contour, and so on). In practice, the number of face key points may be 72 or any other preset value; this embodiment places no restriction in this regard.
In some optional implementations of this embodiment, when the face key point information includes the probability that each face key point in the face image is occluded, the electronic device may input the image feature information of the face image into a pre-trained first probability calculation model to obtain the probability that each face key point in the face image is occluded, where the first probability calculation model may be used to characterize the correspondence between the image feature information of an image containing a face and the occlusion probabilities of its face key points.
As an example, the first probability calculation model may be a correspondence table pre-established by technicians on the basis of extensive statistical calculation to characterize the correspondence between image feature information and the occlusion probabilities of face key points; it may also be obtained by training a model used for classification, such as a Naive Bayesian Model (NBM) or a Support Vector Machine (SVM), or by training with a classification function (for example, the softmax function).
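As a sketch of the classification-function route, the softmax function maps per-class scores to probabilities that sum to one; for a single key point with two classes (not occluded / occluded), it reduces to the following, where the logit values are made up for illustration:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# One key point, two classes: [not occluded, occluded]. In the trained model
# these scores would come from the image feature information.
logits = np.array([2.0, 0.0])
p_not_occluded, p_occluded = softmax(logits)
```

The occluded-class probability is exactly the per-key-point occlusion probability that the first probability calculation model is described as outputting.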
Step 205: determine, based on the face key point information, the probability that the quality of the face image is acceptable.
In this embodiment, after determining the face key point information, the electronic device may determine, based on the face key point information, the probability that the quality of the face image is acceptable.
As an example, when the face key point information determined by the electronic device includes the probability that each face key point included in the facial image is occluded, a calculation formula may be stored in advance locally on the electronic device or on a server in remote communication connection with it. The calculation formula may be pre-established by a technician for calculating, based on the occlusion probabilities of the face key points included in an image including a face, the probability that the quality of the image is qualified. The electronic device may substitute the occlusion probabilities of the face key points included in the facial image into the calculation formula, to calculate the probability that the quality of the facial image is qualified.
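The patent does not state the calculation formula itself. One plausible instance, sketched under that caveat, treats the quality-qualified probability as a weighted average of per-key-point visibilities (1 minus the occlusion probability); both the form and the default equal weights are assumptions.

```python
def quality_probability(occlusion_probs, weights=None):
    """Hypothetical formula: quality-qualified probability as the
    weighted mean visibility of the face key points."""
    n = len(occlusion_probs)
    if weights is None:
        weights = [1.0 / n] * n  # equal importance per key point
    return sum(w * (1.0 - p) for w, p in zip(weights, occlusion_probs))
```

A technician could instead weight semantically important points (eyes, mouth corners) more heavily, which is exactly the kind of choice such a pre-established formula encodes.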
As another example, when the face key point information determined by the electronic device includes the coordinates of each face key point included in the facial image, a third correspondence table may be stored in advance locally on the electronic device or on a server in remote communication connection with it. The third correspondence table may include the coordinates of the face key points included in a large number of images showing faces, together with the quality-qualified probability of each such image. The electronic device may query, from the third correspondence table, the quality-qualified probability matching the coordinates of the face key points included in the facial image.
In some optional implementations of the present embodiment, when the face key point information includes the probability that each face key point included in the facial image is occluded, the electronic device may determine the quality-qualified probability of the facial image in the following manner: inputting the occlusion probabilities of the face key points included in the facial image into a second probability calculation model trained in advance, to obtain the probability that the quality of the facial image is qualified. Here, the second probability calculation model may be used to characterize the corresponding relationship between the occlusion probabilities of the face key points included in an image including a face and the probability that the quality of the image is qualified.
As an example, the second probability calculation model may be a correspondence table pre-established by a technician based on a large number of statistics, characterizing the corresponding relationship between the occlusion probabilities of the face key points and the quality-qualified probability.
In some optional implementations of the present embodiment, the second probability calculation model may be obtained by training through the following training steps:
First, the electronic device may extract a preset first training sample, which includes the occlusion probabilities of the face key points shown in sample images including faces, together with first data labels characterizing whether the quality of each sample image is qualified. Here, the first data labels may include a data label (for example, the digit "1") characterizing that the quality of a sample image is qualified, and a data label (for example, the digit "0") characterizing that the quality of a sample image is unqualified.
Then, the electronic device may use a machine learning method to train the second probability calculation model based on the first training sample, a preset first classification loss function, and a back-propagation algorithm. Here, the first classification loss function may be used to characterize the degree of difference between the probability output by the second probability calculation model and the first data label. The first classification loss function may be any of various loss functions used for classification (for example, a Hinge loss function or a Softmax loss function). During training, the first classification loss function may constrain the manner and direction in which the convolution kernels are modified, and the goal of training is to minimize the value of the first classification loss function. Thus, the parameters of the second probability calculation model obtained after training are the parameters corresponding to the minimum value of the first classification loss function.
It should be noted that the above-mentioned back-propagation algorithm (Back Propagation Algorithm, BP algorithm) may also be referred to as the error back propagation (Error Back Propagation, BP) algorithm, or simply the error back-propagation algorithm. The learning process of the BP algorithm consists of two stages: the forward propagation of the signal and the backward propagation of the error. In a feedforward network, the input signal enters through the input layer, is computed through the hidden layers, and is output by the output layer; the output value is compared with the label value, and if there is an error, the error is propagated backward from the output layer toward the input layer. In this process, a gradient descent algorithm (for example, stochastic gradient descent) may be used to adjust the neuron weights (for example, the parameters of the convolution kernels in the convolutional layers). Here, the first classification loss function may be used to characterize the error between the output value and the label value.
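The training recipe above (classification loss plus back-propagation) can be sketched on toy data with a single-layer stand-in for the second probability calculation model; in the one-layer case, back-propagation reduces to the plain cross-entropy gradient. The samples, labels, and hyper-parameters below are illustrative only.

```python
import math

def train_second_model(samples, labels, lr=0.5, epochs=200):
    """Logistic stand-in for the second probability calculation model:
    occlusion probabilities in, P(quality qualified) out.  With a
    single layer, back-propagation is just the cross-entropy gradient."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # d(cross-entropy loss)/dz
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict_qualified(x, w, b):
    # Sigmoid of the learned linear score.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

Training drives the loss toward its minimum exactly as the paragraph describes; in the patent's deep-network setting the same gradient is propagated back through every layer.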
Step 206: based on the quality-qualified probability of the facial image, determine whether the quality of the facial image is qualified.
In the present embodiment, after determining the quality-qualified probability of the facial image, the electronic device may determine, based on that probability, whether the quality of the facial image is qualified. As an example, the electronic device may compare the probability with a preset probability threshold (for example, 0.5). If the quality-qualified probability is less than the probability threshold, the electronic device may determine that the quality of the facial image is unqualified; otherwise, it may determine that the quality of the facial image is qualified. It should be noted that the probability threshold may be modified according to actual needs; the present embodiment does not impose any restriction on this.
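The comparison in this step is a one-line check; a minimal sketch with the example threshold of 0.5:

```python
def is_quality_qualified(prob, threshold=0.5):
    """Below the preset threshold -> unqualified; otherwise qualified."""
    return prob >= threshold
```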
Continuing to refer to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for detecting image quality according to the present embodiment. In the application scenario of Fig. 3, the image processing server 301 may obtain, from the connected data storage server 302, an image to be detected 303 and face region information 304 obtained in advance by performing face detection on the image to be detected 303, for indicating the face region in the image to be detected 303. Then, the image processing server 301 may crop, from the image to be detected 303, the face region indicated by the face region information 304, to obtain a facial image 305. Afterwards, the image processing server 301 may input the facial image 305 into a convolutional neural network trained in advance, to obtain image feature information 306. Next, the image processing server 301 may parse the image feature information 306 to determine face key point information 307, and may then determine, based on the face key point information 307, the quality-qualified probability 308 of the facial image 305. Finally, the image processing server 301 may compare the quality-qualified probability 308 with a probability threshold, to obtain a detection result 309 for the quality of the facial image 305.
The method provided by the above embodiment of the present application makes effective use of the extraction of the facial image, which can narrow the detection range and improve the efficiency of image quality detection.
Moreover, a convolutional neural network can generally extract multi-dimensional image feature information from an image. Determining the face key point information based on the image feature information of the facial image extracted by the pre-trained convolutional neural network, and determining the quality-qualified probability of the facial image based on that face key point information, can improve the accuracy of the probability, and thereby the accuracy of the image quality detection result.
With further reference to Fig. 4, a flow 400 of another embodiment of the method for detecting image quality is illustrated. The flow 400 of the method for detecting image quality comprises the following steps:
Step 401: obtain an image to be detected and face region information obtained in advance by performing face detection on the image to be detected, for indicating the face region in the image to be detected.
In the present embodiment, the electronic device on which the method for detecting image quality runs (for example, the image processing server 103 shown in Fig. 1) may obtain, from a connected data storage server (for example, the data storage server 101 shown in Fig. 1) through a wired or wireless connection, the image to be detected and the face region information obtained in advance by performing face detection on the image to be detected, for indicating the face region in the image to be detected. Of course, if the image to be detected and the face region information are stored in advance locally on the electronic device, the electronic device may obtain them locally. It should be pointed out that the face region may be a rectangular region.
Step 402: expand the range of the face region indicated by the face region information to obtain a first face region, and crop the first face region to obtain a facial image.
In the present embodiment, after obtaining the image to be detected and the face region information, the electronic device may expand the range of the face region indicated by the face region information to obtain a first face region, and may then crop the first face region to obtain the facial image.
In the present embodiment, the electronic device may expand the height and width of the face region indicated by the face region information by a preset multiple, or increase them by a preset value, and take the expanded face region as the first face region. Here, the preset multiple may be, for example, 1. Moreover, the height and the width may correspond to the same preset value or to different preset values. For example, the preset value corresponding to the height may be a value equal to the height itself, and the preset value corresponding to the width may likewise be a value equal to the width. The preset multiple and the preset value may be modified according to actual needs; the present embodiment does not impose any restriction on this.
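The expansion described above (example: preset multiple of 1, doubling the height and width) can be sketched as follows. Expanding about the box centre and clipping to the image bounds are assumptions on my part, since the patent leaves those details open.

```python
def expand_face_region(x, y, w, h, img_w, img_h, multiple=1.0):
    """Grow the detected box (x, y, w, h) by `multiple` times its own
    size (multiple=1 doubles height and width), keeping the same
    centre and clipping to the image boundary."""
    new_w = w * (1 + multiple)
    new_h = h * (1 + multiple)
    cx, cy = x + w / 2.0, y + h / 2.0       # centre of the original box
    nx = max(0.0, cx - new_w / 2.0)
    ny = max(0.0, cy - new_h / 2.0)
    new_w = min(new_w, img_w - nx)           # clip to the image
    new_h = min(new_h, img_h - ny)
    return nx, ny, new_w, new_h
```

The clipping keeps the first face region inside the image when the detected face sits near a border, at the cost of a slightly smaller enlargement there.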
Step 403: input the facial image into a convolutional neural network trained in advance to obtain image feature information.
In the present embodiment, after obtaining the facial image, the electronic device may input the facial image into the pre-trained convolutional neural network to obtain image feature information. Here, the convolutional neural network may be used to extract image features. Image feature information may be information characterizing the features of an image, and the features of an image may be its various fundamental elements (for example, color, lines, texture, and so on). For an explanation of the convolutional neural network, reference may be made to the related description of step 203 in the embodiment shown in Fig. 2, which is not repeated here.
Step 404: input the image feature information into a face key point localization model trained in advance to obtain the coordinates of each face key point included in the facial image.
In the present embodiment, after obtaining the image feature information, the electronic device may input the image feature information into the pre-trained face key point localization model, to obtain the coordinates of each face key point included in the facial image. Here, the face key point localization model may be used to characterize the corresponding relationship between the image feature information of an image including a face and the coordinates of the face key points.
It should be noted that the above-mentioned face key point localization model may be a correspondence table pre-established by a technician based on a large number of statistics, characterizing the corresponding relationship between the image feature information of an image including a face and the coordinates of the face key points; it may be obtained by training an existing regression model such as logistic regression (Logistic Regression, LR); or it may be obtained by performing supervised training on an existing deep convolutional neural network using a machine learning method and training samples.
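Among the options listed above, the regression branch is the easiest to sketch: a purely linear stand-in that maps the feature vector to interleaved (x, y) coordinates. A real localization model would of course be trained; the weights in the usage below are hypothetical.

```python
def locate_keypoints(features, weight_rows, biases):
    """Each output coordinate is one linear combination of the image
    features; outputs are paired up as (x, y) per face key point."""
    coords = [sum(w * f for w, f in zip(row, features)) + b
              for row, b in zip(weight_rows, biases)]
    # Interleave: even indices are x coordinates, odd indices are y.
    return list(zip(coords[0::2], coords[1::2]))
```

With identity weights, a two-dimensional feature vector passes straight through as one (x, y) pair, which makes the feature-to-coordinate mapping easy to inspect.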
Step 405: centering on each face key point, extract from the facial image an image block with a preset height and width.
In the present embodiment, after determining the coordinates of each face key point included in the facial image, the electronic device may extract from the facial image, centered on each face key point, an image block with the preset height and width. As an example, a matting-type application may be installed in advance on the electronic device, and the electronic device may use the matting application to cut the image blocks out of the facial image. It should be noted that the height and width of an image block may be the same or different, and both may be modified according to actual needs; the present embodiment does not impose any restriction on this.
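Independently of any matting application, the block-cutting step itself can be sketched on a plain 2-D pixel array. Clamping blocks to the image border is an assumed policy, not one stated in the patent.

```python
def extract_patches(image, keypoints, ph, pw):
    """image: 2-D list (rows of pixel values).  For each keypoint
    (x, y), cut a pw-by-ph block centred on it, shifting the window
    inward when it would overrun the image border."""
    H, W = len(image), len(image[0])
    patches = []
    for x, y in keypoints:
        top = min(max(y - ph // 2, 0), H - ph)
        left = min(max(x - pw // 2, 0), W - pw)
        patches.append([row[left:left + pw] for row in image[top:top + ph]])
    return patches
```

Every block then has exactly the preset height and width, which matters because the third probability calculation model described next expects fixed-size inputs.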
Step 406: input each extracted image block into a third probability calculation model trained in advance to obtain the quality-qualified probability of the facial image.
In the present embodiment, after performing step 405, the electronic device may input each extracted image block into the pre-trained third probability calculation model, to obtain the probability that the quality of the facial image is qualified. Here, the third probability calculation model may be used to characterize the corresponding relationship between the image blocks extracted from an image including a face, each centered on a face key point and having the preset height and width, and the quality-qualified probability.
It should be noted that the third probability calculation model may be obtained by training an existing deep convolutional neural network. The third probability calculation model may include at least one convolutional layer, at least one pooling layer, and at least one fully connected layer. As an example, the third probability calculation model may include 5 convolutional layers, 3 pooling layers, and 1 fully connected layer.
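The patent fixes only the layer counts (5 convolutional, 3 pooling, 1 fully connected), not the kernel sizes or their ordering. The arithmetic sketch below assumes 3x3 convolutions with padding 1 and 2x2 max pools in one plausible interleaving, just to trace the feature-map size that the fully connected layer would receive.

```python
def conv_out(size, kernel, stride=1, pad=0):
    # Standard output-size formula for convolution / pooling windows.
    return (size + 2 * pad - kernel) // stride + 1

def third_model_shapes(inp=48):
    """Trace spatial sizes through an assumed 5-conv / 3-pool stack;
    the final map is what the single fully connected layer flattens."""
    s = inp
    plan = ["conv", "conv", "pool", "conv", "pool", "conv", "conv", "pool"]
    sizes = []
    for layer in plan:
        if layer == "conv":
            s = conv_out(s, 3, 1, 1)   # 3x3, pad 1: size preserved
        else:
            s = conv_out(s, 2, 2, 0)   # 2x2 pool: size halved
        sizes.append(s)
    return sizes
```

For a hypothetical 48x48 image block, the three pools halve the map three times, so the fully connected layer would see 6x6 feature maps.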
Here, the third probability calculation model may be obtained by training through the following training steps:
First, the electronic device may extract a preset second training sample, which includes image blocks extracted from sample images showing faces, each centered on a face key point and having the preset height and width, together with second data labels characterizing whether the quality of each sample image is qualified. Here, the second data labels may include a data label (for example, the digit "1") characterizing that the quality of a sample image is qualified, and a data label (for example, the digit "0") characterizing that the quality of a sample image is unqualified. A sample image of unqualified quality may, for example, exhibit at least one of the following situations: the face is blurred, at least one part of the face is occluded, the face is distorted, and so on.
Then, the electronic device may use a machine learning method to train the third probability calculation model based on the second training sample, a preset second classification loss function, and a back-propagation algorithm. Here, the second classification loss function may be used to characterize the degree of difference between the probability output by the third probability calculation model and the second data label. The second classification loss function may be any of various loss functions used for classification (for example, a Hinge loss function or a Softmax loss function). During training, the second classification loss function may constrain the manner and direction in which the convolution kernels are modified, and the goal of training is to minimize the value of the second classification loss function. Thus, the parameters of the third probability calculation model obtained after training are the parameters corresponding to the minimum value of the second classification loss function.
It should be noted that the back-propagation algorithm may also be referred to as the error back-propagation algorithm. Its learning process consists of two stages: the forward propagation of the signal and the backward propagation of the error. In a feedforward network, the input signal enters through the input layer, is computed through the hidden layers, and is output by the output layer; the output value is compared with the label value, and if there is an error, the error is propagated backward from the output layer toward the input layer. In this process, a gradient descent algorithm (for example, stochastic gradient descent) may be used to adjust the neuron weights (for example, the parameters of the convolution kernels in the convolutional layers). Here, the second classification loss function may be used to characterize the error between the output value and the label value.
Step 407: based on the quality-qualified probability of the facial image, determine whether the quality of the facial image is qualified.
In the present embodiment, after determining the quality-qualified probability of the facial image, the electronic device may determine, based on that probability, whether the quality of the facial image is qualified. As an example, the electronic device may compare the probability with a preset probability threshold (for example, 0.5). If the quality-qualified probability is less than the probability threshold, the electronic device may determine that the quality of the facial image is unqualified; otherwise, it may determine that the quality of the facial image is qualified. It should be noted that the probability threshold may be modified according to actual needs; the present embodiment does not impose any restriction on this.
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the method for detecting image quality in the present embodiment highlights the step of expanding the range of the face region in the image to be detected (i.e., step 402), as well as the steps of determining the quality-qualified probability of the facial image (i.e., steps 404-406). Thus, the scheme described in the present embodiment can enlarge the coverage area of the facial image by expanding the range of the face region, so that the facial image includes as many face key points as possible; determining the quality-qualified probability of the facial image through steps 404-406 can give the probability a higher degree of accuracy.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for detecting image quality. The apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may specifically be applied to various electronic devices.
As shown in Fig. 5, the apparatus 500 for detecting image quality of the present embodiment includes: an acquiring unit 501, an extraction unit 502, an input unit 503, a first determining unit 504, a second determining unit 505, and a third determining unit 506. The acquiring unit 501 is configured to obtain an image to be detected and face region information obtained in advance by performing face detection on the image to be detected, for indicating the face region in the image to be detected; the extraction unit 502 is configured to extract a facial image from the image to be detected based on the face region information; the input unit 503 is configured to input the facial image into a convolutional neural network trained in advance to obtain image feature information, where the convolutional neural network is used to extract image features; the first determining unit 504 is configured to parse the image feature information to determine face key point information; the second determining unit 505 is configured to determine, based on the face key point information, the probability that the quality of the facial image is qualified; and the third determining unit 506 is configured to determine, based on the probability, whether the quality of the facial image is qualified.
In the present embodiment, for the specific processing of the acquiring unit 501, the extraction unit 502, the input unit 503, the first determining unit 504, the second determining unit 505, and the third determining unit 506 of the apparatus 500 for detecting image quality, and the technical effects brought thereby, reference may be made to the related descriptions of steps 201, 202, 203, 204, 205, and 206 in the embodiment corresponding to Fig. 2, which are not repeated here.
In some optional implementations of the present embodiment, the face key point information may include the probability that each face key point included in the facial image is occluded; and the first determining unit 504 may include: a first input subunit (not shown), configured to input the image feature information into a first probability calculation model trained in advance, to obtain the probability that each face key point included in the facial image is occluded, where the first probability calculation model may be used to characterize the corresponding relationship between the image feature information of an image including a face and the probability that a face key point is occluded.
In some optional implementations of the present embodiment, the face key point information may include the coordinates of each face key point included in the facial image; and the first determining unit 504 may include: a second input subunit (not shown), configured to input the image feature information into a face key point localization model trained in advance, to obtain the coordinates of each face key point included in the facial image, where the face key point localization model may be used to characterize the corresponding relationship between the image feature information of an image including a face and the coordinates of the face key points.
In some optional implementations of the present embodiment, the second determining unit 505 may include: a third input subunit (not shown), configured to input the occlusion probabilities of the face key points included in the facial image into a second probability calculation model trained in advance, to obtain the probability that the quality of the facial image is qualified, where the second probability calculation model may be used to characterize the corresponding relationship between the occlusion probabilities of the face key points included in an image including a face and the quality-qualified probability.
In some optional implementations of the present embodiment, the second determining unit 505 may include: an extraction subunit (not shown), configured to extract from the facial image, centered on each face key point, an image block with a preset height and width; and a fourth input subunit (not shown), configured to input each extracted image block into a third probability calculation model trained in advance, to obtain the probability that the quality of the facial image is qualified, where the third probability calculation model may be used to characterize the corresponding relationship between the image blocks extracted from an image including a face, each centered on a face key point and having the preset height and width, and the quality-qualified probability.
In some optional implementations of the present embodiment, the second probability calculation model may be obtained by training through the following training steps: extracting a preset first training sample, which includes the occlusion probabilities of the face key points shown in sample images including faces, together with first data labels characterizing whether the quality of each sample image is qualified; and using a machine learning method to train the second probability calculation model based on the first training sample, a preset first classification loss function, and a back-propagation algorithm, where the first classification loss function is used to characterize the degree of difference between the probability output by the second probability calculation model and the first data label.
In some optional implementations of the present embodiment, the third probability calculation model may be obtained by training through the following training steps: extracting a preset second training sample, which includes image blocks extracted from sample images showing faces, each centered on a face key point and having the preset height and width, together with second data labels characterizing whether the quality of each sample image is qualified; and using a machine learning method to train the third probability calculation model based on the second training sample, a preset second classification loss function, and a back-propagation algorithm, where the second classification loss function may be used to characterize the degree of difference between the probability output by the third probability calculation model and the second data label.
In some optional implementations of the present embodiment, the extraction unit 502 may include: an expansion subunit (not shown), configured to expand the range of the face region indicated by the face region information to obtain a first face region; and a cropping subunit (not shown), configured to crop the first face region to obtain the facial image.
In some optional implementations of the present embodiment, the face region may be a rectangular region; and the expansion subunit may include: an expansion module (not shown), configured to expand the height and width of the face region indicated by the face region information by a preset multiple, or increase them by a preset value.
The apparatus provided by the above embodiment of the present application makes effective use of the extraction of the facial image, which can narrow the detection range and improve the efficiency of image quality detection.
Moreover, a convolutional neural network can generally extract multi-dimensional image feature information from an image. Determining the face key point information based on the image feature information of the facial image extracted by the pre-trained convolutional neural network, and determining the quality-qualified probability of the facial image based on that face key point information, can improve the accuracy of the probability, and thereby the accuracy of the image quality detection result.
Referring now to Fig. 6, a structural schematic diagram of a computer system 600 suitable for implementing the electronic device of the embodiments of the present application is shown. The electronic device shown in Fig. 6 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage portion 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), and the like, as well as a speaker; a storage portion 608 including a hard disk and the like; and a communication portion 609 including a network interface card such as a LAN card, a modem, and the like. The communication portion 609 performs communication processing via a network such as the Internet. A driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disc, a magneto-optical disk, or a semiconductor memory, is mounted on the driver 610 as needed, so that a computer program read from it can be installed into the storage portion 608 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 609, and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above-mentioned functions defined in the system of the present application are performed.
It should be noted that the computer-readable medium shown in the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of the above. In the present application, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by, or in connection with, an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any appropriate combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to: wireless means, electric wires, optical cables, RF, and the like, or any appropriate combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate the architectures, functions, and operations that may be implemented by the systems, methods, and computer program products according to the various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, and the module, program segment, or portion of code contains one or more executable instructions for implementing a specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should further be noted that each block of the block diagrams and flowcharts, and combinations of blocks in the block diagrams and flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by means of software or by means of hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor comprising an acquiring unit, an extraction unit, an input unit, a first determining unit, a second determining unit, and a third determining unit. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves. For example, the acquiring unit may also be described as "a unit for obtaining an image to be detected and face region information, obtained after face detection is performed on the image to be detected in advance, for indicating a face region in the image to be detected".
As another aspect, the present application further provides a computer-readable medium, which may be included in the electronic device described in the above embodiments, or may exist alone without being assembled into the electronic device. The above computer-readable medium carries one or more programs. When the one or more programs are executed by the electronic device, the electronic device is caused to: obtain an image to be detected and face region information, obtained after face detection is performed on the image to be detected in advance, for indicating a face region in the image to be detected; extract a face image from the image to be detected based on the face region information; input the face image into a pre-trained convolutional neural network to obtain image feature information, wherein the convolutional neural network is used for extracting image features; parse the image feature information to determine face key point information; determine, based on the face key point information, a probability that the quality of the face image is qualified; and determine, based on the probability, whether the quality of the face image is qualified.
The above description is only a preferred embodiment of the present application and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to the technical solutions formed by the specific combinations of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the foregoing inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the present application.
Claims (20)
- 1. A method for detecting image quality, characterized in that the method comprises:
obtaining an image to be detected and face region information, obtained after face detection is performed on the image to be detected in advance, for indicating a face region in the image to be detected;
extracting a face image from the image to be detected based on the face region information;
inputting the face image into a pre-trained convolutional neural network to obtain image feature information, wherein the convolutional neural network is used for extracting image features;
parsing the image feature information to determine face key point information;
determining, based on the face key point information, a probability that the quality of the face image is qualified; and
determining, based on the probability, whether the quality of the face image is qualified.
- 2. The method according to claim 1, characterized in that the face key point information includes a probability that each face key point included in the face image is occluded; and
the parsing the image feature information to determine face key point information includes:
inputting the image feature information into a pre-trained first probability calculation model to obtain the probability that each face key point included in the face image is occluded, wherein the first probability calculation model is used for characterizing a correspondence between the image feature information of an image including a face and the probability that a face key point is occluded.
- 3. The method according to claim 1, characterized in that the face key point information includes coordinates of each face key point included in the face image; and
the parsing the image feature information to determine face key point information includes:
inputting the image feature information into a pre-trained face key point location model to obtain the coordinates of each face key point included in the face image, wherein the face key point location model is used for characterizing a correspondence between the image feature information of an image including a face and the coordinates of face key points.
- 4. The method according to claim 2, characterized in that the determining, based on the face key point information, a probability that the quality of the face image is qualified includes:
inputting the probability that each face key point included in the face image is occluded into a pre-trained second probability calculation model to obtain the probability that the quality of the face image is qualified, wherein the second probability calculation model is used for characterizing a correspondence between the probabilities that the face key points included in an image including a face are occluded and the probability that the quality is qualified.
- 5. The method according to claim 3, characterized in that the determining, based on the face key point information, a probability that the quality of the face image is qualified includes:
extracting, from the face image, image patches with preset heights and widths, centered on the face key points;
inputting each extracted image patch into a pre-trained third probability calculation model to obtain the probability that the quality of the face image is qualified, wherein the third probability calculation model is used for characterizing a correspondence between the image patches, extracted from an image including a face, centered on face key points and having preset heights and widths, and the probability that the quality is qualified.
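The patch-extraction step of claim 5, cropping a fixed-size window around each key point, might look like the following sketch. The clamp-to-border behavior for key points near the image edge is an assumption; the claim does not specify how boundary patches are handled.

```python
def extract_patches(image, keypoints, patch_h=3, patch_w=3):
    """Extract patch_h x patch_w windows centered on each (x, y) key point.

    `image` is a 2-D list of pixel values. Windows are shifted so they
    stay fully inside the image (an assumption; the claim leaves the
    edge case open).
    """
    img_h, img_w = len(image), len(image[0])
    patches = []
    for x, y in keypoints:
        # Center the window on (x, y), then clamp it into the image bounds.
        top = min(max(y - patch_h // 2, 0), img_h - patch_h)
        left = min(max(x - patch_w // 2, 0), img_w - patch_w)
        patches.append([row[left:left + patch_w]
                        for row in image[top:top + patch_h]])
    return patches
```

In the claimed pipeline the resulting patches would then be fed, one per key point, to the third probability calculation model.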
- 6. The method according to claim 4, characterized in that the second probability calculation model is trained through the following training steps:
extracting a preset first training sample, including the probability that each face key point in a sample image showing a face is occluded, and a first data label for characterizing whether the quality of the sample image is qualified;
training, using a machine learning method, the second probability calculation model based on the first training sample, a preset first classification loss function, and a back-propagation algorithm, wherein the first classification loss function is used for characterizing a degree of difference between the probability output by the second probability calculation model and the first data label.
- 7. The method according to claim 5, characterized in that the third probability calculation model is trained through the following training steps:
extracting a preset second training sample, including image patches, extracted from a sample image showing a face, centered on face key points and having preset heights and widths, and a second data label for characterizing whether the quality of the sample image is qualified;
training, using a machine learning method, the third probability calculation model based on the second training sample, a preset second classification loss function, and a back-propagation algorithm, wherein the second classification loss function is used for characterizing a degree of difference between the probability output by the third probability calculation model and the second data label.
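The training procedure of claims 6 and 7 (samples paired with a qualified/unqualified label, fitted with a classification loss and back-propagation) can be illustrated with a tiny logistic-regression stand-in for the second probability calculation model, taking per-key-point occlusion probabilities as input. The model form, learning rate, and epoch count are assumptions; the claims only require a classification loss and back-propagation, not this particular model.

```python
import math


def train_second_model(samples, labels, lr=0.5, epochs=200):
    """Fit w, b so that sigmoid(w . occlusion_probs + b) predicts the
    qualified label, minimizing the cross-entropy classification loss by
    gradient descent (back-propagation collapsed to a single layer).
    """
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted qualified probability
            g = p - y                        # gradient of the cross-entropy loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b


def predict_qualified(w, b, x):
    """Qualified probability for a new vector of occlusion probabilities."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

The third probability calculation model of claim 7 would be trained the same way, except that its inputs are the key-point-centered image patches rather than occlusion probabilities.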
- 8. The method according to claim 1, characterized in that the extracting a face image from the image to be detected based on the face region information includes:
expanding the range of the face region indicated by the face region information to obtain a first face region;
cropping the first face region to obtain the face image.
- 9. The method according to claim 8, characterized in that the face region is a rectangular region; and
the expanding the range of the face region indicated by the face region information includes:
expanding the height and width of the face region indicated by the face region information by a preset multiple, or increasing them by a preset value.
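A sketch of the region expansion of claims 8 and 9 follows. Two details are assumptions beyond what the claims specify: the rectangle is represented as (x, y, width, height), and the expansion is performed about the rectangle's center.

```python
def expand_region(x, y, w, h, multiple=None, increment=None):
    """Expand a rectangular face region's height and width, either by a
    preset multiple or by a preset value, keeping the center fixed
    (the centering is an assumption, not stated in the claims)."""
    if multiple is not None:
        new_w, new_h = w * multiple, h * multiple
    elif increment is not None:
        new_w, new_h = w + increment, h + increment
    else:
        raise ValueError("provide either a multiple or an increment")
    # Shift the corner so the expanded box shares the original center.
    return (x - (new_w - w) / 2, y - (new_h - h) / 2, new_w, new_h)
```

Expanding the detector's tight bounding box this way gives the cropped face image some surrounding context before it is fed to the convolutional network.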
- 10. An apparatus for detecting image quality, characterized in that the apparatus comprises:
an acquiring unit, configured to obtain an image to be detected and face region information, obtained after face detection is performed on the image to be detected in advance, for indicating a face region in the image to be detected;
an extraction unit, configured to extract a face image from the image to be detected based on the face region information;
an input unit, configured to input the face image into a pre-trained convolutional neural network to obtain image feature information, wherein the convolutional neural network is used for extracting image features;
a first determining unit, configured to parse the image feature information to determine face key point information;
a second determining unit, configured to determine, based on the face key point information, a probability that the quality of the face image is qualified; and
a third determining unit, configured to determine, based on the probability, whether the quality of the face image is qualified.
- 11. The apparatus according to claim 10, characterized in that the face key point information includes a probability that each face key point included in the face image is occluded; and
the first determining unit includes:
a first input subunit, configured to input the image feature information into a pre-trained first probability calculation model to obtain the probability that each face key point included in the face image is occluded, wherein the first probability calculation model is used for characterizing a correspondence between the image feature information of an image including a face and the probability that a face key point is occluded.
- 12. The apparatus according to claim 10, characterized in that the face key point information includes coordinates of each face key point included in the face image; and
the first determining unit includes:
a second input subunit, configured to input the image feature information into a pre-trained face key point location model to obtain the coordinates of each face key point included in the face image, wherein the face key point location model is used for characterizing a correspondence between the image feature information of an image including a face and the coordinates of face key points.
- 13. The apparatus according to claim 11, characterized in that the second determining unit includes:
a third input subunit, configured to input the probability that each face key point included in the face image is occluded into a pre-trained second probability calculation model to obtain the probability that the quality of the face image is qualified, wherein the second probability calculation model is used for characterizing a correspondence between the probabilities that the face key points included in an image including a face are occluded and the probability that the quality is qualified.
- 14. The apparatus according to claim 12, characterized in that the second determining unit includes:
an extraction subunit, configured to extract, from the face image, image patches with preset heights and widths, centered on the face key points;
a fourth input subunit, configured to input each extracted image patch into a pre-trained third probability calculation model to obtain the probability that the quality of the face image is qualified, wherein the third probability calculation model is used for characterizing a correspondence between the image patches, extracted from an image including a face, centered on face key points and having preset heights and widths, and the probability that the quality is qualified.
- 15. The apparatus according to claim 13, characterized in that the second probability calculation model is trained through the following training steps:
extracting a preset first training sample, including the probability that each face key point in a sample image showing a face is occluded, and a first data label for characterizing whether the quality of the sample image is qualified;
training, using a machine learning method, the second probability calculation model based on the first training sample, a preset first classification loss function, and a back-propagation algorithm, wherein the first classification loss function is used for characterizing a degree of difference between the probability output by the second probability calculation model and the first data label.
- 16. The apparatus according to claim 14, characterized in that the third probability calculation model is trained through the following training steps:
extracting a preset second training sample, including image patches, extracted from a sample image showing a face, centered on face key points and having preset heights and widths, and a second data label for characterizing whether the quality of the sample image is qualified;
training, using a machine learning method, the third probability calculation model based on the second training sample, a preset second classification loss function, and a back-propagation algorithm, wherein the second classification loss function is used for characterizing a degree of difference between the probability output by the third probability calculation model and the second data label.
- 17. The apparatus according to claim 10, characterized in that the extraction unit includes:
an expansion subunit, configured to expand the range of the face region indicated by the face region information to obtain a first face region;
a cropping subunit, configured to crop the first face region to obtain the face image.
- 18. The apparatus according to claim 17, characterized in that the face region is a rectangular region; and
the expansion subunit includes:
an extension module, configured to expand the height and width of the face region indicated by the face region information by a preset multiple, or increase them by a preset value.
- 19. An electronic device, characterized by comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-9.
- 20. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710911138.0A CN107590807A (en) | 2017-09-29 | 2017-09-29 | Method and apparatus for detection image quality |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710911138.0A CN107590807A (en) | 2017-09-29 | 2017-09-29 | Method and apparatus for detection image quality |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107590807A true CN107590807A (en) | 2018-01-16 |
Family
ID=61052278
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710911138.0A Pending CN107590807A (en) | 2017-09-29 | 2017-09-29 | Method and apparatus for detection image quality |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107590807A (en) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108537787A (en) * | 2018-03-30 | 2018-09-14 | 中国科学院半导体研究所 | A kind of quality judging method of facial image |
CN108876858A (en) * | 2018-07-06 | 2018-11-23 | 北京字节跳动网络技术有限公司 | Method and apparatus for handling image |
CN108986169A (en) * | 2018-07-06 | 2018-12-11 | 北京字节跳动网络技术有限公司 | Method and apparatus for handling image |
CN109117736A (en) * | 2018-07-19 | 2019-01-01 | 厦门美图之家科技有限公司 | A kind of method and calculating equipment of judgement face visibility of a point |
CN109350100A (en) * | 2018-09-27 | 2019-02-19 | 上海联影医疗科技有限公司 | Medical imaging procedure, medical imaging devices and computer readable storage medium |
CN110046633A (en) * | 2018-11-23 | 2019-07-23 | 阿里巴巴集团控股有限公司 | A kind of data quality checking method and device |
CN110046652A (en) * | 2019-03-18 | 2019-07-23 | 深圳神目信息技术有限公司 | Face method for evaluating quality, device, terminal and readable medium |
CN110309768A (en) * | 2019-06-28 | 2019-10-08 | 上海眼控科技股份有限公司 | The staff's detection method and equipment of car test station |
CN110349152A (en) * | 2019-07-16 | 2019-10-18 | 广州图普网络科技有限公司 | Method for detecting quality of human face image and device |
CN110363159A (en) * | 2019-07-17 | 2019-10-22 | 腾讯科技(深圳)有限公司 | Image-recognizing method, device, electronic equipment and computer readable storage medium |
CN110570400A (en) * | 2019-08-19 | 2019-12-13 | 河北极目楚天微电子科技有限公司 | Information processing method and device for chip 3D packaging detection |
CN110879981A (en) * | 2019-11-14 | 2020-03-13 | 深圳市华付信息技术有限公司 | Method and device for evaluating quality of key points of human face, computer equipment and storage medium |
CN111488774A (en) * | 2019-01-29 | 2020-08-04 | 北京搜狗科技发展有限公司 | Image processing method and device for image processing |
CN112347911A (en) * | 2020-11-05 | 2021-02-09 | 北京达佳互联信息技术有限公司 | Method and device for adding special effects of fingernails, electronic equipment and storage medium |
CN112686847A (en) * | 2020-12-23 | 2021-04-20 | 平安银行股份有限公司 | Identity card image shooting quality evaluation method and device, computer equipment and medium |
CN113011345A (en) * | 2021-03-11 | 2021-06-22 | 百度在线网络技术(北京)有限公司 | Image quality detection method and device, electronic equipment and readable storage medium |
CN113011356A (en) * | 2021-03-26 | 2021-06-22 | 杭州朗和科技有限公司 | Face feature detection method, device, medium and electronic equipment |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102201061A (en) * | 2011-06-24 | 2011-09-28 | 常州锐驰电子科技有限公司 | Intelligent safety monitoring system and method based on multilevel filtering face recognition |
CN102262727A (en) * | 2011-06-24 | 2011-11-30 | 常州锐驰电子科技有限公司 | Method for monitoring face image quality at client acquisition terminal in real time |
US8254647B1 (en) * | 2012-04-16 | 2012-08-28 | Google Inc. | Facial image quality assessment |
CN103824068A (en) * | 2014-03-19 | 2014-05-28 | 上海看看智能科技有限公司 | Human face payment authentication system and method |
CN106503614A (en) * | 2016-09-14 | 2017-03-15 | 厦门幻世网络科技有限公司 | A kind of photo acquisition methods and device |
CN106803067A (en) * | 2016-12-28 | 2017-06-06 | 浙江大华技术股份有限公司 | A kind of quality of human face image appraisal procedure and device |
- 2017-09-29: CN CN201710911138.0A patent application filed as CN107590807A; status: Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102201061A (en) * | 2011-06-24 | 2011-09-28 | 常州锐驰电子科技有限公司 | Intelligent safety monitoring system and method based on multilevel filtering face recognition |
CN102262727A (en) * | 2011-06-24 | 2011-11-30 | 常州锐驰电子科技有限公司 | Method for monitoring face image quality at client acquisition terminal in real time |
US8254647B1 (en) * | 2012-04-16 | 2012-08-28 | Google Inc. | Facial image quality assessment |
CN103824068A (en) * | 2014-03-19 | 2014-05-28 | 上海看看智能科技有限公司 | Human face payment authentication system and method |
CN106503614A (en) * | 2016-09-14 | 2017-03-15 | 厦门幻世网络科技有限公司 | A kind of photo acquisition methods and device |
CN106803067A (en) * | 2016-12-28 | 2017-06-06 | 浙江大华技术股份有限公司 | A kind of quality of human face image appraisal procedure and device |
Non-Patent Citations (3)
Title |
---|
KAIPENG ZHANG ET AL: "Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks", IEEE Signal Processing Letters * |
VIGNESH S ET AL: "Face Image Quality Assessment for Face Selection in Surveillance Video using Convolutional Neural Networks", 2015 IEEE Global Conference on Signal and Information Processing * |
XAVIER P. BURGOS-ARTIZZU ET AL: "Robust face landmark estimation under occlusion", 2013 IEEE International Conference on Computer Vision * |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108537787A (en) * | 2018-03-30 | 2018-09-14 | 中国科学院半导体研究所 | A kind of quality judging method of facial image |
CN108876858A (en) * | 2018-07-06 | 2018-11-23 | 北京字节跳动网络技术有限公司 | Method and apparatus for handling image |
CN108986169A (en) * | 2018-07-06 | 2018-12-11 | 北京字节跳动网络技术有限公司 | Method and apparatus for handling image |
CN109117736A (en) * | 2018-07-19 | 2019-01-01 | 厦门美图之家科技有限公司 | A kind of method and calculating equipment of judgement face visibility of a point |
CN109117736B (en) * | 2018-07-19 | 2020-11-06 | 厦门美图之家科技有限公司 | Method and computing device for judging visibility of face points |
CN109350100A (en) * | 2018-09-27 | 2019-02-19 | 上海联影医疗科技有限公司 | Medical imaging procedure, medical imaging devices and computer readable storage medium |
CN110046633A (en) * | 2018-11-23 | 2019-07-23 | 阿里巴巴集团控股有限公司 | A kind of data quality checking method and device |
CN110046633B (en) * | 2018-11-23 | 2023-05-02 | 创新先进技术有限公司 | Data quality detection method and device |
CN111488774A (en) * | 2019-01-29 | 2020-08-04 | 北京搜狗科技发展有限公司 | Image processing method and device for image processing |
CN110046652A (en) * | 2019-03-18 | 2019-07-23 | 深圳神目信息技术有限公司 | Face method for evaluating quality, device, terminal and readable medium |
CN110309768A (en) * | 2019-06-28 | 2019-10-08 | 上海眼控科技股份有限公司 | The staff's detection method and equipment of car test station |
CN110349152A (en) * | 2019-07-16 | 2019-10-18 | 广州图普网络科技有限公司 | Method for detecting quality of human face image and device |
CN110363159A (en) * | 2019-07-17 | 2019-10-22 | 腾讯科技(深圳)有限公司 | Image-recognizing method, device, electronic equipment and computer readable storage medium |
CN110570400A (en) * | 2019-08-19 | 2019-12-13 | 河北极目楚天微电子科技有限公司 | Information processing method and device for chip 3D packaging detection |
CN110570400B (en) * | 2019-08-19 | 2022-11-11 | 河北极目楚天微电子科技有限公司 | Information processing method and device for chip 3D packaging detection |
CN110879981A (en) * | 2019-11-14 | 2020-03-13 | 深圳市华付信息技术有限公司 | Method and device for evaluating quality of key points of human face, computer equipment and storage medium |
CN112347911A (en) * | 2020-11-05 | 2021-02-09 | 北京达佳互联信息技术有限公司 | Method and device for adding special effects of fingernails, electronic equipment and storage medium |
CN112686847A (en) * | 2020-12-23 | 2021-04-20 | 平安银行股份有限公司 | Identity card image shooting quality evaluation method and device, computer equipment and medium |
CN112686847B (en) * | 2020-12-23 | 2024-05-14 | 平安银行股份有限公司 | Identification card image shooting quality evaluation method and device, computer equipment and medium |
CN113011345A (en) * | 2021-03-11 | 2021-06-22 | 百度在线网络技术(北京)有限公司 | Image quality detection method and device, electronic equipment and readable storage medium |
CN113011345B (en) * | 2021-03-11 | 2024-03-05 | 百度在线网络技术(北京)有限公司 | Image quality detection method, image quality detection device, electronic equipment and readable storage medium |
CN113011356A (en) * | 2021-03-26 | 2021-06-22 | 杭州朗和科技有限公司 | Face feature detection method, device, medium and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107590807A (en) | Method and apparatus for detection image quality | |
CN107679490B (en) | Method and apparatus for detection image quality | |
CN108038469B (en) | Method and apparatus for detecting human body | |
CN107609536A (en) | Information generating method and device | |
CN107590482A (en) | information generating method and device | |
CN107578034A (en) | information generating method and device | |
CN107644209A (en) | Method for detecting human face and device | |
CN108898185A (en) | Method and apparatus for generating image recognition model | |
CN107168952A (en) | Information generating method and device based on artificial intelligence | |
CN107622252A (en) | information generating method and device | |
CN107578017A (en) | Method and apparatus for generating image | |
CN109325541A (en) | Method and apparatus for training pattern | |
CN108229575A (en) | For detecting the method and apparatus of target | |
CN107908789A (en) | Method and apparatus for generating information | |
CN108229485A (en) | For testing the method and apparatus of user interface | |
CN108038880A (en) | Method and apparatus for handling image | |
CN109063587A (en) | data processing method, storage medium and electronic equipment | |
CN107609506A (en) | Method and apparatus for generating image | |
CN110443222A (en) | Method and apparatus for training face's critical point detection model | |
CN107679466A (en) | Information output method and device | |
CN108509892A (en) | Method and apparatus for generating near-infrared image | |
CN109472264A (en) | Method and apparatus for generating object detection model | |
CN108491808A (en) | Method and device for obtaining information | |
CN108509921A (en) | Method and apparatus for generating information | |
CN107729928A (en) | Information acquisition method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||