CN103914676B - Method and apparatus for use in face recognition - Google Patents

Method and apparatus for use in face recognition

Info

Publication number
CN103914676B
CN103914676B CN201210592215.8A CN201210592215A
Authority
CN
China
Prior art keywords
face
user images
image
feature point
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210592215.8A
Other languages
Chinese (zh)
Other versions
CN103914676A (en)
Inventor
李晓燕
李鹏
胡光龙
陈刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Yixian Advanced Technology Co., Ltd.
Original Assignee
Hangzhou Langhe Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Langhe Technology Co Ltd filed Critical Hangzhou Langhe Technology Co Ltd
Priority to CN201210592215.8A priority Critical patent/CN103914676B/en
Publication of CN103914676A publication Critical patent/CN103914676A/en
Application granted granted Critical
Publication of CN103914676B publication Critical patent/CN103914676B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Processing (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present invention provide a method for use in face recognition. The method includes: obtaining facial feature point coordinates from a user image by facial feature point detection; calculating the angle difference between the face and the horizontal direction according to the feature point coordinates, and rotating the user image so that the angle difference between the face and the horizontal direction meets a preset standard angle; calculating the size ratio between the face and a preset standard face according to the feature point coordinates, and scaling the user image according to the size ratio; and, according to the positions of the facial feature points in the user image, cropping the rotated and scaled user image to a standard region size with the facial feature points at standard positions within the standard region. By preprocessing user images in this way, aligned face images of uniform size are obtained, the facial features of the same user become more nearly consistent, and the influence of low-quality images on face recognition is eliminated. In addition, an apparatus for use in face recognition is also provided.

Description

Method and apparatus for use in face recognition
Technical field
Embodiments of the present invention relate to the field of identity recognition and, more specifically, to a method and apparatus for use in face recognition.
Background
This section is intended to provide background or context for the embodiments of the present invention recited in the claims. The description herein may include concepts that could be explored, but not necessarily concepts that have previously been conceived or explored. Therefore, unless otherwise indicated herein, the content described in this section is not prior art to the description and claims of this application, and is not admitted to be prior art merely by inclusion in this section.
In daily life, the need to identify people is widespread across all trades and professions; fields such as financial services, customs and immigration, and national security all require frequent verification of a person's identity. Conventional means of identification include signatures, passwords and manual photo comparison, but each has its drawbacks: signatures are easily forged, passwords may be stolen, and manual photo comparison is time-consuming and labor-intensive. With the development of science and technology, the advantages of biometrics in the field of identification have become increasingly apparent, and face recognition is one of the more rapidly developing research directions of recent years.
Face recognition determines identity by analyzing a user's facial information. Some face recognition technologies already exist in the prior art, for example face-based attendance applications, which collect user images through a camera; when a face is detected, facial features are extracted from the user image and compared with the feature information stored in a database, thereby achieving face recognition of the user.
Summary of the Invention
However, the environments in which face recognition is applied in practice are relatively complex and the quality of the collected user images is unstable. As a result, the features that the prior art extracts from user images cannot be guaranteed to be consistent, and cannot meet the requirements of face recognition applications.
Therefore, in the prior art, performing face recognition based on user images of unstable quality is a very troublesome process.
Accordingly, an improved technique for use in face recognition is highly desirable, in order to eliminate the influence of low-quality user images and ensure that only high-quality user images enter the face recognition module, thereby reducing recognition difficulty and improving recognition accuracy.
In this context, embodiments of the present invention are expected to provide a method and apparatus for use in face recognition.
In a first aspect of embodiments of the present invention, there is provided a method for use in face recognition, which may include, for example:
obtaining facial feature point coordinates from a user image by facial feature point detection;
calculating the angle difference between the face and the horizontal direction according to the facial feature point coordinates, and rotating the user image so that the angle difference between the face and the horizontal direction meets a preset standard angle;
calculating the size ratio between the face and a preset standard face according to the facial feature point coordinates, and scaling the user image according to the size ratio;
according to the positions of the facial feature points in the user image, cropping the rotated and scaled user image to a standard region size, with the facial feature points at standard positions within the standard region.
Optionally, the facial feature point coordinates may be left-eye feature point coordinates and right-eye feature point coordinates.
Optionally, calculating the angle difference between the face and the horizontal direction according to the facial feature point coordinates may specifically be calculating the angle difference between the line connecting the two feature points and the horizontal direction according to the left-eye and right-eye feature point coordinates;
rotating the user image so that the angle difference between the face and the horizontal direction meets the preset standard angle may specifically be rotating the user image so that the angle difference between the line connecting the two feature points and the horizontal direction is zero degrees.
Optionally, calculating the size ratio between the face and the preset standard face according to the facial feature point coordinates may specifically be calculating the ratio of the distance between the two feature points to a preset first standard distance according to the left-eye and right-eye feature point coordinates.
Optionally, cropping the rotated and scaled user image to the standard region size with the facial feature points at standard positions within the standard region, according to the positions of the facial feature points in the user image, may specifically be cropping the rotated and scaled user image to the standard region size with the two feature points at standard positions within the standard region, according to the positions of the two feature points in the user image;
the outer edges of the user image being of preset standard size may specifically be the height and width of the user image being of preset standard size.
Optionally, the user image may also be obtained by the following steps:
extracting image features of the user image;
judging whether the image features are within a standard threshold range;
and if so, obtaining the user image.
Optionally, the image features may specifically be the gray-level histograms of the left and right halves of the face in the user image;
the standard threshold range may specifically be an illumination threshold range.
Optionally, extracting the image features of the user image may specifically be calculating an image quality evaluation index of the user image using a gradient operator;
the standard threshold range may specifically be an image quality evaluation index threshold range.
Optionally, the image features may specifically be the light-dark distribution ratio of the gray-level histogram of the face image;
the standard threshold range may specifically be a standard ratio threshold range.
Optionally, before judging whether the image features are within the standard threshold range, the method may also include: judging whether the user image is used for registration or for authentication;
if for registration, judging whether the image features are within the standard threshold range may specifically be judging whether the image features are within a first standard threshold range for registration;
if for authentication, judging whether the image features are within the standard threshold range may specifically be judging whether the image features are within a second standard threshold range for authentication.
Optionally, the method may also include:
performing a gamma transform on the cropped image;
and filtering out high- and low-frequency components with a filter to obtain an updated user image.
Optionally, the method may also include:
covering the cropped user image with a standard face template,
and cutting out the portion of the covered user image within the effective region of the preset template as the updated cropped user image.
Optionally, the method may also include: extracting the Gabor features, LBP features and HOG features of the cropped user image as the facial feature information.
Optionally, the method may also include: selecting Gabor features, LBP features and HOG features as the facial feature information using the AdaBoost algorithm.
Optionally, the method may also include: obtaining user registration information and judging whether the state of the user registration information meets an update condition; if the condition is met, replacing the original facial feature information contained in the user registration information with the facial feature information of the cropped user image.
In a second aspect of embodiments of the present invention, there is provided an apparatus for use in face recognition, which may include, for example:
a face detection unit, configured to obtain facial feature point coordinates from a user image by facial feature point detection;
a preprocessing unit, configured to calculate the angle difference between the face and the horizontal direction according to the facial feature point coordinates and rotate the user image so that the angle difference between the face and the horizontal direction meets a preset standard angle; to calculate the size ratio between the face and a preset standard face according to the facial feature point coordinates and scale the user image according to the size ratio; and, according to the positions of the facial feature points in the user image, to crop the rotated and scaled user image to a standard region size with the facial feature points at standard positions within the standard region.
Optionally, the face detection unit may specifically be configured to obtain left-eye and right-eye feature point coordinates from the user image by facial feature point detection.
Optionally, the preprocessing unit may specifically be configured to calculate the angle difference between the line connecting the two feature points and the horizontal direction according to the left-eye and right-eye feature point coordinates, and rotate the user image so that this angle difference is zero.
Optionally, the preprocessing unit may specifically be configured to calculate the ratio of the distance between the two feature points to a preset first standard distance according to the left-eye and right-eye feature point coordinates.
Optionally, the preprocessing unit may specifically be configured to crop the rotated and scaled user image to the standard region size with the two feature points at standard positions within the standard region, according to the positions of the two feature points in the user image.
Optionally, the apparatus may also include:
an image quality evaluation unit, configured to extract image features of the user image, judge whether the image features are within a standard threshold range, and if so, obtain the user image.
Optionally, the image quality evaluation unit may specifically be configured to extract the gray-level histograms of the left and right halves of the face in the user image, judge whether the histograms are within a standard illumination threshold range, and if so, obtain the user image.
Optionally, the image quality evaluation unit may specifically be configured to calculate an image quality evaluation index of the user image using a gradient operator, judge whether the index is within an image quality evaluation index threshold range, and if so, obtain the user image.
Optionally, the image quality evaluation unit may specifically be configured to extract the light-dark distribution ratio of the gray-level histogram of the face image, judge whether the ratio is within a standard ratio threshold range, and if so, obtain the user image.
Optionally, the image quality evaluation unit may also be configured to judge whether the user image is used for registration or for authentication; if for registration, judging whether the image features are within the standard threshold range may specifically be judging whether the image features are within a first standard threshold range for registration; if for authentication, judging whether the image features are within the standard threshold range may specifically be judging whether the image features are within a second standard threshold range for authentication.
Optionally, the apparatus may also include: an illumination processing unit, configured to perform a gamma transform on the cropped image and filter out high- and low-frequency components with a filter, to obtain an updated user image.
Optionally, the apparatus may also include: an interference removal unit, configured to cover the cropped user image with a standard face template and cut out the portion of the covered user image within the effective region of the preset template as the updated cropped user image.
Optionally, the apparatus may also include: a feature extraction unit, configured to extract the Gabor features, LBP features and HOG features of the cropped user image as the facial feature information.
Optionally, the feature extraction unit may also be configured to select Gabor features, LBP features and HOG features as the facial feature information using the AdaBoost algorithm.
Optionally, the apparatus may also include: an updating unit, configured to obtain user registration information and judge whether the state of the user registration information meets an update condition; if the condition is met, to replace the original facial feature information contained in the user registration information with the facial feature information of the cropped user image.
From the description of the above technical solutions, it can readily be seen that the present invention has the following advantages:
Because the method and apparatus of embodiments of the present invention, after obtaining facial feature point coordinates from a user image by facial feature point detection, subject the face in the user image to normalization processing according to the positions of the feature point coordinates in the image, including adjusting the angle, adjusting the size, adjusting the face position and cropping the image to a standard size, user images of unstable quality can, after processing, yield aligned face images of uniform size, so that the facial features extracted from images of the same user are more nearly consistent and the influence of low-quality images on face recognition is eliminated, thereby reducing the difficulty of subsequent face recognition and improving recognition accuracy.
Brief description of the drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become easier to understand by reading the following detailed description with reference to the accompanying drawings. In the drawings, several embodiments of the present invention are shown by way of example and not by way of limitation, in which:
Fig. 1 schematically shows a block diagram of an exemplary computing system 100 suitable for implementing embodiments of the present invention;
Fig. 2 schematically shows a user image under an application scenario of the present invention;
Fig. 3 schematically shows a flowchart of a method for use in face recognition according to an embodiment of the present invention;
Fig. 4 schematically shows a schematic diagram of the left-eye feature point according to an embodiment of the present invention;
Fig. 5 schematically shows a schematic diagram of a standard face template according to an embodiment of the present invention;
Fig. 6 schematically shows a composition diagram of an apparatus for use in face recognition according to an embodiment of the present invention;
In the drawings, identical or corresponding reference numerals denote identical or corresponding parts.
Detailed description of embodiments
The principles and spirit of the present invention are described below with reference to several illustrative embodiments. It should be understood that these embodiments are given solely to enable those skilled in the art to better understand and thereby implement the present invention, and not to limit the scope of the present invention in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 shows a block diagram of an exemplary computing system 100 suitable for implementing embodiments of the present invention. As shown in Fig. 1, the computing system 100 may include: a central processing unit (CPU) 101, a random access memory (RAM) 102, a read-only memory (ROM) 103, a system bus 104, a hard disk controller 105, a keyboard controller 106, a serial interface controller 107, a parallel interface controller 108, a display controller 109, a hard disk 110, a keyboard 111, a serial peripheral device 112, a parallel peripheral device 113 and a display 114. Among these devices, the CPU 101, the RAM 102, the ROM 103, the hard disk controller 105, the keyboard controller 106, the serial interface controller 107, the parallel interface controller 108 and the display controller 109 are coupled to the system bus 104; the hard disk 110 is coupled to the hard disk controller 105, the keyboard 111 to the keyboard controller 106, the serial peripheral device 112 to the serial interface controller 107, the parallel peripheral device 113 to the parallel interface controller 108, and the display 114 to the display controller 109. It should be understood that the structural block diagram of Fig. 1 is shown for purposes of example only and does not limit the scope of the present invention; in some cases, devices may be added or removed according to the specific situation.
Those skilled in the art will appreciate that embodiments of the present invention may be implemented as a system, a method or a computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, microcode, etc.), or an embodiment combining hardware and software, which may generally be referred to herein as a "circuit", "module" or "system". Furthermore, in some embodiments, the present invention may also be implemented in the form of a computer program product embodied in one or more computer-readable media having computer-readable program code embodied thereon.
Any combination of one or more computer-readable media may be used. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus or device.
A computer-readable signal medium may include a propagated data signal, in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated signal may take any of a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate or transport a program for use by or in connection with an instruction execution system, apparatus or device.
Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present invention may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk or C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Embodiments of the present invention are described below with reference to flowcharts of methods and block diagrams of apparatus (or systems) according to embodiments of the invention. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the computer or other programmable data processing apparatus, create means for implementing the functions/operations specified in the blocks of the flowcharts and/or block diagrams.
These computer program instructions may also be stored in a computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means that implement the functions/operations specified in the blocks of the flowcharts and/or block diagrams.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus or other devices, causing a series of operational steps to be performed on the computer, other programmable apparatus or other devices so as to produce a computer-implemented process, such that the instructions executed on the computer or other programmable apparatus provide processes for implementing the functions/operations specified in the blocks of the flowcharts and/or block diagrams.
According to embodiments of the present invention, a method and apparatus for use in face recognition are proposed.
Herein, it should be understood that any number of elements in the drawings is used for illustration rather than limitation, and any naming is used only for distinction and carries no limitation whatsoever.
The principles and spirit of the present invention are explained in detail below with reference to several representative embodiments of the invention.
Overview of the invention
The inventors have found that, in face recognition, unstable user image quality is the main cause of recognition failure, and that unstable user image quality is mainly reflected in inconsistencies in the angle, size and position of the face in the user image and in the image size itself. If the consistency of the face within user images can be ensured, the consistency of features extracted from user images of varying quality can be greatly improved.
Having described the general principle of the present invention, various non-limiting embodiments of the present invention are introduced in detail below.
Application scenario overview
Referring first to Fig. 2, Fig. 2 shows a user image used for face recognition; the image quality is poor, and embodiments of the present invention can improve the consistency of user image features in this application scenario.
Illustrative method
In connection with the application scenario of Fig. 2, a method according to an exemplary embodiment of the present invention for this application scenario is described with reference to Fig. 3. It should be noted that the above application scenario is shown only to facilitate understanding of the spirit and principles of the present invention, and embodiments of the present invention are not limited in this regard; rather, embodiments of the present invention may be applied to any applicable scenario.
Referring to Fig. 3, which is an exemplary flowchart of a method for use in face recognition according to the present invention, the exemplary method may include, for example:
S301: obtaining facial feature point coordinates from a user image by facial feature point detection;
S302: calculating the angle difference between the face and the horizontal direction according to the facial feature point coordinates, and rotating the user image so that the angle difference between the face and the horizontal direction meets a preset standard angle;
S303: calculating the size ratio between the face and a preset standard face according to the facial feature point coordinates, and scaling the user image according to the size ratio;
S304: according to the positions of the facial feature points in the user image, cropping the rotated and scaled user image to a standard region size, with the facial feature points at standard positions within the standard region.
With the method of the above embodiment, after facial feature point coordinates are obtained from a user image by facial feature point detection, the face in the image is subjected to normalization processing, including adjusting the angle, adjusting the size, adjusting the face position and cropping the image to a standard size, so that user images of unstable quality can, after processing, yield aligned face images of uniform size, the facial features extracted from images of the same user are more nearly consistent, and the influence of low-quality images on face recognition is eliminated, thereby reducing the difficulty of subsequent face recognition and improving recognition accuracy.
It should be noted that the method of the present invention may be applied to two-dimensional face recognition as well as to three-dimensional face recognition; a two-dimensional image or a three-dimensional image may accordingly be chosen as the input of the method according to implementation needs, and the present invention is not limited in this regard.
Step S301, in which facial feature point coordinates are obtained from a user image by facial feature point detection, is described in detail below. In an embodiment of the present invention, the facial feature point coordinates may specifically be obtained by the following steps, for example:
converting the user image into a gray-scale image;
calling the face detection module of the OpenCV image processing library to perform face detection on the gray-scale image, where the OpenCV library uses a face detection algorithm based on Haar features and a cascaded AdaBoost classifier;
when a face is detected, determining the coordinates of the four vertices of the face rectangle in the whole gray-scale image;
feeding the four vertex coordinates and the gray-scale image as input to a facial feature point detection model to obtain the facial feature point coordinates, where the facial feature point detection model may specifically be obtained in advance by training on user images with annotated facial feature points using the ASM algorithm.
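As a concrete illustration of this detection stage, the following minimal sketch uses OpenCV's Python bindings and the stock frontal-face Haar cascade shipped with OpenCV; the ASM-trained feature point model itself is not shown here.

```python
import cv2

# A minimal sketch of the detection stage of step S301, assuming OpenCV's Python API
# and the default Haar cascade file distributed with opencv-python.
def detect_face_rectangle(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)           # convert to gray-scale
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None, gray
    x, y, w, h = faces[0]                                         # face rectangle
    # coordinates of the four vertices of the face rectangle in the whole gray-scale image
    vertices = [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]
    return vertices, gray
```

The vertices and the gray-scale image would then be passed to the ASM-trained feature point model described in the following paragraphs.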
Training on user images with annotated facial feature points using the ASM algorithm may specifically include the following steps:
building a data set containing N training images, each of which must contain one face, where the training images should resemble as closely as possible the collection conditions likely to be encountered in actual use; for example, the training data set should cover various possible illumination conditions, user expressions, head angles, wearing or not wearing glasses, and so on;
performing the following operations on each image in the training data set:
manually annotating M facial feature points in each image, including the four vertices of the face rectangle, to obtain the coordinates of the feature points in the whole image; for example, M may be 68, and the manually annotated facial feature points may include left-eye feature points, right-eye feature points, nasion feature points, nose tip feature points, mouth corner feature points, cheek contour feature points, and so on;
detecting the face in each image using OpenCV to obtain the coordinates of the four vertices of the face rectangle in the image;
inputting the training data set and all coordinate information into the ASM algorithm for training, to obtain the feature point detection model.
It should be noted that, in step S301, facial feature point coordinates are obtained from the user image by facial feature point detection, where the facial feature points may specifically be the left-eye feature point and the right-eye feature point, or other facial feature points such as the nasion feature point and the nose tip feature point, chosen according to implementation needs. Below, steps S302 to S304 of the present invention are described in detail, taking the left-eye and right-eye feature points as the facial feature points as an example:
In step S302, calculating the angle difference between the face and the horizontal direction according to the facial feature point coordinates may specifically be calculating the angle difference between the line connecting the two feature points and the horizontal direction according to the left-eye and right-eye feature point coordinates;
in step S302, rotating the user image so that the angle difference between the face and the horizontal direction meets the preset standard angle may specifically be rotating the user image so that the angle difference between the line connecting the two feature points and the horizontal direction is zero degrees;
in step S303, calculating the size ratio between the face and the preset standard face according to the facial feature point coordinates may specifically be calculating the ratio of the distance between the two feature points to a preset standard distance according to the left-eye and right-eye feature point coordinates;
in step S303, scaling the user image according to the size ratio may specifically be scaling until the distance between the two feature points is equal to the preset standard distance, or scaling until the ratio of the distance between the two feature points to the preset standard distance is equal to a preset ratio, which may be set in advance according to implementation needs;
in step S304, cropping the rotated and scaled user image to the standard region size with the facial feature points at standard positions within the standard region, according to the positions of the facial feature points in the user image, may specifically be cropping the rotated and scaled user image to the standard region size with the two feature points at standard positions within the standard region, according to the positions of the two feature points in the user image; for example, after cropping, the distances from one of the two feature points to at least two non-parallel edges of the user image are preset standard distances; for example, as shown in Fig. 3, the rotated and scaled user image is cropped so that the distance a from the left-eye feature point to the top edge of the user image and the distance b to the left edge are preset standard distances;
in step S304, cropping to the standard region size may specifically be cropping the outer edges of the user image to a preset standard size; specifically, as shown in Fig. 4, the height H and the width W of the user image may be of preset standard size.
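For the left-eye/right-eye variant just described, steps S302 to S304 can be sketched as a single affine warp; the standard eye distance d, the standard offsets a and b and the output size W, H used below are illustrative values, not parameters fixed by the patent.

```python
import math
import cv2

# A minimal sketch of steps S302-S304: rotate so the eye line is horizontal, scale so
# the eye distance equals a preset standard distance, and crop so the left eye lands
# at a standard position within a standard-sized region.
def align_face(gray, left_eye, right_eye, d=40, a=48, b=40, W=130, H=150):
    (lx, ly), (rx, ry) = left_eye, right_eye
    # S302: angle difference between the eye line and the horizontal direction
    angle = math.degrees(math.atan2(ry - ly, rx - lx))
    # S303: scale factor so that the eye distance equals the standard distance d
    scale = d / math.hypot(rx - lx, ry - ly)
    # rotate and scale about the left eye in one affine transform
    M = cv2.getRotationMatrix2D((float(lx), float(ly)), angle, scale)
    # S304: shift so that the left eye lands at the standard position (b, a)
    M[0, 2] += b - lx
    M[1, 2] += a - ly
    return cv2.warpAffine(gray, M, (W, H), flags=cv2.INTER_LINEAR)
```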
Below, steps S302 to S304 of the present invention are described in detail, taking the nasion feature point and the nose tip feature point as the facial feature points as an example:
In step S302, calculating the angle difference between the face and the horizontal direction according to the facial feature point coordinates may specifically be calculating the angle difference between the line connecting the two feature points and the horizontal direction according to the nasion and nose tip feature point coordinates;
in step S302, rotating the user image so that the angle difference between the face and the horizontal direction meets the preset standard angle may specifically be rotating the user image so that the angle difference between the line connecting the two feature points and the horizontal direction is 90 degrees;
in step S303, calculating the size ratio between the face and the preset standard face according to the facial feature point coordinates may specifically be calculating the ratio of the distance between the two feature points to a preset standard distance according to the nasion and nose tip feature point coordinates;
in step S303, scaling the user image according to the size ratio may specifically be scaling until the distance between the two feature points is equal to the preset standard distance, or scaling until the ratio of the distance between the two feature points to the preset standard distance is equal to a preset ratio, which may be set in advance according to implementation needs;
in step S304, cropping the rotated and scaled user image to the standard region size with the facial feature points at standard positions within the standard region, according to the positions of the facial feature points in the user image, may specifically be cropping the rotated and scaled user image to the standard region size with the two feature points at standard positions within the standard region, according to the positions of the two feature points in the user image; for example, the rotated and scaled user image is cropped so that the distance a from the nose tip feature point to the top edge of the user image and the distance b to the left edge are preset standard distances.
It should be noted that the user images of the present invention may be obtained directly after capture by a camera, or may be obtained after other preprocessing. Considering that collection environments differ in practice and inconsistent user images may therefore be common, in order to improve the success rate of face recognition the present invention also proposes obtaining quality-qualified user images through the following quality evaluation preprocessing steps, screening out images whose quality is not up to standard; for example, the quality evaluation preprocessing steps may include:
extracting image features of the user image;
judging whether the image features are within a standard threshold range;
and if so, obtaining the user image.
The image features extracted from the user image may specifically be set according to implementation needs, and may differ from the facial feature information extracted during face recognition in the following embodiments. For example, with respect to illumination effects, the gray-level histogram distributions of the left and right face regions may be compared, and a reasonable illumination threshold range selected to screen out images that are too bright or too dark; for example, this may specifically include:
the image features may specifically be the gray-level histograms of the left and right halves of the face in the user image;
the standard threshold range may specifically be an illumination threshold range.
As another example, with respect to defocus blur, motion blur and low resolution, an image quality evaluation index of the image, such as sharpness, may be calculated using a gradient operator, and a reasonable image quality evaluation index threshold selected to screen out blurred images; for example, this may specifically include:
extracting the image features of the user image may specifically be calculating an image quality evaluation index of the user image using a gradient operator;
the standard threshold range may specifically be an image quality evaluation index threshold range.
As another example, with respect to the light-dark ratio of the left and right halves of the face, unqualified images whose ratio falls below a standard ratio threshold may be screened out, in order to remove images with an unevenly lit ("yin-yang") face; for example, this may specifically include:
the image features may specifically be the light-dark distribution ratio of the gray-level histogram of the face image;
the standard threshold range may specifically be a standard ratio threshold range.
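The three screening checks above can be sketched together as follows; the gray-level statistics and the concrete thresholds are illustrative assumptions, and the variance of the Laplacian stands in for the gradient-based image quality evaluation index.

```python
import cv2

# A minimal sketch of the quality-evaluation preprocessing, assuming the face region
# is available as a gray-scale array; thresholds are illustrative, not the patent's.
def passes_quality_checks(face_gray, bright_thr=(60, 190), ratio_thr=0.5, sharp_thr=80.0):
    h, w = face_gray.shape
    left, right = face_gray[:, :w // 2], face_gray[:, w // 2:]
    left_mean, right_mean = float(left.mean()), float(right.mean())

    # illumination check: both half-faces must lie within a reasonable brightness range
    if not all(bright_thr[0] < m < bright_thr[1] for m in (left_mean, right_mean)):
        return False          # too bright or too dark

    # "yin-yang face" check: light-dark ratio of the two halves
    if min(left_mean, right_mean) / max(left_mean, right_mean) < ratio_thr:
        return False          # uneven left/right illumination

    # sharpness check with a derivative operator (variance of the Laplacian as a stand-in)
    if cv2.Laplacian(face_gray, cv2.CV_64F).var() < sharp_thr:
        return False          # defocus blur, motion blur or low resolution
    return True
```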
It can be understood that, if the image features are not within the threshold range, a message corresponding to the extracted image features and the standard threshold range being checked, such as one of the following, may be returned to the user, for example:
The face region is too bright! The ambient light is too bright; please adjust it and capture again;
or,
The face region is too dark! The ambient light is too dark; please adjust it and capture again;
or,
Uneven left/right illumination ("yin-yang face")! The illumination on the two sides of the face is uneven; please adjust the lighting and capture again;
or,
Resolution too low, defocus blur or motion blur! The camera resolution is too low, or camera shake has blurred the image.
It can be seen that user images of reliable quality can be obtained through the above quality evaluation preprocessing steps, effectively improving the success rate of face recognition. In addition, the registration phase and the authentication phase of face recognition place different requirements on user image quality: the registration phase usually requires user images of higher quality in order to extract more complete and effective facial feature information, while the authentication phase usually tolerates user images of slightly lower quality in order to reduce the difficulty of user authentication. The present invention therefore proposes that, before judging whether the image features are within the standard threshold range, the method may also include:
judging whether the user image is used for registration or for authentication;
if for registration, judging whether the image features are within the standard threshold range may specifically be judging whether the image features are within a first standard threshold range for registration;
if for authentication, judging whether the image features are within the standard threshold range may specifically be judging whether the image features are within a second standard threshold range for authentication.
In addition, in order to reduce the influence of illumination in the collection environment on the pixel values of the collected user images, so that user images keep stable quality under various lighting environments, the present invention also proposes performing illumination preprocessing on the user images, so that user images collected under different illumination have approximately the same illumination pattern after processing, minimizing the influence of illumination on the user images. Specifically, for example, the present invention may also include:
performing a gamma transform on the cropped user image;
and filtering out high- and low-frequency components with a filter to obtain an updated user image.
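The illumination preprocessing above can be sketched as gamma correction followed by band-pass filtering; the gamma value and the use of a difference-of-Gaussians filter to suppress the high- and low-frequency components are assumptions made for illustration, since the patent does not fix the filter type.

```python
import cv2
import numpy as np

# A minimal sketch of illumination preprocessing: gamma transform, then a
# difference-of-Gaussians band-pass filter, then rescaling back to 8-bit range.
def illumination_normalize(face_gray, gamma=0.2, sigma_fine=1.0, sigma_coarse=2.0):
    img = face_gray.astype(np.float32) / 255.0
    img = np.power(img, gamma)                                    # gamma transform
    fine = cv2.GaussianBlur(img, (0, 0), sigma_fine)              # suppresses very high frequencies
    coarse = cv2.GaussianBlur(img, (0, 0), sigma_coarse)          # keeps only low frequencies
    band = fine - coarse                                          # band-pass: high and low parts removed
    band = cv2.normalize(band, None, 0, 255, cv2.NORM_MINMAX)     # rescale to 0-255
    return band.astype(np.uint8)
```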
The user images obtained after processing by the above embodiments generally have good consistency and illumination distribution. However, since the face region is not a strict rectangle, user images processed by the above embodiments may still contain some background interference in the lower-left and lower-right corners, and the forehead region is sometimes disturbed by hair. Therefore, in an embodiment of the present invention, in order to further improve the effective face region in the user image, it is further proposed to remove background interference outside the face portion of the user image by the following method, which may specifically include, for example:
covering the cropped user image with a standard face template, for example the template shown in Fig. 5;
and cutting out the portion of the covered user image within the effective region of the preset template as the updated cropped user image.
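A minimal sketch of this template-masking step is given below; the elliptical effective region is an illustrative stand-in for the standard face template of Fig. 5.

```python
import cv2
import numpy as np

# Keep only pixels inside a face-shaped effective region; the background corners and
# the forehead fringe outside the ellipse are zeroed out.
def apply_face_template(face_gray):
    h, w = face_gray.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    # ellipse roughly covering the face region of the aligned image
    cv2.ellipse(mask, (w // 2, h // 2), (int(w * 0.45), int(h * 0.55)),
                0, 0, 360, 255, thickness=-1)
    return cv2.bitwise_and(face_gray, face_gray, mask=mask)
```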
It can be seen that extracting facial feature information from user images obtained after the processing of the above embodiments ensures the consistency of the facial feature information and improves the success rate of face recognition.
In the present invention, in order that the extracted facial feature information can effectively describe facial characteristics, it was finally determined, through comparative analysis over many experiments, to extract the Gabor features, LBP features and HOG features of the cropped user image as the facial feature information. These three kinds of features are described in detail below:
The Gabor feature is the feature in the face recognition field that is closest to the human visual system. In the present invention, the multi-scale, multi-orientation characteristics of Gabor features are mainly used to extract the effective information of the face region, as described in detail below:
The two-dimensional Gabor wavelet is defined as
Ψ_k(z) = (‖k‖²/σ²) · exp(−‖k‖²‖z‖²/(2σ²)) · [exp(i·k·z) − exp(−σ²/2)]
where σ is a constant related to the wavelet frequency bandwidth, z = (x, y) is the spatial position coordinate, and k determines the orientation and scale of the Gabor kernel. When sampling with 8 orientations and 5 scales, k can be written as k = k_v·exp(i·Φ_μ), where k_v = K_max/f^v is the sampling scale, v ∈ {0, 1, ..., 4} is the scale index, Φ_μ = πμ/8 is the sampling orientation, μ ∈ {0, 1, ..., 7} is the orientation index, K_max is the maximum frequency, and f is the kernel spacing factor in the frequency domain. Setting the parameters K_max = π/2 and σ = 2π gives good wavelet characterization and discrimination. The Gabor transform is the convolution of the image with the Gabor kernel:
J_k(z) = I(z) * Ψ_k(z);
If the amplitude and phase of J_k(z) are A_k and φ_k respectively, then J_k(z) = A_k·exp(i·φ_k). Combining the J_k(z) over different scales and orientations forms the Gabor feature vector of the image at position z.
The similarity of Gabor features J and J′ when phase differences are not considered is defined as S(J, J′) = Σ_k A_k·A′_k / √(Σ_k A_k² · Σ_k A′_k²).
When extracting the Gabor features of a face image, a filter bank consisting of multiple Gabor filters at different scales and orientations is generally used, with parameters chosen according to the characteristics of the image and neurophysiological findings. Research generally uses a Gabor filter bank comprising 8 orientations (n = 8; μ = 0, 1, ..., 7) and 5 scales (K_max = π/2; f = 2; v = 0, 1, 2, 3, 4), with σ = π, so that the bandwidth of each filter is about one octave.
Because Gabor features have good spatial locality and orientation selectivity, and a certain robustness to illumination and pose, they have been successfully applied in face recognition.
In the present invention, the face image may first be normalized to 80×80, and Gabor features at 5 scales and 8 orientations are then extracted, yielding a feature vector of 40×80×80 = 256000 dimensions.
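A sketch of this Gabor stage follows, using OpenCV's getGaborKernel as a stand-in for the wavelet family defined above; the kernel size and the mapping from (K_max, f, v) to OpenCV's (sigma, lambda) parameters are illustrative choices rather than the patent's exact parameterization.

```python
import cv2
import numpy as np

# A minimal sketch: 5 scales x 8 orientations of Gabor filtering on an 80x80 face,
# concatenating the response magnitudes into one long feature vector.
def gabor_features(face_gray, scales=5, orientations=8, ksize=31):
    face = cv2.resize(face_gray, (80, 80)).astype(np.float32)
    responses = []
    for v in range(scales):
        wavelength = 4.0 * (2 ** (v / 2.0))            # wavelength grows with the scale index v
        for mu in range(orientations):
            theta = np.pi * mu / orientations           # sampling orientation
            kern = cv2.getGaborKernel((ksize, ksize), sigma=0.56 * wavelength,
                                      theta=theta, lambd=wavelength,
                                      gamma=1.0, psi=0)
            responses.append(np.abs(cv2.filter2D(face, cv2.CV_32F, kern)))
    # 40 response maps of size 80x80 -> 40*80*80 = 256000-dimensional vector
    return np.concatenate([r.ravel() for r in responses])
```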
The LBP (Local Binary Pattern) feature describes the texture information of an image using texture features. The basic LBP operator takes the local binary values of the eight neighboring points around a pixel as the new value of the central pixel. However, considering that LBP histogram features extracted only at a single scale on a uniform grid of blocks cannot effectively adapt to face offsets, the present invention uses multi-scale, overlapping circular LBP features to adapt to such variation, which can cope with small errors in face alignment and achieves better recognition performance than a single-scale, simply partitioned image. In the present invention, the face image is first normalized to 130×150, the image is then divided using 5 scales (10×11, 11×13, 13×15, 15×18 and 18×21 respectively) with an overlap of 3 pixels, yielding 8409 sub-regions in total, and finally the P(8,2) uniform LBP histogram vector (59 dimensions) of each sub-region is extracted, giving an 8409×59 = 496131-dimensional histogram vector. For two images to be compared, the chi-square distance between the LBP histogram vectors of corresponding sub-regions can be calculated as their similarity measure.
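A minimal sketch of the multi-scale, overlapping uniform LBP stage and the chi-square similarity measure, assuming scikit-image's local_binary_pattern; the way the 3-pixel overlap is turned into a block stride is an illustrative reading of the text.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

# Uniform LBP codes (P=8, R=2, 59 bins) pooled over overlapping blocks at 5 scales.
def lbp_histograms(face_gray, block_sizes=((10, 11), (11, 13), (13, 15), (15, 18), (18, 21))):
    face = cv2.resize(face_gray, (130, 150))                          # width 130, height 150
    lbp = local_binary_pattern(face, P=8, R=2, method="nri_uniform")  # uniform codes 0..58
    hists = []
    for bw, bh in block_sizes:
        step_x, step_y = max(bw - 3, 1), max(bh - 3, 1)               # adjacent blocks overlap by 3 pixels
        for y in range(0, face.shape[0] - bh + 1, step_y):
            for x in range(0, face.shape[1] - bw + 1, step_x):
                h, _ = np.histogram(lbp[y:y + bh, x:x + bw], bins=59, range=(0, 59))
                hists.append(h / max(h.sum(), 1))                     # normalized 59-bin histogram
    return np.concatenate(hists)

def chi_square_distance(h1, h2, eps=1e-10):
    # chi-square distance between corresponding histogram vectors, used as the similarity measure
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```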
The HOG (Histogram of Oriented Gradients) feature describes the appearance and gradient-direction information of an image using histograms of oriented gradients. In the present invention, the face image is first normalized to 80×80, the image is then divided using 5 scales (4×4, 6×6, 8×8, 10×10 and 12×12 respectively) with 50% overlap, yielding 2876 sub-regions in total, and finally the HOG features of each sub-region are extracted, giving a 2876×16 = 46016-dimensional feature vector. For two images to be compared, the chi-square distance between the HOG histogram vectors of corresponding sub-regions can be calculated as their similarity measure.
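A minimal sketch of the multi-scale, overlapping HOG stage; the 16-bin orientation histogram per sub-region is an illustrative reading of the 2876×16 dimensionality, and the scales and 50% overlap follow the text.

```python
import cv2
import numpy as np

# Magnitude-weighted orientation histograms over overlapping square blocks at 5 scales.
def hog_histograms(face_gray, block_sizes=(4, 6, 8, 10, 12), bins=16):
    face = cv2.resize(face_gray, (80, 80)).astype(np.float32)
    gx = cv2.Sobel(face, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(face, cv2.CV_32F, 0, 1)
    mag, ang = cv2.cartToPolar(gx, gy)                 # gradient magnitude and angle (radians)
    hists = []
    for b in block_sizes:
        step = max(b // 2, 1)                          # 50% overlap between blocks
        for y in range(0, 80 - b + 1, step):
            for x in range(0, 80 - b + 1, step):
                a = ang[y:y + b, x:x + b].ravel()
                w = mag[y:y + b, x:x + b].ravel()
                h, _ = np.histogram(a, bins=bins, range=(0, 2 * np.pi), weights=w)
                hists.append(h / max(h.sum(), 1e-6))   # 16-bin oriented-gradient histogram
    return np.concatenate(hists)
```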
For the three kinds of features extracted above, a high-dimensional feature vector is obtained for each: as described in the above embodiments, the Gabor features are 256000-dimensional, the LBP features 496131-dimensional and the HOG features 46016-dimensional. If these three high-dimensional feature vectors were used directly for recognition, the cost in both time and space would be very large, which is inconvenient for practical use. Therefore, the present invention also proposes selecting Gabor features, LBP features and HOG features as the facial feature information using the AdaBoost algorithm, achieving feature dimensionality reduction, obtaining the features most useful for classification and recognition, and improving the feature extraction speed in the authentication phase. Specifically, the three kinds of features may each be trained and recognized using the AdaBoost method, finally selecting the 100 Gabor feature dimensions, the LBP features of the 100 sub-regions and the HOG features of the 100 sub-regions that are most useful for recognition. The AdaBoost algorithm is a supervised machine learning algorithm mainly used for binary classification problems. It can automatically generate multiple weak classifiers from a training set (mainly feature values and their corresponding class labels) and then combine the weak classifiers into a final strong classifier. In use, AdaBoost makes an online decision on the feature values of an input sample and gives the classification result. In addition, AdaBoost can also achieve feature dimensionality reduction by automatically selecting the few most discriminative features from a large set of features.
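A minimal sketch of AdaBoost-based feature selection, assuming a recent scikit-learn (where the weak-learner argument is named estimator): decision stumps serve as the weak classifiers and the dimensions they split on are kept as the most discriminative features. This stands in for the patent's training procedure rather than reproducing it exactly.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# X: (n_samples, n_features) feature-difference vectors; y: 1 = same person, 0 = different.
def select_top_features(X, y, n_keep=100, n_rounds=300):
    stump = DecisionTreeClassifier(max_depth=1)                 # each weak classifier uses one feature
    booster = AdaBoostClassifier(estimator=stump, n_estimators=n_rounds)
    booster.fit(X, y)
    importance = booster.feature_importances_
    keep = np.argsort(importance)[::-1][:n_keep]                # indices of the most discriminative features
    return keep, booster                                        # booster doubles as the binary classifier
```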
It should be noted that the facial feature information obtained after the processing of the above embodiments can be used either to register a user or to authenticate a user:
If it is used to register a user, the facial feature information may be stored in a database; in the present invention, the same user may register multiple sets of facial information in the database, which may be configured as needed.
If it is used to authenticate a user, the authentication flow may include:
obtaining the identity claimed by the user;
capturing an image of the user with a camera;
obtaining facial feature information after the processing of the above embodiments;
retrieving the facial feature information corresponding to the claimed identity from the database;
comparing the facial feature information obtained after the processing of the above embodiments with the facial feature information corresponding to the claimed identity;
it should be noted that there may be multiple sets of registered facial feature information, in which case the comparisons may be judged one by one;
if the comparison result is consistent, authentication succeeds;
if the comparison result is inconsistent, authentication fails.
A specific implementation of the comparison of facial feature information may be:
the facial feature information obtained after the processing of the above embodiments is represented by a vector X;
the facial feature information corresponding to the claimed identity retrieved from the database is represented by a vector Y;
the difference vector Δ = X − Y is calculated, which reflects the difference between the two groups of feature values, so whether they are consistent or inconsistent can be judged from the given Δ; in the present invention, Δ can be input to the binary classifier generated while achieving feature dimensionality reduction with the AdaBoost algorithm in the above embodiment, to obtain the decision result.
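Putting the pieces together, the authentication-time comparison can be sketched as follows; keep and booster refer to the selection sketch above, and all names are illustrative rather than the patent's.

```python
# Form the difference of the probe and enrolled feature vectors on the selected
# dimensions and let the trained AdaBoost binary classifier decide.
def authenticate(probe_features, enrolled_features_list, keep, booster):
    x = probe_features[keep]
    for enrolled in enrolled_features_list:              # a user may have several registered sets
        delta = x - enrolled[keep]                        # difference vector of the two feature vectors
        if booster.predict(delta.reshape(1, -1))[0] == 1:
            return True                                   # consistent: authentication succeeds
    return False                                          # inconsistent with all registered sets
```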
In addition, in the present invention, in order to cope with the influence of changes in a user's face on face recognition, automatic updating of the facial feature information is also proposed. For example, the present invention may also include: obtaining user registration information, judging whether the state of the user registration information meets an update condition, and, if it does, replacing the original facial feature information contained in the user registration information with the facial feature information of the cropped user image. Specifically, for example, the state of the user registration information may be the time interval from the registration time to the current time, and the update condition may be that this time interval equals a preset periodic update interval. Because the present invention introduces automatic updating into face recognition, the user's facial feature information in the database can be updated whenever the update condition is met; therefore, even if the face changes as time passes, the method of the present invention can still achieve effective face recognition.
Exemplary apparatus
Having described the method of the exemplary embodiments of the present invention, an apparatus for use in face recognition according to an exemplary embodiment of the present invention is next introduced with reference to Fig. 6. As shown, the apparatus may include:
a face detection unit 601, configured to obtain facial feature point coordinates from a user image by facial feature point detection;
a preprocessing unit 602, configured to calculate the angle difference between the face and the horizontal direction according to the facial feature point coordinates and rotate the user image accordingly; to calculate the size ratio between the face and a preset standard face according to the facial feature point coordinates and scale the user image according to the size ratio; and, according to the positions of the facial feature points in the user image, to crop the rotated and scaled user image to a standard region size with the facial feature points at standard positions within the standard region.
In an embodiment of the present invention, the face detection unit 601 may specifically be configured to obtain left-eye feature point coordinates and right-eye feature point coordinates from the user image through face feature point detection.
Correspondingly, the preprocessing unit 602 may specifically be configured to calculate the angle difference between the line connecting the two eye feature points and the horizontal direction according to the left-eye and right-eye feature point coordinates, and to rotate the user image until this angle difference is zero.
The preprocessing unit 602 may specifically be configured to calculate the ratio between the distance of the two eye feature points and a preset standard distance according to the left-eye and right-eye feature point coordinates.
The preprocessing unit 602 may specifically be configured to crop the rotated and scaled user image, according to the positions of the two eye feature points in the user image, to a standard region size with the two feature points located at standard positions in the standard region.
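To make the rotate-scale-crop pipeline above concrete, the following is a minimal sketch using OpenCV and NumPy, assuming the two eye coordinates have already been detected; the standard eye distance, output size and target eye position are illustrative values, not values prescribed by the patent:

    import cv2
    import numpy as np

    STD_EYE_DIST = 60        # preset standard distance between the eyes (assumed)
    STD_SIZE = (128, 128)    # standard region size, (width, height) (assumed)
    STD_LEFT_EYE = (34, 48)  # standard position of the left eye in the region (assumed)

    def align_face(img, left_eye, right_eye):
        """Rotate, scale and crop `img` so the eyes land on standard positions."""
        (lx, ly), (rx, ry) = left_eye, right_eye
        angle = np.degrees(np.arctan2(ry - ly, rx - lx))   # angle to the horizontal
        scale = STD_EYE_DIST / np.hypot(rx - lx, ry - ly)  # size ratio to the standard face
        # Rotate about the left eye and scale in one affine transform.
        M = cv2.getRotationMatrix2D((float(lx), float(ly)), angle, scale)
        # Translate so the left eye ends up at its standard position.
        M[0, 2] += STD_LEFT_EYE[0] - lx
        M[1, 2] += STD_LEFT_EYE[1] - ly
        return cv2.warpAffine(img, M, STD_SIZE)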
In another embodiment of the present invention, the apparatus may further include an image quality evaluation unit 603, configured to extract an image feature of the user image and judge whether the image feature is within a standard threshold range; if it is, the user image is accepted.
The image quality evaluation unit 603 may specifically be configured to extract the gray-level histograms of the left and right halves of the face in the user image and judge whether the histograms are within a standard illumination threshold range; if they are, the user image is accepted.
The image quality evaluation unit 603 may specifically be configured to calculate an image quality evaluation index of the user image using a gradient operator and judge whether the index is within an image quality evaluation index threshold range; if it is, the user image is accepted.
The image quality evaluation unit 603 may specifically be configured to extract the light-to-dark distribution ratio of the gray-level histogram of the face image and judge whether the ratio is within a standard ratio threshold range; if it is, the user image is accepted.
The image quality evaluation unit 603 may also be configured to judge whether the user image is used for registration or for authentication; if it is used for registration, judging whether the image feature is within the standard threshold range is specifically judging whether the image feature is within a first standard threshold range for registration, and if it is used for authentication, it is specifically judging whether the image feature is within a second standard threshold range for authentication.
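A minimal sketch of such a quality gate, assuming a grayscale face crop as input, is shown below. The concrete metrics (mean gray level per half face, Laplacian variance as the gradient-based index, bright-pixel ratio) and all thresholds are illustrative assumptions, since the patent only states that each image feature is checked against a threshold range, with a stricter range for registration than for authentication:

    import cv2
    import numpy as np

    ILLUM_RANGE = (60, 200)          # acceptable mean gray level per half face (assumed)
    SHARPNESS_MIN = 100.0            # minimum Laplacian variance (assumed)
    BRIGHT_RATIO_RANGE = (0.2, 0.8)  # acceptable ratio of bright pixels (assumed)

    def passes_quality_check(gray: np.ndarray, for_registration: bool = True) -> bool:
        h, w = gray.shape
        left, right = gray[:, : w // 2], gray[:, w // 2 :]
        # 1) Illumination of the left and right half faces.
        if not all(ILLUM_RANGE[0] <= float(part.mean()) <= ILLUM_RANGE[1]
                   for part in (left, right)):
            return False
        # 2) Sharpness via a gradient operator (stricter threshold for registration).
        sharp_min = SHARPNESS_MIN * (1.5 if for_registration else 1.0)
        if cv2.Laplacian(gray, cv2.CV_64F).var() < sharp_min:
            return False
        # 3) Light/dark distribution of the gray-level histogram.
        bright_ratio = float((gray > 128).mean())
        return BRIGHT_RATIO_RANGE[0] <= bright_ratio <= BRIGHT_RATIO_RANGE[1]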
In another embodiment of the present invention, the apparatus may further include an illumination processing unit 604, configured to apply a gamma transform to the cropped image and filter out its high- and low-frequency components with a filter, so as to obtain an updated user image.
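A minimal sketch of this illumination normalization, assuming a grayscale crop; the gamma value and the difference-of-Gaussians band-pass construction are illustrative choices, since the patent only states that a gamma transform is applied and that high- and low-frequency components are filtered out:

    import cv2
    import numpy as np

    def normalize_illumination(gray: np.ndarray, gamma: float = 0.5) -> np.ndarray:
        """Gamma transform followed by a simple band-pass (difference of Gaussians)."""
        img = (gray.astype(np.float32) / 255.0) ** gamma       # gamma transform
        low = cv2.GaussianBlur(img, (0, 0), sigmaX=8.0)        # low-frequency part
        smooth = cv2.GaussianBlur(img, (0, 0), sigmaX=1.0)     # suppress high-frequency noise
        band = smooth - low                                    # keep the mid band
        band = cv2.normalize(band, None, 0, 255, cv2.NORM_MINMAX)
        return band.astype(np.uint8)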
In yet another embodiment of the present invention, the apparatus may further include an interference removal unit 605, configured to cover the cropped user image with a standard face template and intercept the part of the covered user image that lies within the effective region of the preset template as the updated, cropped user image.
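The interference removal step can be sketched as a simple mask-and-keep operation; the elliptical mask below stands in for the "standard face template" and is an assumption, since the patent does not specify the template's shape:

    import cv2
    import numpy as np

    def apply_face_template(aligned: np.ndarray) -> np.ndarray:
        """Keep only the pixels inside the template's effective (face) region."""
        h, w = aligned.shape[:2]
        mask = np.zeros((h, w), dtype=np.uint8)
        # Illustrative elliptical effective region centred on the crop.
        cv2.ellipse(mask, (w // 2, h // 2), (w // 2 - 4, h // 2 - 2), 0, 0, 360, 255, -1)
        return cv2.bitwise_and(aligned, aligned, mask=mask)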
The apparatus described in the above embodiments may further include a feature extraction unit 606, which may specifically be configured to extract Gabor features, LBP features and HOG features from the cropped user image as the face feature information.
The feature extraction unit 606 may also be configured to select, using the AdaBoost algorithm, Gabor features, LBP features and HOG features as the face feature information.
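The following sketch illustrates pooling a small set of Gabor responses, an LBP histogram and a HOG descriptor into one feature vector, using an OpenCV Gabor kernel and scikit-image for LBP and HOG; the parameter values are assumptions, not those of the patent:

    import cv2
    import numpy as np
    from skimage.feature import local_binary_pattern, hog

    def extract_face_features(gray: np.ndarray) -> np.ndarray:
        """Concatenate Gabor, LBP and HOG descriptors into one feature vector."""
        # Gabor responses at a few orientations (parameters are illustrative).
        gabor = []
        for theta in np.arange(0, np.pi, np.pi / 4):
            kern = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5, 0, ktype=cv2.CV_32F)
            gabor.append(cv2.filter2D(gray, cv2.CV_32F, kern).mean())
        # Uniform LBP histogram (values 0..9 for P=8).
        lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
        lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        # HOG descriptor.
        hog_vec = hog(gray, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
        return np.concatenate([np.asarray(gabor), lbp_hist, hog_vec]).astype(np.float32)

The AdaBoost-based selection mentioned above could then be approximated by, for example, fitting an AdaBoost classifier on difference vectors of labeled same-person/different-person pairs and keeping the features it weights; this is one plausible reading for illustration, not a procedure spelled out in the patent.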
As a preferred embodiment, the apparatus of the present invention may further include an updating unit 607, configured to obtain user registration information, judge whether the state of the user registration information satisfies an update condition, and, if it is satisfied, replace the original face feature information contained in the user registration information with the face feature information of the cropped user image.
It should be noted that, although the above detailed description mentions sub-units of an apparatus used in face recognition, this division is not mandatory. In fact, according to the embodiments of the present invention, the features and functions of two or more of the units described above may be embodied in a single unit; conversely, the features and functions of one unit described above may be further divided and embodied by multiple units.
In addition, although the operations of the method of the present invention are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in that particular order, or that all of the illustrated operations must be performed to achieve the desired result. On the contrary, the steps depicted in the flowcharts may be performed in a different order. Additionally or alternatively, some steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
The use of the verbs "to comprise", "to include" and their conjugations in the application documents does not exclude the presence of elements or steps other than those stated in the application documents. The article "a" or "an" preceding an element does not exclude the presence of a plurality of such elements.
Although the spirit and principles of the present invention have been described with reference to several embodiments, it should be understood that the present invention is not limited to the disclosed embodiments, and the division into aspects does not mean that the features in these aspects cannot be combined to advantage; the division is made merely for convenience of presentation. The present invention is intended to cover the various modifications and equivalent arrangements included within the spirit and scope of the appended claims. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.

Claims (22)

1. A method used in face recognition, comprising:
extracting an image feature of a user image and judging whether the image feature is within a standard threshold range; if it is, obtaining the user image; wherein the image feature is specifically the gray-level histograms of the left and right halves of the face in the user image and the standard threshold range is specifically an illumination threshold range; or, extracting the image feature of the user image is specifically calculating an image quality evaluation index of the user image using a gradient operator and the standard threshold range is specifically an image quality evaluation index threshold range; or, the image feature is specifically the light-to-dark distribution ratio of the gray-level histogram of the face image and the standard threshold range is specifically a standard ratio threshold range;
converting the user image into a grayscale image, performing face detection on the grayscale image, and, when a face is detected, determining the coordinates of the four vertices of the face rectangle in the whole grayscale image and feeding the four vertex coordinates and the grayscale image as input to a face feature point detection model to obtain face feature point coordinates;
calculating the angle difference between the face and the horizontal direction according to the face feature point coordinates and rotating the user image so that the angle difference between the face and the horizontal direction meets a preset standard angle;
calculating the size ratio between the face and a preset standard face according to the face feature point coordinates and scaling the user image according to the size ratio;
cropping, according to the positions of the face feature points in the user image, the rotated and scaled user image to a standard region size with the face feature points located at standard positions in the standard region.
2. The method according to claim 1, wherein the face feature point coordinates are left-eye feature point coordinates and right-eye feature point coordinates.
3. The method according to claim 2, wherein calculating the angle difference between the face and the horizontal direction according to the face feature point coordinates is specifically calculating the angle difference between the line connecting the two eye feature points and the horizontal direction according to the left-eye feature point coordinates and the right-eye feature point coordinates;
and rotating the user image so that the angle difference between the face and the horizontal direction meets the preset standard angle is specifically rotating the user image until the angle difference between the line connecting the two feature points and the horizontal direction is zero degrees.
4. The method according to claim 2, wherein calculating the size ratio between the face and the preset standard face according to the face feature point coordinates is specifically calculating the ratio between the distance of the two feature points and a preset first standard distance according to the left-eye feature point coordinates and the right-eye feature point coordinates.
5. The method according to claim 2, wherein cropping, according to the positions of the face feature points in the user image, the rotated and scaled user image to the standard region size with the face feature points located at the standard positions in the standard region is specifically cropping, according to the positions of the two feature points in the user image, the rotated and scaled user image to the standard region size with the two feature points located at the standard positions in the standard region.
6. The method according to claim 1, further comprising, before judging whether the image feature is within the standard threshold range: judging whether the user image is used for registration or for authentication;
if it is used for registration, judging whether the image feature is within the standard threshold range is specifically judging whether the image feature is within a first standard threshold range for registration;
if it is used for authentication, judging whether the image feature is within the standard threshold range is specifically judging whether the image feature is within a second standard threshold range for authentication.
7. The method according to claim 1, further comprising:
applying a gamma transform to the cropped user image;
filtering out the high- and low-frequency components with a filter to obtain an updated user image.
8. The method according to claim 1, further comprising:
covering the cropped user image with a standard face template;
intercepting the part of the covered user image that lies within the effective region of the preset template as the updated, cropped user image.
9. The method according to any one of claims 1 to 8, further comprising: extracting the Gabor features, LBP features and HOG features of the cropped user image as the face feature information.
10. The method according to claim 9, further comprising: selecting, using the AdaBoost algorithm, Gabor features, LBP features and HOG features as the face feature information.
11. The method according to claim 1, further comprising: obtaining user registration information and judging whether the state of the user registration information satisfies an update condition; if it is satisfied, replacing the original face feature information contained in the user registration information with the face feature information of the cropped user image.
12. An apparatus used in face recognition, comprising:
an image quality evaluation unit configured to extract an image feature of a user image, judge whether the image feature is within a standard threshold range, and, if it is, obtain the user image; wherein the image feature is specifically the gray-level histograms of the left and right halves of the face in the user image and the standard threshold range is specifically an illumination threshold range; or, extracting the image feature of the user image is specifically calculating an image quality evaluation index of the user image using a gradient operator and the standard threshold range is specifically an image quality evaluation index threshold range; or, the image feature is specifically the light-to-dark distribution ratio of the gray-level histogram of the face image and the standard threshold range is specifically a standard ratio threshold range;
a face detection unit configured to convert the user image into a grayscale image, perform face detection on the grayscale image, and, when a face is detected, determine the coordinates of the four vertices of the face rectangle in the whole grayscale image and feed the four vertex coordinates and the grayscale image as input to a face feature point detection model to obtain face feature point coordinates;
a preprocessing unit configured to calculate the angle difference between the face and the horizontal direction according to the face feature point coordinates and rotate the user image so that the angle difference between the face and the horizontal direction meets a preset standard angle; to calculate the size ratio between the face and a preset standard face according to the face feature point coordinates and scale the user image according to the size ratio; and, according to the positions of the face feature points in the user image, to crop the rotated and scaled user image to a standard region size with the face feature points located at standard positions in the standard region.
13. The apparatus according to claim 12, wherein the face detection unit is specifically configured to obtain left-eye feature point coordinates and right-eye feature point coordinates from the user image through face feature point detection.
14. The apparatus according to claim 13, wherein the preprocessing unit is specifically configured to calculate the angle difference between the line connecting the two eye feature points and the horizontal direction according to the left-eye feature point coordinates and the right-eye feature point coordinates, and to rotate the user image until the angle difference between the line connecting the two feature points and the horizontal direction is zero.
15. The apparatus according to claim 13, wherein the preprocessing unit is specifically configured to calculate the ratio between the distance of the two feature points and a preset first standard distance according to the left-eye feature point coordinates and the right-eye feature point coordinates.
16. The apparatus according to claim 13, wherein the preprocessing unit is specifically configured to crop, according to the positions of the two feature points in the user image, the rotated and scaled user image to the standard region size with the two feature points located at standard positions in the standard region.
17. The apparatus according to claim 12, wherein the image quality evaluation unit is also configured to judge whether the user image is used for registration or for authentication; if it is used for registration, judging whether the image feature is within the standard threshold range is specifically judging whether the image feature is within a first standard threshold range for registration, and if it is used for authentication, judging whether the image feature is within the standard threshold range is specifically judging whether the image feature is within a second standard threshold range for authentication.
18. The apparatus according to claim 12, further comprising an illumination processing unit configured to apply a gamma transform to the cropped image and filter out its high- and low-frequency components with a filter to obtain an updated user image.
19. The apparatus according to claim 12, further comprising an interference removal unit configured to cover the cropped user image with a standard face template and intercept the part of the covered user image that lies within the effective region of the preset template as the updated, cropped user image.
20. The apparatus according to any one of claims 12 to 19, further comprising a feature extraction unit specifically configured to extract the Gabor features, LBP features and HOG features of the cropped user image as the face feature information.
21. The apparatus according to claim 20, wherein the feature extraction unit is also configured to select, using the AdaBoost algorithm, Gabor features, LBP features and HOG features as the face feature information.
22. The apparatus according to claim 20, further comprising an updating unit configured to obtain user registration information, judge whether the state of the user registration information satisfies an update condition, and, if it is satisfied, replace the original face feature information contained in the user registration information with the face feature information of the cropped user image.
CN201210592215.8A 2012-12-30 2012-12-30 A kind of method and apparatus used in recognition of face Active CN103914676B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210592215.8A CN103914676B (en) 2012-12-30 2012-12-30 A kind of method and apparatus used in recognition of face


Publications (2)

Publication Number Publication Date
CN103914676A CN103914676A (en) 2014-07-09
CN103914676B true CN103914676B (en) 2017-08-25






Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20190626

Address after: 311215 Room 102, 6 Blocks, C District, Qianjiang Century Park, Xiaoshan District, Hangzhou City, Zhejiang Province

Patentee after: Hangzhou Yixian Advanced Technology Co., Ltd.

Address before: 310013 Room 604-605, 6th floor, 18 Jiaogong Road, Xihu District, Hangzhou City, Zhejiang Province

Patentee before: Hangzhou Langhe Technology Limited

TR01 Transfer of patent right