CN106203248A - Method and apparatus for face recognition - Google Patents

Method and apparatus for face recognition

Info

Publication number
CN106203248A
Authority
CN
China
Prior art keywords
face
facial
model
user
shape
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510441052.7A
Other languages
Chinese (zh)
Inventor
金亭培
李宣旼
黄英珪
韩在濬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority to CN202210474282.3A priority Critical patent/CN114627543A/en
Publication of CN106203248A publication Critical patent/CN106203248A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/164 - Detection; Localisation; Normalisation using holistic features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects
    • G06V20/64 - Three-dimensional objects
    • G06V20/647 - Three-dimensional objects by matching two-dimensional images to three-dimensional objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

A method and apparatus for face recognition are provided. At least one example embodiment discloses a face recognition apparatus configured to: obtain a two-dimensional (2D) input image including a face region of a user; detect facial feature points from the 2D input image; adjust a pose of a stored three-dimensional (3D) face model based on the detected facial feature points; generate a 2D projection image from the adjusted 3D face model; perform face recognition based on the face region in the 2D input image and the face region in the 2D projection image; and output a result of the face recognition.

Description

Method and apparatus for face recognition
This application claims the benefit of priority of Korean Patent Application No. 10-2014-0118828, filed with the Korean Intellectual Property Office on September 5, 2014, and Korean Patent Application No. 10-2015-0001850, filed with the Korean Intellectual Property Office on January 7, 2015, the entire contents of each of which are incorporated herein by reference.
Technical field
At least some example embodiments relate to face recognition for identifying the face of a user.
Background
Unlike identification technologies that require a user to perform a particular motion or action (for example, fingerprint recognition and iris recognition), face recognition is considered a convenient and competitive biometric technology that can verify a target without contacting the target. Owing to its convenience and effectiveness, face recognition has been widely applied in various fields, such as security systems, mobile authentication, and multimedia search.
Summary of the invention
At least some example embodiments relate to a face recognition method.
In at least some example embodiments, the face recognition method may include: detecting facial feature points from a two-dimensional (2D) input image; adjusting a stored three-dimensional (3D) face model based on the detected facial feature points; generating a 2D projection image from the adjusted 3D face model; and performing face recognition based on the 2D input image and the 2D projection image.
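The generate-a-2D-projection step above can be illustrated with a short sketch. The patent does not specify a camera model, so a weak-perspective projection is assumed here; the function names and the toy three-landmark "model" are illustrative, not from the patent.

```python
import numpy as np

def rotation_y(theta):
    """Rotation matrix about the vertical (y) axis, angle in radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def project_to_2d(points_3d, R, scale=1.0, t=(0.0, 0.0)):
    """Weak-perspective projection: rotate, drop the depth axis, scale, translate."""
    rotated = points_3d @ R.T
    return scale * rotated[:, :2] + np.asarray(t)

# A toy "3D face model": two eyes in one plane, nose tip protruding toward the camera.
model = np.array([[-1.0, 0.5, 0.0],   # left eye
                  [ 1.0, 0.5, 0.0],   # right eye
                  [ 0.0, 0.0, 1.0]])  # nose tip

frontal = project_to_2d(model, rotation_y(0.0))          # frontal pose
turned = project_to_2d(model, rotation_y(np.pi / 6))     # head turned ~30 degrees
```

With the head turned, the projected nose tip shifts horizontally while the frontal projection keeps it centered, which is exactly the pose-dependent appearance the adjusted 3D model is meant to reproduce.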
The adjusting of the stored 3D face model may include adjusting a facial pose and a facial expression of the stored 3D face model by mapping the detected facial feature points to the stored 3D face model.
The stored 3D face model may include a 3D shape model and a 3D texture model. The 3D shape model and the 3D texture model may be 3D models deformable in facial pose and facial expression.
The adjusting of the stored 3D face model may include: adjusting the stored 3D shape model based on the facial feature points detected from the 2D input image; and adjusting the 3D texture model based on parameter information of the adjusted 3D shape model.
The adjusting of the stored 3D shape model may include adjusting a pose parameter and an expression parameter of the 3D shape model based on the detected facial feature points.
The generating of the 2D projection image may include generating the 2D projection image from the adjusted 3D texture model.
The stored 3D face model may be generated based on feature points detected from a plurality of 2D face images, and the 2D face images may be images obtained by capturing the face of the user from a plurality of viewpoints.
The performing of the face recognition may include: determining a degree of similarity between the 2D input image and the 2D projection image; and outputting a result of the face recognition based on whether the degree of similarity satisfies a condition.
The detecting of the facial feature points may include: extracting a face region from the 2D input image; and detecting facial feature points of at least one of an eyebrow, an eye, a nose, a lip, a chin, an ear, and a facial contour from the extracted face region.
At least other example embodiments relate to a method of generating a three-dimensional (3D) face model.
In some example embodiments, the method may include: obtaining 2D face images of a user from a plurality of viewpoints; detecting facial feature points from the 2D face images; generating a deformable 3D shape model and a deformable 3D texture model based on the detected facial feature points; and storing the deformable 3D shape model and the deformable 3D texture model as a 3D face model of the user.
The generating may include generating the deformable 3D texture model based on the deformable 3D shape model and texture information of at least one of the 2D face images.
The generating may include: determining parameters for mapping the detected facial feature points to feature points of a 3D standard model; and generating the deformable 3D shape model by applying the determined parameters to the 3D standard model.
At least other example embodiments relate to a method of generating a 3D face model.
In at least some example embodiments, the method may include: obtaining 2D face images including the face of a user and orientation data of the 2D face images; determining information about matching points between the 2D face images; generating 3D data of the face of the user based on the orientation data of the 2D face images and the information about the matching points; and converting a 3D standard model into a 3D face model of the user using the 3D data.
The obtaining may include obtaining the orientation data of the 2D face images using motion data sensed by a motion sensor.
The 3D data of the face of the user may be a set of 3D points constituting a shape of the face of the user.
The converting may include converting the 3D standard model into the 3D face model of the user by matching the 3D standard model to the 3D data of the face of the user.
At least other example embodiments relate to a face recognition apparatus.
In at least some example embodiments, the face recognition apparatus may include: an image acquirer configured to obtain a 2D input image including a face region of a user; a 3D face model processor configured to adjust a facial pose of a stored 3D face model based on a facial pose of the user appearing in the 2D input image, and to generate a 2D projection image from the adjusted 3D face model; and a face recognizer configured to perform face recognition based on the 2D input image and the 2D projection image.
The 3D face model processor may include: a region detector configured to detect a face region from the 2D input image; and a feature point detector configured to detect facial feature points from the detected face region.
The 3D face model processor may adjust the facial pose of the stored 3D face model by matching the detected facial feature points to feature points of the stored 3D face model.
The 3D face model processor may adjust a facial pose of the 3D shape model based on the facial pose of the user appearing in the 2D input image, and adjust the 3D texture model based on parameter information of the adjusted 3D shape model.
The face recognition apparatus may further include a display configured to display at least one of the 2D input image, the 2D projection image, and a result of the face recognition.
At least other example embodiments relate to an apparatus for generating a 3D face model.
In at least some example embodiments, the apparatus may include: an image acquirer configured to obtain 2D face images of a user from a plurality of viewpoints; a feature point detector configured to detect facial feature points from the 2D face images; a 3D face model generator configured to generate a deformable 3D shape model and a deformable 3D texture model based on the detected facial feature points; and a 3D face model registerer configured to store the deformable 3D shape model and the deformable 3D texture model as a 3D face model of the user.
The 3D face model generator may include: a 3D shape model generator configured to generate the deformable 3D shape model of the face of the user based on the detected facial feature points; and a 3D texture model generator configured to generate the deformable 3D texture model based on the deformable 3D shape model and texture information of at least one of the 2D face images.
At least other example embodiments relate to an apparatus for generating a 3D face model.
In at least some example embodiments, the apparatus may include: an image acquirer configured to obtain 2D face images of a user from a plurality of viewpoints; a motion sensing unit configured to obtain orientation data of the 2D face images; a 3D face model generator configured to generate 3D data of the face of the user based on information about matching points between the 2D face images and the orientation data of the 2D face images, and to convert a 3D standard model into a 3D face model of the user using the 3D data; and a 3D face model registerer configured to store the 3D face model of the user.
Additional aspects of the example embodiments will be set forth in part in the description that follows, will in part be apparent from the description, or may be learned by practice of the disclosure.
Brief Description of the Drawings
These and/or other aspects will become apparent and more readily appreciated from the following description of example embodiments, taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a diagram illustrating an overall operation of a face recognition system according to at least one example embodiment;
Fig. 2 is a diagram illustrating a configuration of a three-dimensional (3D) face model generating apparatus according to at least one example embodiment;
Fig. 3 is a diagram illustrating a configuration of a face recognition apparatus according to at least one example embodiment;
Fig. 4 illustrates a process of detecting feature points from a two-dimensional (2D) face image according to at least one example embodiment;
Fig. 5 illustrates a process of generating a 3D face model using a 3D standard model according to at least one example embodiment;
Fig. 6 illustrates a process of adjusting a 3D face model based on feature points detected from a 2D input image according to at least one example embodiment;
Fig. 7 illustrates a process of performing face recognition by comparing a 2D input image with a 2D projection image according to at least one example embodiment;
Fig. 8 is a flowchart illustrating a 3D face model generating method according to at least one example embodiment;
Fig. 9 is a flowchart illustrating a face recognition method according to at least one example embodiment;
Fig. 10 is a diagram illustrating another configuration of a 3D face model generating apparatus according to at least one example embodiment;
Fig. 11 is a flowchart illustrating another 3D face model generating method according to at least one example embodiment.
Detailed Description
Hereinafter, some example embodiments will be described in detail with reference to the accompanying drawings. Regarding the reference numerals assigned to the elements in the drawings, it should be noted that the same elements will be designated by the same reference numerals wherever possible, even though they are shown in different drawings. Also, in the description of the example embodiments, detailed descriptions of well-known related structures or functions will be omitted when it is deemed that such descriptions would cause an ambiguous interpretation of the present disclosure.
It should be understood, however, that there is no intent to limit this disclosure to the particular example embodiments disclosed. On the contrary, example embodiments are to cover all modifications, equivalents, and alternatives falling within the scope of the example embodiments. Like numbers refer to like elements throughout the description of the figures.
In addition, terms such as first, second, A, B, (a), (b), and the like may be used herein to describe components. Each of these terms is not used to define an essence, order, or sequence of a corresponding component but is used merely to distinguish the corresponding component from other components.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Unless specifically stated otherwise, or as is apparent from the discussion, terms such as "processing," "computing," "calculating," "determining," or "displaying" refer to the actions and processes of a computer system or similar electronic computing device that manipulates and transforms data represented as physical quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission, or display devices.
It should also be noted that, in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may in fact be executed substantially concurrently, or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
Various example embodiments will now be described more fully with reference to the accompanying drawings in which some example embodiments are shown. In the drawings, the thicknesses of layers and regions may be exaggerated for clarity.
Fig. 1 is a diagram illustrating an overall operation of a face recognition system 100 according to at least one example embodiment. The face recognition system 100 may identify a face of a user from a two-dimensional (2D) input image for face recognition. The face recognition system 100 may extract and identify the face of the user appearing in the 2D input image by analyzing the 2D input image. The face recognition system 100 may be used in various applications, for example, security surveillance systems, mobile authentication, and multimedia data search.
The face recognition system 100 may register a three-dimensional (3D) face model of the user and perform face recognition using the registered 3D face model. The 3D face model may be a deformable 3D model that deforms according to a facial pose or a facial expression of the user appearing in the 2D input image. For example, when the facial pose appearing in the 2D input image faces leftward, the face recognition system 100 may rotate the registered 3D face model to face leftward. Also, the face recognition system 100 may adjust a facial expression of the 3D face model based on the facial expression of the user appearing in the 2D input image. For example, the face recognition system 100 may analyze the facial expression of the user based on facial feature points detected from the 2D input image, and adjust the shapes of the eyes, lips, and nose of the 3D face model so that the adjusted shapes correspond to the analyzed facial expression.
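The expression adjustment just described presupposes a deformable model. One common realization, not mandated by the patent, is a linear blendshape model, where the deformed shape is the neutral shape plus weighted expression bases. The "smile" basis and the mouth landmarks below are made up for illustration.

```python
import numpy as np

# Toy deformable model: three mouth landmarks (2D for brevity) plus one
# illustrative "smile" blendshape basis. A real model would be 3D with
# many vertices and several expression bases.
neutral = np.array([[-1.0, 0.0],    # left mouth corner
                    [ 1.0, 0.0],    # right mouth corner
                    [ 0.0, -0.3]])  # lower lip
smile_basis = np.array([[-0.2, 0.3],
                        [ 0.2, 0.3],
                        [ 0.0, 0.1]])

def deform(weight):
    """Linear blendshape deformation: neutral + weight * basis."""
    return neutral + weight * smile_basis

rest = deform(0.0)      # neutral expression
smiling = deform(1.0)   # mouth corners pulled outward and upward
```

The expression parameter mentioned in the claims would correspond to `weight` here: fitting it so the deformed landmarks match the detected ones makes the model's expression follow the user's.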
The face recognition system 100 may generate a 2D projection image from the registered 3D face model, and perform face recognition by comparing the 2D projection image with the 2D input image. Face recognition may be performed in real time using 2D images. The 2D projection image refers to a 2D image obtained by projecting the 3D face model onto a plane. For example, the 2D projection image may be a 2D image obtained by projecting the 3D face model matched to the 2D input image at a viewpoint identical or similar to the viewpoint of the 2D input image. Accordingly, the facial pose appearing in the 2D projection image may be identical or similar to the facial pose of the user appearing in the 2D input image. Face recognition may be performed by matching the pre-stored 3D face model to the facial pose appearing in the 2D input image and comparing the resulting 2D projection image with the 2D input image. Even when the facial pose of the user appearing in the 2D input image is not frontal, an improved recognition rate robust to pose variation may be achieved by matching the 3D face model to the facial pose appearing in the 2D input image before performing face recognition.
Hereinafter, the operation of the face recognition system 100 will be described in detail. The face recognition performed by the face recognition system 100 may include a process 110 of registering a 3D face model of a user and a process 120 of identifying the face of the user from a 2D input image using the registered 3D face model.
Referring to Fig. 1, in operation 130 of process 110, the face recognition system 100 obtains a plurality of 2D face images of a user for face recognition. The 2D face images may include images of the face of the user captured from respective viewpoints. For example, the face recognition system 100 may obtain 2D face images captured by a camera from the front and the side of the face of the user. A 2D face image may refer to an image including a face region of the user, and may include the entire region of the face of the user. In operation 140 of process 110, the face recognition system 100 detects facial feature points, for example, landmarks, from the 2D face images. For example, the face recognition system 100 may detect, from the 2D face images of the user, feature points including an eyebrow, an eye, a nose, a lip, a chin, hair, an ear, and/or a facial contour.
In operation 150 of process 110, the face recognition system 100 personalizes a predetermined and/or selected 3D standard model by applying the feature points extracted from the 2D face images to the 3D standard model. For example, the 3D standard model may be a deformable 3D shape model generated based on 3D face training data. The 3D standard model may include a 3D shape and a 3D texture, and parameters representing the 3D shape and the 3D texture. The face recognition system 100 may generate a 3D face model of the face of the user by matching feature points of the 3D standard model with the feature points extracted from the 2D face images. The generated 3D face model may be registered and stored as the 3D face model of the user appearing in the 2D face images.
Alternatively, the face recognition system 100 may generate the 3D face model of the user using the 2D face images for face recognition, motion data of the 2D face images, and the 3D standard model. The face recognition system 100 may obtain the 2D face images, obtain orientation data of the 2D face images through a motion sensor, and generate 3D data of the face of the user based on the orientation data of the 2D face images and matching information. The 3D data of the face of the user may be a set of 3D points constituting the shape of the face of the user. The face recognition system 100 may generate the 3D face model of the user by matching the 3D data of the face of the user to the 3D standard model. The generated 3D face model may be stored and registered as the 3D face model of the user appearing in the 2D face images.
In process 120, the face recognition system 100 obtains, through a camera, a 2D input image including a face region of a user. Although the face recognition system 100 may perform face recognition using a single 2D input image, example embodiments are not limited thereto. In operation 160 of process 120, the face recognition system 100 adjusts the pre-stored 3D face model of the user based on the facial pose or the facial expression appearing in the 2D input image. The face recognition system 100 may adjust the pose of the 3D face model to match the facial pose appearing in the 2D input image, and adjust the expression of the 3D face model to match the facial expression appearing in the 2D input image.
The face recognition system 100 generates a 2D projection image from the 3D face model matched to the 2D input image. In operation 170 of process 120, the face recognition system 100 performs face recognition by comparing the 2D input image with the 2D projection image, and outputs a result of the face recognition. For example, the face recognition system 100 may determine a degree of similarity between the face region in the 2D input image and the face region in the 2D projection image, output a result of "face recognition succeeded" when the degree of similarity satisfies a predetermined and/or desired condition, and output "face recognition failed" otherwise.
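The comparison in operation 170 can be sketched as follows. The patent does not name a similarity measure or a threshold, so normalized cross-correlation and a threshold of 0.8 are assumptions made for illustration.

```python
import numpy as np

def similarity(region_a, region_b):
    """Normalized cross-correlation between two equal-size face regions (1.0 = identical up to brightness/contrast)."""
    a = region_a.astype(float).ravel()
    b = region_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recognize(input_region, projection_region, threshold=0.8):
    """Output the recognition result based on whether the similarity satisfies the condition."""
    if similarity(input_region, projection_region) >= threshold:
        return "face recognition succeeded"
    return "face recognition failed"

# Tiny stand-in for a cropped face region.
patch = np.array([[10.0, 20.0],
                  [30.0, 40.0]])
```

In practice the compared regions would be aligned crops of the 2D input image and the 2D projection image, and a learned matcher could replace the correlation, but the success/failure decision against a condition is the same.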
The face recognition system 100 may include any one of a 3D face model generating apparatus (for example, the 3D face model generating apparatus 200 of Fig. 2 or the 3D face model generating apparatus 1000 of Fig. 10) and a face recognition apparatus (for example, the face recognition apparatus 300 of Fig. 3). The process 110 of registering the 3D face model of the user may be performed by the 3D face model generating apparatus 200 or the 3D face model generating apparatus 1000. The process 120 of identifying the face of the user from the 2D input image may be performed by the face recognition apparatus 300.
Fig. 2 is a diagram illustrating a configuration of a 3D face model generating apparatus 200 according to at least one example embodiment. The 3D face model generating apparatus 200 may generate a 3D face model of the face of a user from a plurality of 2D face images for face recognition. The 3D face model generating apparatus 200 may generate a 3D shape model and a 3D texture model as the 3D face model, and register the generated 3D shape model and 3D texture model as the 3D face model of the user. Referring to Fig. 2, the 3D face model generating apparatus 200 includes an image acquirer 210, a feature point detector 220, a 3D face model generator 230, and a 3D face model registerer 260. The image acquirer 210, the feature point detector 220, the 3D face model generator 230, and the 3D face model registerer 260 may be implemented using hardware components and/or hardware components executing software components, described below.
When at least one of the image acquirer 210, the feature point detector 220, the 3D face model generator 230, and the 3D face model registerer 260 is a hardware component executing software, the hardware component is configured to execute software stored in a memory (a non-transitory computer-readable medium) 270 to perform the function of at least one of the image acquirer 210, the feature point detector 220, the 3D face model generator 230, and the 3D face model registerer 260.
Although the memory 270 is shown outside the 3D face model generating apparatus 200, the memory 270 may be included in the 3D face model generating apparatus 200.
The image acquirer 210 obtains 2D face images of a user for face recognition. The 2D face images may include face regions of the user in various facial poses. For example, the image acquirer 210 obtains 2D face images captured by a camera from a plurality of viewpoints, such as a frontal image or a side image. Information about the overall 2D shape of the face of the user and texture information of the face of the user may be extracted from the frontal image, and details of the shape of the face of the user may be extracted from the side image. For example, the 3D face model generating apparatus 200 may determine information about the 3D shape of the face of the user by comparing the face region of the user in the frontal image with the face region of the user in the side image. According to example embodiments, the image acquirer 210 may capture the 2D face images through a camera for registration of the 3D face model, and may store the 2D face images captured by the camera in the memory 270.
The feature point detector 220 detects a face region from the 2D face images and detects feature points or landmarks from the face region. For example, the feature point detector 220 may detect, from the 2D face images, feature points located on the contours of an eyebrow, an eye, a nose, a lip, and/or a chin. According to example embodiments, the feature point detector 220 may detect facial feature points from the 2D face images using an active shape model (ASM), an active appearance model (AAM), or a supervised descent method (SDM).
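Of the detectors mentioned, SDM is the most compact to sketch: it trains a cascade of linear regressors mapping image features at the current landmark estimate to an update toward the true landmarks. The sketch below is a heavily simplified single-stage version that uses the landmark estimate itself as the "feature" so it stays self-contained; a real SDM would extract SIFT-like descriptors around each landmark, and all data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: one true landmark configuration (flattened coordinates)
# and perturbed initial estimates around it.
x_true = np.array([0.0, 1.0, 2.0, 3.0])
X0 = x_true + rng.normal(scale=0.5, size=(200, 4))

# Learn a linear regressor from features at the current estimate to the
# ideal update (x_true - estimate), via least squares.
Phi = np.hstack([X0, np.ones((200, 1))])   # "features" + bias term
targets = x_true - X0
R, *_ = np.linalg.lstsq(Phi, targets, rcond=None)

def sdm_step(x):
    """One supervised-descent update: x <- x + phi(x) @ R."""
    return x + np.append(x, 1.0) @ R

x_init = x_true + np.array([0.4, -0.4, 0.3, -0.3])
x_refined = sdm_step(x_init)
```

A real detector chains several such regression stages, re-extracting features after each update, which is what gives SDM its accuracy on faces.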
The 3D face model generator 230 generates a 3D face model of the face of the user based on the feature points detected from the 2D face images. A deformable 3D shape model and a deformable 3D texture model of the face of the user may be generated as the 3D face model. The 3D face model generator 230 includes a 3D shape model generator 240 and a 3D texture model generator 250.
The 3D shape model generator 240 generates a 3D shape model of the face of the user using the 2D face images captured from different viewpoints. The 3D shape model refers to a 3D model that has a shape but no texture. The 3D shape model generator 240 generates the 3D shape model based on the facial feature points detected from the 2D face images. The 3D shape model generator 240 determines parameters for mapping the feature points detected from the 2D face images to feature points of a 3D standard model, and generates the 3D shape model by applying the determined parameters to the 3D standard model. For example, the 3D shape model generator 240 may generate the 3D shape model of the face of the user by matching the feature points of an eyebrow, an eye, a nose, a lip, and/or a chin detected from the 2D face images with the feature points of the 3D standard model.
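Determining parameters that map the standard model's feature points onto the detected ones can be posed as a least-squares fit. The sketch below fits a 2D affine transform in place of the patent's full 3D shape parameters, which is a deliberate simplification; the landmark coordinates are invented for the example.

```python
import numpy as np

# Toy standard-model landmarks (2D for brevity) and "detected" landmarks
# that happen to be a scaled, translated copy of the standard model.
standard = np.array([[0.0, 0.0],
                     [2.0, 0.0],
                     [1.0, 1.5]])
detected = 2.0 * standard + np.array([10.0, 5.0])

# Solve for affine parameters mapping standard landmarks onto detected ones
# in the least-squares sense: [x, y, 1] @ params ~= detected.
A = np.hstack([standard, np.ones((len(standard), 1))])
params, *_ = np.linalg.lstsq(A, detected, rcond=None)

fitted = A @ params  # standard model deformed by the estimated parameters
```

In the patent's setting the unknowns would instead be the pose and shape parameters of the deformable 3D standard model, fitted across feature points from several viewpoints, but the "minimize landmark residual over model parameters" structure is the same.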
Using the 2D face images captured from different viewpoints to generate the 3D shape model enables a more detailed 3D shape to be produced. When the 3D shape model is generated using only a frontal image obtained by capturing the face of the user from the front, 3D shapes in the 3D shape model, such as the height and shape of the nose and cheekbones, may not be easily determined. However, when the 3D shape model is generated using a plurality of 2D face images captured from different viewpoints, a more detailed 3D shape may be generated, because information about shapes such as the height of the nose and the cheekbones may additionally be considered.
The 3D texture model generator 250 generates a 3D texture model based on texture information extracted from at least one of the 2D face images and the 3D shape model. For example, the 3D texture model generator 250 may generate the 3D texture model by mapping a texture extracted from the frontal image to the 3D shape model. The 3D texture model refers to a model having both the shape and the texture of a 3D model. Compared to the 3D shape model, the 3D texture model may have a higher level of detail, and it includes the vertices of the 3D shape model. The 3D shape model and the 3D texture model may be 3D models that have shapes and deformable poses and expressions. The 3D shape model and the 3D texture model may have an identical pose and expression, and may indicate an identical or similar pose and expression with identical parameters.
The 3D face model registerer 260 registers and stores the 3D shape model and the 3D texture model as the 3D face model of the user. For example, when the user of the 2D face images obtained by the image acquirer 210 is "A", the 3D face model registerer 260 may register the 3D shape model and the 3D texture model generated for A as the 3D face model of A, and the memory 270 may store the 3D shape model and the 3D texture model of A.
Fig. 3 is the diagram of the configuration illustrating the face recognition device 300 according at least one example embodiment. Face recognition device 300 can use the 3D facial model of registration to perform for occurring in for face recognition 2D input picture in the face recognition of user.Face recognition device 300 can be by rotating 3D face Model makes 3D facial model have or phase identical with the facial pose of the user occurred in 2D input picture As facial pose, produce 2D projection picture.Face recognition device 300 can be by by 2D projection As being compared to perform face recognition with 2D input picture.Face recognition device 300 can be by stepping on The 3D facial model of note carries out mating and perform face and knows with the facial pose occurred in 2D input picture , the change not providing the attitude to user has the recognition algorithms of robustness.With reference to Fig. 3, face Portion identifies that equipment 300 includes image grabber 310,3D facial model processor 320 and face recognition device 350.3D facial model processor 320 includes face recognition detector 330 and feature point detector 340.
The image acquirer 310, the 3D facial model processor 320 (including the face area detector 330 and the feature point detector 340), and the face recognizer 350 may be implemented using hardware components and/or hardware components executing software components, as described below.
When at least one of the image acquirer 310, the 3D facial model processor 320 (including the face area detector 330 and the feature point detector 340), and the face recognizer 350 is a hardware component executing software, the hardware component may be configured to execute software stored in the memory 370 (a non-transitory computer-readable medium) to perform the functions of the at least one of the image acquirer 310, the 3D facial model processor 320 (including the face area detector 330 and the feature point detector 340), and the face recognizer 350.
Although the memory 370 is shown as a part of the face recognition apparatus 300, the memory 370 may be separate from the face recognition apparatus 300.
The image acquirer 310 obtains a 2D input image including the face area of a user for face recognition. The image acquirer 310 obtains, through a camera or the like, the 2D input image for identifying or authenticating the user. Although the face recognition apparatus 300 may perform the face recognition on the user using a single 2D input image, example embodiments are not limited thereto.
The face area detector 330 detects the face area of the user from the 2D input image. The face area detector 330 identifies the face area from the 2D input image using information on, for example, the luminance distribution of the 2D input image, the motion of an object, a color distribution, and the positions of the eyes, and extracts position information of the face area. For example, the face area detector 330 may detect the face area from the 2D input image using a Haar-based Adaboost cascade classifier, which is commonly used in the related art.
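A Haar-based cascade of the kind mentioned above evaluates rectangular Haar-like features in constant time per feature by precomputing an integral image. The sketch below illustrates only that building block, not the patent's detector: the function names and the two-rectangle feature layout are assumptions for illustration.

```python
import numpy as np

def integral_image(img):
    """Entry (y, x) holds the sum of all pixels in img[:y+1, :x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1+1, x0:x1+1] in O(1) using the integral image."""
    total = ii[y1, x1]
    if y0 > 0:
        total -= ii[y0 - 1, x1]
    if x0 > 0:
        total -= ii[y1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return total

def haar_two_rect_feature(ii, y0, x0, h, w):
    """A two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    left = rect_sum(ii, y0, x0, y0 + h - 1, x0 + half - 1)
    right = rect_sum(ii, y0, x0 + half, y0 + h - 1, x0 + w - 1)
    return left - right

img = np.arange(16.0).reshape(4, 4)
ii = integral_image(img)
feature = haar_two_rect_feature(ii, 0, 0, 4, 4)
```

An Adaboost cascade thresholds many such features in sequence, rejecting non-face windows early; only windows that pass every stage are reported as face areas.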
The feature point detector 340 detects facial feature points from the face area of the 2D input image. For example, the feature point detector 340 detects feature points of the eyebrows, eyes, nose, lips, chin, hair, ears, and/or facial contour from the face area. According to example embodiments, the feature point detector 340 may detect the facial feature points from the 2D input image using an active shape model (ASM), an active appearance model (AAM), or a supervised descent method (SDM).
The 3D facial model processor 320 adjusts a pre-stored 3D facial model based on the detected feature points. The 3D facial model processor 320 matches the 3D facial model to the 2D input image based on the detected feature points. Based on a result of the matching, the 3D facial model may be deformed to match the facial pose and expression appearing in the 2D input image. The 3D facial model processor 320 adjusts the pose and expression of the 3D facial model by mapping the feature points detected from the 2D input image to the 3D facial model. The 3D facial model may include a 3D shape model and a 3D texture model. The 3D shape model may be used for fast matching with the facial pose appearing in the 2D input image, and the 3D texture model may be used to generate a high-resolution 2D projection image.
The 3D facial model processor 320 adjusts the pose of the 3D shape model based on the pose appearing in the 2D input image. The 3D facial model processor 320 matches the pose of the 3D shape model to the pose appearing in the 2D input image by matching the feature points detected from the 2D input image with the feature points of the 3D shape model. The 3D facial model processor 320 adjusts a pose parameter and an expression parameter of the 3D shape model based on the feature points detected from the 2D input image.
In addition, the 3D facial model processor 320 adjusts the 3D texture model based on parameter information of the 3D shape model. The 3D facial model processor 320 applies, to the 3D texture model, the pose parameter and the expression parameter determined through the matching between the 3D shape model and the 2D input image. As a result, the 3D texture model may be adjusted to have a pose and an expression identical or similar to those of the 3D shape model. After adjusting the 3D texture model, the 3D facial model processor 320 may generate a 2D projection image by projecting the adjusted 3D texture model onto a plane.
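The projection step can be illustrated by rotating the model's vertices and dropping the depth coordinate, i.e., an orthographic projection onto the image plane. This is a hedged toy sketch: a real system would also rasterize the textured triangles and may use a perspective camera, neither of which the patent text specifies here.

```python
import numpy as np

def yaw_rotation(theta):
    """Rotation about the vertical (y) axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c, 0., s],
                     [0., 1., 0.],
                     [-s, 0., c]])

def project_vertices(vertices, rotation):
    """Rotate (n, 3) model vertices and drop the depth (z) coordinate:
    an orthographic projection onto the image plane."""
    rotated = vertices @ rotation.T
    return rotated[:, :2]

vertices = np.array([[0., 0., 1.],
                     [1., 0., 0.]])
frontal = project_vertices(vertices, yaw_rotation(0.0))
turned = project_vertices(vertices, yaw_rotation(np.pi / 2))
```

With the pose parameter realized as a rotation, the same vertices land at different 2D positions, which is why the projection can be made to mimic the side-facing pose of the input image.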
The face recognizer 350 performs the face recognition by comparing the 2D projection image with the 2D input image. The face recognizer 350 performs the face recognition based on a degree of similarity between the face area appearing in the 2D input image and the face area appearing in the 2D projection image. The face recognizer 350 determines the degree of similarity between the 2D input image and the 2D projection image, and outputs a result of the face recognition based on whether the determined degree of similarity satisfies a predetermined and/or desired condition.
The face recognizer 350 may determine the degree of similarity between the 2D input image and the 2D projection image using a feature value determination method commonly used in the field of face recognition. For example, the face recognizer 350 may use feature extraction filters, such as a Gabor filter, a local binary pattern (LBP), a histogram of oriented gradients (HOG), principal component analysis (PCA), and linear discriminant analysis (LDA), to determine the degree of similarity between the 2D input image and the 2D projection image. A Gabor filter refers to a multi-filter that extracts features from an image using filters of various sizes and angles. An LBP refers to a filter that extracts, as a feature, the difference between a current pixel and its neighboring pixels. According to example embodiments, the face recognizer 350 may divide the face areas appearing in the 2D input image and the 2D projection image into units of a predetermined and/or selected size, and calculate a histogram associated with the LBP for each unit, for example, a histogram of the LBP index values included in the unit. The face recognizer 350 may determine a vector obtained by linearly concatenating the calculated histograms as a final feature value, and determine the degree of similarity between the 2D input image and the 2D projection image by comparing the final feature value of the 2D input image with the final feature value of the 2D projection image.
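The LBP-histogram feature described above can be sketched as follows. This is a hedged, simplified illustration: it uses an 8-neighbor LBP without interpolation, a fixed cell size, and cosine similarity, none of which are values taken from the patent.

```python
import numpy as np

def lbp_codes(gray):
    """8-neighbor local binary pattern for each interior pixel:
    each neighbor >= center contributes one bit of an 8-bit code."""
    c = gray[1:-1, 1:-1]
    neighbors = [gray[0:-2, 0:-2], gray[0:-2, 1:-1], gray[0:-2, 2:],
                 gray[1:-1, 2:],   gray[2:, 2:],     gray[2:, 1:-1],
                 gray[2:, 0:-2],   gray[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbors):
        codes |= ((n >= c).astype(np.uint8) << bit)
    return codes

def lbp_feature(gray, cell=8):
    """Concatenate per-cell histograms of LBP codes into one feature vector."""
    codes = lbp_codes(gray)
    h, w = codes.shape
    hists = []
    for y in range(0, h - h % cell, cell):
        for x in range(0, w - w % cell, cell):
            hist, _ = np.histogram(codes[y:y+cell, x:x+cell],
                                   bins=256, range=(0, 256))
            hists.append(hist)
    return np.concatenate(hists).astype(float)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
face = rng.integers(0, 256, (34, 34)).astype(np.int32)
f1 = lbp_feature(face)
```

Comparing the final feature value of the input image with that of the projection image then reduces to one vector-similarity computation.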
According to example embodiments, the face recognition apparatus 300 further includes a display 360. The display 360 displays the 2D input image, the 2D projection image, and/or the result of the face recognition. When the user determines, based on the displayed 2D input image, whether the face of the user has been properly captured, or when the display 360 shows the final face recognition result as a failure, the user may capture the face again, and the face recognition apparatus 300 may re-perform the face recognition on the newly captured 2D input image.
Fig. 4 illustrates a process of detecting feature points from 2D face images according to at least one example embodiment. Referring to Fig. 4, an image 420 is a 2D face image obtained by a 3D facial model generation apparatus (e.g., 200 and 1000) by capturing the face of a user from the front, and images 410 and 430 are 2D face images obtained by the 3D facial model generation apparatus (e.g., 200 and 1000) by capturing the face of the user from the sides. The 3D facial model generation apparatus (e.g., 200 and 1000) may extract, from the image 420, information on the overall 2D shape of the face of the user and texture information of the face of the user. More detailed information on the shape of the face may be extracted from the images 410 and 430. For example, a base model of the face of the user may be set based on the facial shape extracted from the image 420, and the 3D facial model generation apparatus (e.g., 200 and 1000) may determine the 3D shape of the base model based on the facial shapes extracted from the images 410 and 430.
The feature point detector 220 of the 3D facial model generation apparatus 200 of Fig. 2 may detect facial feature points from the 2D face images captured from the multiple viewpoints (e.g., the images 410, 420, and 430). A facial feature point refers to a feature point located in a contour area of, for example, an eyebrow, an eye, a nose, a lip, or a chin. The feature point detector 220 may detect the facial feature points from the images 410, 420, and 430 using an ASM, an AAM, or an SDM, which are commonly used in the related art. The initialization of the pose, scale, or position of the ASM, AAM, or SDM may be performed based on a result of face detection.
An image 440 is a result image in which feature points 444 are detected in a face area 442 of the image 410. An image 450 is a result image in which feature points 454 are detected in a face area 452 of the image 420. Similarly, an image 460 is a result image in which feature points 464 are detected in a face area 462 of the image 430.
Fig. 5 illustrates a process of generating a 3D facial model using a 3D standard model according to at least one example embodiment. Referring to Fig. 5, a model 510 represents the 3D standard model. The 3D standard model, a deformable 3D shape model generated based on 3D face training data, may be a parametric model that represents the characteristics of the face of a user using an average shape and parameters.
As shown in Equation 1, the 3D standard model may include an average shape and a shape variation. The shape variation indicates a weighted sum of shape parameters and shape vectors.

[Equation 1]

$S = \bar{S}^0 + \sum_{i} \bar{p}_i \bar{S}^i$

In Equation 1, $S$ represents the elements of the 3D shape configuring the 3D standard model, $\bar{S}^0$ represents the elements associated with the average shape of the 3D standard model, $\bar{S}^i$ represents a shape element corresponding to an index $i$, and $\bar{p}_i$ represents a shape parameter applied to the shape element corresponding to the index $i$.

As shown in Equation 2, $\bar{S}^0$ may include the coordinates of 3D points.

[Equation 2]

$\bar{S}^0 = (\bar{x}_0, \bar{y}_0, \bar{z}_0, \ldots, \bar{x}_v, \bar{y}_v, \bar{z}_v)^{T}$

In Equation 2, $v$ represents a variable indexing the 3D points (e.g., $\bar{x}_v$ and $\bar{y}_v$), and $T$ denotes a transpose.
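Equation 1 can be illustrated with a small numerical sketch. The mean shape, shape basis, and parameter values below are toy numbers chosen only to show the linear combination; a real 3D standard model would have thousands of vertices and many basis vectors.

```python
import numpy as np

def synthesize_shape(mean_shape, shape_basis, params):
    """Equation 1: S = S0_bar + sum_i p_i * Si_bar.
    mean_shape:  (3v,) flattened (x0, y0, z0, ..., xv, yv, zv)^T
    shape_basis: (k, 3v) one flattened shape element per row
    params:      (k,) shape parameters p_i"""
    return mean_shape + params @ shape_basis

# Toy model: 2 vertices (6 coordinates), 2 shape elements.
mean_shape = np.array([0., 0., 0., 1., 1., 1.])
shape_basis = np.array([[1., 0., 0., 0., 0., 0.],   # moves vertex 0 in x
                        [0., 0., 0., 0., 0., 2.]])  # moves vertex 1 in z
params = np.array([0.5, -0.25])
S = synthesize_shape(mean_shape, shape_basis, params)
```

Setting all parameters to zero recovers the average shape; varying them deforms the standard model toward an individual face.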
The 3D shape model generator 240 of the 3D facial model generation apparatus 200 of Fig. 2 may personalize the 3D standard model based on the 2D face images captured from the multiple viewpoints to register the face of the user. The 3D shape model generator 240 may determine parameters for matching the feature points included in the 3D standard model with the facial feature points detected from the 2D face images, and generate a 3D shape model of the face of the user by applying the determined parameters to the 3D standard model.
Referring to Fig. 5, models 520 and 530 are 3D shape models of the face of the user generated from the model 510, the 3D standard model. The model 520 represents the 3D shape model viewed from the front, and the model 530 represents the 3D shape model viewed from the side. The 3D shape model may have shape information without texture information, and may be used for high-speed matching with the 2D input image in a user authentication process.
The 3D texture model generator 250 of Fig. 2 may generate a 3D texture model by mapping the texture extracted from at least one 2D face image onto the surface of the 3D shape model. For example, mapping the texture onto the surface of the 3D shape model may represent adding the texture information extracted from the 2D face image captured from the front to the depth information obtained from the 3D shape model.
Models 540 and 550 are 3D texture models generated based on the 3D shape model. The model 540 represents the 3D texture model viewed from the front, and the model 550 represents the 3D texture model viewed from a diagonal direction. The 3D texture model may be a model including both shape information and texture information, and may be used to generate the 2D projection image in the user authentication process.
The 3D shape model and the 3D texture model are 3D models in which the facial shape indicating the unique characteristics of the user is fixed and the pose or expression is deformable. Compared to the 3D shape model, the 3D texture model may have a higher level of detail and include a greater number of vertices. The vertices included in the 3D shape model may be a subset of the vertices included in the 3D texture model. The 3D shape model and the 3D texture model may indicate identical or similar poses and expressions with identical parameters.
Fig. 6 illustrates a process of adjusting a 3D facial model based on feature points detected from a 2D input image according to at least one example embodiment. Referring to Fig. 6, an image 610 is a 2D input image input to a face recognition apparatus for face recognition or user authentication, and represents a facial pose image captured by a camera.
The face area detector 330 of the face recognition apparatus 300 of Fig. 3 may detect a face area from the 2D input image, and the feature point detector 340 of Fig. 3 may detect, in the detected face area, feature points located on the contours of the eyes, eyebrows, nose, lips, or chin. For example, the feature point detector 340 may detect the facial feature points from the 2D input image using an ASM, an AAM, or an SDM.
An image 620 is a result image obtained by detecting a face area 630 from the image 610 by the face area detector 330, and detecting feature points 640 in the face area 630 by the feature point detector 340.
The 3D facial model processor 320 may match the pre-registered and stored 3D shape model to the 2D input image. The 3D facial model processor 320 may adjust the parameters of the 3D shape model based on the facial feature points detected from the 2D input image to adjust the pose and expression. A model 650 is the pre-registered and stored 3D shape model of the face of the user, and a model 660 is the 3D shape model whose pose and expression have been adjusted based on the feature points 640 detected from the image 610. The 3D facial model processor 320 may adjust the pose of the pre-stored 3D shape model to be identical or similar to the facial pose appearing in the 2D input image. In the image 610, the 2D input image, the face of the user presents a pose turned to the side, and the 3D shape model whose pose has been adjusted by the 3D facial model processor 320 presents a side-turned pose identical or similar to the pose of the user in the image 610.
Fig. 7 illustrates a process of performing face recognition by comparing a 2D input image with a 2D projection image according to at least one example embodiment. The 3D facial model processor 320 of the face recognition apparatus 300 of Fig. 3 may adjust the pose parameter and the expression parameter of the 3D shape model based on the facial feature points detected from the 2D input image for face recognition. The 3D facial model processor 320 may apply the adjusted pose parameter and the adjusted expression parameter of the 3D shape model to the 3D texture model to adjust the pose and expression of the 3D texture model to be identical or similar to those of the 3D shape model. Subsequently, the 3D facial model processor 320 may generate a 2D projection image by projecting the 3D texture model onto an image plane. The face recognizer 350 may perform the face recognition based on the degree of similarity between the 2D input image and the 2D projection image, and output a result of the face recognition.
Referring to Fig. 7, an image 710 is the 2D input image for face recognition. An image 720 is a reference image to be compared with the image 710, the 2D input image, so that the face recognizer 350 performs the face recognition. A region 730 included in the image 720 represents a region reflecting the 2D projection image generated from the 3D texture model. For example, the region 730 is a face area obtained by projecting, onto the image plane, the texture model obtained by mapping a texture onto the 3D shape model 660 of Fig. 6. The face recognizer 350 may perform the face recognition by comparing the face area of the user appearing in the 2D input image with the face area appearing in the 2D projection image. Alternatively, the face recognizer 350 may perform the face recognition by comparing the entire region of the image 710, the 2D input image, with the entire region of the image 720, the result image obtained by reflecting the 2D projection image in the 2D input image.
Fig. 8 is a flowchart illustrating a 3D facial model generation method according to at least one example embodiment.
Referring to Fig. 8, in operation 810, a 3D facial model generation apparatus obtains 2D face images of a user captured by a camera from different viewpoints. The 2D face images may be used to register the face of the user. For example, the 2D face images may include images containing various facial poses, such as a frontal image and side images.
In operation 820, the 3D facial model generation apparatus detects facial feature points from the 2D face images. For example, the 3D facial model generation apparatus may detect facial feature points located on the contours of the eyebrows, eyes, nose, lips, chin, and the like from the 2D face images using an ASM, an AAM, or an SDM, which are well known in the related art.
In operation 830, the 3D facial model generation apparatus generates a 3D shape model based on the detected feature points. The 3D facial model generation apparatus may generate the 3D shape model by matching the feature points of the eyebrows, eyes, nose, lips, chin, and the like detected from the 2D face images with the feature points of the 3D standard model. The 3D facial model generation apparatus determines parameters for mapping the feature points detected from the 2D face images to the feature points of the 3D standard model, and generates the 3D shape model by applying the determined parameters to the 3D standard model.
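In the simplest setting, determining the parameters that map detected feature points onto the standard model is a linear least-squares problem over Equation 1 restricted to the landmark coordinates. The sketch below assumes the pose is already aligned and omits regularization; it is an illustration under those assumptions, not the patent's fitting procedure.

```python
import numpy as np

def fit_shape_params(landmarks_obs, mean_shape, shape_basis, landmark_idx):
    """Solve min_p || landmarks_obs - (mean + p @ basis)[landmark_idx] ||^2.
    landmark_idx selects the flattened coordinates that correspond to
    detected facial feature points."""
    A = shape_basis[:, landmark_idx].T            # (m, k)
    b = landmarks_obs - mean_shape[landmark_idx]  # (m,)
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params

# Toy check: generate observations from known parameters and recover them.
mean_shape = np.zeros(9)                          # 3 vertices, flattened
shape_basis = np.array([[1., 0., 0., 1., 0., 0., 1., 0., 0.],
                        [0., 1., 0., 0., 1., 0., 0., 1., 0.]])
true_params = np.array([0.7, -0.3])
landmark_idx = np.array([0, 1, 3, 4, 6, 7])       # x and y of each vertex
obs = (mean_shape + true_params @ shape_basis)[landmark_idx]
recovered = fit_shape_params(obs, mean_shape, shape_basis, landmark_idx)
```

Applying the recovered parameters to the full basis then yields the personalized 3D shape model at every vertex, not only at the landmarks.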
In operation 840, the 3D facial model generation apparatus generates a 3D texture model based on the 3D shape model and texture information extracted from the 2D face images. The 3D facial model generation apparatus may generate the 3D texture model of the face of the user by mapping the texture extracted from at least one 2D face image onto the 3D shape model. The 3D texture model, to which the parameters of the 3D shape model are applied, may have a pose and an expression identical or similar to those of the 3D shape model.
In operation 850, the 3D facial model generation apparatus registers and stores the 3D shape model and the 3D texture model as the 3D facial model of the user. The stored 3D shape model and 3D texture model may be used in a user authentication process to authenticate the user appearing in a 2D input image.
Fig. 9 is a flowchart illustrating a face recognition method according to at least one example embodiment.
Referring to Fig. 9, in operation 910, a face recognition apparatus detects facial feature points from a 2D input image for face recognition. The face recognition apparatus detects a face area from the 2D input image, and detects, in the detected face area, facial feature points located on the contours of the eyes, eyebrows, nose, chin, lips, and the like. For example, the face recognition apparatus may detect the face area from the 2D input image using a Haar-based Adaboost cascade classifier, and detect the facial feature points in the face area using an ASM, an AAM, or an SDM.
In operation 920, the face recognition apparatus adjusts the pre-registered 3D facial model of the user based on the feature points detected from the 2D input image. The face recognition apparatus may match the pre-registered 3D facial model to the 2D input image based on the feature points detected from the 2D input image. The face recognition apparatus may deform the 3D facial model so that the facial pose and expression of the 3D facial model match the facial pose and expression appearing in the 2D input image.
The 3D facial model may include a 3D shape model and a 3D texture model. The face recognition apparatus may adjust the pose of the 3D shape model based on the feature points detected from the 2D input image, and adjust the 3D texture model based on parameter information of the pose-adjusted 3D shape model. The face recognition apparatus may adjust the pose parameter and the expression parameter of the 3D shape model based on the feature points detected from the 2D input image, and may apply the adjusted parameters of the 3D shape model to the 3D texture model. As a result of applying the parameters, the 3D texture model may be adjusted to have a pose and an expression identical or similar to those of the 3D shape model.
In operation 930, the face recognition apparatus generates a 2D projection image from the 3D texture model. The face recognition apparatus generates the 2D projection image by projecting, onto a plane, the 3D texture model adjusted based on the 3D shape model in operation 920. The facial pose appearing in the 2D projection image may be identical or similar to the facial pose appearing in the 2D input image. For example, when the facial pose of the user appearing in the 2D input image is a side-facing pose, the 2D projection image generated through operations 910 to 930 may present a side-facing pose of the 3D texture model identical or similar to that of the 2D input image.
In operation 940, the face recognition apparatus performs face recognition by comparing the 2D input image with the 2D projection image. The face recognition apparatus performs the face recognition based on the degree of similarity between the face area appearing in the 2D input image and the face area appearing in the 2D projection image. The face recognition apparatus determines the degree of similarity between the 2D input image and the 2D projection image, and outputs a result of the face recognition based on whether the determined degree of similarity satisfies a predetermined and/or desired condition. For example, when the degree of similarity between the 2D input image and the 2D projection image satisfies the predetermined and/or desired condition, the face recognition apparatus may output a result of "face recognition succeeded", and otherwise output a result of "face recognition failed".
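The success/failure decision in operation 940 reduces to thresholding a similarity score computed over the final feature values. A minimal sketch follows; the cosine measure and the threshold value 0.5 are arbitrary placeholders for illustration, not values from the patent.

```python
import numpy as np

def recognize(feature_input, feature_projection, threshold=0.5):
    """Compare the final feature values of the 2D input image and the
    2D projection image; report success when the similarity clears
    the threshold."""
    a = np.asarray(feature_input, dtype=float)
    b = np.asarray(feature_projection, dtype=float)
    similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    result = ("face recognition succeeded" if similarity >= threshold
              else "face recognition failed")
    return result, similarity

msg_same, sim_same = recognize([1., 0., 1.], [1., 0., 1.])
msg_diff, sim_diff = recognize([1., 0.], [0., 1.])
```

In practice the threshold would be tuned on validation data to trade off false acceptances against false rejections.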
Fig. 10 is a diagram illustrating another example of a configuration of a 3D facial model generation apparatus 1000 according to at least one example embodiment. The 3D facial model generation apparatus 1000 may generate a 3D facial model of the face of a user from multiple 2D face images for face registration. The 3D facial model generation apparatus 1000 may generate the 3D facial model of the user using 2D face images captured from different directions, motion data on the 2D face images, and a 3D standard model. Referring to Fig. 10, the 3D facial model generation apparatus 1000 includes an image acquirer 1010, a motion sensing unit 1020, a 3D facial model generator 1030, and a 3D facial model registerer 1040.
The image acquirer 1010, the motion sensing unit 1020, the 3D facial model generator 1030, and the 3D facial model registerer 1040 may be implemented using hardware components and/or hardware components executing software components, as described below.
When at least one of the image acquirer 1010, the motion sensing unit 1020, the 3D facial model generator 1030, and the 3D facial model registerer 1040 is a hardware component executing software, the hardware component may be configured to execute software stored in the memory 1070 (a non-transitory computer-readable medium) to perform the functions of the at least one of the image acquirer 1010, the motion sensing unit 1020, the 3D facial model generator 1030, and the 3D facial model registerer 1040.
Although the memory 1070 is shown outside the 3D facial model generation apparatus 1000, the memory 1070 may be included in the 3D facial model generation apparatus 1000.
The image acquirer 1010 obtains 2D face images captured from different viewpoints for face registration. The image acquirer 1010 obtains 2D face images of the face of the user captured by a camera from different directions. For example, the image acquirer 1010 may obtain the 2D face images captured from different viewpoints, such as a frontal image and side images.
The motion sensing unit 1020 obtains orientation data of the 2D face images. The motion sensing unit 1020 determines the orientation data of the 2D face images using motion data sensed by various sensors. The orientation data of the 2D face images may include information on the direction in which each 2D face image was captured. For example, the motion sensing unit 1020 may determine the orientation data of each 2D face image using an inertial measurement unit (IMU), such as an accelerometer, a gyroscope, and/or a magnetometer.
For example, the user may capture the face of the user by rotating the camera in different directions, and the 2D face images captured from the respective viewpoints may be obtained as a result of the capturing. While the 2D face images are being captured, the motion sensing unit 1020 may calculate motion data, including, for example, changes in the speed, direction, roll, pitch, and yaw of the camera capturing the 2D face images, based on sensing signals output from the IMU, and determine the orientation data on the direction in which each 2D face image was captured.
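One way to illustrate turning IMU signals into a capture direction is to integrate gyroscope yaw-rate samples over time, giving a per-frame heading relative to the first image. This sketch ignores gyroscope bias, drift, and the accelerometer/magnetometer fusion a real motion sensing unit would use; the sample rate and rate values are invented for the example.

```python
import numpy as np

def integrate_yaw(yaw_rates, dt):
    """Cumulatively integrate yaw rate (rad/s) sampled every dt seconds
    to get the camera heading at each sample, starting from zero."""
    return np.cumsum(np.asarray(yaw_rates, dtype=float) * dt)

# Camera panned at a constant 0.5 rad/s for 2 s, sampled at 10 Hz.
dt = 0.1
rates = np.full(20, 0.5)
headings = integrate_yaw(rates, dt)
```

Each captured 2D face image can then be tagged with the heading at its capture time, which is the orientation data used downstream for matching and 3D reconstruction.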
The 3D facial model generator 1030 generates a 3D facial model of the user appearing in the 2D face images. The 3D facial model generator 1030 detects facial feature points, or landmarks, from the 2D face images. For example, the 3D facial model generator 1030 may detect, from the 2D face images, feature points located on the contours of the eyebrows, eyes, nose, lips, chin, and the like. The 3D facial model generator 1030 determines information on matching points between the 2D face images based on the facial feature points detected from the 2D face images.
The 3D facial model generator 1030 generates 3D data of the face of the user based on the information on the facial feature points detected from the 2D face images, the information on the matching points, and the orientation data of the 2D face images. For example, the 3D facial model generator 1030 may generate the 3D data of the face of the user using an existing stereo matching method. The 3D data of the face of the user may be a set of points configuring the shape or surface of the face of the user.
The 3D facial model generator 1030 converts the deformable 3D standard model into the 3D facial model of the user using the 3D data of the face of the user. The 3D facial model generator 1030 converts the 3D standard model into the 3D facial model of the user by matching the 3D standard model with the 3D data of the face of the user. The 3D facial model generator 1030 converts the 3D standard model into the 3D facial model of the user by matching the feature points of the 3D data with the feature points of the 3D standard model. The 3D facial model of the user may include a 3D shape model associated with the shape of the face of the user and/or a 3D texture model including texture information.
The 3D facial model registerer 1040 registers and stores the 3D facial model of the user generated by the 3D facial model generator 1030. The stored 3D facial model of the user may be used to recognize the face of the user, and the shape of the 3D facial model may be changed in the face recognition process.
Fig. 11 is a flowchart illustrating another 3D facial model generation method according to at least one example embodiment.
Referring to Fig. 11, in operation 1110, a 3D facial model generation apparatus obtains multiple 2D face images for face registration and orientation data of the 2D face images. The 3D facial model generation apparatus obtains 2D face images of a user captured by a camera from different viewpoints. The 3D facial model generation apparatus obtains 2D face images of the face of the user captured from different directions, for example, a frontal image and side images.
The 3D facial model generation apparatus obtains the orientation data of the 2D face images using motion data sensed by a motion sensor. For example, the 3D facial model generation apparatus may obtain the orientation data of each 2D face image using motion data sensed by an IMU including an accelerometer, a gyroscope, and/or a magnetometer. The orientation data of the 2D face images may include information on the direction in which each 2D face image was captured.
In operation 1120, the 3D facial model generation apparatus determines information on matching points between the 2D face images. The 3D facial model generation apparatus detects facial feature points from the 2D face images, and detects the matching points based on the detected feature points.
In operation 1130, the 3D facial model generation apparatus generates 3D data of the face of the user. For example, the 3D data of the face of the user may be a set of 3D points configuring the shape or surface of the face of the user, and may include multiple vertices. The 3D facial model generation apparatus generates the 3D data of the face of the user based on the information on the facial feature points detected from the 2D face images, the information on the matching points, and the orientation data of the 2D face images. The 3D facial model generation apparatus may generate the 3D data of the face of the user using an existing stereo matching method.
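Once stereo matching has paired a feature point across two views with known camera poses, its 3D position can be recovered by linear (DLT) triangulation. The camera matrices and point below are toy values; in the apparatus described above, they would come from the IMU-derived orientation data and camera calibration, so this is only an illustration of the triangulation step.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: find X minimizing the algebraic error
    of x1 ~ P1 X and x2 ~ P2 X (homogeneous). Returns Euclidean X."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Toy cameras: identity pose, and a unit translation along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])
X_true = np.array([0.5, 0.25, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_rec = triangulate(P1, P2, x1, x2)
```

Repeating this for every matched feature point yields the point set that forms the 3D data of the face.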
In operation 1140, the 3D facial model generation apparatus converts a 3D standard model into a 3D facial model of the user using the 3D data generated in operation 1130. The 3D facial model generation apparatus converts the 3D standard model into the 3D facial model of the user by matching the 3D standard model with the 3D data of the face of the user. The 3D facial model generation apparatus generates the 3D facial model of the user by matching the feature points of the 3D standard model with the feature points of the 3D data. The 3D facial model generation apparatus generates a 3D shape model and/or a 3D texture model as the 3D facial model of the user. The generated 3D facial model of the user may be stored and registered, and used to recognize the face of the user.
The units and/or modules described herein may be implemented using hardware components and/or hardware components executing software components. For example, the hardware components may include microphones, amplifiers, band-pass filters, audio-to-digital converters, and processing devices. A processing device may be implemented using one or more hardware devices configured to carry out and/or execute program code by performing arithmetic, logical, and input/output operations. The processing device may include a processor, a controller and an arithmetic logic unit, a digital signal processor, a microcomputer, a field programmable gate array, a programmable logic unit, a microprocessor, or any other device capable of responding to and executing instructions in a defined manner. The processing device may run an operating system (OS) and one or more software applications that run on the OS. The processing device may also access, store, manipulate, process, and create data in response to execution of the software. For purposes of simplicity, the processing device is described in the singular; however, one skilled in the art will appreciate that a processing device may include multiple processing elements and multiple types of processing elements. For example, a processing device may include multiple processors, or a processor and a controller. In addition, different processing configurations are possible, such as parallel processors.
The software may include a computer program, a piece of code, an instruction, or some combination thereof, to independently or collectively instruct and/or configure the processing device to operate as desired, thereby transforming the processing device into a special-purpose processor. The software and data may be embodied permanently or temporarily in any type of machine, component, physical or virtual equipment, or computer storage medium or device, or in a propagated signal wave capable of providing instructions or data to, or being interpreted by, the processing device. The software may also be distributed over network-coupled computer systems so that the software is stored and executed in a distributed fashion. The software and data may be stored by one or more non-transitory computer-readable recording media.
Method according to above-mentioned example embodiment may be recorded in and includes for implementing above-mentioned example enforcement In the non-transitory computer-readable medium of the programmed instruction of the various operations of example.Medium may also include individually Programmed instruction, data file, data structure etc. or combinations thereof.Record program on medium Instruction can for example embodiment specialized designs and structure, or can be to computer software With available known in for the technical staff in field.The example of non-transitory computer-readable medium includes: The such as magnetic medium of hard disk, floppy disk, tape;Such as CD-ROM disk, DVD and/or Blu-ray Disc Optical medium;The magnet-optical medium of such as CD;And be configured to storage specially and perform programmed instruction Hardware unit, such as read only memory (ROM), random access memory (RAM), flash memory (example As, USB flash drive, memory card, memory stick etc.) etc..The example of programmed instruction include such as by The machine code that compiler generates and the code comprising the higher level that compiler can be used to perform by computer Both files.Above-mentioned device can be configured for use as one or more software module, on performing The operation of the example embodiment stated, or vice versa.
It is described above multiple example embodiment.It should be appreciated, however, that can be real to these examples Execute example and carry out various amendment.Such as, if if being executed in different order described technology and/or institute Assembly in the system, framework, device or the circuit that describe combines and/or in a different manner by other assemblies Or its equivalent replacement or supplementary, then can realize suitable result.Therefore, other embodiments fall into In the range of claim.

Claims (33)

1. A face recognition method, comprising:
detecting facial feature points from a two-dimensional (2D) input image;
adjusting a stored three-dimensional (3D) face model based on the detected facial feature points;
generating a 2D projection image from the adjusted 3D face model; and
performing face recognition based on the 2D input image and the 2D projection image.
2. The face recognition method of claim 1, wherein the adjusting of the stored 3D face model comprises: adjusting a facial pose and a facial expression of the stored 3D face model by mapping the detected facial feature points to the stored 3D face model.
3. The face recognition method of claim 1, wherein the stored 3D face model comprises a 3D shape model and a 3D texture model, and
the adjusting of the stored 3D face model comprises:
adjusting the stored 3D shape model based on the facial feature points detected from the 2D input image; and
adjusting the 3D texture model based on parameter information of the adjusted 3D shape model.
4. The face recognition method of claim 3, wherein the adjusting of the stored 3D shape model comprises: adjusting a pose parameter and an expression parameter of the 3D shape model based on the detected facial feature points.
5. The face recognition method of claim 3, wherein the generating of the 2D projection image comprises: generating the 2D projection image from the adjusted 3D texture model.
6. The face recognition method of claim 1, wherein the 3D shape model and the 3D texture model are 3D models deformable in facial pose and facial expression.
7. The face recognition method of claim 1, wherein the 2D projection image comprises a facial pose identical to a facial pose in the 2D input image.
8. The face recognition method of claim 1, wherein the performing of the face recognition comprises:
determining a degree of similarity between the 2D input image and the 2D projection image; and
outputting a result of the face recognition based on whether the degree of similarity satisfies a condition.
9. The face recognition method of claim 1, wherein the detecting of the facial feature points comprises:
extracting a facial region from the 2D input image; and
detecting, from the extracted facial region, facial feature points of at least one of an eyebrow, an eye, a nose, a lip, a chin, an ear, and a facial contour.
10. The face recognition method of claim 1, wherein the performing comprises: performing the face recognition by comparing the 2D input image and the 2D projection image.
11. A method of generating a 3D face model, comprising:
obtaining 2D face images of a user from multiple viewpoints;
detecting facial feature points from the 2D face images;
generating a deformable 3D shape model and a deformable 3D texture model based on the detected facial feature points; and
storing the deformable 3D shape model and the deformable 3D texture model as a 3D face model of the user.
12. The method of claim 11, wherein the generating comprises: generating the deformable 3D texture model based on the deformable 3D shape model and texture information from at least one of the 2D face images.
13. The method of claim 11, wherein the generating comprises:
determining parameters for mapping the detected facial feature points to feature points of a 3D standard model; and
generating the deformable 3D shape model by applying the determined parameters to the 3D standard model.
14. The method of claim 11, wherein the deformable 3D texture model comprises vertices of the deformable 3D shape model.
15. The method of claim 11, wherein the 2D face images comprise images of the face of the user from different viewpoints.
16. A method of generating a 3D face model, comprising:
obtaining 2D face images and orientation data of the 2D face images, the 2D face images including a face of a user;
determining information about matching points between the 2D face images;
generating 3D data of the face of the user based on the orientation data of the 2D face images and the information about the matching points; and
converting a 3D standard model into a 3D face model of the user using the 3D data.
17. The method of claim 16, wherein the converting comprises: converting the 3D standard model into the 3D face model of the user by matching the 3D standard model to the 3D data of the face of the user.
18. The method of claim 16, wherein the 3D data of the face of the user is a set of 3D points configuring a shape of the face of the user.
19. The method of claim 16, wherein the obtaining comprises: obtaining the orientation data of the 2D face images using motion data sensed by a motion sensor.
20. The method of claim 16, wherein the determining comprises:
detecting facial feature points from the 2D face images; and
determining the information about the matching points based on the detected facial feature points.
21. A face recognition apparatus, comprising:
an image acquirer configured to obtain a 2D input image including a facial region of a user;
a 3D face model processor configured to adjust a facial pose of a stored 3D face model based on a facial pose of the user appearing in the 2D input image, and to generate a 2D projection image from the adjusted 3D face model; and
a face recognizer configured to perform face recognition based on the 2D input image and the 2D projection image.
22. The face recognition apparatus of claim 21, wherein the 3D face model processor comprises:
a facial region detector configured to detect a facial region from the 2D input image; and
a feature point detector configured to detect facial feature points from the detected facial region.
23. The face recognition apparatus of claim 22, wherein the 3D face model processor is configured to: detect facial feature points from the 2D input image, and adjust the facial pose of the stored 3D face model by matching the detected facial feature points to feature points of the stored 3D face model.
24. The face recognition apparatus of claim 21, wherein the stored 3D face model comprises a 3D shape model and a 3D texture model, and
the 3D face model processor is configured to: adjust a facial pose of the 3D shape model based on the facial pose of the user appearing in the 2D input image, and adjust the 3D texture model based on parameter information of the adjusted 3D shape model.
25. The face recognition apparatus of claim 24, wherein the 3D face model processor is configured to: generate the 2D projection image by projecting the adjusted 3D texture model onto a plane.
26. The face recognition apparatus of claim 21, further comprising:
a display configured to display at least one of: the 2D input image, the 2D projection image, and a result of the face recognition.
27. The face recognition apparatus of claim 21, wherein the face recognizer is configured to perform the face recognition by comparing the 2D input image and the 2D projection image.
28. An apparatus for generating a 3D face model, comprising:
an image acquirer configured to obtain two-dimensional (2D) face images of a user from multiple viewpoints;
a feature point detector configured to detect facial feature points from the 2D face images;
a 3D face model generator configured to generate a deformable 3D shape model and a deformable 3D texture model based on the detected facial feature points; and
a 3D face model register configured to store the deformable 3D shape model and the deformable 3D texture model as a 3D face model of the user.
29. The apparatus of claim 28, wherein the 3D face model generator is configured to determine parameters for mapping the detected facial feature points to feature points of a 3D standard model, and to generate the deformable 3D shape model by applying the determined parameters to the 3D standard model.
30. The apparatus of claim 28, wherein the 3D face model generator comprises:
a 3D shape model generator configured to generate a deformable 3D shape model of the face of the user based on the detected facial feature points; and
a 3D texture model generator configured to generate a deformable 3D texture model based on the deformable 3D shape model and texture information from at least one of the 2D face images.
31. An apparatus for generating a 3D face model, comprising:
an image acquirer configured to obtain 2D face images of a user from multiple viewpoints;
a motion sensing unit configured to obtain orientation data of the 2D face images;
a 3D face model generator configured to generate 3D data of the face of the user based on information about matching points between the 2D face images and the orientation data of the 2D face images, and to convert a 3D standard model into a 3D face model of the user using the 3D data; and
a 3D face model register configured to store the 3D face model of the user.
32. The apparatus of claim 31, wherein the 3D face model generator is configured to: detect facial feature points from the 2D face images, and determine the information about the matching points based on the detected facial feature points.
33. The apparatus of claim 30, wherein the 3D face model generator is configured to: convert the 3D standard model into the 3D face model of the user by matching the 3D standard model to the 3D face model of the face of the user.
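The recognition flow of claims 1, 3, and 8 above — fit a stored deformable 3D face model to facial feature points detected in a 2D input image, project the adjusted model back to 2D, then accept or reject based on whether a similarity measure meets a condition — can be sketched as follows. This is a minimal illustration, not the patented implementation: the linear deformation basis, the orthographic projection, and the mean-landmark-distance similarity are assumptions of the sketch, and landmark detection is replaced by given 2D points.

```python
import numpy as np

def fit_deformable_model(mean_shape, basis, landmarks_2d, landmark_idx):
    # Solve, in the least-squares sense, for coefficients c such that the
    # orthographic projection of (mean_shape + sum_k c[k] * basis[k])
    # matches the detected 2D landmarks at the vertices in landmark_idx.
    A = np.stack([basis[k][landmark_idx, :2].ravel()
                  for k in range(basis.shape[0])], axis=1)      # (2m, K)
    b = (landmarks_2d - mean_shape[landmark_idx, :2]).ravel()   # (2m,)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs

def project_model(mean_shape, basis, coeffs):
    # "2D projection image" stand-in: orthographic projection (drop z)
    # of every vertex of the adjusted 3D shape model.
    shape = mean_shape + np.tensordot(coeffs, basis, axes=1)
    return shape[:, :2]

def recognize(landmarks_2d, projection_2d, landmark_idx, threshold=1.0):
    # Claim 8: determine a similarity (here, mean landmark distance) and
    # output the recognition result based on whether it meets a condition.
    err = np.linalg.norm(projection_2d[landmark_idx] - landmarks_2d,
                         axis=1).mean()
    return err < threshold

# Toy enrolled model: 5 vertices, 2 deformation modes.
rng = np.random.default_rng(0)
mean_shape = rng.normal(size=(5, 3))
basis = rng.normal(size=(2, 5, 3))
landmark_idx = np.arange(5)

# Simulate a probe whose landmarks come from known deformation coefficients.
true_coeffs = np.array([0.5, -0.3])
probe = project_model(mean_shape, basis, true_coeffs)[landmark_idx]

coeffs = fit_deformable_model(mean_shape, basis, probe, landmark_idx)
proj = project_model(mean_shape, basis, coeffs)
print(recognize(probe, proj, landmark_idx))  # accepts the matching probe
```

In the patent, the fitted parameters include pose and expression (claim 4) and drive a texture model whose rendering is compared to the input image; the sketch compresses all of that into a landmark-only comparison to keep the structure of the pipeline visible.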
CN201510441052.7A 2014-09-05 2015-07-24 Method and apparatus for face recognition Pending CN106203248A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210474282.3A CN114627543A (en) 2014-09-05 2015-07-24 Method and apparatus for face recognition

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR10-2014-0118828 2014-09-05
KR20140118828 2014-09-05
KR10-2015-0001850 2015-01-07
KR1020150001850A KR102357340B1 (en) 2014-09-05 2015-01-07 Method and apparatus for face recognition

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202210474282.3A Division CN114627543A (en) 2014-09-05 2015-07-24 Method and apparatus for face recognition

Publications (1)

Publication Number Publication Date
CN106203248A true CN106203248A (en) 2016-12-07

Family

ID=55542177

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510441052.7A Pending CN106203248A (en) 2014-09-05 2015-07-24 Method and apparatus for face recognition

Country Status (2)

Country Link
KR (1) KR102357340B1 (en)
CN (1) CN106203248A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108171182A (en) * 2017-12-29 2018-06-15 广东欧珀移动通信有限公司 Electronic device, face identification method and Related product
CN108470186A (en) * 2018-02-14 2018-08-31 天目爱视(北京)科技有限公司 A kind of matching process and device of image characteristic point
CN108629168A (en) * 2017-03-23 2018-10-09 三星电子株式会社 Face authentication method, equipment and computing device
CN108932459A (en) * 2017-05-26 2018-12-04 富士通株式会社 Face recognition model training method and device and recognition algorithms
CN109255327A (en) * 2018-09-07 2019-01-22 北京相貌空间科技有限公司 Acquisition methods, face's plastic operation evaluation method and the device of face characteristic information
CN110097035A (en) * 2019-05-15 2019-08-06 成都电科智达科技有限公司 A kind of facial feature points detection method based on 3D human face rebuilding
CN111382666A (en) * 2018-12-31 2020-07-07 三星电子株式会社 Device and method with user authentication
CN111788572A (en) * 2018-02-26 2020-10-16 三星电子株式会社 Method and system for face recognition
CN111886595A (en) * 2018-03-16 2020-11-03 三星电子株式会社 Screen control method and electronic device supporting the same

Families Citing this family (9)

Publication number Priority date Publication date Assignee Title
KR101998361B1 (en) * 2016-06-01 2019-07-10 가천대학교 산학협력단 Method of facial recognition in camera image
RU2697627C1 (en) * 2018-08-01 2019-08-15 Самсунг Электроникс Ко., Лтд. Method of correcting illumination of an object on an image in a sequence of images and a user's computing device which implements said method
CN108858201A (en) * 2018-08-15 2018-11-23 深圳市烽焌信息科技有限公司 It is a kind of for nursing the robot and storage medium of children
CN111310512B (en) * 2018-12-11 2023-08-22 杭州海康威视数字技术股份有限公司 User identity authentication method and device
CN110111418B (en) * 2019-05-15 2022-02-25 北京市商汤科技开发有限公司 Method and device for creating face model and electronic equipment
KR102145517B1 (en) * 2019-05-31 2020-08-18 주식회사 모르페우스 Method, system and non-transitory computer-readable recording medium for registration of 3-dimentional measured data
KR20240025320A (en) * 2022-08-18 2024-02-27 슈퍼랩스 주식회사 Method, computer device, and computer program to create 3d avatar based on multi-angle image
KR20240030109A (en) * 2022-08-29 2024-03-07 삼성전자주식회사 An electronic apparatus for providing avatar based on an user's face and a method for operating the same
CN117789272A (en) * 2023-12-26 2024-03-29 中邮消费金融有限公司 Identity verification method, device, equipment and storage medium

Citations (6)

Publication number Priority date Publication date Assignee Title
CN1870005A (en) * 2005-05-23 2006-11-29 株式会社东芝 Image recognition apparatus and method
CN101159015A (en) * 2007-11-08 2008-04-09 清华大学 Two-dimension human face image recognizing method
CN101499132A (en) * 2009-03-12 2009-08-05 广东药学院 Three-dimensional transformation search method for extracting characteristic points in human face image
CN101604387A (en) * 2008-06-11 2009-12-16 索尼株式会社 Image processing apparatus and image processing method
CN102156537A (en) * 2010-02-11 2011-08-17 三星电子株式会社 Equipment and method for detecting head posture
CN102271241A (en) * 2011-09-02 2011-12-07 北京邮电大学 Image communication method and system based on facial expression/action recognition

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
JP3738320B2 (en) * 2003-10-10 2006-01-25 バブコック日立株式会社 Method for collating a subject displayed in a specific stereoscopic image and a two-dimensional image
JP5206366B2 (en) * 2008-11-27 2013-06-12 カシオ計算機株式会社 3D data creation device
JP2011039869A (en) * 2009-08-13 2011-02-24 Nippon Hoso Kyokai <Nhk> Face image processing apparatus and computer program

Non-Patent Citations (1)

Title
HUANG FU et al.: "Research on realistic 3D face modeling based on multi-angle photos", Electronic Test *

Cited By (12)

Publication number Priority date Publication date Assignee Title
CN108629168A (en) * 2017-03-23 2018-10-09 三星电子株式会社 Face authentication method, equipment and computing device
CN108629168B (en) * 2017-03-23 2024-04-02 三星电子株式会社 Face verification method and device and computing device
CN108932459A (en) * 2017-05-26 2018-12-04 富士通株式会社 Face recognition model training method and device and recognition algorithms
CN108171182A (en) * 2017-12-29 2018-06-15 广东欧珀移动通信有限公司 Electronic device, face identification method and Related product
CN108171182B (en) * 2017-12-29 2022-01-21 Oppo广东移动通信有限公司 Electronic device, face recognition method and related product
CN108470186A (en) * 2018-02-14 2018-08-31 天目爱视(北京)科技有限公司 A kind of matching process and device of image characteristic point
CN111788572A (en) * 2018-02-26 2020-10-16 三星电子株式会社 Method and system for face recognition
CN111886595A (en) * 2018-03-16 2020-11-03 三星电子株式会社 Screen control method and electronic device supporting the same
CN111886595B (en) * 2018-03-16 2024-05-28 三星电子株式会社 Screen control method and electronic device supporting the same
CN109255327A (en) * 2018-09-07 2019-01-22 北京相貌空间科技有限公司 Acquisition methods, face's plastic operation evaluation method and the device of face characteristic information
CN111382666A (en) * 2018-12-31 2020-07-07 三星电子株式会社 Device and method with user authentication
CN110097035A (en) * 2019-05-15 2019-08-06 成都电科智达科技有限公司 A kind of facial feature points detection method based on 3D human face rebuilding

Also Published As

Publication number Publication date
KR102357340B1 (en) 2022-02-03
KR20160029629A (en) 2016-03-15

Similar Documents

Publication Publication Date Title
CN106203248A (en) Method and apparatus for face recognition
Bustard et al. Toward unconstrained ear recognition from two-dimensional images
JP6754619B2 (en) Face recognition method and device
Ni et al. Multilevel depth and image fusion for human activity detection
Papazov et al. Real-time 3D head pose and facial landmark estimation from depth images using triangular surface patch features
Holte et al. Human pose estimation and activity recognition from multi-view videos: Comparative explorations of recent developments
Holte et al. 3D human action recognition for multi-view camera systems
Alyuz et al. Regional registration for expression resistant 3-D face recognition
CN102375970B (en) A kind of identity identifying method based on face and authenticate device
CN114627543A (en) Method and apparatus for face recognition
Vishwakarma et al. A proposed unified framework for the recognition of human activity by exploiting the characteristics of action dynamics
Li et al. 3D object recognition and pose estimation from point cloud using stably observed point pair feature
Chowdhary 3D object recognition system based on local shape descriptors and depth data analysis
Muñoz-Salinas et al. Multi-camera head pose estimation
Linda et al. Color-mapped contour gait image for cross-view gait recognition using deep convolutional neural network
CN103745204A (en) Method of comparing physical characteristics based on nevus spilus points
Krzeszowski et al. View independent human gait recognition using markerless 3D human motion capture
Haq et al. On temporal order invariance for view-invariant action recognition
López-Fernández et al. independent gait recognition through morphological descriptions of 3D human reconstructions
Barra et al. Unconstrained ear processing: What is possible and what must be done
Noceti et al. Exploring biological motion regularities of human actions: a new perspective on video analysis
Li et al. Human Action Recognition Using Multi-Velocity STIPs and Motion Energy Orientation Histogram.
Sun et al. Dual camera based feature for face spoofing detection
Lo et al. Vanishing point-based line sampling for real-time people localization
Azary et al. A spatiotemporal descriptor based on radial distances and 3D joint tracking for action classification

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20161207