CN108664908A - Face identification method, equipment and computer readable storage medium - Google Patents
- Publication number
- CN108664908A (application CN201810397466.8A)
- Authority
- CN
- China
- Prior art keywords
- face
- image
- region
- weights
- facial image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/162—Detection; Localisation; Normalisation using pixel segmentation or colour matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a face recognition method comprising the following steps: acquiring a facial image, and performing grayscale conversion on the image frames of the acquired facial image to obtain a first image; performing face detection on the first image to obtain a face region image; inputting the face region image into an active contour model to identify the frame region of a pair of eyeglasses; obtaining the eyeglass frame region from the output of the identification; assigning a first calculation weight to the frame region and a second calculation weight to the non-frame region of the face region, wherein the first calculation weight is smaller than the second calculation weight; and recognizing the facial image according to the first calculation weight and the second calculation weight. The invention also discloses a face recognition device and a computer-readable storage medium. The invention weakens the influence of eyeglasses on face recognition and improves the accuracy of face recognition.
Description
Technical field
The present invention relates to the field of face recognition, and more particularly to a face recognition method, device, and computer-readable storage medium.
Background technology
Recognition of face is a kind of biological identification technology that the facial feature information based on people carries out identification.With camera shooting
Machine or camera acquire image or video containing face, and automatic detect and track face in the picture, and then to detecting
Face carry out a series of the relevant technologies of face, usually also referred to as Identification of Images, face recognition.
Currently, with the maturation of its technology and the raising of Social Agree, recognition of face is used in many fields, example
Such as, recognition of face access control and attendance system, recognition of face antitheft door, face recognition mobile telephone unlock, recognition of face is come the machine that runs
People etc., but identify facial image there are shelter and template image it is unobstructed when, can be dropped due to the difference of feature between image
Low discrimination.
Summary of the invention
The main object of the present invention is to provide a face recognition method, device, and computer-readable storage medium, aiming to solve the technical problem that the recognition rate drops, due to feature differences between images, when the facial image to be recognized contains an occlusion while the template image is unoccluded.
To achieve the above object, the present invention provides a face recognition method, the method comprising:
acquiring a facial image, and performing grayscale conversion on the image frames of the acquired facial image to obtain a first image;
performing face detection on the first image to obtain a face region image;
inputting the face region image into an active contour model to identify the frame region of a pair of eyeglasses;
obtaining the eyeglass frame region from the output of the identification;
assigning a first calculation weight to the frame region, and assigning a second calculation weight to the non-frame region of the face region, wherein the first calculation weight is smaller than the second calculation weight;
recognizing the facial image according to the first calculation weight and the second calculation weight.
Optionally, after the step of performing face detection on the first image to obtain the face region image, the method further comprises:
calculating the reflectivity of the face region image, and screening out a second image in the face region image whose reflectivity exceeds a preset reflectivity threshold;
using the second image as the eyeglass frame region.
Optionally, the step of recognizing the facial image according to the first calculation weight and the second calculation weight comprises:
matching the acquired facial image against the facial images pre-stored in a database to obtain the matching degree between facial features;
multiplying the first calculation weight by the matching degree of the facial features contained in the frame region to obtain a first matching value, multiplying the second calculation weight by the matching degree of the facial features contained in the non-frame region to obtain a second matching value, and adding the first matching value to the second matching value to obtain the weighted facial-feature matching value;
comparing the weighted facial-feature matching value with a preset matching value to obtain the face recognition result.
Optionally, the step of matching the acquired facial image against the facial images in the database to obtain the matching degree between facial features comprises:
extracting the features of the acquired facial image;
calculating the matching degree between the acquired facial image and the pre-stored facial image according to the features of the acquired facial image and the features of the pre-stored facial image.
Optionally, the step of extracting the features of the acquired facial image comprises:
locating key feature points in the facial image;
dividing the facial image into several face regions according to the key feature point result;
performing feature extraction on each face region using the depth network model corresponding to that region;
recombining the features extracted from each face region to obtain the image features of the facial image.
Optionally, the step of performing face detection on the first image to obtain the face region image comprises:
performing face detection on the first image based on a Haar classifier to obtain the face region image;
or performing face detection on the first image based on a skin color detection method to obtain the face region image.
Optionally, the step of calculating the reflectivity of the face region comprises:
dividing the face region into several sub-regions, and calculating the gray mean of each sub-region;
calculating the reflectivity of the face region according to the gray mean of each sub-region.
In addition, to achieve the above object, the present invention also provides a face recognition device, comprising a processor, a network interface, a user interface, and a memory in which a face recognition program is stored; the processor is configured to execute the face recognition program so as to implement the steps of the face recognition method described above.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium on which a face recognition program is stored; when executed by a processor, the face recognition program implements the steps of the face recognition method described above.
In the face recognition method, device, and computer-readable storage medium proposed by the present invention, a facial image is first acquired, and grayscale conversion is performed on the image frames of the acquired facial image to obtain a first image; face detection is then performed on the first image to obtain a face region image; the face region image is input into an active contour model to identify the eyeglass frame region; the frame region is obtained from the output of the identification; a first calculation weight is assigned to the frame region and a second calculation weight is assigned to the non-frame region of the face region, wherein the first calculation weight is smaller than the second calculation weight; and the facial image is recognized according to the first calculation weight and the second calculation weight. In this way, the influence of the eyeglass region on face recognition is weakened, and the accuracy of face recognition is improved.
Description of the drawings
Fig. 1 is a schematic flowchart of the first embodiment of the face recognition method of the present invention;
Fig. 2 is a schematic flowchart of the second embodiment of the face recognition method of the present invention;
Fig. 3 is a detailed flowchart of the step of calculating the reflectivity of the face region in the face recognition method of the present invention;
Fig. 4 is a detailed flowchart of the step of dividing the face region into several sub-regions and calculating the gray mean of each sub-region in the face recognition method of the present invention;
Fig. 5 is a detailed flowchart of the step of recognizing the facial image according to the first calculation weight and the second calculation weight in the second embodiment of the face recognition method of the present invention;
Fig. 6 is a detailed flowchart of the step of matching the acquired facial image against the facial images in the database to obtain the matching degree between facial features in the third embodiment of the face recognition method of the present invention;
Fig. 7 is a schematic structural diagram of the device in the hardware operating environment involved in the embodiments of the present invention.
The realization of the object, the functions, and the advantages of the present invention will be further described in conjunction with the embodiments and with reference to the accompanying drawings.
Detailed description of the embodiments
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit the present invention.
The main solution of the embodiments of the present invention is: first acquiring a facial image and performing grayscale conversion on the image frames of the acquired facial image to obtain a first image; then performing face detection on the first image to obtain a face region image; inputting the face region image into an active contour model to identify the eyeglass frame region; obtaining the frame region from the output of the identification; assigning a first calculation weight to the frame region and a second calculation weight to the non-frame region of the face region, wherein the first calculation weight is smaller than the second calculation weight; and recognizing the facial image according to the first calculation weight and the second calculation weight. In this way, the influence of the eyeglass region on face recognition is weakened, and the accuracy of face recognition is improved.
The embodiments of the present invention consider that, at present, with the maturing of the technology and growing social acceptance, face recognition is applied in many fields, for example, access-control and attendance systems, anti-theft doors, mobile phone unlocking, and delivery robots. However, when the facial image to be recognized contains an occlusion while the template image is unoccluded, the feature differences between the images reduce the recognition rate.
To this end, the embodiments of the present invention propose a face recognition method: first acquiring a facial image and performing grayscale conversion on the image frames of the acquired facial image to obtain a first image; then performing face detection on the first image to obtain a face region image; inputting the face region image into an active contour model to identify the eyeglass frame region; obtaining the frame region from the output of the identification; assigning a first calculation weight to the frame region and a second calculation weight to the non-frame region of the face region, wherein the first calculation weight is smaller than the second calculation weight; and recognizing the facial image according to the first calculation weight and the second calculation weight. In this way, the influence of the eyeglass region on face recognition is weakened, and the accuracy of face recognition is improved.
The present invention provides a face recognition method.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of the first embodiment of the face recognition method of the present invention.
In this embodiment, the method comprises:
Step S100: acquiring a facial image, and performing grayscale conversion on the image frames of the acquired facial image to obtain a first image;
In this embodiment, when identity verification by facial image is required, for example for an access-control and attendance system, for unlocking a mobile phone, or for operating a robot, the facial image must first be acquired. Specifically, the user may stand in front of an image acquisition device equipped with a camera, and an acquisition instruction is triggered; specifically, the user clicks the facial image acquisition button on the image acquisition device, and after the instruction is triggered, the device acquires the user's facial image through the camera. In a specific implementation, the device may also acquire the facial image automatically when it detects that a live subject has remained in front of the lens for longer than a preset time. After the user's facial image is collected, the background server of the image acquisition device may perform grayscale conversion on the image frames of the acquired facial image to obtain the first image, where grayscale conversion is the process of transforming a color image into a gray-level image; the methods for grayscale conversion are well known in the prior art and are not described here. The image acquisition device may be a PC, or a terminal device such as a smart phone, a tablet computer, or a pocket computer.
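The grayscale conversion of step S100 can be sketched in a few lines of NumPy; the weighting used here is the floating-point formula listed later in the description (Gray = 0.3R + 0.59G + 0.11B). The function name and array layout are illustrative assumptions, not part of the patent.

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 RGB frame to a gray-level image using the
    floating-point weighting Gray = 0.3*R + 0.59*G + 0.11*B."""
    rgb = np.asarray(rgb, dtype=np.float64)
    gray = rgb[..., 0] * 0.3 + rgb[..., 1] * 0.59 + rgb[..., 2] * 0.11
    return np.rint(gray).astype(np.uint8)

# A pure-white pixel maps to 255 * (0.3 + 0.59 + 0.11) = 255.
frame = np.full((2, 2, 3), 255, dtype=np.uint8)
print(to_grayscale(frame)[0, 0])  # -> 255
```

In a real pipeline the same result is normally obtained with a library call such as OpenCV's color conversion, but the explicit formula makes the weighting visible.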
Step S200: performing face detection on the first image to obtain a face region image;
After the first image is obtained, face detection can be performed on it to obtain the face region image. Face detection methods fall into two classes: one class is based on machine learning, such as face detection based on a Haar classifier; the other class is based on skin color, or a fusion of the two. Considering the diversity of users' skin color, an adaptive threshold is difficult to obtain, so this embodiment preferentially uses the face detection algorithm of the OpenCV library, which is implemented based on a Haar classifier. Furthermore, considering that the training samples in the OpenCV library are all images captured under visible light, the present invention may extend the Haar classifier's training set in the OpenCV library in advance with several facial images (specifically, near-infrared facial images) as additional original training samples, and re-run the cascade training to obtain a new data model. For example, the original training samples are expanded with 70,000 near-infrared images, and the cascade training is re-run to obtain a new data model. The rebuilt data model can accurately locate the face region in near-infrared facial images.
Further, in step S200, the step of performing face detection on the first image to obtain the face region image comprises:
performing face detection on the first image based on a Haar classifier to obtain the face region;
or performing face detection on the first image based on a skin color detection method to obtain the face region.
Specifically, the face detection methods may include: methods based on machine learning, such as Haar-classifier face detection; and methods based on skin color, or a fusion of the two, which is not limited herein.
Step S300: inputting the face region image into an active contour model to identify the eyeglass frame region;
Specifically, after the face region image is obtained, it can be input into an active contour model to identify the eyeglass frame region. Specifically, the active contour model is a GVF-Snake model; the GVF-Snake model is a general model for contour identification and is not described here. In order to improve the accuracy of frame region identification, images of the same person both wearing and not wearing eyeglasses may all be input into the GVF-Snake model for retraining; the face region image is then input into the trained GVF-Snake model to identify the eyeglass frame region.
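A full GVF-Snake involves solving a gradient-vector-flow field, which is beyond a short sketch. As a deliberately simplified stand-in, the toy snake below moves each contour point greedily to the 8-neighbor with the strongest edge response, which conveys the active-contour idea of points locking onto high-gradient edges such as a frame outline. The greedy update rule and all names are illustrative assumptions, not the patent's model.

```python
import numpy as np

def greedy_snake(edge_map, init_pts, iters=50):
    """Toy active contour: each point hill-climbs on the edge map until
    no 8-neighbor has a stronger edge response."""
    h, w = edge_map.shape
    pts = [tuple(p) for p in init_pts]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for _ in range(iters):
        moved = False
        for i, (r, c) in enumerate(pts):
            best, best_val = (r, c), edge_map[r, c]
            for dr, dc in offsets:
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w and edge_map[nr, nc] > best_val:
                    best, best_val = (nr, nc), edge_map[nr, nc]
            if best != (r, c):
                pts[i], moved = best, True
        if not moved:
            break
    return pts

# Edge strength peaks at (5, 5); contour points started at the corners
# converge onto the peak.
rr, cc = np.mgrid[0:11, 0:11]
edge_map = 20 - np.abs(rr - 5) - np.abs(cc - 5)
print(greedy_snake(edge_map, [(0, 0), (10, 10)]))  # -> [(5, 5), (5, 5)]
```

A production GVF-Snake additionally regularizes the contour with elasticity and rigidity terms, which keeps the points forming a smooth closed curve around the frame rather than collapsing onto isolated maxima.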
Step S400: obtaining the eyeglass frame region from the output of the identification;
After the frame region identification is performed by the GVF-Snake model, the frame contour can be output, and the eyeglass frame region can then be obtained from the frame contour.
Step S500: assigning a first calculation weight to the frame region, and assigning a second calculation weight to the non-frame region of the face region, wherein the first calculation weight is smaller than the second calculation weight;
After the eyeglass frame region is obtained, in order to weaken its influence on face recognition, a first calculation weight can be assigned to the frame region and a second calculation weight to the non-frame region of the face region, wherein the first calculation weight is smaller than the second calculation weight, thereby reducing the contribution of the frame region to face recognition.
Step S600: recognizing the facial image according to the first calculation weight and the second calculation weight.
After the different regions have been assigned different weights, the facial image can be recognized according to the first and second calculation weights. Specifically, the acquired facial image is first matched against the facial images in the database to obtain the matching degree between facial features; the first and second calculation weights are then multiplied by the matching degrees of the corresponding facial features, and the weighted facial-feature matching value is obtained; this matching value is compared with a preset matching value to obtain the face recognition result.
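The weighted identification of steps S500 and S600 reduces, per feature, to multiplying each feature's matching degree by the weight of the region it falls in and summing the products. The sketch below uses made-up weights (0.2 for the frame region, 0.8 otherwise), feature names, and matching degrees purely for illustration.

```python
def weighted_match(matches, in_frame_region, w_frame=0.2, w_other=0.8):
    """Sum per-feature matching degrees, down-weighting features that fall
    inside the eyeglass frame region (first weight < second weight)."""
    score = 0.0
    for name, degree in matches.items():
        weight = w_frame if in_frame_region[name] else w_other
        score += weight * degree
    return score

matches = {"eyes": 0.6, "nose": 0.9, "mouth": 0.9}
in_frame = {"eyes": True, "nose": False, "mouth": False}
score = weighted_match(matches, in_frame)
print(round(score, 2))  # -> 1.56

# Compare against a preset matching value to accept or reject.
print(score >= 1.5)  # -> True
```

Because the eye features sit inside the frame region, their weak matching degree (caused by the occluding frame) contributes little to the final score, which is the stated purpose of the two-weight scheme.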
In the face recognition method proposed by this embodiment, a facial image is first acquired and grayscale conversion is performed on the image frames of the acquired facial image to obtain a first image; face detection is then performed on the first image to obtain a face region image; the face region image is input into an active contour model to identify the eyeglass frame region; the frame region is obtained from the output of the identification; a first calculation weight is assigned to the frame region and a second calculation weight to the non-frame region of the face region, wherein the first calculation weight is smaller than the second calculation weight; and the facial image is recognized according to the first and second calculation weights. In this way, the influence of the eyeglass region on face recognition is weakened, and the accuracy of face recognition is improved.
Further, referring to Fig. 2, a second embodiment of the face recognition method of the present invention is proposed based on the first embodiment.
After step S200, the method further comprises:
Step S700: calculating the reflectivity of the face region image, and screening out a second image in the face region image whose reflectivity exceeds a preset reflectivity threshold;
Step S800: using the second image as the eyeglass frame region.
In this embodiment, the eyeglass frame region can also be obtained by calculating reflectivity. Specifically, the reflectivity of the face region image is calculated; a preset reflectivity threshold is then obtained, the reflectivity of each sub-region of the face region in the first image is compared with the preset threshold, the sub-regions whose reflectivity exceeds the threshold are obtained and connected, and a second image whose reflectivity exceeds the preset threshold is thereby obtained and used as the eyeglass frame region.
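The screening of steps S700 and S800 can be sketched as a per-sub-region threshold followed by masking. The block grid, the use of each block's gray mean as its reflectivity proxy, and the threshold value below are illustrative assumptions only.

```python
import numpy as np

def screen_reflective_blocks(gray, block=4, threshold=200.0):
    """Split a gray-level face region into block x block sub-regions and
    return a boolean mask of the blocks whose mean gray level exceeds the
    preset reflectivity threshold (candidate eyeglass-frame/lens blocks)."""
    h, w = gray.shape
    bh, bw = h // block, w // block
    mask = np.zeros((block, block), dtype=bool)
    for i in range(block):
        for j in range(block):
            sub = gray[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            mask[i, j] = sub.mean() > threshold
    return mask

face = np.full((16, 16), 100, dtype=np.uint8)
face[0:4, 0:8] = 250          # a bright, reflective band across the top
mask = screen_reflective_blocks(face)
print(mask[0])  # -> [ True  True False False]
```

Connecting the flagged blocks (e.g. by taking their bounding box or a connected-component pass) then yields the "second image" that stands in for the frame region.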
Further, referring to Fig. 3, step S700 comprises:
Step S710: dividing the face region into several sub-regions, and calculating the gray mean of each sub-region;
Step S720: calculating the reflectivity of the face region according to the gray mean of each sub-region.
Specifically, when calculating the reflectivity of the face region, the face region must be divided into several sub-regions. Feature point localization is first performed on the user's face region, the face region is divided into several sub-regions according to the localization result, and the gray mean of each sub-region is then calculated. Specifically, the gray value of each sub-region can be calculated from its RGB values; the specific calculation methods may include: 1. the floating-point method: Gray = R*0.3 + G*0.59 + B*0.11; 2. the integer method: Gray = (R*30 + G*59 + B*11)/100; 3. the average method: Gray = (R + G + B)/3; and so on, which are not enumerated here. After the gray value of each sub-region is calculated, the total gray value of the sub-regions can be computed and divided by the number of sub-regions to obtain the gray mean. Similarly, assuming the face region is evenly divided into N regions, the total gray value of the N/2 sub-regions in the upper half of the face region and the total gray value of the N/2 sub-regions in the lower half can be further calculated; the difference between the total gray value of the upper N/2 sub-regions and the total gray value of the lower N/2 sub-regions is then taken as the reflectivity of the face region. Similarly, the reflectivity of each sub-region can be further calculated, so that the presence of a frame region in the face region can subsequently be judged.
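The upper-minus-lower gray-total computation described above can be sketched as follows; the sign convention (upper total minus lower total) is an assumption based on the idea that reflective lenses brighten the upper half of the face region.

```python
import numpy as np

def region_reflectivity(gray, n=8):
    """Divide the face region evenly into n horizontal sub-regions, compute
    each sub-region's total gray value, and take the upper-half total minus
    the lower-half total as the region's reflectivity."""
    rows = np.array_split(np.asarray(gray, dtype=np.float64), n, axis=0)
    totals = [r.sum() for r in rows]
    half = n // 2
    return sum(totals[:half]) - sum(totals[half:])

# Bright (reflective) upper half, darker lower half -> positive reflectivity.
face = np.vstack([np.full((8, 8), 220.0), np.full((8, 8), 100.0)])
print(region_reflectivity(face) > 0)  # -> True
```

A face without eyeglasses has roughly balanced halves and a reflectivity near zero, so a positive value above the preset threshold signals a likely frame region.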
Further, referring to Fig. 4, step S710 comprises:
Step S711: performing feature point localization on the face region;
Step S712: dividing the face region into several sub-regions according to the feature point localization result, and calculating the gray mean of each sub-region.
In this embodiment, feature point localization is first performed on the user's face region, the face region is divided into several sub-regions according to the localization result, and the gray mean of each sub-region is then calculated. Specifically, the gray value of each sub-region can be calculated from its RGB values; the specific calculation methods may include: 1. the floating-point method: Gray = R*0.3 + G*0.59 + B*0.11; 2. the integer method: Gray = (R*30 + G*59 + B*11)/100; 3. the average method: Gray = (R + G + B)/3; and so on, which are not enumerated here. After the gray value of each sub-region is calculated, the total gray value of the sub-regions can be computed and divided by the number of sub-regions to obtain the gray mean.
Further, referring to Fig. 5, a third embodiment of the face recognition method of the present invention is proposed based on the first embodiment.
In this embodiment, step S600 comprises:
Step S610: matching the acquired facial image against the facial images in the database to obtain the matching degree between facial features;
Step S620: multiplying the first calculation weight by the matching degree of the facial features contained in the frame region to obtain a first matching value, multiplying the second calculation weight by the matching degree of the facial features contained in the non-frame region to obtain a second matching value, and adding the first matching value to the second matching value to obtain the weighted facial-feature matching value;
Step S630: comparing the weighted facial-feature matching value with a preset matching value to obtain the face recognition result.
In this embodiment, the acquired facial image is first matched against the facial images in the database to obtain the matching degree between facial features; for example, the nose in the acquired facial image is matched against the nose in a database facial image, and the matching degree between the two noses is calculated; similarly, the matching degrees of the eyes, mouth, and other facial features between the acquired facial image and the database facial image are calculated. The first calculation weight is then multiplied by the matching degree of the facial features contained in the frame region to obtain the first matching value, the second calculation weight is multiplied by the matching degree of the facial features contained in the non-frame region to obtain the second matching value, and the first matching value is added to the second matching value to obtain the weighted facial-feature matching value. For example, the matching degree of the facial features in the frame region is multiplied by the first calculation weight assigned to the frame region to obtain the frame-region component of the weighted matching value. The preset matching value is then obtained and compared with the weighted facial-feature matching value to obtain the face recognition result; specifically, if the matching value is greater than or equal to the preset matching value, face recognition is determined to have succeeded; otherwise, face recognition fails.
Further, referring to Fig. 6, a third embodiment of the face recognition method of the present invention is proposed based on the second embodiment.
In this embodiment, step S610 comprises:
Step S611: extracting the features of the acquired facial image;
Step S612: calculating the matching degree between the acquired facial image and the pre-stored facial image according to the features of the acquired facial image and the features of the pre-stored facial image.
In this embodiment, the preset algorithm for extracting the features of the acquired facial image may be the joint Bayesian algorithm. In the embodiments of the present invention, the image features of the user's facial image are extracted by the joint Bayesian algorithm and matched against the pre-stored facial image features, and the matching degree between the user's facial image and each pre-stored facial image is calculated.
Further, the step S611 includes:
Key feature point is carried out to the facial image;
The face image of the user is divided into several human face regions according to key feature point result;
Feature extraction is carried out to the human face region using the corresponding depth network model of the human face region;
The feature extracted from each human face region is recombinated, the characteristics of image of the facial image is obtained.
Specifically, feature extraction is first performed on each face region using the depth network model corresponding to that region, and the extracted features are then recombined to obtain the image features of the user's facial image. Key feature points in a facial image refer to characteristic points such as the centers of the eyes, the nose, and the two corners of the mouth. A corresponding depth network model is trained in advance for each face region contained in a facial image. The depth network model is used to extract image features from the facial image and may be a deep convolutional neural network (Convolutional Neural Networks, CNNs). In the embodiment of the present invention, the image features of the facial image are obtained by a face recognition algorithm based on deep learning, which achieves higher recognition accuracy than other face recognition algorithms. In addition, for the different face regions contained in a facial image (such as the eye region, the nose region, and the mouth region), corresponding depth network models are trained separately and used for feature extraction, which substantially ensures the accuracy of feature extraction.
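The split-extract-recombine flow described above can be sketched as follows. The region boxes and the per-region "models" are stand-ins: a real system would crop around detected key points and run one trained CNN per region.

```python
import numpy as np

# Sketch of per-region feature extraction and recombination. Each
# "model" here is a stand-in callable mapping a region crop to a
# fixed-length feature vector; a real system would use a trained CNN.
def extract_face_feature(face_img, regions, models):
    """regions: {name: (y0, y1, x0, x1)} derived from key-point detection;
    models: {name: callable}, one per region. The result is the
    concatenation of all per-region features (the recombination step)."""
    feats = []
    for name, (y0, y1, x0, x1) in regions.items():
        crop = face_img[y0:y1, x0:x1]
        feats.append(models[name](crop))
    return np.concatenate(feats)

face = np.zeros((100, 100), dtype=np.float32)
regions = {"eyes": (20, 40, 10, 90),
           "nose": (40, 70, 35, 65),
           "mouth": (70, 90, 25, 75)}
# Stand-in "models": mean and std of the crop (2 dims per region).
models = {n: (lambda c: np.array([c.mean(), c.std()])) for n in regions}
feat = extract_face_feature(face, regions, models)
print(feat.shape)  # (6,)
```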
An embodiment of the present invention further provides a face recognition device.
With reference to Fig. 7, Fig. 7 is a schematic structural diagram of the hardware operating environment involved in the embodiment of the present invention.
As shown in Fig. 7, the face recognition device may include: a processor 1001 (such as a CPU), a network interface 1002, a user interface 1003, and a memory 1004. Communication among these components may be realized through a communication bus. The network interface 1002 may optionally include a standard wired interface (for connecting to a wired network) and a wireless interface (such as a Wi-Fi interface, a Bluetooth interface, or an infrared interface, for connecting to a wireless network). The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface (e.g., for connecting a wired keyboard or a wired mouse) and/or a wireless interface (e.g., for connecting a wireless keyboard or a wireless mouse). The memory 1004 may be a high-speed RAM memory, or a stable non-volatile memory such as a magnetic disk memory. Optionally, the memory 1004 may also be a storage device independent of the aforementioned processor 1001.
Optionally, the face recognition device may also include a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a Wi-Fi module, and the like.
Those skilled in the art will understand that the face recognition device structure shown in the figure does not constitute a limitation on the face recognition device, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
As shown in Fig. 7, the memory 1004, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a face recognition program. The operating system is a program that manages and controls the hardware and software resources of the face recognition device and supports the running of the network communication module, the user interface module, the face recognition program, and other programs or software; the network communication module is used to manage and control the network interface 1002; the user interface module is used to manage and control the user interface 1003.
In the face recognition device shown in Fig. 7, the network interface 1002 is mainly used to connect to a database and perform data communication with the database; the user interface 1003 is mainly used to connect to a client (which may be understood as a user terminal) and perform data communication with the client, for example displaying information to the client through a window, or receiving operation information sent by the client; and the processor 1001 may be used to execute the face recognition program stored in the memory 1004, so as to realize the following steps:
collecting a facial image, and performing grayscale processing on an image frame of the collected facial image to obtain a first image;
performing face detection on the first image to obtain a face region image;
inputting the face region image into an active contour model to perform frame region recognition for spectacle frames;
obtaining the frame region of the spectacle frames according to the recognition output;
assigning a first calculation weight to the frame region, and assigning a second calculation weight to the non-frame region of the face region, wherein the first calculation weight is less than the second calculation weight;
identifying the facial image according to the first calculation weight and the second calculation weight.
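The grayscale-processing step of the pipeline above can be sketched with the common BT.601 luminance weights; the patent does not prescribe a particular grayscale formula, so this choice is an assumption.

```python
import numpy as np

# Illustrative grayscale ("gray processing") step for a captured frame,
# using the common ITU-R BT.601 luminance weights 0.299/0.587/0.114.
def to_gray(frame_rgb):
    r, g, b = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

# A uniform 4x4 test frame with RGB = (100, 150, 200).
frame = np.ones((4, 4, 3)) * np.array([100.0, 150.0, 200.0])
gray = to_gray(frame)          # "first image" of the pipeline
print(gray.shape, gray[0, 0])
```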
Further, the processor 1001 is further configured to execute the face recognition program stored in the memory 1004, so as to realize the following steps:
calculating the reflectivity of the face region image, and screening out, from the face region image, a second image whose reflectivity exceeds a preset reflectivity threshold;
taking the second image as the frame region of the spectacle frames.
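A sketch of the reflectivity screening above: the face region is scanned in blocks, and blocks whose mean brightness is high relative to the whole face are kept as the candidate spectacle-frame ("second") image. Approximating reflectivity by this brightness ratio, as well as the block size and threshold, are assumptions not fixed by the specification.

```python
import numpy as np

# Keep sub-regions whose "reflectivity" (here: mean brightness relative
# to the whole face, an illustrative proxy) exceeds a preset threshold.
def frame_mask(gray_face, block=8, thresh=1.5):
    h, w = gray_face.shape
    face_mean = gray_face.mean() + 1e-9
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            sub = gray_face[y:y + block, x:x + block]
            if sub.mean() / face_mean > thresh:        # "reflectivity" ratio
                mask[y:y + block, x:x + block] = True  # frame-region candidate
    return mask

face = np.full((32, 32), 50.0)
face[8:16, :] = 200.0   # bright glinting band, e.g. a spectacle frame
m = frame_mask(face)
print(m[10, 10], m[0, 0])  # True False
```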
Further, the processor 1001 is further configured to execute the face recognition program stored in the memory 1004, so as to realize the following steps:
matching the collected facial image with facial images pre-stored in a database to obtain matching degrees between face features;
multiplying the first calculation weight by the matching degree corresponding to the face features contained in the frame region to obtain a first matching value, multiplying the second calculation weight by the matching degree corresponding to the face features contained in the non-frame region to obtain a second matching value, and adding the first matching value and the second matching value to obtain a weighted face-feature matching value;
comparing the weighted face-feature matching value with a preset matching value to obtain the face recognition result.
Further, the processor 1001 is further configured to execute the face recognition program stored in the memory 1004, so as to realize the following steps:
extracting features of the collected facial image;
calculating, according to the features of the collected facial image and the features of the pre-stored facial image, the matching degree between the collected facial image and the pre-stored facial image.
Further, the processor 1001 is further configured to execute the face recognition program stored in the memory 1004, so as to realize the following steps:
performing key feature point detection on the facial image;
dividing the user's facial image into several face regions according to the key-point detection result;
performing feature extraction on each face region using the depth network model corresponding to that face region;
recombining the features extracted from the face regions to obtain the image features of the facial image.
Further, the processor 1001 is further configured to execute the face recognition program stored in the memory 1004, so as to realize the following steps:
performing face detection on the first image based on a Haar classifier to obtain the face region;
or, performing face detection on the first image based on a skin color detection method to obtain the face region.
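The skin color detection alternative can be sketched as a YCrCb chroma threshold; the Cr/Cb ranges below are common rule-of-thumb values, not values taken from the specification.

```python
import numpy as np

# Skin color detection sketch: convert RGB to YCrCb chroma (BT.601
# coefficients) and keep pixels whose Cr/Cb fall in a typical skin range.
def skin_mask(rgb):
    r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
    cr = 128 + 0.5 * r - 0.4187 * g - 0.0813 * b
    cb = 128 - 0.1687 * r - 0.3313 * g + 0.5 * b
    # Rule-of-thumb skin chroma window (illustrative, not from the patent).
    return (cr > 133) & (cr < 173) & (cb > 77) & (cb < 127)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (220, 170, 140)  # skin-like pixel
img[1, 1] = (30, 80, 200)    # blue background pixel
m = skin_mask(img)
print(m[0, 0], m[1, 1])  # True False
```

In practice the resulting mask would be cleaned up morphologically and its largest connected component taken as the face region; the Haar classifier branch would instead use a pre-trained cascade detector.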
Further, the processor 1001 is further configured to execute the face recognition program stored in the memory 1004, so as to realize the following steps:
dividing the face region into several sub-regions, and calculating the gray mean of each sub-region;
calculating the reflectivity of the face region according to the gray means of the sub-regions.
The specific implementation of the face recognition device of the present invention is substantially the same as the embodiments of the face recognition method described above, and is not repeated here.
The present invention also provides a computer-readable storage medium. The computer-readable storage medium stores one or more programs, and the one or more programs can be executed by one or more processors so as to realize the following steps:
collecting a facial image, and performing grayscale processing on an image frame of the collected facial image to obtain a first image;
performing face detection on the first image to obtain a face region image;
inputting the face region image into an active contour model to perform frame region recognition for spectacle frames;
obtaining the frame region of the spectacle frames according to the recognition output;
assigning a first calculation weight to the frame region, and assigning a second calculation weight to the non-frame region of the face region, wherein the first calculation weight is less than the second calculation weight;
identifying the facial image according to the first calculation weight and the second calculation weight.
Further, the one or more programs can be executed by the one or more processors so as to further realize the following steps:
calculating the reflectivity of the face region image, and screening out, from the face region image, a second image whose reflectivity exceeds a preset reflectivity threshold;
taking the second image as the frame region of the spectacle frames.
Further, the one or more programs can be executed by the one or more processors so as to further realize the following steps:
matching the collected facial image with facial images pre-stored in a database to obtain matching degrees between face features;
multiplying the first calculation weight by the matching degree corresponding to the face features contained in the frame region to obtain a first matching value, multiplying the second calculation weight by the matching degree corresponding to the face features contained in the non-frame region to obtain a second matching value, and adding the first matching value and the second matching value to obtain a weighted face-feature matching value;
comparing the weighted face-feature matching value with a preset matching value to obtain the face recognition result.
Further, the one or more programs can be executed by the one or more processors so as to further realize the following steps:
extracting features of the collected facial image;
calculating, according to the features of the collected facial image and the features of the pre-stored facial image, the matching degree between the collected facial image and the pre-stored facial image.
Further, the one or more programs can be executed by the one or more processors so as to further realize the following steps:
performing key feature point detection on the facial image;
dividing the user's facial image into several face regions according to the key-point detection result;
performing feature extraction on each face region using the depth network model corresponding to that face region;
recombining the features extracted from the face regions to obtain the image features of the facial image.
Further, the one or more programs can be executed by the one or more processors so as to further realize the following steps:
performing face detection on the first image based on a Haar classifier to obtain the face region;
or, performing face detection on the first image based on a skin color detection method to obtain the face region.
Further, the one or more programs can be executed by the one or more processors so as to further realize the following steps:
dividing the face region into several sub-regions, and calculating the gray mean of each sub-region;
calculating the reflectivity of the face region according to the gray means of the sub-regions.
The specific implementation of the computer-readable storage medium of the present invention is substantially the same as the embodiments of the face recognition method and the face recognition device described above, and is not repeated here.
It should also be noted that, herein, the terms "include", "comprise", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further restrictions, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes that element.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general hardware platform, and of course also by hardware, although in many cases the former is the preferable implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods of the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the invention. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.
Claims (10)
1. A face recognition method, characterized in that the method comprises the following steps:
collecting a facial image, and performing grayscale processing on an image frame of the collected facial image to obtain a first image;
performing face detection on the first image to obtain a face region image;
inputting the face region image into an active contour model to perform frame region recognition for spectacle frames;
obtaining the frame region of the spectacle frames according to the recognition output;
assigning a first calculation weight to the frame region, and assigning a second calculation weight to the non-frame region of the face region, wherein the first calculation weight is less than the second calculation weight;
identifying the facial image according to the first calculation weight and the second calculation weight.
2. The face recognition method according to claim 1, characterized in that, after the step of performing face detection on the first image to obtain the face region image, the method further comprises:
calculating the reflectivity of the face region image, and screening out, from the face region image, a second image whose reflectivity exceeds a preset reflectivity threshold;
taking the second image as the frame region of the spectacle frames.
3. The face recognition method according to claim 1, characterized in that the step of identifying the facial image according to the first calculation weight and the second calculation weight comprises:
matching the collected facial image with facial images pre-stored in a database to obtain matching degrees between face features;
multiplying the first calculation weight by the matching degree corresponding to the face features contained in the frame region to obtain a first matching value, multiplying the second calculation weight by the matching degree corresponding to the face features contained in the non-frame region to obtain a second matching value, and adding the first matching value and the second matching value to obtain a weighted face-feature matching value;
comparing the weighted face-feature matching value with a preset matching value to obtain the face recognition result.
4. The face recognition method according to claim 3, characterized in that the step of matching the collected facial image with the facial images pre-stored in the database to obtain the matching degrees between face features comprises:
extracting features of the collected facial image;
calculating, according to the features of the collected facial image and the features of the pre-stored facial image, the matching degree between the collected facial image and the pre-stored facial image.
5. The face recognition method according to claim 4, characterized in that the step of extracting features of the collected facial image comprises:
performing key feature point detection on the facial image;
dividing the user's facial image into several face regions according to the key-point detection result;
performing feature extraction on each face region using the depth network model corresponding to that face region;
recombining the features extracted from the face regions to obtain the image features of the facial image.
6. The face recognition method according to claim 1, characterized in that the step of performing face detection on the first image to obtain the face region image comprises:
performing face detection on the first image based on a Haar classifier to obtain the face region image;
or, performing face detection on the first image based on a skin color detection method to obtain the face region image.
7. The face recognition method according to claim 1, characterized in that the step of calculating the reflectivity of the face region comprises:
dividing the face region into several sub-regions, and calculating the gray mean of each sub-region;
calculating the reflectivity of the face region according to the gray means of the sub-regions.
8. The face recognition method according to claim 7, characterized in that the step of dividing the face region into several sub-regions and calculating the gray mean of each sub-region comprises:
performing feature point positioning on the face region;
dividing the face region into several sub-regions according to the feature point positioning result, and calculating the gray mean of each sub-region.
9. A face recognition device, characterized in that the face recognition device comprises a processor, a network interface, a user interface, and a memory, wherein a face recognition program is stored in the memory; the processor is configured to execute the face recognition program, so as to realize the steps of the face recognition method according to any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that a face recognition program is stored on the computer-readable storage medium, and when the face recognition program is executed by a processor, the steps of the face recognition method according to any one of claims 1 to 8 are realized.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810397466.8A CN108664908A (en) | 2018-04-27 | 2018-04-27 | Face identification method, equipment and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108664908A true CN108664908A (en) | 2018-10-16 |
Family
ID=63780424
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810397466.8A Pending CN108664908A (en) | 2018-04-27 | 2018-04-27 | Face identification method, equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108664908A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112418060A (en) * | 2020-11-19 | 2021-02-26 | 西南大学 | Facial recognition system based on neural network |
CN113255401A (en) * | 2020-02-10 | 2021-08-13 | 深圳市光鉴科技有限公司 | 3D face camera device |
CN113657195A (en) * | 2021-07-27 | 2021-11-16 | 浙江大华技术股份有限公司 | Face image recognition method, face image recognition equipment, electronic device and storage medium |
CN114549921A (en) * | 2021-12-30 | 2022-05-27 | 浙江大华技术股份有限公司 | Object recognition method, electronic device, and computer-readable storage medium |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102169544A (en) * | 2011-04-18 | 2011-08-31 | 苏州市慧视通讯科技有限公司 | Face-shielding detecting method based on multi-feature fusion |
CN102855496A (en) * | 2012-08-24 | 2013-01-02 | 苏州大学 | Method and system for authenticating shielded face |
CN104299011A (en) * | 2014-10-13 | 2015-01-21 | 吴亮 | Skin type and skin problem identification and detection method based on facial image identification |
CN104408402A (en) * | 2014-10-29 | 2015-03-11 | 小米科技有限责任公司 | Face identification method and apparatus |
CN105046250A (en) * | 2015-09-06 | 2015-11-11 | 广州广电运通金融电子股份有限公司 | Glasses elimination method for face recognition |
CN105469034A (en) * | 2015-11-17 | 2016-04-06 | 西安电子科技大学 | Face recognition method based on weighted diagnostic sparseness constraint nonnegative matrix decomposition |
US9418287B2 (en) * | 2013-03-13 | 2016-08-16 | Denso Corporation | Object detection apparatus |
CN106407904A (en) * | 2016-08-31 | 2017-02-15 | 浙江大华技术股份有限公司 | Bang zone determining method and device |
CN107292287A (en) * | 2017-07-14 | 2017-10-24 | 深圳云天励飞技术有限公司 | Face identification method, device, electronic equipment and storage medium |
CN107633204A (en) * | 2017-08-17 | 2018-01-26 | 平安科技(深圳)有限公司 | Face occlusion detection method, apparatus and storage medium |
- 2018-04-27 CN CN201810397466.8A patent/CN108664908A/en active Pending
Non-Patent Citations (1)
Title |
---|
余宏杰 (Yu Hongjie): 《生物序列数值化表征模型的矩阵分解方法及其应用》 (Matrix Decomposition Methods for Numerical Representation Models of Biological Sequences and Their Applications), 30 June 2014 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106056064B (en) | A kind of face identification method and face identification device | |
CN108664908A (en) | Face identification method, equipment and computer readable storage medium | |
CN106570489A (en) | Living body determination method and apparatus, and identity authentication method and device | |
CN101390128B (en) | Detecting method and detecting system for positions of face parts | |
WO2020048140A1 (en) | Living body detection method and apparatus, electronic device, and computer readable storage medium | |
CN109784274B (en) | Method for identifying trailing and related product | |
CN104933344A (en) | Mobile terminal user identity authentication device and method based on multiple biological feature modals | |
CN110163078A (en) | The service system of biopsy method, device and application biopsy method | |
CN109740444B (en) | People flow information display method and related product | |
CN104143086A (en) | Application technology of portrait comparison to mobile terminal operating system | |
CN204791017U (en) | Mobile terminal users authentication device based on many biological characteristics mode | |
CN107958234A (en) | Client-based face identification method, device, client and storage medium | |
CN105975938A (en) | Smart community manager service system with dynamic face identification function | |
US11074469B2 (en) | Methods and systems for detecting user liveness | |
CN107194361A (en) | Two-dimentional pose detection method and device | |
CN105022999A (en) | Man code company real-time acquisition system | |
CN111553266A (en) | Identification verification method and device and electronic equipment | |
CN107622246A (en) | Face identification method and Related product | |
CN110036407B (en) | System and method for correcting digital image color based on human sclera and pupil | |
CN112052730B (en) | 3D dynamic portrait identification monitoring equipment and method | |
CN111784658B (en) | Quality analysis method and system for face image | |
CN108171208A (en) | Information acquisition method and device | |
JP3459950B2 (en) | Face detection and face tracking method and apparatus | |
CN113630721A (en) | Method and device for generating recommended tour route and computer readable storage medium | |
CN110991301A (en) | Face recognition method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20181016 |