CN107958223A - Face identification method and device, mobile equipment, computer-readable recording medium - Google Patents
- Publication number
- CN107958223A CN107958223A CN201711329593.6A CN201711329593A CN107958223A CN 107958223 A CN107958223 A CN 107958223A CN 201711329593 A CN201711329593 A CN 201711329593A CN 107958223 A CN107958223 A CN 107958223A
- Authority
- CN
- China
- Prior art keywords
- histogram
- face image
- three-dimensional face
- face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The disclosure provides a face recognition method and apparatus, a mobile device, and a computer-readable storage medium. The method includes: acquiring a three-dimensional face image; preprocessing the three-dimensional face image to obtain a preprocessed three-dimensional face image; performing histogram processing on the preprocessed three-dimensional face image according to different segmentation granularities to obtain multiple target histograms; and matching the multiple target histograms against pre-stored histograms to obtain a face recognition result. Because the target histograms obtained under different segmentation granularities have different accuracies, matching each target histogram against its pre-stored counterpart allows the face recognition result to adapt to different use environments, improving the reliability of face recognition.
Description
Technical field
The present disclosure relates to the technical field of image processing, and in particular to a face recognition method and apparatus, a mobile device, and a computer-readable storage medium.
Background technology
At present, face recognition technology is had been applied in intelligent terminal.However, the light of environment changes where user
Or during using face prosthese, then face can not be accurately identified, misleads lock.
Summary of the invention
The embodiments of the present disclosure provide a face recognition method and apparatus, a mobile device, and a computer-readable storage medium, to solve problems existing in the related art.
According to a first aspect of the embodiments of the present disclosure, a face recognition method is provided, the method including:
acquiring a three-dimensional face image;
preprocessing the three-dimensional face image to obtain a preprocessed three-dimensional face image;
performing histogram processing on the preprocessed three-dimensional face image according to different segmentation granularities to obtain multiple target histograms; and
matching the multiple target histograms against pre-stored histograms to obtain a face recognition result.
Optionally, preprocessing the three-dimensional face image to obtain a preprocessed three-dimensional face image includes:
performing facial feature point detection on the three-dimensional face image to obtain a first number of specified feature points in the three-dimensional face image; and
normalizing the three-dimensional face image based on the first number of specified feature points to obtain a normalized three-dimensional face image.
Optionally, normalizing the three-dimensional face image based on the first number of specified feature points to obtain the normalized three-dimensional face image includes:
obtaining the positional relationships among the first number of specified feature points and the positional relationships among the feature points of a face template, the feature points of the face template corresponding one-to-one with the first number of specified feature points, and each positional relationship including the distance and the deflection angle between any two specified feature points; and
adjusting the size and deflection angle of the three-dimensional face image so that the distance between each specified feature point in the three-dimensional face image and its corresponding feature point in the face template is less than or equal to a set threshold, obtaining a registered and aligned three-dimensional face image.
Optionally, performing histogram processing on the preprocessed three-dimensional face image according to different segmentation granularities to obtain multiple target histograms includes:
obtaining segmentation granularities one at a time from multiple preset segmentation granularities, or obtaining multiple segmentation granularities in parallel; and
for each obtained segmentation granularity, obtaining the target histogram corresponding to that segmentation granularity based on the preprocessed three-dimensional face image.
Optionally, obtaining the target histogram corresponding to the segmentation granularity based on the preprocessed three-dimensional face image includes:
obtaining a square region containing the face from the three-dimensional face image;
segmenting the square region based on the segmentation granularity to obtain multiple segmentation units; and
calculating the depth value corresponding to each segmentation unit according to the coordinate values of the pixels in that unit on the Z coordinate axis, obtaining the target histogram of the three-dimensional face image under the segmentation granularity;
the Z coordinate axis being parallel to the optical axis of the camera module of the device that captures the three-dimensional face image.
Optionally, matching the multiple target histograms against pre-stored histograms to obtain a face recognition result includes:
determining, based on the depth value corresponding to each segmentation unit in each target histogram, the depth value vector of that target histogram and the depth value vector of the pre-stored histogram having the same segmentation granularity;
calculating the distance value between each target histogram and the corresponding pre-stored histogram based on the depth value vectors;
calculating a distance recognition value of the three-dimensional face image based on the distance value and a weight coefficient of each target histogram, the weight coefficient being positively correlated with the segmentation granularity of the target histogram; and
determining that the face recognition result is a correct face if the distance recognition value is less than or equal to a recognition value threshold, and determining that the face recognition result is a wrong face if it is greater.
Optionally, after the face recognition result is obtained, the method further includes:
controlling a mobile device to unlock according to the face recognition result.
According to a second aspect of the embodiments of the present disclosure, a face recognition apparatus is provided, the apparatus including:
a three-dimensional image acquisition module, configured to acquire a three-dimensional face image;
a preprocessing module, configured to preprocess the three-dimensional face image to obtain a preprocessed three-dimensional face image;
a histogram processing module, configured to perform histogram processing on the preprocessed three-dimensional face image according to different segmentation granularities to obtain multiple target histograms; and
a matching module, configured to match the multiple target histograms against pre-stored histograms to obtain a face recognition result.
Optionally, the preprocessing module includes:
a feature point detection unit, configured to perform facial feature point detection on the three-dimensional face image to obtain a first number of specified feature points in the three-dimensional face image; and
a normalization unit, configured to normalize the three-dimensional face image based on the first number of specified feature points to obtain a normalized three-dimensional face image.
Optionally, the normalization unit includes:
a positional relationship obtaining subunit, configured to obtain the positional relationships among the first number of specified feature points and the positional relationships among the feature points of a face template, the feature points of the face template corresponding one-to-one with the first number of specified feature points, and each positional relationship including the distance and the deflection angle between any two specified feature points; and
a registration alignment subunit, configured to adjust the size and deflection angle of the three-dimensional face image so that the distance between each specified feature point in the three-dimensional face image and its corresponding feature point in the face template is less than or equal to a set threshold, obtaining a registered and aligned three-dimensional face image.
Optionally, the histogram processing module includes:
a segmentation granularity obtaining unit, configured to obtain segmentation granularities one at a time from multiple preset segmentation granularities, or to obtain multiple segmentation granularities in parallel; and
a target histogram obtaining unit, configured to obtain, for each obtained segmentation granularity, the target histogram corresponding to that segmentation granularity based on the preprocessed three-dimensional face image.
Optionally, the target histogram obtaining unit includes:
a square region obtaining subunit, configured to obtain a square region containing the face from the three-dimensional face image;
a square region segmentation subunit, configured to segment the square region based on the segmentation granularity to obtain multiple segmentation units; and
a depth value calculation subunit, configured to calculate the depth value corresponding to each segmentation unit according to the coordinate values of the pixels in that unit on the Z coordinate axis, obtaining the target histogram of the three-dimensional face image under the segmentation granularity;
the Z coordinate axis being parallel to the optical axis of the camera module that captures the three-dimensional face image.
Optionally, the matching module includes:
a vector determination unit, configured to determine, based on the depth value corresponding to each segmentation unit in each target histogram, the depth value vector of that target histogram and the depth value vector of the pre-stored histogram having the same segmentation granularity;
a distance value calculation unit, configured to calculate the distance value between each target histogram and the corresponding pre-stored histogram based on the depth value vectors;
a recognition value calculation unit, configured to calculate a distance recognition value of the three-dimensional face image based on the distance value and a weight coefficient of each target histogram, the weight coefficient being positively correlated with the segmentation granularity of the target histogram; and
a face recognition unit, configured to determine that the face recognition result is a correct face when the distance recognition value is less than or equal to a recognition value threshold, and to determine that the face recognition result is a wrong face when the distance recognition value is greater than the recognition value threshold.
Optionally, the apparatus further includes:
an unlock control module, configured to control a mobile device to unlock according to the face recognition result.
According to a third aspect of the embodiments of the present disclosure, a mobile device is provided, the device including: a camera module that captures three-dimensional face images, a processor, and a memory for storing instructions executable by the processor; wherein the processor is configured to:
acquire the three-dimensional face image captured by the camera module;
preprocess the three-dimensional face image to obtain a preprocessed three-dimensional face image;
perform histogram processing on the preprocessed three-dimensional face image according to different segmentation granularities to obtain multiple target histograms; and
match the multiple target histograms against pre-stored histograms to obtain a face recognition result.
According to a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, storing a computer program that, when executed by a processor, implements:
acquiring a three-dimensional face image;
preprocessing the three-dimensional face image to obtain a preprocessed three-dimensional face image;
performing histogram processing on the preprocessed three-dimensional face image according to different segmentation granularities to obtain multiple target histograms; and
matching the multiple target histograms against pre-stored histograms to obtain a face recognition result.
The technical solutions provided by the embodiments of the present disclosure can include the following beneficial effects:
In the embodiments of the present disclosure, a three-dimensional face image is acquired; the three-dimensional face image is then preprocessed to obtain a preprocessed three-dimensional face image; histogram processing is performed on the preprocessed three-dimensional face image according to different segmentation granularities to obtain multiple target histograms; and finally the multiple target histograms are matched against pre-stored histograms to obtain a face recognition result. Because the target histograms obtained under different segmentation granularities have different accuracies, matching each target histogram against its pre-stored counterpart allows the face recognition result to adapt to different use environments, improving the reliability of face recognition.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings herein are incorporated into and form part of this specification; they illustrate embodiments consistent with the present disclosure and serve, together with the specification, to explain the principles of the disclosure.
Fig. 1 is a flow diagram of a face recognition method according to an exemplary embodiment of the disclosure;
Fig. 2 is a flow diagram of a face recognition method according to another exemplary embodiment of the disclosure;
Fig. 3 is a flow diagram of a face recognition method according to yet another exemplary embodiment of the disclosure;
Fig. 4(a)~(f) are schematic diagrams of the three-dimensional face image at each stage of the recognition process according to an exemplary embodiment of the disclosure;
Fig. 5 is a flow diagram of a face recognition method according to a further exemplary embodiment of the disclosure;
Fig. 6 is a flow diagram of a face recognition method according to a further exemplary embodiment of the disclosure;
Fig. 7 is a block diagram of a face recognition apparatus according to an exemplary embodiment of the disclosure;
Fig. 8 is a block diagram of a face recognition apparatus according to another exemplary embodiment of the disclosure;
Fig. 9~13 are block diagrams of face recognition apparatuses according to further exemplary embodiments of the disclosure;
Fig. 14 is a structural diagram of a mobile device according to an exemplary embodiment of the disclosure.
Detailed description
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the disclosure, as detailed in the appended claims.
The terms used in the present disclosure are for the purpose of describing particular embodiments only and are not intended to limit the disclosure. The singular forms "a", "said" and "the" used in the present disclosure and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the present disclosure to describe various information, the information should not be limited by these terms; the terms are only used to distinguish information of the same type from each other. For example, without departing from the scope of the disclosure, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon" or "in response to determining".
Fig. 1 is a flow diagram of a face recognition method according to an exemplary embodiment of the disclosure. The face recognition method is suitable for a mobile device integrating a 3D camera module. The mobile device may be a smartphone, a tablet computer (Portable Android Device, PAD), a personal digital assistant, a wearable device, a digital camera, or similar equipment. Referring to Fig. 1, the method includes steps 101 to 104:
101, acquire a three-dimensional face image.
In one embodiment, the three-dimensional face image can be acquired through the 3D camera module. The three-dimensional face image is formed of pixels in a coordinate system XYZ; each pixel has three coordinate values, i.e. P(x, y, z), corresponding to the X, Y and Z coordinate axes of the coordinate system XYZ respectively.
The plane of the X and Y coordinate axes is parallel to the lens plane of the 3D camera module, and the Z coordinate axis is parallel to the optical axis of the 3D camera module.
In one embodiment, the origin O of the coordinate system XYZ can be chosen as the nose tip of the face in the three-dimensional face image. Of course, the origin O may also be chosen at another position, which is not limited here.
102, preprocess the three-dimensional face image to obtain a preprocessed three-dimensional face image.
In one embodiment, facial feature point detection is performed on the three-dimensional face image to obtain a first number of specified feature points. In this embodiment, the first number is set to 3~5, the points being the center points of the two eyes, the nose tip, and the vertices of the two mouth corners. Of course, the first number can be configured according to the concrete scene and is not limited here.
After the first number of specified feature points is chosen, the three-dimensional face image is normalized based on these specified feature points to obtain a normalized three-dimensional face image; the normalization is introduced in a later embodiment and is not detailed here.
103, perform histogram processing on the preprocessed three-dimensional face image according to different segmentation granularities to obtain multiple target histograms.
In one embodiment, histogram processing can be performed on the preprocessed three-dimensional face image according to different segmentation granularities, thereby obtaining multiple target histograms. Here, the segmentation granularity refers to the number of squares contained in the square region where the face lies in the three-dimensional face image after that region is segmented.
In this embodiment, the segmentation granularities of two adjacent target histograms are in an N-times relation, N being a positive integer greater than or equal to 2. In one embodiment, the value of N is 2; for example, the segmentation granularities can be 8*8, 4*4, 2*2 and 1*1. Of course, the value of N can be selected according to the concrete scene and is not limited here.
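The N-times relation between adjacent granularities can be sketched as follows; this is a minimal illustration, and the function name is ours rather than the patent's:

```python
def granularity_sequence(finest=8, n=2):
    """Segmentation granularities in which each adjacent pair is in an
    n-times relation, from the finest grid down to a single 1*1 cell."""
    seq = [finest]
    while seq[-1] != 1 and seq[-1] // n >= 1:
        seq.append(seq[-1] // n)
    return seq

# The N = 2 example from the text: 8*8, 4*4, 2*2 and 1*1 grids.
grids = granularity_sequence(8, 2)
```

With `finest=8` and `n=2` this yields `[8, 4, 2, 1]`; other values of N simply change the step between adjacent grids.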
104, match the multiple target histograms against pre-stored histograms to obtain a face recognition result.
In one embodiment, each target histogram among the multiple target histograms is matched against the pre-stored histogram having the same segmentation granularity, yielding the distance value between the target histogram and the corresponding pre-stored histogram; the face recognition result is then obtained based on the distance value and weight coefficient of each target histogram.
It will be appreciated that the face recognition result can be a correct face or a wrong face.
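The matching step can be sketched as below, under two assumptions the patent does not fix: the distance value is a Euclidean distance between depth value vectors, and the weight coefficients are simply larger for finer granularities:

```python
import math

def histogram_distance(target, stored):
    """Distance value between two depth value vectors of the same
    granularity (Euclidean distance is assumed here)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(target, stored)))

def distance_recognition_value(targets, stored, weights):
    """Weighted sum of per-granularity distance values; the weight
    coefficient is positively correlated with the segmentation granularity."""
    return sum(weights[g] * histogram_distance(targets[g], stored[g])
               for g in targets)

def recognize(targets, stored, weights, threshold):
    """Correct face if the distance recognition value is within the threshold."""
    return distance_recognition_value(targets, stored, weights) <= threshold

# Toy example with granularities 1*1 and 2*2 (the depth values are made up).
targets = {1: [19.0], 2: [12.0, 14.0, 15.0, 13.0]}
stored  = {1: [19.0], 2: [12.0, 14.0, 15.0, 13.0]}
weights = {1: 0.2, 2: 0.8}   # the coarser grid gets the smaller weight
same = recognize(targets, stored, weights, threshold=1.0)
```

Identical target and pre-stored vectors give a distance recognition value of zero, hence a correct-face result; any mismatch grows the weighted sum, with fine-grid disagreements penalized most.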
As it can be seen that by obtaining multiple goal histograms under different segmentation granularities in the present embodiment, due to different segmentation grains
Spending corresponding goal histogram has different accuracy, therefore in each goal histogram and the Histogram Matching processing that prestores
When, face recognition result can adapt to different use environments, improve the reliability of face recognition result.
Fig. 2 is a flow diagram of a face recognition method according to another exemplary embodiment of the disclosure. Referring to Fig. 2, the face recognition method includes:
201, acquire a three-dimensional face image.
Step 201 is consistent with step 101 in method and principle; for details please refer to Fig. 1 and the related content of step 101, which is not repeated here.
202, perform facial feature point detection on the three-dimensional face image to obtain a first number of specified feature points in the three-dimensional face image.
In one embodiment, facial feature point detection is performed on the three-dimensional face image to obtain a first number of specified feature points. The facial feature point detection can be realized with a detection algorithm in the related art, which is not limited here.
In this embodiment, the first number is set to 3~5, the points being the center points of the two eyes, the nose tip, and the vertices of the two mouth corners. Of course, the first number can be configured according to the concrete scene and is not limited here.
After the first number of specified feature points is chosen, the three-dimensional face image is normalized based on these specified feature points to obtain a normalized three-dimensional face image. The normalization of the three-dimensional face image includes steps 203 and 204.
203, obtain the positional relationships among the first number of specified feature points and the positional relationships among the feature points of a face template; the feature points of the face template correspond one-to-one with the first number of specified feature points, and each positional relationship includes the distance and the deflection angle between any two specified feature points.
The above face template can be set in advance: for example, the mobile terminal acquires images of the face at multiple predetermined angles, such as a front image, a left profile image, a right profile image, a top view image and a bottom view image, and then obtains the face template based on these images and a template generation algorithm. The template generation algorithm can be realized with an algorithm in the related art, which is not limited here.
In one embodiment, the positional relationships among the first number of specified feature points, and the positional relationships among the feature points of the face template, are obtained. Each positional relationship includes the distance and the deflection angle between any two of the first number of specified feature points.
The distance between any two specified feature points can be calculated from their coordinate values on the X and Y coordinate axes: for points P1(x1, y1) and P2(x2, y2), the distance is d = √((x1−x2)² + (y1−y2)²) and the deflection angle is θ = arctan((y1−y2)/(x1−x2)).
It will be appreciated that the feature points of the face template correspond one-to-one with the first number of specified feature points.
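The distance and deflection angle between two specified feature points can be computed as below; this is a sketch, and `atan2` is used instead of a bare arctangent so that the quadrant of the angle is preserved:

```python
import math

def pair_geometry(p, q):
    """Distance and deflection angle (in degrees) between two feature
    points, using their coordinates on the X and Y axes (the plane
    parallel to the lens plane of the 3D camera module)."""
    dx, dy = p[0] - q[0], p[1] - q[1]
    distance = math.hypot(dx, dy)
    deflection = math.degrees(math.atan2(dy, dx))
    return distance, deflection

# E.g. nose tip at the origin and one eye center at (3, 4) in the X-Y plane.
d, angle = pair_geometry((3.0, 4.0, 1.2), (0.0, 0.0, 0.0))
```

For these two points the distance is 5.0 and the deflection angle is about 53.13 degrees; the z coordinates play no role in the in-plane relationship.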
204, adjust the size and deflection angle of the three-dimensional face image so that the distance between each specified feature point in the three-dimensional face image and its corresponding feature point in the face template is less than or equal to a set threshold, obtaining a registered and aligned three-dimensional face image.
In one embodiment, the size and deflection angle of the three-dimensional face image are adjusted so that the distance between each of the first number of specified feature points in the three-dimensional face image and its corresponding feature point in the face template is less than or equal to the set threshold, thereby obtaining the registered and aligned three-dimensional face image.
It will be appreciated that in the registration alignment process, the more of the first number of points coincide with their corresponding feature points, the higher the registration accuracy; the first number and the registration accuracy can be adjusted according to the concrete scene, which is not limited here.
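A minimal sketch of the adjustment in step 204, assuming a 2-D similarity transform (uniform scale plus rotation about the nose-tip origin); the actual registration may be iterative and fully three-dimensional:

```python
import math

def adjust(points, scale, angle_deg):
    """Apply the size (scale) and deflection angle adjustments to the
    X-Y coordinates of the specified feature points, about the origin."""
    t = math.radians(angle_deg)
    c, s = math.cos(t), math.sin(t)
    return [(scale * (c * x - s * y), scale * (s * x + c * y)) for x, y in points]

def registered(points, template, threshold):
    """True when every specified feature point lies within `threshold` of
    its corresponding feature point in the face template."""
    return all(math.dist(p, q) <= threshold for p, q in zip(points, template))

# A face detected at half size and rotated 90 degrees relative to the template.
template = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
detected = [(0.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
aligned = adjust(detected, scale=2.0, angle_deg=-90.0)
```

Before adjustment the detected points fail the threshold test against the template; after scaling by 2 and rotating back by 90 degrees they coincide with it to within floating-point error.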
205, perform histogram processing on the preprocessed three-dimensional face image according to different segmentation granularities to obtain multiple target histograms.
Step 205 is consistent with step 103 in method and principle; for details please refer to Fig. 1 and the related content of step 103, which is not repeated here.
206, match the multiple target histograms against pre-stored histograms to obtain a face recognition result.
Step 206 is consistent with step 104 in method and principle; for details please refer to Fig. 1 and the related content of step 104, which is not repeated here.
As can be seen, in this embodiment the size and deflection angle of the three-dimensional face image are adjusted based on the multiple specified feature points in the image, registering and aligning the three-dimensional face image with the face template and thereby normalizing it. This improves the accuracy of the subsequently obtained face recognition result and the reliability of face recognition.
Fig. 3 is a flow diagram of a face recognition method according to yet another exemplary embodiment of the disclosure. Referring to Fig. 3, the face recognition method includes steps 301 to 305:
301, acquire a three-dimensional face image.
In this embodiment, a three-dimensional face image as shown in Fig. 4(a) is acquired. Step 301 is consistent with step 101 in method and principle; for details please refer to Fig. 1 and the related content of step 101, which is not repeated here.
302, preprocess the three-dimensional face image to obtain a preprocessed three-dimensional face image.
In this embodiment, after the three-dimensional face image is preprocessed, the three-dimensional face image shown in Fig. 4(b) can be obtained. Step 302 is consistent with steps 202 to 204 in method and principle; for details please refer to Fig. 2 and the related content of steps 202 to 204, which is not repeated here.
303, obtain segmentation granularities one at a time from multiple preset segmentation granularities, or obtain multiple segmentation granularities in parallel.
In one embodiment, multiple segmentation granularities are preset, such as 8*8, 4*4, 2*2 and 1*1 in step 103. In this embodiment, the segmentation granularities can be obtained one at a time, or multiple segmentation granularities can be obtained in parallel. It will be appreciated that as the number of segmentation granularities increases, the calculation amount increases accordingly. When the amount of parallel computation would be too large, the segmentation granularities can be obtained one at a time, i.e. serial calculation; when the computing resources are sufficiently large, the segmentation granularities can be obtained in parallel, i.e. parallel calculation; serial and parallel acquisition can also be carried out at the same time. The acquisition mode can be selected according to the concrete scene and is not limited here.
Taking segmentation granularity 8*8 as an example, the square region containing the face (the outer frame of Fig. 4(c)~(f)) is divided evenly in the vertical and horizontal directions into 8*8 squares. As shown in Fig. 4(c), the square region is divided into 64 segmentation units. The sum or average of the z coordinate values of the pixels (x, y, z) in each square is calculated as the depth value of that segmentation unit. With continued reference to Fig. 4(c), taking the first row of segmentation units as an example, the depth values of its units are {1, 5, 20, 25, 26, 25, 20, 1} in turn, and the segmentation units of the other rows are similar to the first row. In this way, the target histogram corresponding to segmentation granularity 8*8 can be obtained.
The target histograms corresponding to segmentation granularities 4*4, 2*2 and 1*1 are shown in Fig. 4(d)~(f) respectively; their specific acquisition methods may refer to the acquisition method of the target histogram corresponding to segmentation granularity 8*8 and are not detailed here.
304, for each segmentation granularity got, described point is obtained based on the pretreated three-dimensional face images
Cut the corresponding goal histogram of granularity.
In one embodiment, referring to Fig. 5, a square region containing the face is obtained from the three-dimensional face image (corresponding to step 501). For example, the maximum and minimum coordinate values of the pixels along the X, Y and Z coordinate axes are determined in the normalized three-dimensional face image. A square region is then determined based on the maximum value Xmax and minimum value Xmin on the X coordinate axis and the maximum value Ymax and minimum value Ymin on the Y coordinate axis; that is, the bounding square of the face is determined.
In one embodiment, if the segmentation granularities are obtained one at a time, the above square region is divided according to the acquired segmentation granularity, thereby obtaining multiple segmentation units (corresponding to step 502).
Each segmentation unit contains multiple pixels, and the depth value of the segmentation unit can be obtained from the sum or the average of the coordinate values of these pixels on the Z coordinate axis. After the depth values of all segmentation units are obtained, the target histogram corresponding to this segmentation granularity is obtained (corresponding to step 503).
In another embodiment, multiple segmentation granularities are obtained in parallel and, based on the above steps, multiple target histograms are obtained at the same time, thereby improving computational efficiency.
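The serial and parallel acquisition modes can be sketched as below. This is a minimal illustration under our own assumptions: the depth map is a Python list of rows of Z values, and the helper names (`cell_depths`, `histograms_serial`, `histograms_parallel`) are invented for the example, not taken from the patent.

```python
from concurrent.futures import ThreadPoolExecutor

GRANULARITIES = (8, 4, 2, 1)  # the preset granularities of step 103

def cell_depths(z_map, grid):
    """Reduce a square Z map to a grid*grid grid of mean cell depths."""
    cell = len(z_map) // grid
    return [[sum(z_map[r][c]
                 for r in range(i * cell, (i + 1) * cell)
                 for c in range(j * cell, (j + 1) * cell)) / cell ** 2
             for j in range(grid)]
            for i in range(grid)]

def histograms_serial(z_map):
    """Serial computation: obtain one segmentation granularity at a time."""
    return [cell_depths(z_map, g) for g in GRANULARITIES]

def histograms_parallel(z_map):
    """Parallel computation: obtain all segmentation granularities at once."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda g: cell_depths(z_map, g), GRANULARITIES))

# Both modes yield the same target histograms; only throughput differs.
z = [[1.0] * 8 for _ in range(8)]
assert histograms_serial(z) == histograms_parallel(z)
```

Whether the parallel mode actually pays off depends on the per-granularity cost, which is why the text leaves the choice to the concrete scene.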
305. Perform matching processing based on the multiple target histograms and the pre-stored histograms to obtain a face recognition result.
Referring to Fig. 6, in one embodiment, obtaining the face recognition result comprises the following steps.
First, based on the depth value of each segmentation unit in each target histogram, the target histogram is converted into a corresponding depth value vector (corresponding to step 601). At the same time, the depth value vector of the pre-stored histogram having the same segmentation granularity as the target histogram is determined. For example, for the target histogram corresponding to the segmentation granularity 2*2, reading the depth values row by row yields the depth value vector {1, 5; 5, 4}; of course, the calculation can also be carried out directly in matrix form. It will be appreciated that the amount of computation increases with the dimension of the vector, so the dimension of the depth value vector can be selected according to the computing speed, the segmentation granularity and so on, and is not limited herein.
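The row-by-row conversion from target histogram to depth value vector amounts to a flatten; a one-line sketch (the function name `depth_value_vector` is our own):

```python
def depth_value_vector(histogram):
    """Flatten a 2-D target histogram into its depth value vector,
    reading the cell depth values row by row as in the 2*2 example."""
    return [v for row in histogram for v in row]

# The 2*2 target histogram of the example flattens to {1, 5; 5, 4}.
print(depth_value_vector([[1, 5], [5, 4]]))  # [1, 5, 5, 4]
```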
Then, based on the depth value vector of each target histogram and the depth value vector of the corresponding pre-stored histogram, the distance value between the two histograms is calculated according to a vector distance formula (corresponding to step 602). The number of calculations equals the number of target histograms, yielding distance values D1, D2, …, Dn, where n is a positive integer equal to the number of target histograms.
Afterwards, based on the distance values D1, D2, …, Dn and the respective weight coefficients a1, a2, …, an, the distance recognition value S of the three-dimensional face image can be calculated (corresponding to step 603), for example S = D1*a1 + D2*a2 + … + Dn*an.
It will be appreciated that as the segmentation granularity increases, finer details of the three-dimensional face image become visible, so the corresponding weight coefficient can be set larger; that is, the weight coefficient is positively correlated with the segmentation granularity of its target histogram. The specific values of the weight coefficients can be chosen according to the concrete scene and are not limited herein.
Finally, the distance recognition value is compared with a recognition value threshold. If the distance recognition value is less than or equal to the recognition value threshold, the face recognition result is determined to be a correct face; if it is greater, the face recognition result is determined to be a wrong face (corresponding to step 604).
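Steps 601 to 604 can be sketched end to end as follows. The patent leaves the vector distance formula open, so the Euclidean distance used here is only one possible choice, and the function names (`histogram_distance`, `recognize`) are ours.

```python
import math

def histogram_distance(v1, v2):
    """Distance between two depth value vectors of equal granularity.
    Euclidean distance is an assumed choice of vector distance formula."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

def recognize(targets, stored, weights, threshold):
    """Compute D1..Dn over matched histogram pairs (step 602), the
    weighted sum S = D1*a1 + ... + Dn*an (step 603), and compare S
    against the recognition value threshold (step 604)."""
    distances = [histogram_distance(t, s) for t, s in zip(targets, stored)]
    s = sum(d * a for d, a in zip(distances, weights))
    return s <= threshold  # True: correct face; False: wrong face

# Identical vectors give S = 0, which any non-negative threshold accepts.
targets = [[1.0, 5.0, 5.0, 4.0], [3.0]]
assert recognize(targets, targets, weights=[0.7, 0.3], threshold=0.5)
```

Raising the weight of the fine-granularity pair makes S more sensitive to detail mismatches, which is the adjustment mechanism the next paragraph relies on.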
It can be seen that in the present embodiment, multiple target histograms are obtained under different segmentation granularities; when each target histogram is matched against the pre-stored histograms, a distance value is obtained for each target histogram, and the distance recognition value is calculated from the distance values and the weight coefficients. Since histograms obtained at different segmentation granularities reflect different levels of detail, the distance values obtained during histogram matching also have different accuracies. By adjusting the weight coefficients, the face recognition result can be adapted to different use environments, improving the reliability of the face recognition result.
Fig. 7 is a block diagram of a face recognition device according to an exemplary embodiment of the disclosure. Referring to Fig. 7, the device 700 includes:
a three-dimensional image acquisition module 701 for obtaining a three-dimensional face image;
a preprocessing module 702 for preprocessing the three-dimensional face image to obtain a preprocessed three-dimensional face image;
a histogram processing module 703 for performing histogram processing on the preprocessed three-dimensional face image according to different segmentation granularities to obtain multiple target histograms;
a matching processing module 704 for performing matching processing based on the multiple target histograms and the pre-stored histograms to obtain a face recognition result.
Fig. 8 is a block diagram of a face recognition device according to another exemplary embodiment of the disclosure. Referring to Fig. 8, on the basis of the embodiment shown in Fig. 7, the preprocessing module 702 includes:
a feature point detection unit 801 for performing facial feature point detection on the three-dimensional face image to obtain a first quantity of specified feature points in the three-dimensional face image;
a normalization unit 802 for normalizing the three-dimensional face image based on the first quantity of specified feature points to obtain a normalized three-dimensional face image.
Fig. 9 is a block diagram of a face recognition device according to another exemplary embodiment of the disclosure. Referring to Fig. 9, on the basis of the embodiment shown in Fig. 8, the normalization unit 802 includes:
a position relationship acquisition subunit 901 for obtaining the position relationships between the first quantity of specified feature points, and the position relationships of the feature points in a face template, the feature points in the face template corresponding one-to-one with the first quantity of specified feature points, and the position relationships including the distance and deflection angle between any two specified feature points;
a registration alignment subunit 902 for adjusting the size and deflection angle of the three-dimensional face image so that the distance between each specified feature point in the three-dimensional face image and the corresponding feature point in the face template is less than or equal to a set threshold, obtaining a registered and aligned three-dimensional face image.
Fig. 10 is a block diagram of a face recognition device according to another exemplary embodiment of the disclosure. Referring to Fig. 10, on the basis of the embodiment shown in Fig. 7, the histogram processing module 703 includes:
a segmentation granularity acquisition unit 1001 for obtaining segmentation granularities one at a time from multiple preset segmentation granularities, or obtaining multiple segmentation granularities in parallel;
a target histogram acquisition unit 1002 for obtaining, for each acquired segmentation granularity, the target histogram corresponding to the segmentation granularity based on the preprocessed three-dimensional face image.
Fig. 11 is a block diagram of a face recognition device according to another exemplary embodiment of the disclosure. Referring to Fig. 11, on the basis of the embodiment shown in Fig. 10, the target histogram acquisition unit 1002 includes:
a square region acquisition subunit 1101 for obtaining the square region containing the face from the three-dimensional face image;
a square region segmentation subunit 1102 for segmenting the square region based on the segmentation granularity to obtain multiple segmentation units;
a depth value calculation subunit 1103 for calculating the depth value corresponding to each segmentation unit according to the coordinate values of the pixels in the segmentation unit on the Z coordinate axis, to obtain the target histogram of the three-dimensional face image under the segmentation granularity;
the Z coordinate axis being parallel to the optical axis of the camera module that captures the three-dimensional face image.
Fig. 12 is a block diagram of a face recognition device according to another exemplary embodiment of the disclosure. Referring to Fig. 12, on the basis of the embodiment shown in Fig. 7, the matching processing module 704 includes:
a vector determination unit 1201 for determining, based on the depth value of each segmentation unit in each target histogram, the depth value vector of each target histogram and the depth value vector of the pre-stored histogram having the same segmentation granularity as each target histogram;
a distance value calculation unit 1202 for calculating, based on the depth value vectors, the distance value between each target histogram and the corresponding pre-stored histogram;
a recognition value calculation unit 1203 for calculating the distance recognition value of the three-dimensional face image based on the distance value and weight coefficient of each target histogram, the weight coefficient being positively correlated with the segmentation granularity of the target histogram;
a face recognition unit 1204 for determining that the face recognition result is a correct face when the distance recognition value is less than or equal to the recognition value threshold, and for determining that the face recognition result is a wrong face when the distance recognition value is greater than the recognition value threshold.
Fig. 13 is a block diagram of a face recognition device according to another exemplary embodiment of the disclosure. Referring to Fig. 13, on the basis of the embodiment shown in Fig. 7, the face recognition device 700 further includes:
an unlock control module 1301 for controlling a mobile device to unlock according to the face recognition result.
It should be noted that the face recognition device provided by the embodiments of the present invention has been described in detail in the above face recognition method embodiments; for the relevant parts, reference may be made to the corresponding description of the method embodiments. In addition, as the usage scenario changes, the face recognition method can be adjusted accordingly, and the face recognition device can likewise be adjusted by using different functional components. Detailed explanations are not repeated here.
Fig. 14 is a block diagram of a mobile device according to an exemplary embodiment. For example, the mobile device 1400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
Referring to Fig. 14, the mobile device 1400 may include one or more of the following components: a processing component 1402, a memory 1404, a power supply component 1406, a multimedia component 1408, an audio component 1410, an input/output (I/O) interface 1412, a sensor component 1414, a communication component 1416 and a camera module 1418. The camera module 1418 captures three-dimensional face images. The memory 1404 stores instructions executable by the processing component 1402, and the processing component 1402 reads the instructions from the memory 1404 to implement:
obtaining a three-dimensional face image;
preprocessing the three-dimensional face image to obtain a preprocessed three-dimensional face image;
performing histogram processing on the preprocessed three-dimensional face image according to different segmentation granularities to obtain multiple target histograms;
performing matching processing based on the multiple target histograms and the pre-stored histograms to obtain a face recognition result.
The processing component 1402 generally controls the overall operation of the device 1400, such as operations associated with display, telephone calls, data communication, camera operation and recording operation. The processing component 1402 may include one or more processors 1420 to execute instructions. In addition, the processing component 1402 may include one or more modules to facilitate interaction between the processing component 1402 and other components; for example, the processing component 1402 may include a multimedia module to facilitate interaction between the multimedia component 1408 and the processing component 1402.
The memory 1404 is configured to store various types of data to support the operation of the device 1400. Examples of such data include instructions for any application or method operating on the device 1400, contact data, phone book data, messages, pictures, videos and the like. The memory 1404 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
The power supply component 1406 provides power to the various components of the device 1400. The power supply component 1406 may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the device 1400.
The multimedia component 1408 includes a screen providing an output interface between the device 1400 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 1408 includes a front camera and/or a rear camera. When the device 1400 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 1410 is configured to output and/or input audio signals. For example, the audio component 1410 includes a microphone (MIC) configured to receive external audio signals when the device 1400 is in an operating mode, such as a call mode, a recording mode or a speech recognition mode. The received audio signal may be further stored in the memory 1404 or transmitted via the communication component 1416. In some embodiments, the audio component 1410 further includes a speaker for outputting audio signals.
The I/O interface 1412 provides an interface between the processing component 1402 and peripheral interface modules, which may be a keyboard, a click wheel, buttons and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button and a lock button.
The sensor component 1414 includes one or more sensors for providing status assessments of various aspects of the device 1400. For example, the sensor component 1414 may detect the open/closed state of the device 1400 and the relative positioning of components (such as the display and keypad of the device 1400), and may also detect a change in position of the device 1400 or a component of the device 1400, the presence or absence of user contact with the device 1400, the orientation or acceleration/deceleration of the device 1400, and a change in temperature of the device 1400. The sensor component 1414 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 1414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 1416 is configured to facilitate wired or wireless communication between the device 1400 and other devices. The device 1400 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one exemplary embodiment, the communication component 1416 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 1416 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the camera module 1418 may be a device such as a 3D structured-light camera or a 3D camera module.
In an exemplary embodiment, the device 1400 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic components.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 1404 including instructions, the above instructions being executable by the processor 1420 of the device 1400. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device and the like.
Other embodiments of the disclosure will readily occur to those skilled in the art after considering the specification and practicing the disclosure disclosed herein. This application is intended to cover any variations, uses or adaptations of the disclosure that follow the general principles of the disclosure and include common knowledge or conventional techniques in the art not disclosed by the disclosure. The specification and embodiments are to be considered exemplary only, with the true scope and spirit of the disclosure indicated by the following claims.
It should be appreciated that the present disclosure is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.
Claims (16)
- 1. A face recognition method, characterized in that the method comprises: obtaining a three-dimensional face image; preprocessing the three-dimensional face image to obtain a preprocessed three-dimensional face image; performing histogram processing on the preprocessed three-dimensional face image according to different segmentation granularities to obtain multiple target histograms; and performing matching processing based on the multiple target histograms and pre-stored histograms to obtain a face recognition result.
- 2. The face recognition method according to claim 1, characterized in that preprocessing the three-dimensional face image to obtain a preprocessed three-dimensional face image comprises: performing facial feature point detection on the three-dimensional face image to obtain a first quantity of specified feature points in the three-dimensional face image; and normalizing the three-dimensional face image based on the first quantity of specified feature points to obtain a normalized three-dimensional face image.
- 3. The face recognition method according to claim 2, characterized in that normalizing the three-dimensional face image based on the first quantity of specified feature points to obtain a normalized three-dimensional face image comprises: obtaining the position relationships between the first quantity of specified feature points, and the position relationships of the feature points in a face template, the feature points in the face template corresponding one-to-one with the first quantity of specified feature points, and the position relationships including the distance and deflection angle between any two specified feature points; and adjusting the size and deflection angle of the three-dimensional face image so that the distance between each specified feature point in the three-dimensional face image and the corresponding feature point in the face template is less than or equal to a set threshold, obtaining a registered and aligned three-dimensional face image.
- 4. The face recognition method according to claim 1, characterized in that performing histogram processing on the preprocessed three-dimensional face image according to different segmentation granularities to obtain multiple target histograms comprises: obtaining segmentation granularities one at a time from multiple preset segmentation granularities, or obtaining multiple segmentation granularities in parallel; and for each acquired segmentation granularity, obtaining the target histogram corresponding to the segmentation granularity based on the preprocessed three-dimensional face image.
- 5. The face recognition method according to claim 4, characterized in that obtaining the target histogram corresponding to the segmentation granularity based on the preprocessed three-dimensional face image comprises: obtaining a square region containing the face from the three-dimensional face image; segmenting the square region based on the segmentation granularity to obtain multiple segmentation units; and calculating the depth value corresponding to each segmentation unit according to the coordinate values of the pixels in the segmentation unit on the Z coordinate axis, to obtain the target histogram of the three-dimensional face image under the segmentation granularity; the Z coordinate axis being parallel to the optical axis of the camera module that captures the three-dimensional face image.
- 6. The face recognition method according to claim 5, characterized in that performing matching processing based on the multiple target histograms and the pre-stored histograms to obtain a face recognition result comprises: determining, based on the depth value of each segmentation unit in each target histogram, the depth value vector of each target histogram, and the depth value vector of the pre-stored histogram having the same segmentation granularity as each target histogram; calculating the distance value between each target histogram and the corresponding pre-stored histogram based on the depth value vectors; calculating the distance recognition value of the three-dimensional face image based on the distance value and weight coefficient of each target histogram, the weight coefficient being positively correlated with the segmentation granularity of the target histogram; and if the distance recognition value is less than or equal to a recognition value threshold, determining that the face recognition result is a correct face; if it is greater, determining that the face recognition result is a wrong face.
- 7. The face recognition method according to claim 1, characterized in that after obtaining the face recognition result, the method further comprises: controlling a mobile device to unlock according to the face recognition result.
- 8. A face recognition device, characterized in that the device comprises: a three-dimensional image acquisition module for obtaining a three-dimensional face image; a preprocessing module for preprocessing the three-dimensional face image to obtain a preprocessed three-dimensional face image; a histogram processing module for performing histogram processing on the preprocessed three-dimensional face image according to different segmentation granularities to obtain multiple target histograms; and a matching processing module for performing matching processing based on the multiple target histograms and pre-stored histograms to obtain a face recognition result.
- 9. The face recognition device according to claim 8, characterized in that the preprocessing module comprises: a feature point detection unit for performing facial feature point detection on the three-dimensional face image to obtain a first quantity of specified feature points in the three-dimensional face image; and a normalization unit for normalizing the three-dimensional face image based on the first quantity of specified feature points to obtain a normalized three-dimensional face image.
- 10. The face recognition device according to claim 9, characterized in that the normalization unit comprises: a position relationship acquisition subunit for obtaining the position relationships between the first quantity of specified feature points and the position relationships of the feature points in a face template, the feature points in the face template corresponding one-to-one with the first quantity of specified feature points, and the position relationships including the distance and deflection angle between any two specified feature points; and a registration alignment subunit for adjusting the size and deflection angle of the three-dimensional face image so that the distance between each specified feature point in the three-dimensional face image and the corresponding feature point in the face template is less than or equal to a set threshold, obtaining a registered and aligned three-dimensional face image.
- 11. The face recognition device according to claim 8, characterized in that the histogram processing module comprises: a segmentation granularity acquisition unit for obtaining segmentation granularities one at a time from multiple preset segmentation granularities, or obtaining multiple segmentation granularities in parallel; and a target histogram acquisition unit for obtaining, for each acquired segmentation granularity, the target histogram corresponding to the segmentation granularity based on the preprocessed three-dimensional face image.
- 12. The face recognition device according to claim 11, characterized in that the target histogram acquisition unit comprises: a square region acquisition subunit for obtaining the square region containing the face from the three-dimensional face image; a square region segmentation subunit for segmenting the square region based on the segmentation granularity to obtain multiple segmentation units; and a depth value calculation subunit for calculating the depth value corresponding to each segmentation unit according to the coordinate values of the pixels in the segmentation unit on the Z coordinate axis, to obtain the target histogram of the three-dimensional face image under the segmentation granularity; the Z coordinate axis being parallel to the optical axis of the camera module that captures the three-dimensional face image.
- 13. The face recognition device according to claim 12, characterized in that the matching processing module comprises: a vector determination unit for determining, based on the depth value of each segmentation unit in each target histogram, the depth value vector of each target histogram and the depth value vector of the pre-stored histogram having the same segmentation granularity as each target histogram; a distance value calculation unit for calculating the distance value between each target histogram and the corresponding pre-stored histogram based on the depth value vectors; a recognition value calculation unit for calculating the distance recognition value of the three-dimensional face image based on the distance value and weight coefficient of each target histogram, the weight coefficient being positively correlated with the segmentation granularity of the target histogram; and a face recognition unit for determining that the face recognition result is a correct face when the distance recognition value is less than or equal to a recognition value threshold, and for determining that the face recognition result is a wrong face when the distance recognition value is greater than the recognition value threshold.
- 14. The face recognition device according to claim 8, characterized in that the device further comprises: an unlock control module for controlling a mobile device to unlock according to the face recognition result.
- 15. A mobile device, characterized in that the mobile device comprises: a camera module that captures three-dimensional face images, a processor, and a memory for storing instructions executable by the processor; wherein the processor is configured to: obtain the three-dimensional face image captured by the camera module; preprocess the three-dimensional face image to obtain a preprocessed three-dimensional face image; perform histogram processing on the preprocessed three-dimensional face image according to different segmentation granularities to obtain multiple target histograms; and perform matching processing based on the multiple target histograms and pre-stored histograms to obtain a face recognition result.
- 16. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements: obtaining a three-dimensional face image; preprocessing the three-dimensional face image to obtain a preprocessed three-dimensional face image; performing histogram processing on the preprocessed three-dimensional face image according to different segmentation granularities to obtain multiple target histograms; and performing matching processing based on the multiple target histograms and pre-stored histograms to obtain a face recognition result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711329593.6A CN107958223B (en) | 2017-12-13 | 2017-12-13 | Face recognition method and device, mobile equipment and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711329593.6A CN107958223B (en) | 2017-12-13 | 2017-12-13 | Face recognition method and device, mobile equipment and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107958223A true CN107958223A (en) | 2018-04-24 |
CN107958223B CN107958223B (en) | 2020-09-18 |
Family
ID=61958825
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711329593.6A Active CN107958223B (en) | 2017-12-13 | 2017-12-13 | Face recognition method and device, mobile equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107958223B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104166847A (en) * | 2014-08-27 | 2014-11-26 | 华侨大学 | 2DLDA (two-dimensional linear discriminant analysis) face recognition method based on ULBP (uniform local binary pattern) feature sub-spaces |
US20160132718A1 (en) * | 2014-11-06 | 2016-05-12 | Intel Corporation | Face recognition using gradient based feature analysis |
CN105760865A (en) * | 2016-04-12 | 2016-07-13 | 中国民航大学 | Facial image recognizing method capable of increasing comparison correct rate |
Non-Patent Citations (2)
Title |
---|
王昆翔 (Wang Kunxiang) et al.: *Intelligence Theory and Police Intelligence Technology* (《智能理论与警用智能技术》), 28 February 1998 *
陈立生 (Chen Lisheng) et al.: "3D Face Recognition Based on Geometric Features and Depth Data" (《基于几何特征与深度数据的三维人脸识别》), *Computer Knowledge and Technology* (《电脑知识与技术》) *
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108632283A (en) * | 2018-05-10 | 2018-10-09 | Oppo广东移动通信有限公司 | A kind of data processing method and device, computer readable storage medium |
CN109994202A (en) * | 2019-03-22 | 2019-07-09 | 华南理工大学 | A method of the face based on deep learning generates prescriptions of traditional Chinese medicine |
CN111915499A (en) * | 2019-05-10 | 2020-11-10 | 上海西门子医疗器械有限公司 | X-ray image processing method and device and computer readable storage medium |
CN112581357A (en) * | 2020-12-16 | 2021-03-30 | 珠海格力电器股份有限公司 | Face data processing method and device, electronic equipment and storage medium |
CN113689402A (en) * | 2021-08-24 | 2021-11-23 | 北京长木谷医疗科技有限公司 | Deep learning-based femoral medullary cavity form identification method, device and storage medium |
CN113689402B (en) * | 2021-08-24 | 2022-04-12 | 北京长木谷医疗科技有限公司 | Deep learning-based femoral medullary cavity form identification method, device and storage medium |
CN114241590A (en) * | 2022-02-28 | 2022-03-25 | 深圳前海清正科技有限公司 | Self-learning face recognition terminal |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11678734B2 (en) | Method for processing images and electronic device | |
US20220092813A1 (en) | Position and pose determining method, apparatus, smart device, and storage medium | |
CN107958223A (en) | Face identification method and device, mobile equipment, computer-readable recording medium | |
WO2019205850A1 (en) | Pose determination method and device, intelligent apparatus, and storage medium | |
CN105488527B (en) | Image classification method and device | |
CN104408402B (en) | Face identification method and device | |
CN108470322B (en) | Method and device for processing face image and readable storage medium | |
CN104484858B (en) | Character image processing method and processing device | |
CN104077585B (en) | Method for correcting image, device and terminal | |
CN109558837B (en) | Face key point detection method, device and storage medium | |
CN107330868A (en) | image processing method and device | |
CN107688781A (en) | Face identification method and device | |
CN111127509B (en) | Target tracking method, apparatus and computer readable storage medium | |
CN109522863B (en) | Ear key point detection method and device and storage medium | |
WO2021147434A1 (en) | Artificial intelligence-based face recognition method and apparatus, device, and medium | |
CN111541907A (en) | Article display method, apparatus, device and storage medium | |
CN109978996B (en) | Method, device, terminal and storage medium for generating expression three-dimensional model | |
CN106295530A (en) | Face identification method and device | |
CN110933468A (en) | Playing method, playing device, electronic equipment and medium | |
CN107092359A (en) | Virtual reality visual angle method for relocating, device and terminal | |
CN107944367A (en) | Face critical point detection method and device | |
CN107704190A (en) | Gesture identification method, device, terminal and storage medium | |
CN110827195A (en) | Virtual article adding method and device, electronic equipment and storage medium | |
CN107403144A (en) | Face localization method and device | |
CN110796083B (en) | Image display method, device, terminal and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||