CN109086692A - Face recognition device and method - Google Patents
Face recognition device and method
- Publication number
- CN109086692A CN109086692A CN201810778917.2A CN201810778917A CN109086692A CN 109086692 A CN109086692 A CN 109086692A CN 201810778917 A CN201810778917 A CN 201810778917A CN 109086692 A CN109086692 A CN 109086692A
- Authority
- CN
- China
- Prior art keywords
- face
- anchor point
- image
- feature template
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/169—Holistic features and representations, i.e. based on the facial image taken as a whole
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
Abstract
The invention discloses a face recognition device and method. The face recognition device includes: an image capture module for capturing the face region image in a person image to be identified; an image positioning module for forming an effective region of the face image from anchor points located within the face region; an image processing module for performing feature extraction on the face within the effective region to form a facial feature template; and an image comparison module for comparing the facial feature template with the facial feature templates in a preset face database, so as to identify the person in the image to be identified. With the device and method provided by the embodiments of the present invention, the effective information region from which facial feature information is obtained is confined to the face region when a face is recognized, so the interference caused by background information unrelated to the face can be avoided or reduced, improving the reliability of face recognition.
Description
Technical field
The present invention relates to the technical field of face recognition, and in particular to a face recognition device and method.
Background art
Identity recognition is an important application field of artificial intelligence, and face recognition is among the most widely used identity recognition technologies. Face recognition generally relies on a person's facial features, but facial features are easily affected by the external environment, for example by illumination intensity and body posture. Moreover, in most existing papers and patents, the face region captured for recognition is often rather large and may even include background information beyond the face; performing facial feature processing on such a region inevitably makes recognition insufficiently accurate and its reliability poor.
Summary of the invention
(1) Technical problem to be solved
The technical problem to be solved by the present invention is how to provide a face recognition device and method that improve the reliability of face recognition.
(2) Technical solution
At least one embodiment of the present invention provides a face recognition device, comprising:
An image capture module for capturing the face region image in a person image to be identified;
An image positioning module for forming an effective region of the face image from anchor points located within the face region;
An image processing module for performing feature extraction on the face within the effective region to form a facial feature template;
An image comparison module for comparing the facial feature template with the facial feature templates in a preset face database, so as to identify the person in the image to be identified.
Optionally, the image capture module includes:
An image acquisition unit for obtaining the person image to be identified;
An image focusing unit for focusing on the face area of the person image.
Optionally, the image positioning module includes:
A selection unit for choosing at least four anchor points located within the face region, wherein at least one anchor point is not on the same horizontal plane as the others;
A zoning unit for forming the effective region from the anchor points, the effective region including at least a first effective information region and a second effective information region.
Optionally, the four anchor points include a first anchor point, a second anchor point, a third anchor point and a fourth anchor point, wherein the first anchor point is the outer endpoint of the left eyebrow, the second anchor point is the outer endpoint of the right eyebrow, the third anchor point is the point on the nose bridge between the two eyes, and the fourth anchor point is the middle point of the upper lip.
Optionally, a fifth anchor point is further included; the effective region includes a virtual rectangle whose first edge length is the distance between the first anchor point and the second anchor point and whose second edge length is the distance between the fourth anchor point and the fifth anchor point.
Optionally, the image processing module includes:
A feature extraction unit for extracting the gray value of at least one sampled point in each of the first effective information region and the second effective information region;
A first coding unit for binary-coding the difference between the gray value of the sampled point and the average gray value of its neighboring points, forming the facial feature template.
Optionally, the image processing module further includes:
A second coding unit for binary-coding the boundary properties of the sampled points with an edge detection operator, forming the facial feature template.
Optionally, the number of sampled points in the first effective information region is equal to the number of sampled points in the second effective information region.
Optionally, the image comparison module includes:
A first decomposition unit for dividing the facial feature template into at least two sub-templates;
A first comparing unit for comparing each sub-template with the corresponding sub-template of a facial feature template in the preset face database;
A first judging unit for judging whether the similarities between the sub-templates and the corresponding sub-templates in the preset face database are all greater than or equal to a threshold, so as to identify the person in the image to be identified.
Optionally, the threshold value is 60-98%.
An embodiment of the present invention also provides a face recognition method, comprising:
Capturing the face region image in a person image to be identified;
Forming an effective region of the face image from anchor points located within the face region;
Performing feature extraction on the face within the effective region to form a facial feature template;
Comparing the facial feature template with the facial feature templates in a preset face database, so as to identify the person in the image to be identified.
Optionally, the step of forming the effective region of the face image from the anchor points located within the face region comprises:
Choosing at least four anchor points located within the face region, wherein at least one anchor point is not on the same horizontal plane as the others;
Forming the effective region from the anchor points, the effective region including at least a first effective information region and a second effective information region.
Optionally, the step of performing feature extraction on the face within the effective region to form the facial feature template comprises:
Extracting the gray value of at least one sampled point in each of the first effective information region and the second effective information region;
Binary-coding the difference between the gray value of the sampled point and the average gray value of its neighboring points, forming the facial feature template.
Optionally, the step of comparing the facial feature template with the facial feature templates in the preset face database to identify the person in the image to be identified may comprise:
Dividing the facial feature template into at least two sub-templates;
Comparing each sub-template with the corresponding sub-template of a facial feature template in the preset face database;
Judging whether the similarities between the sub-templates and the corresponding sub-templates in the preset face database are all greater than or equal to a threshold, so as to identify the person in the image to be identified.
(3) Beneficial effects
With the face recognition device and method provided by the embodiments of the present invention, the anchor points that form the effective information region of the face are chosen within the face region when a face is recognized, so the facial feature information obtained is also confined to the face region. The interference caused by background information unrelated to the face being included in the effective information region can thus be avoided or reduced, improving the reliability of face recognition.
Brief description of the drawings
Fig. 1 is a schematic diagram of the face recognition device provided by an embodiment of the invention;
Fig. 2(a) is a structural schematic diagram of the image capture module of an embodiment of the invention;
Fig. 2(b) is a structural schematic diagram of the image positioning module of an embodiment of the invention;
Fig. 2(c) is a structural schematic diagram of the image processing module of an embodiment of the invention;
Fig. 2(d) is a structural schematic diagram of the image comparison module of an embodiment of the invention;
Fig. 3 is a schematic diagram of the face recognition method provided by an embodiment of the invention;
Fig. 4(a1)-(a2) are image acquisition schematic diagrams of face recognition in an embodiment of the invention;
Fig. 4(b1)-(b2) are image positioning schematic diagrams of face recognition in an embodiment of the invention;
Fig. 4(c1)-(c2) are image processing schematic diagrams of face recognition in an embodiment of the invention;
Fig. 4(d1)-(d2) are image comparison schematic diagrams of face recognition in an embodiment of the invention.
Reference numerals
100 - face recognition device; 110 - image capture module; 120 - image positioning module; 130 - image processing module; 140 - image comparison module; 1101 - image acquisition unit; 1102 - image focusing unit; 1201 - selection unit; 1202 - zoning unit; 1301 - feature extraction unit; 1302 - first coding unit; 1303 - second coding unit; 1401 - first decomposition unit; 1402 - first comparing unit; 1403 - first judging unit; P - focused face area; A - first anchor point; B - second anchor point; C - third anchor point; D - fourth anchor point; E - fifth anchor point; M - effective region; M1 - first effective information region; M2 - second effective information region; F - sampled point.
Specific embodiments
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments are described below clearly and completely in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art from the described embodiments without creative work shall fall within the protection scope of the present invention.
Unless otherwise defined, the technical and scientific terms used herein have the ordinary meaning understood by a person of ordinary skill in the field of the present invention. "First", "second" and similar words used in the specification and claims do not denote any order, quantity or importance; they merely distinguish different components. Words such as "comprising" or "including" mean that the element or object preceding the word covers the elements or objects listed after the word and their equivalents, without excluding other elements or objects. "Inner", "outer" and similar words only indicate relative positional relationships, which may change accordingly when the absolute position of the described object changes.
The drawings in the present invention are not drawn strictly to scale, and the number of pixel units is not limited to the numbers shown in the figures; the specific size and number of each structure can be determined according to actual needs. The drawings described in the present invention are only structural schematic diagrams. Specific embodiments of the present invention are described in further detail below with reference to the drawings and examples. The following examples are intended to illustrate the present invention, not to limit its scope.
Fig. 1 is a schematic diagram of the face recognition device provided by an embodiment of the invention. As shown in Fig. 1, the face recognition device 100 includes an image capture module 110, an image positioning module 120, an image processing module 130 and an image comparison module 140.
The image capture module 110 captures the face region image in a person image to be identified.
The image positioning module 120 forms the effective region of the face image from anchor points located within the face region.
The image processing module 130 performs feature extraction on the face within the effective region to form a facial feature template.
The image comparison module 140 compares the facial feature template with the facial feature templates in a preset face database, so as to identify the person in the image to be identified.
With the face recognition device provided by this embodiment, the anchor points that form the effective information region are chosen within the face region when a face is recognized, so the facial feature information obtained lies within the face region. This avoids or reduces the interference that arises in traditional techniques when anchor points are placed outside the face region, or when the effective information region includes background information unrelated to the face, and thereby improves the reliability of face recognition.
Fig. 2(a) is a schematic diagram of the image capture module of the face recognition device provided by another embodiment of the invention. As shown in Fig. 1 and Fig. 2(a), the image capture module 110 of the face recognition device 100, which captures the face region image in the person image to be identified, may include:
An image acquisition unit 1101 for obtaining the person image to be identified;
An image focusing unit 1102 for focusing on the face area of the person image.
The image capture module 110 photographs the person to be identified, for example with a camera, to obtain a person image containing the subject. It first determines the overall region occupied by the person in the image and then focuses from that overall region onto the face area, so as to reduce or eliminate the interference caused when the background occupies an area of the image that may be larger than the area occupied by the face.
Fig. 2(b) is a schematic diagram of the image positioning module of the face recognition device provided by another embodiment of the invention. As shown in Fig. 1 and Fig. 2(b), the image positioning module 120 of the face recognition device 100, which forms the effective region of the face image from the anchor points located within the face region, may include:
A selection unit 1201 for choosing at least four anchor points located within the face region, wherein at least one anchor point is not on the same horizontal plane as the others;
A zoning unit 1202 for forming the effective region from the anchor points, the effective region including at least a first effective information region and a second effective information region.
The improved face recognition device of this embodiment places the anchor points within the face region when performing face recognition. For example, the four anchor points may include a first anchor point, a second anchor point, a third anchor point and a fourth anchor point, where the first anchor point is the outer endpoint of the left eyebrow, the second anchor point is the outer endpoint of the right eyebrow, the third anchor point is the point on the nose bridge between the two eyes, and the fourth anchor point is the middle point of the upper lip.
The effective region for face recognition, determined from the anchor points, lies within the face region and may include several effective information regions. For example, the face region can be divided into left and right effective information regions, which is equivalent to performing alignment on the facial features and facilitates subsequent feature extraction and recognition.
In this way, the effective regions for face recognition all lie within the face, which improves the signal-to-noise ratio of the regions used for recognition and greatly alleviates or solves both the background-interference problem caused by choosing an overly large face region and the information-shortage problem caused by choosing an overly small one.
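As a concrete illustration, forming the left and right effective information regions from the anchor points can be sketched as follows. The coordinate convention, the use of the nose-bridge point C as the dividing line, and the placement of the fifth anchor point E above the eyebrows are assumptions for illustration; the patent fixes the landmark choices but not the exact geometry:

```python
import numpy as np

def effective_regions(a, b, c, d, e):
    """Form the effective region (a virtual rectangle) from five anchor
    points and split it into left/right effective information regions.

    a, b: outer endpoints of the left/right eyebrows; c: nose-bridge
    point between the eyes; d: middle of the upper lip; e: fifth anchor
    point (assumed above the eyebrows). All points are (x, y) pixels.
    """
    a, b, d, e = map(np.asarray, (a, b, d, e))
    width = float(np.linalg.norm(b - a))    # first edge length: |AB|
    height = float(np.linalg.norm(d - e))   # second edge length: |DE|
    x0 = float(min(a[0], b[0]))
    y0 = float(min(e[1], d[1]))
    mid = float(c[0])                       # split at the nose bridge
    left = (x0, y0, mid - x0, height)            # (x, y, w, h)
    right = (mid, y0, x0 + width - mid, height)
    return left, right
```

With eyebrow endpoints at (10, 20) and (50, 20), nose bridge at (30, 25), upper lip at (30, 60) and a fifth point at (30, 10), this yields two 20-by-50 regions meeting at x = 30.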
Optionally, it suffices that the line connecting any two anchor points lies within the face region. The number of anchor points is preferably 4-8, neither too few nor too many: too few anchor points may yield too little extracted information, while too many may make the amount of information to be computed too large.
Optionally, the anchor points are preferably distributed along the X, Y and Z axes. For example, the direction through the two eyes of the face is taken as the X direction, the extension of the lips perpendicular to the X axis as the Y direction, and the direction extending toward the nose perpendicular to the XY plane as the Z axis.
Of course, the number, positions and directions of the chosen anchor points are not limited to the above. For example, the first anchor point may be the outer corner of the left eye and the second anchor point the outer corner of the right eye.
Fig. 2(c) is a schematic diagram of the image processing module of the face recognition device provided by another embodiment of the invention. As shown in Fig. 1 and Fig. 2(c), the image processing module 130 of the face recognition device 100, which performs feature extraction on the face within the effective region to form the facial feature template, may include:
A feature extraction unit 1301 for extracting the gray value of at least one sampled point in each of the first effective information region and the second effective information region;
A first coding unit 1302 for binary-coding the difference between the gray value of the sampled point and the average gray value of its neighboring points, forming the facial feature template. The binary code can serve as the characteristic parameter of the sampled point, or of a specific region centered on the sampled point; the same holds below and is not repeated.
Optionally, for the facial feature extraction, the image processing module binary-codes the difference between the gray value of the sampled point and the mean gray value of at least two, or four, of its adjacent surrounding points, forming the facial feature template.
Optionally, the binary coding of the first coding unit is as follows: if the difference between the gray value of the sampled point and the average gray value of its neighboring points is greater than 3, it is denoted by 1; otherwise, by 0.
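A minimal sketch of this first coding mode follows, assuming the "neighboring points" are the 8 pixels of a 3x3 neighborhood and using the threshold of 3 from the text; the exact sampling pattern is otherwise left open by the patent:

```python
import numpy as np

def gray_diff_code(gray, x, y):
    """First coding mode: 1 if the sampled pixel's gray value exceeds
    the mean of its 8 neighbours by more than 3, else 0."""
    patch = np.asarray(gray, dtype=float)[y - 1:y + 2, x - 1:x + 2]
    centre = patch[1, 1]
    neighbour_mean = (patch.sum() - centre) / 8.0
    return 1 if centre - neighbour_mean > 3 else 0

def feature_template(gray, points):
    """Concatenate the per-point codes into a bit-string template."""
    return ''.join(str(gray_diff_code(gray, x, y)) for (x, y) in points)
```

For a flat patch with one bright pixel, only the bright sample codes to 1, so the template records where the local contrast lies rather than its absolute amplitude.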
Optionally, the number of sampled points in the first effective information region is equal to the number of sampled points in the second effective information region.
Alternatively, the image processing module 130 may include:
A second coding unit 1303 for binary-coding the boundary properties of the sampled points with an edge detection operator, forming the facial feature template.
Binary-coding the boundary properties of the sampled points with an edge detection operator is more advantageous, and more accurate, for describing the texture features of the image.
Optionally, the edge detection operator binary-codes the boundary properties of a sampled point as follows: if the gray value of the pixel to the left of the sampled point is greater than that of the pixel to its right, it is denoted by 1; otherwise, by 0.
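This boundary rule reduces to the sign of a horizontal gray gradient. A sketch under that literal reading follows; the patent does not name a specific operator such as Sobel, so this is the left-versus-right comparison exactly as stated:

```python
def edge_code(gray, x, y):
    """Second coding mode: 1 if the pixel left of the sampled point is
    brighter than the pixel to its right, else 0. `gray` is a 2-D
    sequence of gray values indexed [row][column]."""
    return 1 if gray[y][x - 1] > gray[y][x + 1] else 0

def edge_template(gray, points):
    """Concatenate the boundary codes of the sampled points."""
    return ''.join(str(edge_code(gray, x, y)) for (x, y) in points)
```

Because only the sign of the left-right difference is kept, the code is insensitive to overall brightness changes, which is the robustness property the text attributes to boundary coding.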
Of course, the image processing module 130 may include either of the second coding unit 1303 and the first coding unit 1302, or both at the same time, and different binary-coding modes may be used in different regions or at different points.
Optionally, all points in the effective region can be binary-coded to form the facial feature template, improving the robustness of the facial feature representation and solving the problem that features based on texture amplitude are vulnerable to interference.
In the embodiments of the present invention, within the standardized region, binary-coding the gray value of each sampled point against the gray values of its surrounding neighbors, or binary-coding its boundary properties with an edge detection operator, forms a facial feature template that improves the robustness of the feature representation and solves the problem that features based on texture amplitude are vulnerable to interference. Moreover, compared with existing facial feature extraction algorithms that extract texture features with, for example, Gabor directional filters or local binary patterns, the embodiments binary-code either the sign of the difference between a sampled point's gray value and the surrounding gray average or the boundary properties given by an edge detection operator, and normalization has already been carried out on the face, so the resulting facial feature template can be compared directly with the facial feature templates in the preset face database. No cumbersome data calculation and transformation is needed in the subsequent comparison step, improving comparison efficiency and accuracy.
Fig. 2(d) is a schematic diagram of the image comparison module of the face recognition device provided by another embodiment of the invention. As shown in Fig. 2(d), the image comparison module 140 of the face recognition device 100, which compares the facial feature template with the facial feature templates in the preset face database so as to identify the person in the image to be identified, may include:
A first decomposition unit 1401 for dividing the facial feature template into at least two sub-templates;
A first comparing unit 1402 for comparing each sub-template with the corresponding sub-template of a facial feature template in the preset face database;
A first judging unit 1403 for judging whether the similarities between the sub-templates and the corresponding sub-templates in the preset face database are all greater than or equal to a threshold, so as to identify the person in the image to be identified.
Optionally, the first decomposition unit 1401 divides the facial feature template into an integral multiple of 2 sub-templates.
Optionally, the threshold can be set to 60-98%, preferably 65%.
Optionally, the facial feature templates in the preset face database can be facial feature data of a large crowd or of a single person.
In the embodiments of the present invention, the facial feature template to be compared is divided into several small regions, each of which is compared with the corresponding facial-image sub-template in the preset face database. If the similarity of every small region is greater than the set threshold, the two compared templates come from the same person; otherwise the two compared templates may belong to different people.
Traditional techniques compare the overall data of the facial feature template to be compared with a facial feature template in the face database. Even though the overall similarity of the two templates may reach the set threshold, the features of certain local regions may be entirely different, so identification based on overall similarity alone easily produces recognition errors. The embodiments of the present invention instead compare the facial feature template block by block: an identity is confirmed only if every sub-region reaches the set threshold; otherwise a local dissimilarity is deemed to exist and the identity is not confirmed. This greatly improves recognition accuracy and reliability.
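The block-wise decision can be sketched as below, using bit-agreement as the similarity measure and the preferred 65% threshold; the choice of similarity metric is an assumption, as the patent does not specify one:

```python
import numpy as np

def split_template(template, n_blocks):
    """First decomposition unit: divide the template into equal sub-templates."""
    return np.split(np.asarray(template), n_blocks)

def same_person(probe, enrolled, n_blocks=4, threshold=0.65):
    """Accept only if EVERY corresponding sub-template pair reaches the
    threshold; one dissimilar local block rejects the match even when
    the overall similarity is high."""
    for p, g in zip(split_template(probe, n_blocks),
                    split_template(enrolled, n_blocks)):
        if float(np.mean(p == g)) < threshold:
            return False
    return True
```

Flipping both bits of one quarter of an 8-bit template leaves the overall similarity at 75%, above the threshold, yet the per-block test correctly rejects the match, which is exactly the failure mode of whole-template comparison described above.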
Fig. 3 is a schematic diagram of the face recognition method provided by a further embodiment of the invention. As shown in Fig. 3, the face recognition method comprises:
S1: capturing the face region image in a person image to be identified;
S2: forming the effective region of the face image from anchor points located within the face region;
S3: performing feature extraction on the face within the effective region to form a facial feature template;
S4: comparing the facial feature template with the facial feature templates in a preset face database, so as to identify the person in the image to be identified.
With the face recognition method provided by this embodiment, the anchor points that form the effective information region of the face are chosen within the face region, so all the facial feature information obtained lies within the face region. This avoids or reduces the background interference and non-face information introduced in traditional techniques by placing anchor points outside the face region, and improves the reliability of face recognition.
Optionally, the step of capturing the face region image in the person image to be identified may include:
S11: obtaining the person image to be identified;
S12: focusing on the face area of the person image.
For example, the person to be identified is photographed with a camera to obtain a person image containing the subject. The overall region occupied by the person in the image is determined first, and the focus then moves from that overall region onto the face area, so as to reduce or eliminate the interference caused when the background occupies an area of the image that may be larger than the area occupied by the face.
Optionally, the step of forming the effective region of the face image from the anchor points located within the face region may include:
S21: choosing at least four anchor points located within the face region, wherein at least one anchor point is not on the same horizontal plane as the others;
S22: forming the effective region from the anchor points, the effective region including at least a first effective information region and a second effective information region.
In embodiments of the present invention, the anchor points can be placed within the face region, for example at the two outer endpoints of the eyebrows, the nose-bridge position between the eyebrows, the middle of the upper lip, and so on.
Several effective information regions for face recognition are determined from the anchor points. For example, the face region is divided into left and right effective regions, which is equivalent to performing alignment on the facial features and facilitates subsequent feature extraction and recognition.
In this way, the signal-to-noise ratio of the region used for face recognition can be improved, greatly alleviating or solving both the interference problem caused by choosing an overly large face region and the information-shortage problem caused by choosing an overly small one.
Optionally, it suffices that the line connecting any two anchor points lies within the face region. The anchor points are preferably distributed along the X, Y and Z axes; for example, the direction through the two eyes of the face is taken as the X direction, the extension of the lips perpendicular to the X axis as the Y direction, and the direction extending toward the nose perpendicular to the XY plane as the Z axis.
Of course, the number, positions and directions of the chosen anchor points are not limited to the above. For example, the first anchor point may be the outer corner of the left eye and the second anchor point the outer corner of the right eye.
Optionally, the step of performing feature extraction on the face within the effective region to form the facial feature template may include:
S31: extracting the gray value of at least one sampled point in each of the first effective information region and the second effective information region;
S32: binary-coding the difference between the gray value of the sampled point and the average gray value of its neighboring points, forming the facial feature template.
Optionally, for the facial feature extraction, the difference between the gray value of the sampled point and the mean gray value of at least two, or four, of its adjacent surrounding points is binary-coded to form the facial feature template.
Optionally, the first binary coding includes: if the difference between the gray value of the sampled point and the average gray value of its neighboring points is greater than 3, the point is encoded as 1; otherwise it is encoded as 0.
Alternatively, the step of performing feature extraction on the facial features of the effective region to form the face feature template may include:
S33: binary-encoding the boundary properties of the sampled points using an edge detection operator, to form the face feature template.
Optionally, binary-encoding the boundary properties of a sampled point with the edge detection operator includes: if the gray value of the pixel to the left of the sampled point is greater than that of the pixel to its right, the point is encoded as 1; otherwise it is encoded as 0.
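This boundary coding reduces to a sign-of-gradient test. A minimal sketch (the function name is an illustration, not from the patent):

```python
def encode_by_edge_sign(left_gray, right_gray):
    # 1 if the pixel to the left of the sampled point is brighter
    # than the pixel to its right, else 0 (a sign-of-gradient code).
    return 1 if left_gray > right_gray else 0
```

Because only the sign of the left-right difference is kept, the code is insensitive to uniform brightness changes, which is consistent with the robustness claim made below for this coding scheme.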
Of course, when performing binary coding, S32 and S33 may be used alternatively or simultaneously, and different coding modes may be used in different regions or at different points. For example, when the left and right sides of the face differ, or the facial information extractable from the two sides is asymmetric, S32 may be used on the left side of the face and S33 on the right side. Alternatively, S32 may be used when a sample lies in the middle of the first effective information region and/or the second effective information region, and S33 when a sample lies at the edge of the first effective information region and/or the second effective information region.
In the embodiment of the present invention, within the standardized region, binary coding is performed either on the gray value of a sampled point against the gray values of its surrounding neighboring points, or on the boundary properties of the sampled point using an edge detection operator, to form the face feature template. This improves the robustness of the feature representation and solves the problem that texture amplitude, when used as a feature, is vulnerable to interference. Moreover, compared with existing facial feature extraction algorithms that extract texture features using Gabor anisotropic filters, local binary patterns, and the like, the embodiment of the present invention binary-encodes either the difference between a sampled point's gray value and the surrounding gray mean, or the sampled point's boundary properties via an edge detection operator, after the face has been standardized. The resulting feature template can therefore be compared directly with the face feature templates in a preset face database, without cumbersome data calculation and transformation in the subsequent comparison step.
Optionally, the step of comparing the face feature template with the face feature templates in the preset face database, to identify the identity of the person in the person image to be identified, may include:
S41: dividing the face feature template into at least two sub-templates;
S42: comparing each sub-template with the corresponding sub-template of a face feature template in the preset face database;
S43: judging whether the similarities between the sub-templates and the corresponding sub-templates of the face feature template in the preset face database are all greater than or equal to a threshold, so as to identify the identity of the person in the person image to be identified.
Optionally, the first decomposition unit 1401 may divide the face feature template into an integral multiple of 2 sub-templates.
The recognition method of the embodiment of the present invention divides the face feature template to be compared into several small regions, which are then compared one by one with the corresponding facial image sub-templates in the preset face database. If the similarity of every small region exceeds the set threshold (the threshold may be set to 60-98%, for example), the two compared feature sub-templates are deemed to come from the same individual; otherwise, the two compared feature sub-templates may belong to different people.
Traditional techniques compare the overall data of the face feature template to be compared with the face feature templates in the face database. Although the overall similarity of the two compared templates may reach the set threshold, the features of certain local regions may be entirely different, so performing identification on an overall threshold alone easily causes face recognition errors. The embodiment of the present invention instead proposes comparing the face feature template block by block: only when every sub-region reaches the set threshold is the identity confirmed; otherwise, local dissimilarity is deemed present and the identity cannot be confirmed. This greatly improves the recognition accuracy.
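The block-wise criterion described above can be sketched as follows. This is a minimal illustration: the function name, the representation of templates as flat bit lists, and the 0.65 default threshold (the preferred value named later) are assumptions, not the patent's implementation.

```python
def match_by_subtemplates(template_a, template_b, n_blocks=2, threshold=0.65):
    # Split two equal-length binary templates into n_blocks sub-templates
    # and accept the match only if EVERY block's bitwise similarity
    # reaches the threshold (the block-wise criterion of S41-S43).
    assert len(template_a) == len(template_b)
    block = len(template_a) // n_blocks
    for i in range(n_blocks):
        a = template_a[i * block:(i + 1) * block]
        b = template_b[i * block:(i + 1) * block]
        similarity = sum(x == y for x, y in zip(a, b)) / len(a)
        if similarity < threshold:
            return False
    return True
```

Note how the pair `[1,1,1,1,0,0]` vs `[1,1,1,1,1,1]` has an overall similarity of about 0.67, above a 0.65 threshold, yet is rejected because its second block matches in only one of three positions — exactly the failure mode of whole-template comparison that the block-wise scheme is meant to catch.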
Fig. 4(a1)-4(d2) are schematic diagrams of face recognition performed by the face identification method provided by a further embodiment of the present invention. The face recognition of the embodiment of the present invention can be carried out using the face identification device of any embodiment of the present invention, or using the face identification method provided by any embodiment of the present invention. Face recognition with a face identification method of the embodiment of the present invention is illustrated below by way of example.
First, as shown in Fig. 4(a1) and 4(a2), the face region image in the person image to be identified is acquired. This includes first obtaining the person image 4(a1) to be identified, and then focusing the person image on its face area, such as the focus face area P in 4(a2). By focusing the person image on the focus face area P, the image contains substantially no information other than the face, while containing as much facial texture information as possible, giving high credibility for face recognition.
Optionally, the focus face area P lies entirely within the face, and the ratio of the focus face area P to the face area can be adjusted according to the stature, age, etc. of the crowd to be identified. For example, the focus face area P may be about 40%-90% of the face area.
Secondly, as shown in Fig. 4(b1) and 4(b2), the effective region of the facial image is formed according to the anchor points located in the face region. This includes selecting at least four anchor points located in the face region: a first anchor point A, a second anchor point B, a third anchor point C, and a fourth anchor point D. The first anchor point A and the second anchor point B are the points on the outer sides of the two eyebrows, the third anchor point C is the point on the bridge of the nose between the two eyes, and the fourth anchor point D is the central point of the upper lip boundary. The third anchor point C does not lie in the plane formed by the first anchor point A, the second anchor point B, and the fourth anchor point D. The degree to which the face is inclined in the image is determined by the first anchor point A and the second anchor point B, and the position of the center line between the left and right halves of the face is determined by the third anchor point C and the fourth anchor point D; the effective region for face recognition is determined accordingly.
Optionally, the region obtained from the anchor points serves as the effective region M for face recognition; the effective region M includes a first effective information region M1 and a second effective information region M2 on the left and right sides of the face.
Optionally, the effective region M can be located entirely within the focus face area P; for example, the area of the effective region M may account for 60-100% of the area of the focus face area P.
Optionally, the first effective information region M1 includes the region formed by the interconnecting lines of the first anchor point A, the third anchor point C, and the fourth anchor point D projected onto the same plane, such as the plane of the XY axes. The second effective information region M2 includes the region formed by the interconnecting lines of the second anchor point B, the third anchor point C, and the fourth anchor point D projected onto the same plane, such as the plane of the XY axes.
Optionally, the effective region M may further involve a fifth anchor point E, which is the midpoint of the line connecting the first anchor point A and the second anchor point B. In that case, the first effective information region M1 includes the region formed by the interconnecting lines of the first anchor point A, the fifth anchor point E, the third anchor point C, and the fourth anchor point D projected onto the same plane; and the second effective information region M2 includes the region formed by the interconnecting lines of the second anchor point B, the fifth anchor point E, the third anchor point C, and the fourth anchor point D projected onto the same plane.
Optionally, the fifth anchor point E, the third anchor point C, and the fourth anchor point D lie on the same straight line when projected onto the same plane.
Optionally, the effective region M includes a virtual rectangle. The first side length of the virtual rectangle is the line AB connecting the first anchor point A and the second anchor point B, i.e. the distance between the first anchor point A and the second anchor point B; the second side length of the virtual rectangle is the line ED connecting the fourth anchor point D and the fifth anchor point E, i.e. the distance between the fourth anchor point D and the fifth anchor point E. That is, the intersection of the line connecting the first anchor point A and the second anchor point B with the line through the fourth anchor point D and the third anchor point C is the fifth anchor point E. Keeping the fifth anchor point E on the line ED and letting the line AB translate from position E to position D, the regions on the two sides of ED are sampled uniformly, yielding a standardized rectangular region.
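A minimal geometric sketch of this construction, assuming 2D (x, y) landmark coordinates (the function name and the sample coordinates are illustrative, not from the patent):

```python
def virtual_rectangle(A, B, C, D):
    # A, B: outer eyebrow points; C: nose-bridge point between the eyes
    # (kept in the signature because E is defined as the intersection of
    # line AB with line DC); D: centre of the upper lip boundary.
    # E is the midpoint of AB; the standardized rectangle is |AB| wide
    # and |ED| high.
    E = ((A[0] + B[0]) / 2, (A[1] + B[1]) / 2)
    width = ((A[0] - B[0]) ** 2 + (A[1] - B[1]) ** 2) ** 0.5
    height = ((E[0] - D[0]) ** 2 + (E[1] - D[1]) ** 2) ** 0.5
    return E, width, height
```

For landmarks A=(0,0), B=(4,0), C=(2,1), D=(2,3), this yields E=(2,0) with a rectangle 4 units wide and 3 units high, i.e. the region swept as AB translates from E down to D.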
Optionally, the region obtained serves as the effective region M for face recognition; the effective region M includes a first effective information region M1 and a second effective information region M2 of equal area on the left and right sides of the face. Of course, depending on the actual situation, the first effective information region M1 and the second effective information region M2 may be unequal. For example, when the areas of the left and right halves of the face differ, the area ratio of the first effective information region M1 to the second effective information region M2 is proportional to the area ratio of the corresponding sides of the face.
Then, as shown in Fig. 4(c1) and 4(c2), feature extraction is performed on the facial features of the effective region to form the face feature template. This includes extracting the gray value of at least one sampled point F in each of the first effective information region M1 and the second effective information region M2, and then binary-encoding the difference between the gray value of the sampled point F and the average gray value of its neighboring points, forming the face feature template, as in 4(c2).
For example, the difference between the gray value of the sampled point F and the mean of 2 or 4 adjacent gray values around it may be binary-encoded to form the face feature template. One binary coding mode: when the difference between the gray value of the sampled point F and the average gray value of its neighboring points is greater than 3, a local gray-level change is indicated and the feature of the point is represented by 1; otherwise, the local gray level is deemed unchanged and the feature of the point is represented by 0.
Illustratively, suppose the gray value of a sampled point F is 25, the gray value of the pixel to its left on the same straight line is 20, and the gray value of the pixel to its right is 32; the average gray value of the neighboring points around the sampled point F is then 26. The difference between the gray value of the sampled point and the average gray value of its neighbors is less than 3, so the point is encoded as 0.
Illustratively, suppose the gray value of a sampled point F is 25 and the gray values of 4 symmetric points centered on F are 28, 26, 30, and 36 respectively; the average gray value of the neighboring points around the sampled point F is then 30. The difference between the gray value of the sampled point and the average gray value of its neighbors is greater than 3, so the point is encoded as 1.
Of course, binary coding may also be performed on the boundary properties of the sampled points using an edge detection operator, forming the face feature template.
Optionally, binary-encoding the boundary properties of a sampled point with the edge detection operator includes: if the gray value of the pixel to the left of the sampled point is greater than that of the pixel to its right, the point is encoded as 1; otherwise it is encoded as 0.
Optionally, the numbers of sampled points in the first effective information region M1 and the second effective information region M2 are equal.
Optionally, the positions of the sampled points in the first effective information region M1 and the second effective information region M2 are axially symmetric about the line connecting the fifth anchor point E and the fourth anchor point D.
Finally, as shown in Fig. 4(d1) and 4(d2), the face feature template is compared with the preset face database to identify the identity of the person in the person image to be identified. This includes dividing the face feature template into at least two sub-templates 4(d1) and 4(d2), then comparing the two sub-templates 4(d1) and 4(d2) respectively with the corresponding sub-templates of a face feature template in the preset face database; and judging whether the similarities between the sub-templates 4(d1), 4(d2) and the corresponding sub-templates of the face feature template in the preset face database are all greater than or equal to the threshold, so as to identify the identity of the person in the person image to be identified.
For example, the threshold may be set to 60-98%. If the similarities of the sub-templates 4(d1) and 4(d2) with the corresponding sub-templates of the face feature template in the preset face database are both greater than 60%, the two compared feature sub-templates 4(d1) and 4(d2) are deemed to come from the same person; otherwise, the two compared feature templates may belong to different people.
Of course, the required threshold may be set according to different regions, different environments, different human physiological structure characteristics, and so on; a threshold of 65% is preferred, as the face recognition credibility is then better.
It can be seen that the face recognition performed by the embodiment of the present invention uses at least four anchor points within the face to form multiple effective information regions, and performs local binary feature extraction directly on the standardized face region. The computation is small, the feature comparison is simple, and the real-time performance is good, which improves the credibility of face recognition and also allows application in embedded systems. Compared with existing deep neural network approaches, no training on massive data is required, effectively avoiding the problems that neural network models have many parameters, are computationally intensive, and are unsuitable for embedded systems.
Any face identification device and any face identification method provided by the embodiments of the present invention may reside on a hardware carrier, or be embodied in the use of a hardware carrier; for example, a mobile phone having the face identification device and face identification function. Of course, they can also be applied to various other occasions requiring face recognition, such as:
1. Residential security and management, such as face recognition access control and face recognition security doors.
2. Identification, such as e-passports and identity cards.
3. Public security, justice, and criminal investigation, such as using face identification systems and networks to track down and arrest fugitives nationwide.
4. Self-service, such as bank automatic teller machines.
5. Information security, such as computer or mobile terminal login, e-government, and e-commerce.
The above embodiments are only used to illustrate the present invention and not to limit it. Those of ordinary skill in the relevant technical field can also make various changes and modifications without departing from the spirit and scope of the present invention; therefore, all equivalent technical solutions also belong to the scope of the present invention, and the patent protection scope of the present invention should be defined by the claims.
Claims (14)
1. A face identification device, characterized by comprising:
an image capture module for acquiring a face region image in a person image to be identified;
an image positioning module for forming an effective region of the facial image according to anchor points located in the face region;
an image processing module for performing feature extraction on facial features of the effective region to form a face feature template;
an image comparison module for comparing the face feature template with a preset face database, to identify the identity of the person in the person image to be identified.
2. The face identification device according to claim 1, characterized in that the image capture module comprises:
an image acquisition unit for obtaining the person image to be identified;
an image focusing unit for focusing the person image on the face area of the person image.
3. The face identification device according to claim 1, characterized in that the image positioning module comprises:
a selection unit for selecting at least four anchor points located in the face region, wherein at least one anchor point is not in the same plane as the others;
a zoning unit for forming the effective region according to the anchor points, the effective region comprising at least a first effective information region and a second effective information region.
4. The face identification device according to claim 3, characterized in that the four anchor points comprise a first anchor point, a second anchor point, a third anchor point, and a fourth anchor point, wherein the first anchor point is the outer point of the left eyebrow of the face, the second anchor point is the outer point of the right eyebrow of the face, the third anchor point is the point on the bridge of the nose between the two eyes of the face, and the fourth anchor point is the middle point of the upper lip.
5. The face identification device according to claim 4, characterized by further comprising a fifth anchor point, the effective region comprising a virtual rectangle, the first side length of the virtual rectangle being the distance between the first anchor point and the second anchor point, and the second side length of the virtual rectangle being the distance between the fourth anchor point and the fifth anchor point.
6. The face identification device according to claim 1, characterized in that the image processing module comprises:
a feature extraction unit for extracting the gray value of at least one sampled point in each of the first effective information region and the second effective information region;
a first coding unit for binary-encoding the difference between the gray value of the sampled point and the average gray value of its neighboring points, to form the face feature template.
7. The face identification device according to claim 6, characterized in that the image processing module further comprises:
a second coding unit for binary-encoding boundary properties of the sampled points using an edge detection operator, to form the face feature template.
8. The face identification device according to claim 6, characterized in that the numbers of sampled points in the first effective information region and the second effective information region are equal.
9. The face identification device according to claim 1, characterized in that the image comparison module comprises:
a first decomposition unit for dividing the face feature template into at least two sub-templates;
a first comparing unit for comparing each sub-template with the corresponding sub-template of a face feature template in the preset face database;
a first judging unit for judging whether the similarities between the sub-templates and the corresponding sub-templates of the face feature template in the preset face database are all greater than or equal to a threshold, to identify the identity of the person in the person image to be identified.
10. The face identification device according to claim 9, characterized in that the threshold is 60-98%.
11. A face identification method, characterized by comprising:
acquiring a face region image in a person image to be identified;
forming an effective region of the facial image according to anchor points located in the face region;
performing feature extraction on facial features of the effective region to form a face feature template;
comparing the face feature template with face feature templates in a preset face database, to identify the identity of the person in the person image to be identified.
12. The face identification method according to claim 11, characterized in that the step of forming the effective region of the facial image according to the anchor points located in the face region comprises:
selecting at least four anchor points located in the face region, wherein at least one anchor point is not in the same plane as the others;
forming the effective region according to the anchor points, the effective region comprising at least a first effective information region and a second effective information region.
13. The face identification method according to claim 11, characterized in that the step of performing feature extraction on the facial features of the effective region to form the face feature template comprises:
extracting the gray value of at least one sampled point in each of the first effective information region and the second effective information region;
binary-encoding the difference between the gray value of the sampled point and the average gray value of its neighboring points, to form the face feature template.
14. The face identification method according to claim 11, characterized in that the step of comparing the face feature template with the face feature templates in the preset face database, to identify the identity of the person in the person image to be identified, may comprise:
dividing the face feature template into at least two sub-templates;
comparing each sub-template with the corresponding sub-template of a face feature template in the preset face database;
judging whether the similarities between the sub-templates and the corresponding sub-templates of the face feature template in the preset face database are all greater than or equal to a threshold, to identify the identity of the person in the person image to be identified.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810778917.2A CN109086692A (en) | 2018-07-16 | 2018-07-16 | A kind of face identification device and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109086692A true CN109086692A (en) | 2018-12-25 |
Family
ID=64838011
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810778917.2A Pending CN109086692A (en) | 2018-07-16 | 2018-07-16 | A kind of face identification device and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109086692A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110245561A (en) * | 2019-05-09 | 2019-09-17 | 深圳市锐明技术股份有限公司 | A kind of face identification method and device |
CN111368674A (en) * | 2020-02-26 | 2020-07-03 | 中国工商银行股份有限公司 | Image recognition method and device |
CN113469091A (en) * | 2021-07-09 | 2021-10-01 | 北京的卢深视科技有限公司 | Face recognition method, training method, electronic device and storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103955690A (en) * | 2014-04-15 | 2014-07-30 | 合肥工业大学 | Method for constructing compact image local feature descriptor |
CN104063690A (en) * | 2014-06-25 | 2014-09-24 | 广州卓腾科技有限公司 | Identity authentication method based on face recognition technology, device thereof and system thereof |
US20160110590A1 (en) * | 2014-10-15 | 2016-04-21 | University Of Seoul Industry Cooperation Foundation | Facial identification method, facial identification apparatus and computer program for executing the method |
CN105930834A (en) * | 2016-07-01 | 2016-09-07 | 北京邮电大学 | Face identification method and apparatus based on spherical hashing binary coding |
CN106446779A (en) * | 2016-08-29 | 2017-02-22 | 深圳市软数科技有限公司 | Method and apparatus for identifying identity |
CN106886739A (en) * | 2015-12-16 | 2017-06-23 | 苏州工业园区洛加大先进技术研究院 | A kind of video frequency monitoring method based on recognition of face |
CN108197529A (en) * | 2017-11-27 | 2018-06-22 | 重庆邮电大学 | A kind of human facial feature extraction method for merging DLDP and sobel |
CN108268814A (en) * | 2016-12-30 | 2018-07-10 | 广东精点数据科技股份有限公司 | A kind of face identification method and device based on the fusion of global and local feature Fuzzy |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20181225 |