CN109614910A - Face recognition method and device - Google Patents
- Publication number: CN109614910A
- Application number: CN201811473646.6A
- Authority
- CN
- China
- Prior art keywords
- image
- human face
- target
- face region
- region image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The present invention discloses a face recognition method and device. The face recognition method includes: acquiring an infrared image; performing face target detection based on the infrared image and smoothing the detected face target to obtain a smoothed face region image; performing image quality evaluation using the facial feature points contained in the face region image; and performing face identity recognition on the face region image that satisfies the quality evaluation, to obtain an identity recognition result. The present invention avoids the problem that face recognition using visible-light images, which are sensitive to light, cannot be performed accurately under strong or weak illumination, and improves the accuracy of identity recognition by performing identity recognition only on smoothed face region images that pass the image quality evaluation.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to a face recognition method and device.
Background technique
With the development of computer technology, image processing has been applied to more and more fields. Typically, a face recognition model is trained with machine learning methods and then used to recognise the faces in images.
Existing face detection and recognition mainly uses visible-light images, for example colour images. Face detection and recognition based on colour images, however, is highly sensitive to light, and recognition performance is poor for colour images captured under strong-light or dim-light conditions.
Summary of the invention
The present invention provides a face recognition method and device, to solve the problem that existing face recognition is not robust to illumination.
One aspect of the present invention provides a face recognition method, comprising: acquiring an infrared image; performing face target detection based on the infrared image and smoothing the detected face target to obtain a smoothed face region image; performing image quality evaluation using the facial feature points contained in the face region image; and performing face identity recognition on the face region image that satisfies the quality evaluation, to obtain an identity recognition result.
One aspect of the present invention provides a face identity recognition device, comprising: an image acquisition unit for acquiring an infrared image; a smoothing processing unit for performing face target detection based on the infrared image and smoothing the detected face target to obtain a smoothed face region image; a quality evaluation unit for performing image quality evaluation using the facial feature points contained in the face region image; and an identity recognition unit for performing face identity recognition on the face region image that satisfies the quality evaluation, to obtain an identity recognition result.
The present invention exploits the insensitivity of infrared (IR) images to illumination by performing face identity recognition on IR images, avoiding the problem that face recognition using visible-light images, which are sensitive to light, cannot be performed accurately under strong or weak illumination. Moreover, before face identity recognition is performed on the IR image, the face target of the IR image is smoothed to obtain a stable and continuous face region image; the smoothed face region image then undergoes an image quality evaluation of image validity, and identity recognition is performed only on face region images that pass the image quality evaluation, which improves the accuracy of identity recognition.
Brief description of the drawings
Fig. 1 is a flowchart of the face recognition method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of calculating the image pitch angle from facial key points according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of calculating the image yaw angle from facial key points according to an embodiment of the present invention;
Fig. 4 is a structural block diagram of the face identity recognition device according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of the hardware structure of the face identity recognition device according to an embodiment of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the present invention clearer, embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
It should be understood that these descriptions are merely exemplary and are not intended to limit the scope of the present invention. In the following description, descriptions of well-known structures and technologies are omitted to avoid unnecessarily obscuring the concepts of the present invention.
The terms used herein are for describing particular embodiments only and are not intended to limit the present invention. The words "a", "an" and "the" used herein should also include the meanings of "multiple" and "various", unless the context clearly indicates otherwise. Furthermore, the terms "include" and "comprise" used herein indicate the presence of the stated features, steps, operations and/or components, but do not exclude the presence or addition of one or more other features, steps, operations or components.
All terms used herein (including technical and scientific terms) have the meanings generally understood by those skilled in the art, unless otherwise defined. Terms used herein should be interpreted as having a meaning consistent with the context of this specification, and should not be interpreted in an idealized or overly rigid manner.
Some block diagrams and/or flowcharts are shown in the drawings. It should be understood that some blocks in the block diagrams and/or flowcharts, or combinations thereof, can be realised by computer program instructions. These computer program instructions can be supplied to the processor of a general-purpose computer, a special-purpose computer or another programmable data processing device, so that, when executed by the processor, these instructions create means for realising the functions/operations illustrated in the block diagrams and/or flowcharts.
Therefore, the technology of the present invention can be realised in the form of hardware and/or software (including firmware, microcode, etc.). In addition, the technology of the present invention can take the form of a computer program product on a machine-readable medium storing instructions, the computer program product being used by, or in combination with, an instruction execution system. In the context of the present invention, a machine-readable medium can be any medium that can contain, store, transmit, propagate or transfer instructions. For example, the machine-readable medium can include, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus, device or propagation medium. Specific examples of machine-readable media include: magnetic storage devices, such as magnetic tape or a hard disk (HDD); optical storage devices, such as a compact disc (CD-ROM); memories, such as random access memory (RAM) or flash memory; and/or wired/wireless communication links.
An embodiment of the present invention provides a face recognition method.
Fig. 1 is a flowchart of the face recognition method according to an embodiment of the present invention. As shown in Fig. 1, the method of this embodiment includes:
S110: acquire an infrared (Infrared Radiation, IR) image.
In this embodiment, the IR image may be captured with an infrared camera.
S120: perform face target detection based on the IR image, and smooth the detected face target to obtain a smoothed face region image.
S130: perform image quality evaluation using the facial feature points contained in the face region image.
In this embodiment, image quality evaluation is performed on the detected face region image to identify image validity. When the image passes the quality evaluation, the detected face region image is shown to be a valid face image, and identity recognition can be performed based on it; when the image does not pass the quality evaluation, the detected face region image is shown to be an invalid face image, and the face region image is discarded.
S140: perform face identity recognition on the face region image that satisfies the quality evaluation, to obtain an identity recognition result.
Based on the insensitivity of IR images to illumination, this embodiment performs face identity recognition on IR images, avoiding the problem that face recognition using visible-light images, which are sensitive to light, cannot be performed accurately under strong or weak illumination. In addition, before face identity recognition is performed on the IR image, the face target of the IR image is smoothed to obtain a stable and continuous face region image; the smoothed face region image then undergoes an image quality evaluation of image validity, and identity recognition is performed only on face region images that pass the image quality evaluation, which improves the accuracy of identity recognition.
Steps S110-S140 are described in detail below. The executing subject of steps S110-S140 of this embodiment may be a terminal device, for example a smartphone, tablet (PAD) or smart speaker.
First, step S110 is executed to acquire the IR image.
The executing subject in this embodiment may be equipped with a camera for capturing IR images. When identity recognition is required, the camera can be opened to photograph the user's head or face, obtaining the IR image required by this embodiment. After the required IR image is obtained, necessary pre-processing such as de-noising filtering may be applied to it.
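The de-noising pre-processing mentioned above can be sketched, for illustration, as a simple 3x3 mean filter over a grayscale IR frame. The patent does not specify which filter is used, so the kernel choice here is an assumption:

```python
def box_filter_3x3(img):
    """Smooth a grayscale IR frame (list of rows of pixel values) with a
    3x3 mean filter. A minimal pre-processing sketch; the source only
    says "necessary de-noising filtering", so this kernel is assumed.
    Border pixels are left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = sum(img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = s / 9.0
    return out
```

In practice a Gaussian or median filter would serve equally well here; the point is only that detection in S120 operates on a de-noised frame.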
After the IR image is obtained, step S120 is executed: perform face target detection based on the IR image, and smooth the detected face target to obtain a smoothed face region image.
This embodiment can perform face target detection on the IR image using existing methods, for example a decision-tree method that performs multi-scale detection of the face target by scaling and sliding a window. Because the stability of face target detection affects the accuracy of subsequent identity recognition, this embodiment smooths the face target detection results to guarantee a stable face detection region.
In one embodiment, the smoothed face region image is obtained as follows. First, face target detection is performed on the currently acquired infrared image to obtain target position information. Initial Kalman smoothing is then applied to the target position information to obtain a Kalman smoothing result. Next, based on a preset smoothing index and a preset target position smoothing error, exponential smoothing is applied to the smoothing result of the previous infrared frame's target position and to the Kalman smoothing result, giving an exponential smoothing result; the preset target position smoothing error denotes the maximum smoothing error of the target position. Finally, jitter information of the target position is determined according to the exponential smoothing result and the smoothing result of the previous infrared frame's target position, and, according to the jitter information and a preset static-behaviour decision error, the smoothing result of the target position information corresponding to the face region image after smoothing of the current infrared frame is determined; the preset static-behaviour decision error denotes the maximum jitter error for which the target position corresponds to static behaviour.
In the embodiment, the target position includes a target centre position pt(i)(x, y) and a target edge position s(i)(x, y), where i is the frame number of the current IR image and (x, y) is the pixel position. After the target position information is obtained, initial Kalman smoothing is applied to the target centre position pt(i)(x, y) and the target edge position s(i)(x, y) respectively, yielding the Kalman smoothing result Ks_pt(i) of the target centre position and the Kalman smoothing result Ks_s(i) of the target edge position. Exponential smoothing is then performed on the basis of the smoothing result M_pt(i-1) of the target centre position and the smoothing result M_s(i-1) of the target edge position in the previous IR frame, together with the Kalman smoothing results Ks_pt(i) and Ks_s(i) of the current IR frame, giving the exponential smoothing result Es_pt(i) of the target centre position and the exponential smoothing result Es_s(i) of the target edge position. Here g_pt and g_s are, in order, the smoothing-error weight value of the target centre position and the smoothing-error weight value of the target edge position, whose values may be set according to the tracking data requirements and error; MR_pt and MR_s are the target centre position smoothing error and the target edge position smoothing error, where MR_pt denotes the maximum smoothing error of the target centre position and MR_s denotes the maximum smoothing error of the target edge position. Next, the jitter of the target centre position Dis_pt(i) = |Es_pt(i) - M_pt(i-1)| and the jitter of the target edge position Dis_s(i) = |Es_s(i) - M_s(i-1)| are calculated. With the static-behaviour decision error of the target centre position set to SR_pt and that of the target edge position set to SR_s, the smoothing result of the current IR frame's target centre position is determined as M_pt(i) = M_pt(i-1) if Dis_pt(i) < SR_pt, and M_pt(i) = Es_pt(i) otherwise; likewise, the smoothing result of the current IR frame's target edge position is determined as M_s(i) = M_s(i-1) if Dis_s(i) < SR_s, and M_s(i) = Es_s(i) otherwise. That is, when the jitter Dis_pt(i) of the target centre position is less than SR_pt, the target centre position corresponds to static behaviour, and the target centre position obtained after smoothing the previous IR frame is kept as the target centre position of the current IR frame; conversely, when Dis_pt(i) is not less than SR_pt, the target centre position corresponds to dynamic behaviour, and the exponential smoothing result Es_pt(i) of the current IR frame's target centre position is taken as the target centre position of the current IR frame. The same rule applies to the target edge position: when Dis_s(i) is less than SR_s, the target edge position corresponds to static behaviour and the previous frame's smoothed target edge position is kept; when Dis_s(i) is not less than SR_s, the target edge position corresponds to dynamic behaviour and the exponential smoothing result Es_s(i) is taken as the target edge position of the current IR frame.
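One frame of the smoothing update described above can be sketched as follows. The exact exponential-smoothing formula is not reproduced in this text, so the classic form Es(i) = g*Ks(i) + (1-g)*M(i-1) is an assumption; the static/dynamic decision rule, however, follows the prose directly:

```python
def smooth_position(ks_i, m_prev, g, sr):
    """One per-frame update of the target-position smoothing in S120,
    applied separately to the centre position (g_pt, SR_pt) and the
    edge position (g_s, SR_s).

    ks_i:   Kalman-smoothed position Ks(i) for the current frame
    m_prev: final smoothed position M(i-1) from the previous frame
    g:      exponential-smoothing weight (the g_pt / g_s of the text)
    sr:     static-behaviour decision error (SR_pt / SR_s)
    """
    # Assumed form of the exponential smoothing step.
    es_i = g * ks_i + (1.0 - g) * m_prev
    # Jitter Dis(i) = |Es(i) - M(i-1)|, as stated in the description.
    dis = abs(es_i - m_prev)
    # Static behaviour: keep M(i-1); dynamic behaviour: take Es(i).
    return m_prev if dis < sr else es_i
```

With a large sr the position freezes on small jitter (suppressing detector noise), while genuine motion beyond sr passes through the exponentially smoothed estimate.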
After the smoothed face region image is obtained, step S130 is executed: perform image quality evaluation using the facial feature points contained in the face region image.
In one embodiment, image quality evaluation is performed as follows. First, the positions of five facial feature points contained in the face region image are obtained; the first to fifth facial feature points are, in order, the left-eye and right-eye feature points, the left and right mouth-corner feature points, and the nose-tip feature point. The pitch angle and yaw angle of the face region image are then determined according to the positions of the first to fifth facial feature points. The face region image is also divided into multiple sub-image blocks, and an energy-gradient calculation over the sub-image blocks yields the gradient value of the face region image. When the pitch angle, yaw angle and gradient value of the face region image satisfy preset conditions, the face region image is determined to pass the image quality evaluation.
Since the pose of a face is described by three angles, pitch, yaw and roll, and the roll angle can be corrected simply by rotating the image (this is known as face alignment in face recognition algorithms), this embodiment does not use the roll angle as a criterion of image quality evaluation, but uses only the pitch angle and the yaw angle.
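The roll correction (face alignment) mentioned above can be sketched by computing the roll angle from the eye line and rotating points to level it; these helper names are illustrative, not from the source:

```python
import math

def roll_angle(el, er):
    """Roll (in-plane rotation) from the line joining the left eye el
    and right eye er, each an (x, y) tuple. A level eye line gives 0."""
    return math.atan2(er[1] - el[1], er[0] - el[0])

def rotate_point(p, center, theta):
    """Rotate point p about center by -theta; applying this to every
    pixel (or landmark) levels the eye line, i.e. corrects the roll."""
    dx, dy = p[0] - center[0], p[1] - center[1]
    c, s = math.cos(-theta), math.sin(-theta)
    return (center[0] + c * dx - s * dy,
            center[1] + s * dx + c * dy)
```

Because roll is removed this way before quality evaluation, only pitch and yaw remain as pose criteria, which matches the choice made in the text.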
The pitch angle and yaw angle are obtained as follows:
First, the horizontal and vertical components are obtained of the first vector, formed by the first and fifth facial feature points; of the second vector, formed by the second and fifth facial feature points; of the third vector, formed by the third and fifth facial feature points; and of the fourth vector, formed by the fourth and fifth facial feature points. The pitch angle is taken as the larger of the difference between the vertical components of the first and third vectors and the difference between the vertical components of the second and fourth vectors. The yaw angle is taken as the larger of the difference between the horizontal components of the first and third vectors and the difference between the horizontal components of the second and fourth vectors.
As shown in Figs. 2-3, suppose the five facial key points are (El, Er, Ml, Mr, N): the left-eye and right-eye feature points are El and Er respectively, the left and right mouth-corner feature points are Ml and Mr respectively, and the nose-tip feature point is N. The first vector V_ElN is formed by the left-eye feature point and the nose-tip feature point, the second vector V_ErN by the right-eye feature point and the nose-tip feature point, the third vector V_MlN by the left mouth-corner feature point and the nose-tip feature point, and the fourth vector V_MrN by the right mouth-corner feature point and the nose-tip feature point.
As shown in Fig. 2, the difference between the length of the vertical component of the first vector V_ElN and the length of the vertical component of the third vector V_MlN is DHl, and the difference between the lengths of the vertical components of the second vector V_ErN and the fourth vector V_MrN is DHr; the pitch angle is taken as DH = max{DHl, DHr}.
As shown in Fig. 3, the difference between the lengths of the horizontal components of the first vector V_ElN and the third vector V_MlN is DWl, and the difference between the lengths of the horizontal components of the second vector V_ErN and the fourth vector V_MrN is DWr; the yaw angle is taken as DW = max{DWl, DWr}.
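The DH/DW computation described above can be sketched as follows; taking absolute component lengths is an assumption where the translated description is ambiguous:

```python
def pitch_yaw(el, er, ml, mr, n):
    """Pitch/yaw proxies DH and DW from the five landmarks of S130.

    el, er: left/right eye; ml, mr: left/right mouth corner; n: nose
    tip, each an (x, y) tuple. The vectors run from each landmark to
    the nose, per the description; absolute values are assumed."""
    def comp(p):
        # (|horizontal|, |vertical|) component lengths of p -> nose.
        return abs(n[0] - p[0]), abs(n[1] - p[1])
    (wl_e, hl_e), (wr_e, hr_e) = comp(el), comp(er)
    (wl_m, hl_m), (wr_m, hr_m) = comp(ml), comp(mr)
    dh = max(abs(hl_e - hl_m), abs(hr_e - hr_m))  # pitch proxy DH
    dw = max(abs(wl_e - wl_m), abs(wr_e - wr_m))  # yaw proxy DW
    return dh, dw
```

Note these are distance-based proxies, not angles in degrees; the thresholds of the quality evaluation would be set in the same units.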
The gradient value of the face region image is obtained as follows: the face region image is divided into a nine-grid of nine sub-image blocks arranged in sequence, and an energy-gradient calculation is performed on the five odd-indexed sub-image blocks, i.e. the first, third, fifth, seventh and ninth sub-image blocks, according to the energy-gradient formula FB = sum over x, y of [(f(x+1, y) - f(x, y))^2 + (f(x, y+1) - f(x, y))^2], obtaining the gradient value FB of the face region image. Here H and W are, in order, the height and width of the face region image, f(x, y) is the pixel value at pixel (x, y) of a sub-image block, and the sums run over 1 <= x <= W-1 and 1 <= y <= H-1.
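The energy-gradient calculation over the odd-indexed nine-grid blocks can be sketched as follows, assuming the standard energy-gradient focus measure; the `blocks` helper that splits a face image into its nine sub-blocks is hypothetical and left to the caller:

```python
def energy_gradient(block):
    """Energy-gradient sharpness of one sub-image block (list of rows).

    Assumed standard focus measure: the sum over pixels of
    (f(x+1,y) - f(x,y))^2 + (f(x,y+1) - f(x,y))^2."""
    h, w = len(block), len(block[0])
    fb = 0.0
    for y in range(h - 1):
        for x in range(w - 1):
            fb += (block[y][x + 1] - block[y][x]) ** 2  # horizontal diff
            fb += (block[y + 1][x] - block[y][x]) ** 2  # vertical diff
    return fb

def face_gradient(face, blocks):
    """Gradient value FB of a face region image: sum of the energy
    gradients of the odd-indexed nine-grid blocks (1st, 3rd, 5th, 7th,
    9th). `blocks` maps the face image to its nine sub-blocks."""
    sub = blocks(face)
    return sum(energy_gradient(sub[i]) for i in (0, 2, 4, 6, 8))
```

A sharp, well-focused face region gives a large FB, so comparing FB against a gradient threshold screens out blurred detections.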
This embodiment determines that the face region image passes the image quality evaluation when the pitch angle is below the pitch angle threshold, the yaw angle is below the yaw angle threshold, and the gradient value is greater than the gradient threshold.
After a face region image passing the image quality evaluation is obtained, step S140 is executed: perform face identity recognition on the face region image that satisfies the quality evaluation, to obtain an identity recognition result.
In this embodiment, the identity recognition result is obtained as follows. The face region image is first input into a pre-trained face identity recognition model, the face identity recognition model having been obtained by supervised training of a convolutional neural network structure, using machine learning methods and training samples consisting of infrared images. The confidence value output by the face identity recognition model is then obtained. When the confidence value is greater than a first confidence level, the face is identified as an existing user in the user pool; when the confidence value is less than the first confidence level but greater than a second confidence level, the face is identified as a new user, and the new user and its attribute information are added to the user pool. The user attribute information includes, but is not limited to: identity information, such as name and ID card number; face attribute information, such as gender and age; initial detection time information, indicating the system time at which the face first began to be effectively tracked; and the final time of continuous tracking, which is updated after every tracking pass so as to record the time at which the target was last tracked; the final tracking time and the initial detection time together determine the tracking duration of the target.
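The two-threshold confidence decision described above can be sketched as follows; treating confidences at or below the second threshold as rejections is an assumption, since the source does not state that case:

```python
def classify_identity(conf, t_known, t_new):
    """Two-threshold identity decision on the model's confidence value.

    conf > t_known          -> an existing user in the user pool
    t_new < conf <= t_known -> a new user, to be enrolled in the pool
    conf <= t_new           -> rejected (assumed; the source does not
                               describe the case below the second
                               confidence level)
    """
    if conf > t_known:
        return "known"
    if conf > t_new:
        return "new"
    return "rejected"
```

The gap between the two levels is what allows the system to enrol genuinely unseen faces without mislabelling low-quality matches as known users.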
The face identity recognition model can perform multi-label recognition on the face region image.
In one example, the face identity recognition model is built on the Caffe open-source framework, with the Caffe source code modified: the label reading value of the Data layer (for example the label dimension label_dim) is changed from int to int*, so that multi-attribute training labels can be received, and the single-label handling in the Data layer's multi-label data packing process is modified into multi-label import.
To make the training samples of IR images, face key-point detection is performed on the face sample images, key-point alignment is carried out for every image, and a label file containing the attribute values of each face image is arranged; for example, the format of the label file is XXX.jpg attr1 attr2 attr3 .... A convolutional neural network structure (such as Xception, VGGNet or ResNet) is then designed for feature extraction. Taking Xception as an example, the data layer, convolutional layers, separable convolutions, activation layers, pooling layers, fully connected layer and loss layer of Xception are designed, and model training is performed on the training samples using Xception; the number of iterations may be set to a preset number (for example, 400,000), and the initial learning rate may be set to a preset value (for example, 0.0005). The feature information output by Xception is input into the decision device of the face identity recognition model; the decision device in this embodiment may be any existing model that realises a classification function, for example a naive Bayesian model (NBM), a support vector machine (SVM), a neural network containing fully connected (FC) layers, or a classification function (such as the softmax function).
An embodiment of the present invention also provides a face identity recognition device.
Fig. 4 is a structural block diagram of the face identity recognition device according to an embodiment of the present invention. As shown in Fig. 4, the device of this embodiment includes:
an image acquisition unit 41, for acquiring an IR image;
a smoothing processing unit 42, for performing face target detection based on the IR image and smoothing the detected face target to obtain a smoothed face region image;
a quality evaluation unit 43, for performing image quality evaluation using the facial feature points contained in the face region image; and
an identity recognition unit 44, for performing face identity recognition on the face region image that satisfies the quality evaluation, to obtain an identity recognition result.
Based on the insensitivity of IR images to illumination, this embodiment performs face identity recognition on the IR image obtained by the image acquisition unit, avoiding the problem that face recognition using visible-light images, which are sensitive to light, cannot be performed accurately under strong or weak illumination. Before face identity recognition is performed on the IR image, the smoothing processing unit smooths the face target of the IR image to obtain a stable and continuous face region image; the quality evaluation unit then performs an image quality evaluation of image validity on the smoothed face region image, and the identity recognition unit performs identity recognition on the face region image that passes the image quality evaluation, which improves the accuracy of identity recognition.
In one embodiment, the smoothing processing unit 42 is configured to: perform face target detection on the currently acquired IR image to obtain target position information; apply initial Kalman smoothing to the target position information to obtain a Kalman smoothing result; based on a preset smoothing index and a preset target position smoothing error, apply exponential smoothing to the smoothing result of the previous infrared frame's target position and to the Kalman smoothing result, obtaining an exponential smoothing result, the preset target position smoothing error denoting the maximum smoothing error of the target position; determine jitter information of the target position according to the exponential smoothing result and the smoothing result of the previous infrared frame's target position; and determine, according to the jitter information and a preset static-behaviour decision error, the smoothing result of the target position information corresponding to the face region image after smoothing of the current IR frame, the preset static-behaviour decision error denoting the maximum jitter error for which the target position corresponds to static behaviour.
The target position includes a target center position and a target edge position. Specifically, the smoothing processing unit 42 obtains the exponential smoothing result Es_pt(i) of the target center position according to a first formula, and obtains the exponential smoothing result Es_s(i) of the target edge position according to a second formula (the formula images are not reproduced in the source); where g_pt and g_s are, respectively, the center position smoothing-error weight value and the target edge position smoothing-error weight value; MR_pt and MR_s are, respectively, the target center position smoothing error and the target edge position smoothing error; Ks_pt(i) and Ks_s(i) are, respectively, the target center position Kalman smoothing result and the target edge position Kalman smoothing result; M_pt(i−1) and M_s(i−1) are, respectively, the smoothing results of the target center position and the target edge position in the previous frame of infrared image; and i and i−1 are the frame numbers of the current infrared image and the previous frame of infrared image, respectively.
The smoothing processing unit 42 further determines the jitter Dis_pt(i) of the target center position in the current infrared image according to the formula Dis_pt(i) = |Es_pt(i) − M_pt(i−1)|, and determines the jitter Dis_s(i) of the target edge position in the current infrared image according to the formula Dis_s(i) = |Es_s(i) − M_s(i−1)|. It then determines the smoothing result M_pt(i) of the target center position and the smoothing result M_s(i) of the target edge position in the current infrared image according to corresponding piecewise formulas (the formula images are not reproduced in the source); where SR_pt and SR_s are, respectively, the set static behavior decision error corresponding to the target center position and the set static behavior decision error corresponding to the target edge position.
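The per-frame smoothing step above can be sketched as follows. Since the patent's exact exponential-smoothing formula is an image not reproduced in the source, the weighted blend below (weight g on the Kalman result) and the "hold the position when jitter is within SR" gate are assumptions consistent with the surrounding variable definitions:

```python
# Hedged sketch of the per-frame smoothing described above, applied
# independently to the target center position and each edge position.
# The exponential-smoothing blend is an assumption; the jitter gate follows
# the described logic that motion below the static behavior decision error
# is treated as jitter of a static target and suppressed.

def smooth_position(ks_i, m_prev, g, sr):
    """ks_i: Kalman smoothing result Ks(i) for frame i; m_prev: smoothing
    result M(i-1) of the previous frame; g: smoothing-error weight value in
    [0, 1]; sr: static behavior decision error (max jitter when static)."""
    es_i = g * ks_i + (1.0 - g) * m_prev   # assumed exponential smoothing Es(i)
    dis_i = abs(es_i - m_prev)             # jitter: Dis(i) = |Es(i) - M(i-1)|
    # Within the static decision error, movement is treated as jitter and
    # suppressed; otherwise the smoothed position is accepted.
    return m_prev if dis_i <= sr else es_i

held = smooth_position(102.7, 100.0, g=0.6, sr=5.0)    # small jitter: held at 100.0
moved = smooth_position(120.0, 100.0, g=0.6, sr=5.0)   # real motion: moves to 112.0
```

The gate is what makes a nearly stationary face box stop trembling frame to frame, while a genuine movement larger than SR still updates the position immediately.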
In one embodiment, the quality evaluation unit 43 is configured to: obtain the positions of five facial feature points included in the human face region image, where the first to fifth facial feature points correspond, in turn, to the left-eye and right-eye feature points, the left and right mouth-corner feature points, and the nose feature point; determine the pitch angle and yaw angle of the human face region image according to the positions of the first to fifth facial feature points; divide the human face region image into multiple sub-image blocks and perform energy gradient calculation on the multiple sub-image blocks to obtain a gradient value of the human face region image; and determine that the human face region image passes the image quality evaluation when the pitch angle, yaw angle, and gradient value of the human face region image satisfy preset conditions.
The quality evaluation unit 43 includes an angle calculation module, a sharpness calculation module, and an evaluation module.
The angle calculation module is configured to obtain the horizontal and vertical components of a first vector formed by the first facial feature point and the fifth facial feature point, the horizontal and vertical components of a second vector formed by the second facial feature point and the fifth facial feature point, the horizontal and vertical components of a third vector formed by the third facial feature point and the fifth facial feature point, and the horizontal and vertical components of a fourth vector formed by the fourth facial feature point and the fifth facial feature point; to obtain, as the pitch angle, the maximum of the distance difference between the vertical components of the first vector and the third vector and the distance difference between the vertical components of the second vector and the fourth vector; and to obtain, as the yaw angle, the maximum of the distance difference between the horizontal components of the first vector and the third vector and the distance difference between the horizontal components of the second vector and the fourth vector.
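The angle computation above can be sketched as follows. The "distance difference of components" is read here as the difference of absolute component values of the eye-to-nose and mouth-corner-to-nose vectors, and the raw pixel difference is used directly as the pitch/yaw value; both readings are assumptions on top of the text:

```python
# Hedged sketch of the landmark-based pose proxies described above. The
# first..fourth feature points (left eye, right eye, left mouth corner,
# right mouth corner) each form a vector with the fifth point (nose).

def pose_proxies(left_eye, right_eye, left_mouth, right_mouth, nose):
    vecs = [(p[0] - nose[0], p[1] - nose[1])
            for p in (left_eye, right_eye, left_mouth, right_mouth)]
    v1, v2, v3, v4 = vecs
    # Pitch proxy: vertical imbalance between eye and mouth-corner vectors.
    pitch = max(abs(abs(v1[1]) - abs(v3[1])), abs(abs(v2[1]) - abs(v4[1])))
    # Yaw proxy: horizontal imbalance between the same vector pairs.
    yaw = max(abs(abs(v1[0]) - abs(v3[0])), abs(abs(v2[0]) - abs(v4[0])))
    return pitch, yaw

# A roughly frontal face: vertically balanced, slightly asymmetric horizontally.
pitch, yaw = pose_proxies((30, 30), (70, 30), (32, 70), (68, 70), (50, 50))
```

On the sample landmarks, the eye and mouth-corner vectors have equal vertical extent (pitch proxy 0) and differ by two pixels horizontally (yaw proxy 2); a tilted or turned face would inflate one of the two proxies.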
The sharpness calculation module is configured to divide the human face region image into a nine-grid (3×3) layout, obtaining nine sequentially arranged sub-image blocks, and to perform energy gradient calculation on the four odd-indexed sub-image blocks according to the formula FB = Σ_{x=1}^{H−1} Σ_{y=1}^{W−1} [(f(x+1, y) − f(x, y))² + (f(x, y+1) − f(x, y))²], obtaining the gradient value FB of the human face region image; where H and W are, respectively, the height and width of the human face region image, and f(x, y) is the pixel value at pixel (x, y) of a sub-image block.
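The nine-grid sharpness measure can be sketched as follows. The energy-gradient summand is the standard definition; which four blocks count as "odd-indexed" (here blocks 1, 3, 5, 7 in 1-based reading order) is an interpretation:

```python
# Sketch of the nine-grid energy-gradient sharpness measure described above.

def energy_gradient(block):
    """Energy gradient of one sub-image block (a list of rows of pixel
    values): sum of squared forward differences in x and y."""
    h, w = len(block), len(block[0])
    return sum((block[x + 1][y] - block[x][y]) ** 2 +
               (block[x][y + 1] - block[x][y]) ** 2
               for x in range(h - 1) for y in range(w - 1))

def face_gradient(img):
    """Split img into a 3x3 grid and sum the energy gradients of the four
    odd-indexed blocks (1-based 1, 3, 5, 7 -> 0-based 0, 2, 4, 6)."""
    bh, bw = len(img) // 3, len(img[0]) // 3
    blocks = [[row[j * bw:(j + 1) * bw] for row in img[i * bh:(i + 1) * bh]]
              for i in range(3) for j in range(3)]
    return sum(energy_gradient(blocks[k]) for k in (0, 2, 4, 6))
```

A perfectly flat (defocused) image scores zero; sharp edges raise the score, which is why the gradient value works as a blur gate in the quality evaluation.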
The evaluation module is configured to determine that the human face region image passes the image quality evaluation when the pitch angle is greater than a pitch angle threshold, the yaw angle is greater than a yaw angle threshold, and the gradient value is greater than a gradient threshold.
In one embodiment, the identity recognition unit 44 is configured to input the human face region image into a pre-trained identity recognition model, where the identity recognition model is obtained by supervised training of a convolutional neural network structure using a machine learning method and training samples composed of infrared images; to obtain a confidence value output by the identity recognition model; to identify the user as a user-pool user when the confidence value is greater than a first confidence level; and to identify the user as a new user when the confidence value is less than the first confidence level but greater than a second confidence level, adding the new user and its attribute information to the user pool.
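The two-threshold decision can be sketched as follows. The threshold values, names, and the rejection branch below the second confidence level are illustrative assumptions; only the two-threshold rule itself comes from the text:

```python
# Hedged sketch of the confidence-based identity decision described above.

def decide_identity(confidence, user_pool, user_id, first=0.9, second=0.5):
    if confidence > first:
        return "known"            # identified as an existing user-pool user
    if second < confidence < first:
        user_pool.add(user_id)    # enroll the new user (and attributes)
        return "new"
    return "rejected"             # assumed handling below the second level

pool = set()
outcome_a = decide_identity(0.95, pool, "user-a")   # above first level
outcome_b = decide_identity(0.70, pool, "user-b")   # between the two levels
```

The middle band is what lets the device grow its user pool automatically: a face the model is moderately confident about is enrolled rather than rejected outright.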
As for the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant details. The device embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, which those of ordinary skill in the art can understand and implement without creative effort.
The face recognition device provided by the present invention may be implemented by software, or by a combination of hardware and software. Taking software implementation as an example, referring to Fig. 5, the face recognition device provided by the present invention may include a processor 501 and a machine-readable storage medium 502 storing machine-executable instructions. The processor 501 and the machine-readable storage medium 502 may communicate via a system bus 503. By reading and executing the machine-executable instructions in the machine-readable storage medium 502 corresponding to the face recognition logic, the processor 501 can perform the face recognition method described above.
The machine-readable storage medium 502 mentioned in the present invention may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (such as a hard disk drive), a solid-state drive, any type of storage disc (such as a CD or DVD), a similar storage medium, or a combination thereof.
According to the disclosed examples, the present invention also provides a machine-readable storage medium including machine-executable instructions, such as the machine-readable storage medium 502 in Fig. 5. The machine-executable instructions may be executed by the processor 501 in the face recognition device to implement the face recognition method described above.
The above descriptions are merely specific embodiments of the present invention. Under the above teaching of the present invention, those skilled in the art can make other improvements or modifications on the basis of the above embodiments. Those skilled in the art should understand that the above specific description merely better explains the purpose of the present invention, and the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A face recognition method, characterized in that the method comprises:
obtaining an infrared image;
performing human face target detection based on the infrared image, and smoothing the detected human face target to obtain a smoothed human face region image;
performing image quality evaluation using the facial feature points included in the human face region image; and
performing identity recognition according to the human face region image that passes the quality evaluation, to obtain an identity recognition result.
2. The method according to claim 1, characterized in that smoothing the detected human face target to obtain the smoothed human face region image comprises:
performing human face target detection on the currently obtained infrared image to obtain target position information;
performing initial Kalman smoothing on the target position information to obtain a Kalman smoothing result;
performing exponential smoothing on the smoothing result of the target position in the previous frame of infrared image and the Kalman smoothing result, based on a set smoothing index and a set target position smoothing error, to obtain an exponential smoothing result, wherein the set target position smoothing error indicates the maximum smoothing error of the target position; and
determining jitter information of the target position according to the exponential smoothing result and the smoothing result of the target position in the previous frame of infrared image, and determining, according to the jitter information and a set static behavior decision error, the smoothing result of the target position information corresponding to the human face region image after the current infrared image is smoothed, wherein the set static behavior decision error indicates the maximum jitter error of the static behavior corresponding to the target position.
3. The method according to claim 2, characterized in that the target position comprises a target center position and a target edge position, and performing exponential smoothing on the smoothing result of the previous frame of infrared image and the Kalman smoothing result comprises:
obtaining the exponential smoothing result Es_pt(i) of the target center position according to a first formula, and obtaining the exponential smoothing result Es_s(i) of the target edge position according to a second formula (the formula images are not reproduced in the source);
wherein g_pt and g_s are, respectively, the center position smoothing-error weight value and the target edge position smoothing-error weight value; MR_pt and MR_s are, respectively, the target center position smoothing error and the target edge position smoothing error; Ks_pt(i) and Ks_s(i) are, respectively, the target center position Kalman smoothing result and the target edge position Kalman smoothing result; M_pt(i−1) and M_s(i−1) are, respectively, the smoothing results of the target center position and the target edge position in the previous frame of infrared image; and i and i−1 are the frame numbers of the current infrared image and the previous frame of infrared image, respectively.
4. The method according to claim 3, characterized in that determining the jitter information of the target position according to the exponential smoothing result and the smoothing result of the target position in the previous frame of infrared image, and determining, according to the jitter information and the set static behavior decision error, the smoothing result of the target position information corresponding to the human face region image after the current infrared image is smoothed, comprises:
determining the jitter Dis_pt(i) of the target center position in the current infrared image according to the formula Dis_pt(i) = |Es_pt(i) − M_pt(i−1)|, and determining the jitter Dis_s(i) of the target edge position in the current infrared image according to the formula Dis_s(i) = |Es_s(i) − M_s(i−1)|; and
determining the smoothing result M_pt(i) of the target center position and the smoothing result M_s(i) of the target edge position in the current infrared image according to corresponding piecewise formulas (the formula images are not reproduced in the source);
wherein SR_pt and SR_s are, respectively, the set static behavior decision error corresponding to the target center position and the set static behavior decision error corresponding to the target edge position.
5. The method according to claim 1, characterized in that performing image quality evaluation using the facial feature points included in the human face region image comprises:
obtaining the positions of five facial feature points included in the human face region image, the first to fifth facial feature points corresponding, in turn, to the left-eye and right-eye feature points, the left and right mouth-corner feature points, and the nose feature point;
determining the pitch angle and yaw angle of the human face region image according to the positions of the first to fifth facial feature points;
dividing the human face region image into multiple sub-image blocks, and performing energy gradient calculation on the multiple sub-image blocks to obtain a gradient value of the human face region image; and
determining that the human face region image passes the image quality evaluation when the pitch angle, yaw angle, and gradient value of the human face region image satisfy preset conditions.
6. The method according to claim 5, characterized in that determining the pitch angle and yaw angle of the human face region image according to the positions of the first to fifth facial feature points comprises:
obtaining the horizontal and vertical components of a first vector formed by the first facial feature point and the fifth facial feature point, the horizontal and vertical components of a second vector formed by the second facial feature point and the fifth facial feature point, the horizontal and vertical components of a third vector formed by the third facial feature point and the fifth facial feature point, and the horizontal and vertical components of a fourth vector formed by the fourth facial feature point and the fifth facial feature point;
obtaining, as the pitch angle, the maximum of the distance difference between the vertical components of the first vector and the third vector and the distance difference between the vertical components of the second vector and the fourth vector; and
obtaining, as the yaw angle, the maximum of the distance difference between the horizontal components of the first vector and the third vector and the distance difference between the horizontal components of the second vector and the fourth vector.
7. The method according to claim 5, characterized in that dividing the human face region image into multiple sub-image blocks and performing energy gradient calculation on the multiple sub-image blocks to obtain the gradient value of the human face region image comprises:
dividing the human face region image into a nine-grid (3×3) layout, obtaining nine sequentially arranged sub-image blocks; and
performing energy gradient calculation on the four odd-indexed sub-image blocks according to the formula FB = Σ_{x=1}^{H−1} Σ_{y=1}^{W−1} [(f(x+1, y) − f(x, y))² + (f(x, y+1) − f(x, y))²] to obtain the gradient value FB of the human face region image;
wherein H and W are, respectively, the height and width of the human face region image, and f(x, y) is the pixel value at pixel (x, y) of a sub-image block.
8. The method according to claim 5, characterized in that determining that the human face region image passes the image quality evaluation when the pitch angle, yaw angle, and gradient value of the human face region image satisfy the preset conditions comprises:
determining that the human face region image passes the image quality evaluation when the pitch angle is greater than a pitch angle threshold, the yaw angle is greater than a yaw angle threshold, and the gradient value is greater than a gradient threshold.
9. The method according to claim 1, characterized in that performing identity recognition according to the human face region image that passes the quality evaluation to obtain the identity recognition result comprises:
inputting the human face region image into a pre-trained identity recognition model, the identity recognition model being obtained by supervised training of a convolutional neural network structure using a machine learning method and training samples composed of infrared images; and
obtaining a confidence value output by the identity recognition model; identifying a user-pool user when the confidence value is greater than a first confidence level; and identifying a new user when the confidence value is less than the first confidence level but greater than a second confidence level, and adding the new user and its attribute information to the user pool.
10. A face recognition device, characterized in that the device comprises:
an image acquisition unit, configured to obtain an infrared image;
a smoothing processing unit, configured to perform human face target detection based on the infrared image, and to smooth the detected human face target to obtain a smoothed human face region image;
a quality evaluation unit, configured to perform image quality evaluation using the facial feature points included in the human face region image; and
an identity recognition unit, configured to perform identity recognition according to the human face region image that passes the quality evaluation, to obtain an identity recognition result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811473646.6A CN109614910B (en) | 2018-12-04 | 2018-12-04 | Face recognition method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109614910A true CN109614910A (en) | 2019-04-12 |
CN109614910B CN109614910B (en) | 2020-11-20 |
Family
ID=66005301
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811473646.6A Active CN109614910B (en) | 2018-12-04 | 2018-12-04 | Face recognition method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109614910B (en) |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20120114934A (en) * | 2011-04-08 | 2012-10-17 | 대구대학교 산학협력단 | A face recognition system for user authentication of an unmanned receipt system |
CN105975908A (en) * | 2016-04-26 | 2016-09-28 | 汉柏科技有限公司 | Face recognition method and device thereof |
CN106446873A (en) * | 2016-11-03 | 2017-02-22 | 北京旷视科技有限公司 | Face detection method and device |
CN106778607A (en) * | 2016-12-15 | 2017-05-31 | 国政通科技股份有限公司 | A kind of people based on recognition of face and identity card homogeneity authentication device and method |
CN107273875A (en) * | 2017-07-18 | 2017-10-20 | 广东欧珀移动通信有限公司 | Human face in-vivo detection method and Related product |
CN107437067A (en) * | 2017-07-11 | 2017-12-05 | 广东欧珀移动通信有限公司 | Human face in-vivo detection method and Related product |
CN107798279A (en) * | 2016-09-07 | 2018-03-13 | 北京眼神科技有限公司 | Face living body detection method and device |
CN108090428A (en) * | 2017-12-08 | 2018-05-29 | 广西师范大学 | A kind of face identification method and its system |
CN108228696A (en) * | 2017-08-31 | 2018-06-29 | 深圳市商汤科技有限公司 | Research on face image retrieval and system, filming apparatus, computer storage media |
CN108230293A (en) * | 2017-05-31 | 2018-06-29 | 深圳市商汤科技有限公司 | Determine method and apparatus, electronic equipment and the computer storage media of quality of human face image |
CN108416326A (en) * | 2018-03-27 | 2018-08-17 | 百度在线网络技术(北京)有限公司 | Face identification method and device |
CN108564041A (en) * | 2018-04-17 | 2018-09-21 | 广州云从信息科技有限公司 | A kind of Face datection and restorative procedure based on RGBD cameras |
CN108805024A (en) * | 2018-04-28 | 2018-11-13 | Oppo广东移动通信有限公司 | Image processing method, device, computer readable storage medium and electronic equipment |
Non-Patent Citations (2)
Title |
---|
Xu Zhipeng et al.: "Design of an Embedded Near-Infrared Face Recognition System", Data Acquisition and Processing (《数据采集与处理》) * |
Lin Guanchen et al.: "Research on Image Quality for Face Recognition", Quality Management and Product Certification (《质量管理与产品认证》) * |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110196103A (en) * | 2019-06-27 | 2019-09-03 | Oppo广东移动通信有限公司 | Thermometry and relevant device |
CN110728193A (en) * | 2019-09-16 | 2020-01-24 | 连尚(新昌)网络科技有限公司 | Method and device for detecting richness characteristics of face image |
CN110751043A (en) * | 2019-09-19 | 2020-02-04 | 平安科技(深圳)有限公司 | Face recognition method and device based on face visibility and storage medium |
CN110751043B (en) * | 2019-09-19 | 2023-08-22 | 平安科技(深圳)有限公司 | Face recognition method and device based on face visibility and storage medium |
CN110765502A (en) * | 2019-10-30 | 2020-02-07 | Oppo广东移动通信有限公司 | Information processing method and related product |
CN110889355A (en) * | 2019-11-19 | 2020-03-17 | 深圳市紫金支点技术股份有限公司 | Face recognition verification method, system and storage medium |
CN110889355B (en) * | 2019-11-19 | 2023-09-19 | 深圳市紫金支点技术股份有限公司 | Face recognition verification method, face recognition verification system and storage medium |
CN111462379A (en) * | 2020-03-17 | 2020-07-28 | 广东网深锐识科技有限公司 | Access control management method, system and medium containing palm vein and face recognition |
CN113449567A (en) * | 2020-03-27 | 2021-09-28 | 深圳云天励飞技术有限公司 | Face temperature detection method and device, electronic equipment and storage medium |
CN113449567B (en) * | 2020-03-27 | 2024-04-02 | 深圳云天励飞技术有限公司 | Face temperature detection method and device, electronic equipment and storage medium |
CN112036277A (en) * | 2020-08-20 | 2020-12-04 | 浙江大华技术股份有限公司 | Face recognition method, electronic equipment and computer readable storage medium |
CN112036277B (en) * | 2020-08-20 | 2023-09-29 | 浙江大华技术股份有限公司 | Face recognition method, electronic equipment and computer readable storage medium |
CN112883925A (en) * | 2021-03-23 | 2021-06-01 | 杭州海康威视数字技术股份有限公司 | Face image processing method, device and equipment |
CN112883925B (en) * | 2021-03-23 | 2023-08-29 | 杭州海康威视数字技术股份有限公司 | Face image processing method, device and equipment |
CN114021100A (en) * | 2022-01-10 | 2022-02-08 | 广东省出版集团数字出版有限公司 | Safety management system for digital teaching material storage |
CN114021100B (en) * | 2022-01-10 | 2022-03-15 | 广东省出版集团数字出版有限公司 | Safety management system for digital teaching material storage |
Also Published As
Publication number | Publication date |
---|---|
CN109614910B (en) | 2020-11-20 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |