CN108875485A - A kind of base map input method, apparatus and system - Google Patents
A kind of base map input method, apparatus and system
- Publication number
- CN108875485A CN108875485A CN201710867203.4A CN201710867203A CN108875485A CN 108875485 A CN108875485 A CN 108875485A CN 201710867203 A CN201710867203 A CN 201710867203A CN 108875485 A CN108875485 A CN 108875485A
- Authority
- CN
- China
- Prior art keywords
- face
- base map
- image
- facial image
- quality
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
Abstract
The present invention provides a base map input method, apparatus and system. The base map input method includes: obtaining an image containing a target object's face; performing quality judgment on the image containing the target object's face; determining at least one image that passes the quality judgment as a base map; and saving the base map. By guiding the user's shooting, the method ensures that when the user takes a photo or records a video to enroll a base map for face recognition, an image of higher quality can be obtained as the base map. The quality judgment excludes images unsuitable as recognition base maps, improving the accuracy of subsequent face recognition results.
Description
Technical field
The present invention relates to an artificial intelligence method, apparatus and system, and in particular to a base map input method, apparatus and system for face recognition.
Background art
With the continuous development of artificial intelligence technology and the continuous improvement of computing power, face recognition technology is increasingly applied in industry. A typical face recognition process first constructs a face recognition database that stores the personal information of the people to be identified. When someone needs to be identified, a face recognition algorithm compares their image with the images in the database one by one, and the most similar image is selected as the recognition result. Because the database used for face recognition contains a very large number of face base maps, the quality requirements for enrolled base maps are relatively high. In particular, when users enroll base maps by shooting photos on their own, they often do not know the enrollment requirements and fail to enroll base maps that meet the recognition requirements.
Summary of the invention
The present invention is proposed in view of the above problem. The present invention provides a base map input method, apparatus and system.
According to one aspect of the present invention, a base map input method is provided, including: obtaining an image containing a target object's face; performing quality judgment on the image containing the target object's face; determining at least one image that passes the quality judgment as a base map; and saving the base map.
Illustratively, the method further includes performing face detection on the image containing the target object's face to obtain a face bounding box.
Illustratively, performing quality judgment on the image containing the target object's face includes performing quality judgment on the face bounding-box image. The quality judgment includes judging whether at least one of the following meets the quality requirement: the three-dimensional pose of the face, the blur level of the face image, the occlusion state of the face, the brightness of the face image, and the size of the face image.
Illustratively, the quality judgment on the face bounding-box image is performed based on a deep neural network. Judging whether the three-dimensional pose of the face meets the quality requirement includes: determining the angle by which the face deviates from the frontal pose in each dimension of three-dimensional space; and if the deviation angle in every dimension is not greater than a predetermined threshold, determining that the three-dimensional pose of the face meets the quality requirement, and otherwise that it does not.
Illustratively, judging whether the blur level of the face image meets the quality requirement includes: determining the blur level of the face bounding-box image; if the blur level is not greater than a predetermined threshold, determining that the blur level of the face image meets the quality requirement, and otherwise that it does not.
Illustratively, judging whether the occlusion state of the face meets the quality requirement includes: determining whether the key parts of the face are occluded; and if the key parts of the face are not occluded, determining that the occlusion state of the face in the face image meets the quality requirement, and otherwise that it does not.
Illustratively, judging whether the brightness of the face image meets the quality requirement includes: determining the brightness of the face bounding-box image; if the brightness is between a first brightness threshold and a second brightness threshold, determining that the brightness of the face image meets the quality requirement, and otherwise that it does not.
Illustratively, judging whether the size of the face image meets the quality requirement includes: if the size of the face bounding-box image is between a first size threshold and a second size threshold, determining that the size of the face image meets the quality requirement, and otherwise that it does not.
Illustratively, the method further includes performing liveness detection on the face image containing the target object.
Illustratively, the method further includes issuing a first prompt to instruct the target object to make adjustments when the quality judgment of the image containing the target object's face is unqualified.
According to another aspect of the present invention, a base map input apparatus is also provided, including: an image acquisition module for obtaining an image containing a target object's face; a quality judgment module for performing quality judgment on the image containing the target object's face; a base map determination module for determining at least one image that passes the quality judgment as a base map; and a base map saving module for saving the base map.
Illustratively, the apparatus further includes a face detection module for performing face detection on the image containing the target object's face to obtain a face bounding box.
Illustratively, the quality judgment module is specifically configured to perform quality judgment on the face bounding-box image. The quality judgment module specifically includes at least one of: a face three-dimensional pose judgment submodule, a face-image blur-level judgment submodule, a face occlusion-state judgment submodule, a face-image brightness judgment submodule, and a face-image size judgment submodule.
Illustratively, the quality judgment module performs quality judgment on the face bounding-box image based on a deep neural network. The face three-dimensional pose judgment submodule determines the angle by which the face deviates from the frontal pose in each dimension of three-dimensional space, and if the deviation angle in every dimension is not greater than a predetermined threshold, determines that the three-dimensional pose of the face meets the quality requirement, and otherwise that it does not.
Illustratively, the face-image blur-level judgment submodule determines the blur level of the face bounding-box image, and if the blur level is not greater than a predetermined threshold, determines that the blur level of the face image meets the quality requirement, and otherwise that it does not.
Illustratively, the face occlusion-state judgment submodule determines whether the key parts of the face are occluded, and if the key parts of the face are not occluded, determines that the occlusion state of the face in the face image meets the quality requirement, and otherwise that it does not.
Illustratively, the face-image brightness judgment submodule determines the brightness of the face bounding-box image, and if the brightness is between a first brightness threshold and a second brightness threshold, determines that the brightness of the face image meets the quality requirement, and otherwise that it does not.
Illustratively, the face-image size judgment submodule determines that the size of the face image meets the quality requirement if the size of the face bounding-box image is between a first size threshold and a second size threshold, and otherwise that it does not.
Illustratively, the apparatus further includes a liveness detection module for performing liveness detection on the face image containing the target object.
Illustratively, the apparatus further includes a prompt module for issuing a first prompt to instruct the target object to make adjustments when the quality judgment of the image containing the target object's face is unqualified.
According to yet another aspect of the invention, a base map input system is also provided, including an image sensor, a storage device and a processor. The image sensor is used to collect face images; the storage device stores a computer program to be run by the processor; and the computer program, when run by the processor, executes the above base map input method.
According to the base map input method, apparatus and system of the embodiments of the present invention, the user's shooting is guided so that when the user takes a photo or records a video to enroll a base map for face recognition, an image of higher quality can be obtained as the base map. The quality judgment excludes images unsuitable as recognition base maps and improves the accuracy of subsequent face recognition results.
Brief description of the drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following detailed description of embodiments of the present invention with reference to the accompanying drawings. The accompanying drawings are provided for a further understanding of the embodiments of the present invention, constitute a part of the specification, serve together with the embodiments to explain the present invention, and are not to be construed as limiting the invention. In the drawings, identical reference labels generally represent the same parts or steps.
Fig. 1 shows a schematic flow chart of a base map input method according to an embodiment of the invention;
Fig. 2 shows a schematic flow chart of a quality judgment step according to an embodiment of the invention;
Fig. 3 shows a schematic diagram of a base map input apparatus according to an embodiment of the invention.
Detailed description of embodiments
In order to make the objects, technical solutions and advantages of the present invention more apparent, example embodiments of the present invention are described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention, and it should be understood that the present invention is not limited by the example embodiments described herein. All other embodiments obtained by those skilled in the art based on the embodiments described herein without creative labor shall fall within the scope of the present invention.
To solve the problem described above, an embodiment of the present invention provides a base map input method, which is first described with reference to Fig. 1.
Step 110: obtain an image containing the target object's face.
The target object is the person whose face base map is to be enrolled. The image containing that person's face can be obtained by taking one or more photos to serve as base map photos, or by capturing real-time video: the camera of a mobile terminal is opened, a video stream containing the target object's face is obtained, and video frame images are extracted from the stream for base map enrollment. The device that obtains the face image may be a camera, or a mobile phone or other mobile terminal with a shooting function. In one embodiment, a face image is collected by the image collection device of a mobile phone and enrolled as a base map for unlocking the phone screen. In this embodiment, the phone has a face recognition function, and when the face collected by the phone's image collection device and the face in the base map belong to the same person, the screen-lock state of the phone is released.
Step 120: perform quality judgment on the image containing the target object's face.
When an image containing the target object's face is obtained in the above way, the image quality of the face image needs to be judged so that subsequent face recognition can be more accurate. If one or more individual images are obtained, they are judged one by one against the predetermined quality requirement. If a video stream is obtained, all video frame images in the stream can be judged one by one, or only a sampled part of the stream can be judged; in one embodiment, 3 frames are extracted from every 10 consecutive frames for judgment, and the specific frame-sampling method can be determined according to the actual situation.
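The 3-out-of-every-10-frames sampling described above can be sketched as follows. The patent fixes only the ratio, so the choice of evenly spaced positions within each window is an assumption for illustration.

```python
def sample_frames(frames, window=10, per_window=3):
    """Pick `per_window` evenly spaced frames out of every `window`
    consecutive frames of a video stream (a hypothetical sampling rule;
    only the 3-out-of-10 ratio comes from the text, not the positions)."""
    selected = []
    for start in range(0, len(frames), window):
        chunk = frames[start:start + window]
        # Evenly spaced indices inside the chunk, e.g. 0, 3, 6 for a full window.
        step = max(1, len(chunk) // per_window)
        selected.extend(chunk[::step][:per_window])
    return selected
```

Each sampled frame would then be passed through the quality judgment of step 120, which avoids judging every frame of the stream.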
Step 130: determine at least one image that passes the quality judgment as a base map.
A single image may be determined as the base map, or multiple images may be determined as base maps. In another embodiment with multiple base maps, similarity is computed against each base map separately and the average is taken as the final recognition result. During quality judgment, once one or more images meeting the quality requirement have been determined, the judgment can stop and the determined images are used as base maps.
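The averaged-similarity recognition over multiple base maps described above can be sketched like this; `similarity` is a placeholder for whatever face-comparison score the recognition algorithm produces, since the patent does not fix one.

```python
def average_similarity(probe, base_maps, similarity):
    """Compare a probe image against every enrolled base map and
    average the per-base-map similarity scores into one result.
    `similarity` is any callable scoring (probe, base_map) pairs."""
    scores = [similarity(probe, b) for b in base_maps]
    return sum(scores) / len(scores)
```

With a toy similarity such as `lambda a, b: 1.0 - abs(a - b) / 10`, the final score is simply the mean of the per-base-map scores.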
Step 140: save the base map.
After the base map has been determined, it can be stored locally. In one embodiment, the base map is stored at a predetermined base map storage location of the mobile terminal. In another embodiment, the base map can be transferred to the cloud for storage by communication means, for example uploaded over a mobile network, WIFI or WLAN. In yet another embodiment, it can be stored both locally and in the cloud.
According to an embodiment of the invention, the method further includes performing face detection on the image containing the target object's face to obtain a face bounding box.
Face detection can use methods such as neural networks or AdaBoost. Face detection yields a bounding box containing only the face-image region; subsequently, only the face bounding box is processed, which reduces the amount of computation while also excluding irrelevant information in the image and improving recognition accuracy. The face bounding box can be the minimal box containing the face, or that minimal box extended outward by some margin, for example 1.2 times the minimal box; suitably enlarging the face bounding box can avoid losing face information.
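The outward extension of the minimal box (for example by a factor of 1.2) can be sketched as a scaling about the box center, with optional clipping so the enlarged box stays inside the image:

```python
def expand_box(box, factor=1.2, img_w=None, img_h=None):
    """Scale a face bounding box (x, y, w, h) about its center by
    `factor` (e.g. 1.2x the minimal box), optionally clipping the
    result to the image bounds so no coordinate leaves the frame."""
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2       # box center stays fixed
    nw, nh = w * factor, h * factor
    nx, ny = cx - nw / 2, cy - nh / 2
    if img_w is not None and img_h is not None:
        nx, ny = max(0.0, nx), max(0.0, ny)
        nw = min(nw, img_w - nx)
        nh = min(nh, img_h - ny)
    return (nx, ny, nw, nh)
```

For example, a 100x100 box at (10, 10) grows to roughly 120x120 at (0, 0), preserving surrounding face context such as the chin and hairline.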
As shown in Fig. 2, the quality judgment step can include multiple sub-steps. When performing quality judgment, all sub-steps can be applied, or only a selected subset of them.
According to an embodiment of the invention, performing quality judgment on the image containing the target object's face includes performing quality judgment on the face bounding-box image. The quality judgment includes judging whether at least one of the following meets the quality requirement: the three-dimensional pose of the face, the blur level of the face image, the occlusion state of the face, the brightness of the face image, and the size of the face image.
Face base map registration generally registers only the image containing face information; therefore, quality judgment is performed only on the face bounding-box image. In the face recognition process, many factors influence recognition performance, so when enrolling a face base map one should select, as far as possible, an image that is frontal, clear, unoccluded, of moderate brightness, and of moderate size. Accordingly, the quality judgment mainly considers the three-dimensional pose of the face, the blur level of the face image, the occlusion state of the face, the brightness of the face image, and the size of the face image. In a specific implementation, one or more of these conditions can be selected for judgment, depending on the computation budget and the accuracy requirement.
According to an embodiment of the invention, step 220 judges the three-dimensional pose. The quality judgment on the face bounding-box image is performed based on a deep neural network. Judging whether the three-dimensional pose of the face meets the quality requirement includes: determining the angle by which the face deviates from the frontal pose in each dimension of three-dimensional space; and if the deviation angle in every dimension is not greater than a predetermined threshold, determining that the three-dimensional pose of the face meets the quality requirement, and otherwise that it does not.
In one embodiment, a deep neural network is used for the image-quality judgment: a neural network is trained so that, for an input image, it can output the three-dimensional angle values of the face, the blur value of the face image, whether the face is occluded, the brightness value of the face image, and the size of the face image. In one embodiment, the three-dimensional pose of the face is defined by pitch, yaw and roll angles. An image is input into the neural network, which outputs the three angles; each angle is compared with a predetermined angle threshold. If none of the three angles exceeds its predetermined threshold, the three-dimensional pose of the face is determined to meet the quality requirement; if any angle exceeds its threshold, the image is determined not to meet the quality requirement. The cases of exceeding the angle thresholds correspond respectively to situations such as head raised, head lowered, head turned left, and head turned right, in which the head is not frontal. When the terminal collects images in real time, the user can be prompted with the type of quality requirement not met, so that the user can correct it.
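The pose thresholding above can be sketched as follows, once a pose-estimation network has produced the pitch/yaw/roll values. The 15-degree default, the sign conventions, and the reason labels are illustrative assumptions, not values fixed by the text.

```python
def check_pose(pitch, yaw, roll, threshold=15.0):
    """Return (ok, reasons): ok is True when every angle's magnitude is
    within `threshold` degrees of the frontal pose. The default threshold,
    the sign conventions, and the reason labels are assumptions."""
    reasons = []
    if pitch > threshold:
        reasons.append("head raised")
    elif pitch < -threshold:
        reasons.append("head lowered")
    if yaw > threshold:
        reasons.append("head turned left")
    elif yaw < -threshold:
        reasons.append("head turned right")
    if abs(roll) > threshold:
        reasons.append("head tilted")
    return (not reasons, reasons)
```

The returned reason list is what a real-time capture UI could surface to the user as a correction hint.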
According to an embodiment of the invention, step 230 judges the blur level of the face image. Judging whether the blur level of the face image meets the quality requirement includes: determining the blur level of the face bounding-box image; if the blur level is not greater than a predetermined threshold, determining that the blur level of the face image meets the quality requirement, and otherwise that it does not.
In one embodiment, the determination of whether the blur level of the collected face image meets the recognition requirement can be performed based on a deep convolutional network. The blur level of the face image can be defined as a numerical value, for example normalized to a value between 0 and 1. In one example, determining whether the blur level of the collected face image meets the recognition requirement can include: determining the blur level of the collected face image based on its motion blur and Gaussian blur; if the blur level is not greater than a predetermined threshold, determining that the blur level of the face image meets the recognition requirement, and otherwise that it does not. This process can be implemented with an offline-trained deep convolutional network model, and the predetermined threshold can be set according to the specific application scenario. In other examples, any other suitable way can also be used to determine whether the blur level of the collected face image meets the recognition requirement.
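As one example of "any other suitable way", the classical variance-of-Laplacian sharpness measure can stand in for the learned blur value; it is not the patent's network, just a simple reference technique. Note it scores sharpness, so a blur level would be low when this score is high.

```python
def laplacian_blur_score(gray):
    """Variance of the 4-neighbour Laplacian of a grayscale image
    (a list of rows of 0-255 values). Low variance means few sharp
    edges, i.e. a blurrier image -- a classical stand-in for the
    learned blur value, not the deep network described above."""
    h, w = len(gray), len(gray[0])
    vals = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            lap = (gray[i - 1][j] + gray[i + 1][j] +
                   gray[i][j - 1] + gray[i][j + 1] - 4 * gray[i][j])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)
```

A flat (featureless) patch scores 0, while a high-contrast patch scores high; an image would be accepted when its score exceeds a scenario-dependent threshold.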
According to an embodiment of the invention, step 240 judges the occlusion state of the face. Judging whether the occlusion state of the face meets the quality requirement includes: determining whether the key parts of the face are occluded; and if the key parts of the face are not occluded, determining that the occlusion state of the face in the face image meets the quality requirement, and otherwise that it does not.
In one embodiment, the determination of whether the occlusion state of the face in the collected face image meets the recognition requirement can be performed based on a deep convolutional network. In one example, determining whether the occlusion state of the face meets the recognition requirement includes: determining whether the key parts of the face are occluded; if the key parts of the face are not occluded, determining that the occlusion state of the face in the face image meets the recognition requirement, and otherwise that it does not. The key parts of the face may include at least one of the organs such as the eyebrows, eyes, nose and mouth. For example, in one example, occlusion judgment can be performed on the eyes and mouth of the face: using an offline-trained deep convolutional network model, for an input face image, the model outputs whether each of the three key parts left eye / right eye / mouth is occluded. If any one of these parts is occluded, the face image does not meet the recognition requirement. In other examples, any other suitable way can also be used to determine whether the occlusion state of the face in the collected face image meets the recognition requirement.
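The decision rule on top of the model's three per-part outputs is a simple "reject if any key part is occluded" aggregation; the dictionary keys here are illustrative names, not the model's actual output format.

```python
def occlusion_ok(part_occluded):
    """Given per-part occlusion flags for the three key parts
    (left eye, right eye, mouth), the face image passes only when
    no key part is occluded. Key names are illustrative."""
    required = ("left_eye", "right_eye", "mouth")
    return not any(part_occluded.get(p, False) for p in required)
```

An image with, say, a scarf over the mouth would yield `{"mouth": True}` from the model and be rejected for enrollment.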
According to an embodiment of the invention, step 250 judges the brightness of the face image. Judging whether the brightness of the face image meets the quality requirement includes: determining the brightness of the face bounding-box image; if the brightness is between a first brightness threshold and a second brightness threshold, determining that the brightness of the face image meets the quality requirement, and otherwise that it does not.
In one embodiment, a deep convolutional network can be used to determine the image brightness. For an input image, the deep convolutional network outputs a numerical value representing the brightness of the image; the value can be between 0 and 255, or normalized to between 0 and 1. In general, brightness that is too high or too low indicates poor image quality, and an image that is too bright or too dark is unfavorable for face recognition. In one embodiment, two thresholds are set: if the brightness is below the lower threshold, the face image is considered too dark; if it is above the higher threshold, the face image is considered too bright. Both cases are judged as not meeting the quality requirement, and finally a face image of moderate brightness is selected. In another embodiment, the brightness of each pixel in the face image is analyzed statistically and the variance of the pixel brightnesses is computed; if the variance is too large, for example exceeding a predetermined threshold, the image brightness is considered uneven, and such a face image is also identified as off quality. This judgment mainly excludes the "yin-yang face" case, in which the face brightness on one side is higher and on the other side is lower.
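The two-threshold check plus the variance-based evenness check can be sketched together; all three threshold values below are illustrative assumptions, not values fixed by the text.

```python
def brightness_ok(gray, low=60, high=200, max_var=3000.0):
    """Two-threshold brightness check plus an evenness check: the mean
    gray level must lie between `low` and `high`, and the variance of
    pixel brightnesses must not exceed `max_var` (rejecting the uneven
    'yin-yang face' case). All threshold values are assumptions."""
    pixels = [v for row in gray for v in row]
    mean = sum(pixels) / len(pixels)
    var = sum((v - mean) ** 2 for v in pixels) / len(pixels)
    if mean < low:
        return False, "too dark"
    if mean > high:
        return False, "too bright"
    if var > max_var:
        return False, "uneven brightness"
    return True, "ok"
```

A half-dark/half-bright face patch passes the mean test but fails the variance test, which is exactly the side-lit case this check targets.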
In another embodiment, the determination of whether the brightness of the collected face image meets the recognition requirement can be based on gray-level histograms. In one example, gray-level histogram features can be extracted separately for the whole face, the left-eye part, the right-eye part and the mouth part of the face image, yielding four histograms; for each of the four histograms, the brightness at its 30% and 70% quantiles is computed. If two or more of these values differ greatly from the corresponding data of a face under normal illumination, the brightness of the face image is judged not to meet the recognition requirement; otherwise it is judged to meet it. In other examples, any other suitable way can also be used to determine whether the brightness of the collected face image meets the recognition requirement.
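The quantile features and the "two or more large deviations" rule can be sketched as follows; the tolerance value and the flat list of quantile features are assumptions for illustration.

```python
def gray_quantiles(pixels, qs=(0.30, 0.70)):
    """Brightness values at the given quantiles of a region's gray
    levels -- the 30%/70% features compared against the
    normal-illumination reference data in the histogram-based check."""
    ordered = sorted(pixels)
    return tuple(ordered[min(len(ordered) - 1, int(q * len(ordered)))]
                 for q in qs)

def illumination_abnormal(quantile_values, reference, tol=40):
    """Judge abnormal illumination when two or more of the extracted
    quantile values deviate from the normal-illumination reference
    values by more than `tol` gray levels. `tol` is an assumption."""
    mismatches = sum(1 for got, ref in zip(quantile_values, reference)
                     if abs(got - ref) > tol)
    return mismatches >= 2
```

In practice the four regions (whole face, left eye, right eye, mouth) each contribute their quantile values, and the combined list is compared against the reference profile.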
According to an embodiment of the invention, step 260 judges the size of the face image. Judging whether the size of the face image meets the quality requirement includes: if the size of the face bounding-box image is between a first size threshold and a second size threshold, determining that the size of the face image meets the quality requirement, and otherwise that it does not.
In one embodiment, the size of the face image after face detection is judged, for example by judging the size of the face box; a face box that is too large or too small is unfavorable for the face recognition operation. Illustratively, the size can be determined by counting the number of pixels in the face box, calculating the area of the face box, or similar means. In one embodiment, if the detected face is too large or too small, the user can be prompted to adjust the distance between the face and the image collection device so as to obtain a face image of moderate size.
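The area-based size check can be sketched as follows; the pixel thresholds are illustrative assumptions, since the text leaves the first and second size thresholds unspecified.

```python
def face_size_ok(box, min_area=80 * 80, max_area=400 * 400):
    """Check the face-box area against lower and upper size thresholds
    (illustrative pixel values); a box outside the range is rejected,
    and the reason string can drive a 'move closer / move back' prompt."""
    x, y, w, h = box
    area = w * h
    if area < min_area:
        return False, "face too small"
    if area > max_area:
        return False, "face too large"
    return True, "ok"
```

The returned reason maps directly onto the distance-adjustment prompts mentioned above.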
According to an embodiment of the invention, the method further includes performing liveness detection on the face image containing the target object.
In one example, the object to be identified can be instructed to read a passage aloud; by collecting face images, it is judged whether the lip movement matches the lip movement corresponding to the text. If they match, liveness detection succeeds.
In one example, the object to be identified can be instructed to make a specified action (for example, pressing the skin of the two cheeks with a finger, or swallowing air to puff out the two cheeks). In one exemplary example, when the object to be identified has performed one or more instructed actions, their face image is collected and it is judged whether the performed action is qualified; if so, liveness detection succeeds, and otherwise it fails. In another exemplary example, when the object to be identified has performed one or more instructed actions, skin-region images are captured from the images before and after the action, and the skin-region images are fed to a skin-elasticity classifier, which is a classification model learned in advance. For example, if the skin is living skin, the model outputs 1, and otherwise it outputs 0. In this embodiment, liveness detection can be performed based on the comparison of the skin-region images from before and after the object to be identified performs the instructed action.
Illustratively, the learning of the skin-elasticity classifier can be carried out offline. One possible implementation is to collect in advance before-and-after frame images of living real people performing the specified action, and at the same time collect attack images of the specified action performed using photos, video playback, paper masks, 3D models and the like. With the former as positive samples and the latter as negative samples, statistical learning methods such as deep learning or support vector machines are then used to train the skin-elasticity classifier.
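The positive/negative training setup can be illustrated with a minimal nearest-centroid classifier over feature vectors, as a stand-in for the SVM or deep-learning model; the scalar features extracted from the before/after skin patches are entirely hypothetical here.

```python
def train_centroids(positive, negative):
    """Fit one centroid per class from feature vectors (live-skin
    samples as positive, attack samples as negative). Nearest-centroid
    is a minimal stand-in for the SVM / deep classifier, shown only
    to illustrate the positive/negative training setup."""
    def centroid(samples):
        dim = len(samples[0])
        return [sum(s[i] for s in samples) / len(samples) for i in range(dim)]
    return centroid(positive), centroid(negative)

def classify(features, centroids):
    """Return 1 for live skin, 0 otherwise, by nearest centroid
    (matching the 1/0 output convention described above)."""
    pos, neg = centroids
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return 1 if dist(features, pos) <= dist(features, neg) else 0
```

A production system would replace the centroid rule with the trained SVM or deep network, but the data-collection and labeling flow is the same.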
Illustratively, the capture of skin-region images can be realized based on face detection and face key-point localization algorithms. For example, a large number of face images are collected in advance, and in every image a series of key points such as the eye corners, mouth corners, nose wings, cheekbone peaks and outer contour points of the face are manually annotated; a machine learning algorithm (such as deep learning, or a regression method based on local features) is then used, with the annotated images as input, to train face detection and face key-point localization models. The collected face images from before and after the action are input to the trained face detection and key-point localization models, which output the face location and key-point position coordinates; the face region is divided into a series of triangular patches according to the key-point position coordinates, and the triangular-patch image blocks located in regions such as the chin, cheekbones and two cheeks are taken as the face skin region.
In another embodiment, a dedicated hardware liveness acquisition device, such as a binocular camera, can be used for scenes with higher security requirements. In this embodiment, liveness detection can be performed based on a judgement of the degree of sub-surface scattering of the face to be identified. The sub-surface scattering degree of a 3D mask or the like differs from that of a real human face (when sub-surface scattering is stronger, the image gradient is smaller, so the light diffusion appears smaller). For example, the sub-surface scattering of masks made of materials such as paper or plastic is generally much weaker than that of a human face, while the sub-surface scattering of masks made of materials such as silica gel is much stronger than that of a human face, so a judgement based on light diffusion can effectively defend against mask attackers. Therefore, in embodiments of the present invention, a binocular camera can be combined with structured light: the binocular camera acquires a 3D face carrying a structured light pattern, and a liveness judgement is then made according to the degree of sub-surface scattering of the structured light pattern on the 3D face.
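Since the passage ties scattering strength to image gradient magnitude, a gradient-based proxy for the liveness decision might be sketched as follows (the acceptance bounds are hypothetical; a real system would calibrate them against the projected structured-light pattern):

```python
def mean_gradient_magnitude(image):
    """Average absolute finite-difference gradient of a grayscale image.

    Stronger sub-surface scattering diffuses the projected pattern and
    lowers this value; weaker scattering keeps the pattern sharp.
    """
    h, w = len(image), len(image[0])
    total, count = 0.0, 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                total += abs(image[y][x + 1] - image[y][x]); count += 1
            if y + 1 < h:
                total += abs(image[y + 1][x] - image[y][x]); count += 1
    return total / count

def scattering_in_live_range(image, lo=2.0, hi=20.0):
    # Reject paper/plastic masks (pattern too sharp, gradient above hi)
    # and silica-gel masks (too diffuse, gradient below lo).
    # The bounds lo/hi are illustrative placeholders.
    return lo <= mean_gradient_magnitude(image) <= hi
```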
According to an embodiment of the invention, the method further includes, in a case where the quality of the image comprising the target object's face is judged to be unqualified, issuing a first prompt to prompt the target object to make an adjustment.
In one embodiment, in order to enable the user to clearly understand the reason for failure when the quality judgement of the face image is unqualified, a feedback mechanism is provided, which feeds back the cause of the substandard quality to the user according to the result of the quality judgement, for example prompts such as: the light is too strong, the light is too weak, the face is too far left, the face is too far right, the face is too low, the head is raised, the face is too big, or the face is too small. The prompt can be given in many ways, for example by voice or by text, such as text displayed on the display device of the terminal. With appropriate prompts, the user can be guided to make adaptive adjustments so as to obtain a face image that meets the quality requirements.
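A minimal sketch of such a feedback mechanism follows; the failure codes and message strings are illustrative placeholders, not taken from the patent:

```python
# Hypothetical failure codes produced by the quality judgement step,
# mapped to the user-facing prompts described above.
PROMPTS = {
    "light_strong": "The light is too strong",
    "light_weak":   "The light is too weak",
    "face_left":    "Your face is too far left",
    "face_right":   "Your face is too far right",
    "face_low":     "Your face is too low",
    "head_raised":  "Please lower your head",
    "face_big":     "Your face is too big, move back",
    "face_small":   "Your face is too small, move closer",
}

def first_prompt(failure_codes):
    """Return the prompt texts for the quality-check failures, in order."""
    return [PROMPTS[code] for code in failure_codes if code in PROMPTS]
```

The same mapping could feed a text-to-speech engine for the voice-prompt variant.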
According to another aspect of the invention, a base map input device is provided. Fig. 3 shows a schematic diagram of one embodiment of the base map input device.
A base map input device 300 includes: an image acquisition module 310, for obtaining an image comprising the target object's face; a quality judgement module 320, for performing quality judgement on the image comprising the target object's face; a base map determining module 330, for determining at least one image whose quality judgement is qualified as a base map; and a base map saving module 340, for saving the base map.
According to an embodiment of the invention, the device further includes a face detection module, for performing face detection on the image comprising the target object's face to obtain a face bounding box.
Face detection can use methods such as neural networks or AdaBoost. Face detection yields a bounding box containing only the face image region, and subsequently only the face bounding box is processed, which reduces the amount of computation while also excluding irrelevant information in the image, improving recognition accuracy. The face bounding box can be the minimal box containing the face, or the minimal box extended outward by some margin, for example to 1.2 times the minimal box; suitably enlarging the face bounding box can avoid loss of face information.
According to an embodiment of the invention, the quality judgement module is specifically configured to perform quality judgement on the face bounding box image; the quality judgement module specifically includes at least one of a face three-dimensional pose judgement submodule, a face image blur degree judgement submodule, a face occlusion state judgement submodule, a face image brightness judgement submodule and a face image size judgement submodule.
Registration of a face base map generally registers only the image containing the face information; therefore, quality judgement is performed only on the face bounding box image. In the face recognition process, many factors influence the recognition result, so when recording the face base map an image that is as frontal, clear, unoccluded, moderately bright and moderately sized as possible should be selected. Therefore, when judging image quality, the main considerations are the three-dimensional pose of the face, the blur degree of the face image, the occlusion state of the face, the brightness of the face image and the size of the face image. In a specific implementation, one or more of these conditions can be chosen for judgement, with the selection made according to differing requirements on computation and precision.
According to an embodiment of the invention, the quality judgement performed by the quality judgement module on the face bounding box image is based on a deep neural network. The face three-dimensional pose judgement submodule is configured to determine, for each dimension in three-dimensional space, the angle by which the face deviates from the frontal pose; if the deviation angle in every dimension is not greater than a predetermined threshold, the three-dimensional pose of the face is determined to meet the quality requirement, and otherwise it does not meet the quality requirement.
In one embodiment, a deep neural network can be used to judge image quality: a neural network is trained so that, for an input image, it can output the three-dimensional pose angles of the face in the image, the blur value of the face image, whether the face is occluded, the brightness value of the face image and the size of the face image. In one embodiment the three-dimensional pose of the face is defined using the pitch, yaw and roll angles. An image is input into the neural network, which outputs the magnitudes of these three angles; each is compared with a predetermined angle threshold, and if no angle exceeds its predetermined threshold, the three-dimensional pose of the face is determined to meet the quality requirement. If any angle exceeds its predetermined threshold, the image is determined not to meet the quality requirement. The several cases of exceeding the predetermined angle thresholds correspond respectively to situations in which the head is not frontal, such as raising the head, lowering the head, turning the head left and turning the head right. When the terminal acquires images in real time, the user can be prompted with the type of quality requirement that is not met, so that the user can correct it.
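The per-angle threshold comparison described above can be sketched as follows (the pitch/yaw/roll angles are assumed to come from the pose network; the threshold values in degrees are hypothetical):

```python
def pose_meets_quality(pitch, yaw, roll, thresholds=(15.0, 20.0, 15.0)):
    """Compare |pitch|, |yaw|, |roll| against per-angle thresholds.

    Returns (ok, failures), where failures names the angles that
    exceed their thresholds, for use in the user prompt.
    """
    names = ("pitch", "yaw", "roll")
    failures = [n for n, a, t in zip(names, (pitch, yaw, roll), thresholds)
                if abs(a) > t]
    return len(failures) == 0, failures
```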
According to an embodiment of the invention, the face image blur degree judgement submodule is configured to determine the blur degree of the face bounding box image; if the blur degree is not greater than a predetermined threshold, the blur degree of the face image is determined to meet the quality requirement, and otherwise it does not meet the quality requirement.
In one embodiment, the determination of whether the blur degree of the acquired face image meets the recognition requirement can be based on a deep convolutional network. The blur degree of the face image can be defined as a numerical value, for example normalized to a value between 0 and 1. In one example, determining whether the blur degree of the acquired face image meets the recognition requirement can include: determining the blur degree of the acquired face image based on the motion blur and Gaussian blur of the acquired face image; if the blur degree of the acquired face image is not greater than a predetermined threshold, determining that the blur degree of the face image meets the recognition requirement, and otherwise that it does not. This process can be implemented based on a deep convolutional network model trained offline, where the setting of the predetermined threshold can be based on the specific application scenario. In other examples, whether the blur degree of the acquired face image meets the recognition requirement can also be determined in any other suitable manner.
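As one example of "any other suitable manner", a classical non-learned proxy is the variance of the Laplacian response, which drops as an image blurs; this substitute for the CNN blur score is our illustration, not the patent's stated method:

```python
def laplacian_variance(image):
    """Variance of a 4-neighbour Laplacian over a grayscale image.

    Low values indicate a blurry image; a quality gate would compare
    this against a scenario-specific threshold.
    """
    h, w = len(image), len(image[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (image[y - 1][x] + image[y + 1][x] +
                   image[y][x - 1] + image[y][x + 1] - 4 * image[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)
```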
According to an embodiment of the invention, the face occlusion state judgement submodule is configured to determine whether the key parts of the face are occluded; if the key parts of the face are not occluded, the occlusion state of the face in the face image is determined to meet the quality requirement, and otherwise it does not meet the quality requirement.
In one embodiment, the determination of whether the occlusion state of the face in the acquired face image meets the recognition requirement can be based on a deep convolutional network. In one example, determining whether the occlusion state of the face in the acquired face image meets the recognition requirement includes: determining whether the key parts of the face are occluded; if the key parts of the face are not occluded, determining that the occlusion state of the face in the face image meets the recognition requirement, and otherwise that it does not. The key parts of the face may include at least one of organs such as the eyebrows, eyes, nose and mouth. For example, in one example, occlusion judgement can be performed on the eyes and mouth of the face: using a deep convolutional network model trained offline, for an input face image the model outputs whether each of the three key parts left eye / right eye / mouth is occluded. If any one of these parts is occluded, the face image does not meet the recognition requirement. In other examples, whether the occlusion state of the face in the acquired face image meets the recognition requirement can also be determined in any other suitable manner.
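The left eye / right eye / mouth gate described above reduces to a simple check over the per-part flags that the (assumed) occlusion network would output:

```python
def occlusion_ok(part_occluded):
    """Gate on per-part occlusion flags from the occlusion model.

    part_occluded: dict such as {"left_eye": False, "right_eye": False,
    "mouth": True}; the image fails if any listed key part is occluded.
    """
    return not any(part_occluded.get(p, False)
                   for p in ("left_eye", "right_eye", "mouth"))
```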
According to an embodiment of the invention, the face image brightness judgement submodule is configured to determine the brightness of the face bounding box image; if the brightness is between a first brightness threshold and a second brightness threshold, the brightness of the face image is determined to meet the quality requirement, and otherwise it does not meet the quality requirement.
In one embodiment, a deep convolutional network can be used to determine the image brightness. For an input image, the deep convolutional network outputs a value indicating the brightness of the image, which can be a value between 0 and 255, or a value normalized to between 0 and 1. Under normal circumstances, brightness that is too high or too low indicates poor image quality, and an image that is too bright or too dark is unfavourable for face recognition. In one embodiment two thresholds are set: a face image below the lower threshold is considered too dark, and one above the higher threshold is considered too bright; both are judged to be of substandard quality, so that a face image of moderate brightness is finally selected. In another embodiment, a statistical analysis is performed on the brightness of each pixel in the face image, and the variance of the pixel brightnesses is calculated; if the variance is too large, for example exceeds a predetermined threshold, the image brightness is considered uneven, and such a face image is also identified as of substandard quality. This judgement mainly serves to exclude the "yin-yang face" case, i.e. the face brightness on one side being higher while the face brightness on the other side is lower.
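The two-threshold mean check and the variance check for uneven ("yin-yang face") illumination can be sketched together as follows (the numeric thresholds are hypothetical):

```python
def brightness_stats(image):
    """Mean and variance of pixel brightness of a grayscale face crop."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return mean, var

def brightness_ok(image, lo=60, hi=200, max_var=4000):
    # Reject images that are too dark (mean < lo), too bright
    # (mean > hi), or unevenly lit (variance > max_var, the
    # "yin-yang face" case).  All three bounds are illustrative.
    mean, var = brightness_stats(image)
    return lo <= mean <= hi and var <= max_var
```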
In another embodiment, the determination of whether the brightness of the acquired face image meets the recognition requirement can be based on grey level histograms. In one example, grey level histogram features can be extracted separately for the whole face, the left-eye part, the right-eye part and the mouth part of the face image, obtaining four histograms; the 30% and 70% quantiles of these four histograms are calculated, and if two or more of these values differ greatly from the corresponding data for a face under normal illumination, the brightness of the face image is judged not to meet the recognition requirement, and otherwise it is judged to meet it. In other examples, whether the brightness of the acquired face image meets the recognition requirement can also be determined in any other suitable manner.
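The per-region 30%/70% quantile features used in the histogram comparison above might be computed as follows (a nearest-rank quantile over raw pixel values; the reference data for a normally-lit face would be collected separately):

```python
def quantile(values, q):
    """q-quantile (0 <= q <= 1) of brightness values, nearest-rank style."""
    ordered = sorted(values)
    idx = min(round(q * len(ordered)), len(ordered) - 1)
    return ordered[idx]

def region_quantiles(region_pixels, qs=(0.3, 0.7)):
    """30% and 70% brightness quantiles for one face region (whole face,
    left eye, right eye, or mouth), as compared against reference data."""
    return tuple(quantile(region_pixels, q) for q in qs)
```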
According to an embodiment of the invention, the face image size judgement submodule is configured, if the size of the face bounding box image is between a first size threshold and a second size threshold, to determine that the size of the face image meets the quality requirement, and otherwise that it does not meet the quality requirement.
In one embodiment, the size of the face image after face detection is judged, for example by judging the size of the face box: if the face box is too large or too small, face recognition operations are hindered. Illustratively, the size can be determined by means such as counting the number of pixels in the face box or calculating the area of the face box. In one embodiment, if the detected face is too large or too small, the user can be prompted to make an adjustment, adjusting the distance between the face and the image acquisition device so as to obtain a face image of moderate size.
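The area-based size gate can be sketched as follows (the box format and the two pixel-area thresholds are hypothetical):

```python
def face_size_ok(box, min_area=80 * 80, max_area=400 * 400):
    """Check the face box area against two size thresholds.

    box: (x, y, width, height) from the face detector; faces whose
    pixel area falls outside [min_area, max_area] trigger a prompt
    to move closer or further away.
    """
    _, _, w, h = box
    return min_area <= w * h <= max_area
```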
According to an embodiment of the invention, the device further includes a liveness detection module, for performing liveness detection on the face image comprising the target object.
In one example, the object to be identified can be instructed to read a passage of text aloud; face images are acquired, and it is judged whether the lip movement matches the lip movement corresponding to the text. If it matches, the liveness detection succeeds.
In one example, the object to be identified can be instructed to perform a prescribed action (the prescribed action being, for example, pressing the skin of the cheeks with a finger, or swallowing air into the mouth to puff out the cheeks). In one illustrative example, when the object to be identified has performed one or more instructed actions, its face images are acquired and it is judged whether the performed actions are qualified; if so, the liveness detection succeeds, and otherwise it fails. In another illustrative example, when the object to be identified has performed one or more instructed actions, the skin region images are captured from the images of the object to be identified from before and after the action, respectively, and the skin region images are passed to a skin elasticity classifier, which is a classification model learned in advance. For example, if the skin is living skin the model outputs 1, and otherwise it outputs 0. In this embodiment, liveness detection can be performed based on a comparison of the skin region images from before and after the object to be identified performs the instructed action.
Illustratively, the learning of the skin elasticity classifier can be carried out offline. One possible embodiment is to collect in advance before-and-after frame images of real, live persons performing a prescribed action, while also collecting attack images in which photos, video playback, paper masks, 3D models and the like perform the prescribed action. The former serve as positive samples and the latter as negative samples; the skin elasticity classifier is then trained using statistical learning methods such as deep learning or support vector machines.
Illustratively, the capture of the skin region image can be implemented based on face detection and face key point location algorithms. For example, a large number of face images are collected in advance, and a series of key points, such as the eye corners, mouth corners, nose wings, highest points of the cheekbones and outer contour points, are manually annotated in each image. A machine learning algorithm (for example deep learning, or a regression method based on local features) is then used, with the annotated images as input, to train the face detection and face key point location models. The collected face images from before and after the action are input into the trained face detection and face key point location models, which output the face location and the key point position coordinates. The face region is then partitioned into a series of triangular patches according to the key point position coordinates, and the triangular patch image blocks located in regions such as the chin, cheekbones and the two cheeks are taken as the face skin region.
In another embodiment, a dedicated hardware liveness acquisition device, such as a binocular camera, can be used for scenes with higher security requirements. In this embodiment, liveness detection can be performed based on a judgement of the degree of sub-surface scattering of the face to be identified. The sub-surface scattering degree of a 3D mask or the like differs from that of a real human face (when sub-surface scattering is stronger, the image gradient is smaller, so the light diffusion appears smaller). For example, the sub-surface scattering of masks made of materials such as paper or plastic is generally much weaker than that of a human face, while the sub-surface scattering of masks made of materials such as silica gel is much stronger than that of a human face, so a judgement based on light diffusion can effectively defend against mask attackers. Therefore, in embodiments of the present invention, a binocular camera can be combined with structured light: the binocular camera acquires a 3D face carrying a structured light pattern, and a liveness judgement is then made according to the degree of sub-surface scattering of the structured light pattern on the 3D face.
According to an embodiment of the invention, the device further includes a prompt module, for issuing a first prompt to prompt the target object to make an adjustment, in a case where the quality judgement of the image comprising the target object's face is unqualified.
In one embodiment, in order to enable the user to clearly understand the reason for failure when the quality judgement of the face image is unqualified, a feedback mechanism is provided, which feeds back the cause of the substandard quality to the user according to the result of the quality judgement, for example prompts such as: the light is too strong, the light is too weak, the face is too far left, the face is too far right, the face is too low, the head is raised, the face is too big, or the face is too small. The prompt can be given in many ways, for example by voice or by text, such as text displayed on the display device of the terminal. With appropriate prompts, the user can be guided to make adaptive adjustments so as to obtain a face image that meets the quality requirements.
According to another aspect of the invention, a base map input system is also provided, including an image sensor, a storage device and a processor. The image sensor is used for acquiring face images; a computer program to be run by the processor is stored on the storage device, and the computer program, when run by the processor, executes the above base map input method.
In one embodiment, the above base map input method, device and system are applied to the face unlocking scene of a mobile terminal. To realize face unlocking, a base map needs to be recorded in advance on the mobile terminal (for example a mobile phone). After the base map has been recorded, when the user needs to unlock the locked mobile terminal, they only need to point the image acquisition device of the mobile terminal at their own face; the image acquisition device acquires face images in real time and compares them with the recorded base map. If the similarity is greater than or equal to a predetermined threshold, the unlock operation is executed; if the similarity is less than the predetermined threshold, no unlocking is performed. Liveness detection can be carried out during the real-time acquisition, and the detection method can be the same as the liveness detection methods described above.
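The unlock decision above can be sketched as a similarity comparison between feature embeddings of the live capture and the recorded base map; the embedding representation, cosine similarity, and the 0.8 threshold are all illustrative assumptions:

```python
def unlock_decision(live_embedding, base_embedding, threshold=0.8):
    """Unlock only when the cosine similarity between the live face
    embedding and the base map embedding reaches the threshold."""
    dot = sum(a * b for a, b in zip(live_embedding, base_embedding))
    norm = (sum(a * a for a in live_embedding) ** 0.5 *
            sum(b * b for b in base_embedding) ** 0.5)
    return dot / norm >= threshold
```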
In another embodiment, the above base map input method, device and system are used in the scene of an unattended shop. An unattended shop generally requires face recognition to control the entry and exit of customers, and customers register for face recognition via their own mobile terminals: the customer shoots with the handheld mobile terminal and uploads the base map to a cloud server, thereby realizing account registration.
Although example embodiments have been described herein with reference to the accompanying drawings, it should be understood that the above example embodiments are merely exemplary and are not intended to limit the scope of the invention thereto. A person of ordinary skill in the art can make various changes and modifications therein without departing from the scope and spirit of the invention. All such changes and modifications are intended to be included within the scope of the invention as claimed in the appended claims.
A person of ordinary skill in the art may appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and the design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each specific application, but such implementation should not be considered to exceed the scope of the invention.
In the several embodiments provided in this application, it should be understood that the disclosed devices and methods can be implemented in other ways. For example, the device embodiments described above are merely illustrative; for instance, the division into units is only a division by logical function, and other division manners are possible in actual implementation, such as multiple units or components being combined or integrated into another device, or some features being ignored or not executed.
In the description provided here, numerous specific details are set forth. It is to be appreciated, however, that embodiments of the invention can be practiced without these specific details. In some instances, well-known methods, structures and techniques are not shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, in the description of exemplary embodiments of the invention the features of the invention are sometimes grouped together into a single embodiment, figure, or description thereof. However, the disclosed method should not be construed as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the corresponding claims reflect, the inventive point lies in that fewer than all features of a single disclosed embodiment can be used to solve the corresponding technical problem. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will understand that, except where features are mutually exclusive, any combination may be made of all features disclosed in this specification (including the accompanying claims, abstract and drawings) and of all processes or units of any method or device so disclosed. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, an equivalent or a similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include certain features included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the claims, any one of the claimed embodiments can be used in any combination.
The various component embodiments of the invention can be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) can be used in practice to implement some or all of the functions of some modules in a device according to embodiments of the invention. The invention can also be implemented as a program (for example, a computer program and a computer program product) for executing part or all of the methods described herein. Such a program implementing the invention may be stored on a computer-readable medium, or may take the form of one or more signals; such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the invention, and that those skilled in the art can design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claims. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices can be embodied by one and the same item of hardware. The use of the words first, second and third does not indicate any ordering; these words can be interpreted as names.
The above is only a specific embodiment of the invention, or a description of specific embodiments, and the protection scope of the invention is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed by the invention, and these should be covered by the protection scope of the invention. The protection scope of the invention shall be subject to the protection scope of the claims.
Claims (21)
1. A base map input method, comprising:
obtaining an image comprising a target object's face;
performing quality judgement on the image comprising the target object's face;
determining at least one image whose quality judgement is qualified as a base map; and
saving the base map.
2. The base map input method according to claim 1, further comprising:
performing face detection on the image comprising the target object's face to obtain a face bounding box.
3. The base map input method according to claim 2, wherein performing quality judgement on the image comprising the target object's face comprises:
performing quality judgement on the face bounding box image;
the quality judgement comprising judging whether at least one of the three-dimensional pose of the face, the blur degree of the face image, the occlusion state of the face, the brightness of the face image and the size of the face image meets a quality requirement.
4. The base map input method according to claim 3, wherein the quality judgement performed on the face bounding box image is based on a deep neural network; and
judging whether the three-dimensional pose of the face meets the quality requirement comprises: determining, for each dimension in three-dimensional space, the angle by which the face deviates from the frontal pose; and if the deviation angle in every dimension is not greater than a predetermined threshold, determining that the three-dimensional pose of the face meets the quality requirement, and otherwise that it does not meet the quality requirement.
5. The base map input method according to claim 3, wherein
judging whether the blur degree of the face image meets the quality requirement comprises: determining the blur degree of the face bounding box image; and if the blur degree is not greater than a predetermined threshold, determining that the blur degree of the face image meets the quality requirement, and otherwise that it does not meet the quality requirement.
6. The base map input method according to claim 3, wherein
judging whether the occlusion state of the face meets the quality requirement comprises: determining whether the key parts of the face are occluded; and if the key parts of the face are not occluded, determining that the occlusion state of the face in the face image meets the quality requirement, and otherwise that it does not meet the quality requirement.
7. The base map input method according to claim 3, wherein
judging whether the brightness of the face image meets the quality requirement comprises: determining the brightness of the face bounding box image; and if the brightness is between a first brightness threshold and a second brightness threshold, determining that the brightness of the face image meets the quality requirement, and otherwise that it does not meet the quality requirement.
8. The base map input method according to claim 3, wherein
judging whether the size of the face image meets the quality requirement comprises: if the size of the face bounding box image is between a first size threshold and a second size threshold, determining that the size of the face image meets the quality requirement, and otherwise that it does not meet the quality requirement.
9. The base map input method according to claim 1, further comprising:
performing liveness detection on the face image comprising the target object.
10. The base map input method according to claim 1, further comprising: in a case where the quality judgement of the image comprising the target object's face is unqualified, issuing a first prompt to prompt the target object to make an adjustment.
11. A base map input device, comprising:
an image acquisition module, for obtaining an image comprising a target object's face;
a quality judgement module, for performing quality judgement on the image comprising the target object's face;
a base map determining module, for determining at least one image whose quality judgement is qualified as a base map; and
a base map saving module, for saving the base map.
12. The base map input device according to claim 11, further comprising:
a face detection module, for performing face detection on the image comprising the target object's face to obtain a face bounding box.
13. The base map input device according to claim 12, wherein the quality judgement module is specifically configured to perform quality judgement on the face bounding box image; and
the quality judgement module specifically includes at least one of a face three-dimensional pose judgement submodule, a face image blur degree judgement submodule, a face occlusion state judgement submodule, a face image brightness judgement submodule and a face image size judgement submodule.
14. The base map input device according to claim 13, wherein
the quality judgement performed by the quality judgement module on the face bounding box image is based on a deep neural network; and
the face three-dimensional pose judgement submodule is configured to determine, for each dimension in three-dimensional space, the angle by which the face deviates from the frontal pose, and, if the deviation angle in every dimension is not greater than a predetermined threshold, to determine that the three-dimensional pose of the face meets the quality requirement, and otherwise that it does not meet the quality requirement.
15. The base map input device according to claim 13, wherein
the facial-image blur-degree judgment submodule is configured to determine the blur degree of the face bounding-box image, and to determine that the blur degree of the facial image meets the quality requirement if the blur degree is no greater than a predetermined threshold, and otherwise that it does not.
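The claim leaves the blur measure unspecified. One common proxy, shown here only as a sketch, is the variance of a Laplacian filter response, which falls as blur increases; the names and the threshold value are hypothetical.

```python
import numpy as np

def laplacian_sharpness(gray: np.ndarray) -> float:
    # Variance of a 3x3 Laplacian response computed with array slicing;
    # blurred images have weak edges and therefore a low variance.
    g = gray.astype(np.float64)
    lap = (-4.0 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

def blur_meets_quality(gray: np.ndarray, min_sharpness: float = 50.0) -> bool:
    # "Blur degree <= threshold" restated as "sharpness >= threshold",
    # since this measure moves in the opposite direction to blur.
    return laplacian_sharpness(gray) >= min_sharpness
```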
16. The base map input device according to claim 13, wherein
the face occlusion-state judgment submodule is configured to determine whether key parts of the face are occluded, and to determine that the occlusion state of the face in the facial image meets the quality requirement if the key parts of the face are not occluded, and otherwise that it does not.
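The claim does not say how occlusion of a key part is decided. One possible design, sketched here with entirely hypothetical names, takes per-part visibility confidences from an upstream landmark or occlusion detector and requires every key part to clear a threshold.

```python
KEY_PARTS = ("left_eye", "right_eye", "nose", "mouth")

def occlusion_meets_quality(visibility: dict,
                            min_visibility: float = 0.5) -> bool:
    """visibility maps each key facial part to a detector confidence in
    [0, 1] (hypothetical upstream output). The face passes only when no
    key part is judged occluded, i.e. every confidence clears the bar."""
    return all(visibility.get(part, 0.0) >= min_visibility
               for part in KEY_PARTS)
```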
17. The base map input device according to claim 13, wherein
the facial-image brightness judgment submodule is configured to determine the brightness of the face bounding-box image, and to determine that the brightness of the facial image meets the quality requirement if the brightness lies between a first luminance threshold and a second luminance threshold, and otherwise that it does not.
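The two-sided luminance test admits a very small sketch, assuming brightness is taken as the mean grey level of the bounding-box image; the threshold values below are illustrative, not from the patent.

```python
import numpy as np

def brightness_meets_quality(gray: np.ndarray,
                             first_threshold: float = 60.0,
                             second_threshold: float = 200.0) -> bool:
    # Mean grey level of the face bounding-box image must lie between the
    # first and second luminance thresholds (rejects both under- and
    # over-exposed faces).
    mean_level = float(gray.mean())
    return first_threshold <= mean_level <= second_threshold
```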
18. The base map input device according to claim 13, wherein
the facial-image size judgment submodule is configured to determine that the size of the facial image meets the quality requirement if the size of the face bounding-box image lies between a first size threshold and a second size threshold, and otherwise that it does not.
19. The base map input device according to claim 11, further comprising:
a liveness detection module, configured to perform liveness detection on the facial image containing the target object.
20. The base map input device according to claim 11, further comprising:
a prompt module, configured to issue a first prompt when the quality judgment of the image containing the target object's face is unqualified, so as to prompt the target object to make an adjustment.
21. A base map input system, comprising an image sensor, a storage device and a processor, wherein the image sensor is configured to capture facial images; the storage device stores a computer program to be run by the processor; and the computer program, when run by the processor, executes the base map input method according to any one of claims 1-10.
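The device claims compose into a simple selection rule: an image becomes a base map only if it passes every quality judgment. A minimal sketch of that composition is below; the candidate record, its field names, and all thresholds are hypothetical stand-ins for the measurements the claimed submodules would produce.

```python
from typing import Callable, Dict, List

Candidate = Dict[str, float]  # hypothetical per-image quality measurements

def size_ok(c: Candidate, lo: float = 80.0, hi: float = 1000.0) -> bool:
    # Claim-18 style check: bounding-box side lengths between two thresholds.
    return lo <= min(c["w"], c["h"]) and max(c["w"], c["h"]) <= hi

def pose_ok(c: Candidate, threshold: float = 15.0) -> bool:
    # Claim-14 style check on precomputed deviation-from-frontal angles.
    return all(abs(c[k]) <= threshold for k in ("yaw", "pitch", "roll"))

def select_base_maps(candidates: List[Candidate],
                     checks: List[Callable[[Candidate], bool]]
                     ) -> List[Candidate]:
    # Images passing every quality judgment are kept as base maps; an empty
    # result is where the device would issue the "first prompt" of claim 20
    # asking the target object to adjust.
    return [c for c in candidates if all(chk(c) for chk in checks)]
```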
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710867203.4A CN108875485A (en) | 2017-09-22 | 2017-09-22 | A kind of base map input method, apparatus and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108875485A true CN108875485A (en) | 2018-11-23 |
Family
ID=64325787
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710867203.4A Pending CN108875485A (en) | 2017-09-22 | 2017-09-22 | A kind of base map input method, apparatus and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108875485A (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070071288A1 (en) * | 2005-09-29 | 2007-03-29 | Quen-Zong Wu | Facial features based human face recognition method |
CN102930261A (en) * | 2012-12-05 | 2013-02-13 | 上海市电力公司 | Face snapshot recognition method |
CN104881644A (en) * | 2015-05-25 | 2015-09-02 | 华南理工大学 | Face image acquisition method under uneven lighting condition |
CN204698510U (en) * | 2015-06-02 | 2015-10-14 | 福州大学 | The diabetic retinopathy optical fundus examination photographic means that picture quality ensures |
CN105120167A (en) * | 2015-08-31 | 2015-12-02 | 广州市幸福网络技术有限公司 | Certificate picture camera and certificate picture photographing method |
CN105260731A (en) * | 2015-11-25 | 2016-01-20 | 商汤集团有限公司 | Human face living body detection system and method based on optical pulses |
CN105631439A (en) * | 2016-02-18 | 2016-06-01 | 北京旷视科技有限公司 | Human face image collection method and device |
CN105938552A (en) * | 2016-06-29 | 2016-09-14 | 北京旷视科技有限公司 | Face recognition method capable of realizing base image automatic update and face recognition device |
CN106331510A (en) * | 2016-10-31 | 2017-01-11 | 维沃移动通信有限公司 | Backlight photographing method and mobile terminal |
CN106503614A (en) * | 2016-09-14 | 2017-03-15 | 厦门幻世网络科技有限公司 | A kind of photo acquisition methods and device |
CN106682187A (en) * | 2016-12-29 | 2017-05-17 | 北京旷视科技有限公司 | Method and device for establishing image bottom libraries |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109684951A (en) * | 2018-12-12 | 2019-04-26 | 北京旷视科技有限公司 | Face identification method, bottom library input method, device and electronic equipment |
CN109670436A (en) * | 2018-12-13 | 2019-04-23 | 北京旷视科技有限公司 | Vehicle operator's auth method, device and electronic equipment |
EP3867735A4 (en) * | 2018-12-14 | 2022-04-20 | Samsung Electronics Co., Ltd. | Method of performing function of electronic device and electronic device using same |
US11551682B2 (en) | 2018-12-14 | 2023-01-10 | Samsung Electronics Co., Ltd. | Method of performing function of electronic device and electronic device using same |
CN109740503A (en) * | 2018-12-28 | 2019-05-10 | 北京旷视科技有限公司 | Face authentication method, image bottom library input method, device and processing equipment |
WO2020155486A1 (en) * | 2019-01-28 | 2020-08-06 | 平安科技(深圳)有限公司 | Facial recognition optimization method and apparatus, computer device and storage medium |
CN109919035A (en) * | 2019-01-31 | 2019-06-21 | 平安科技(深圳)有限公司 | Improve method, apparatus, computer equipment and storage medium that attendance is identified by |
CN110321843A (en) * | 2019-07-04 | 2019-10-11 | 杭州视洞科技有限公司 | A kind of face out of kilter method based on deep learning |
CN110321843B (en) * | 2019-07-04 | 2021-11-09 | 杭州视洞科技有限公司 | Face optimization method based on deep learning |
CN110619656A (en) * | 2019-09-05 | 2019-12-27 | 杭州宇泛智能科技有限公司 | Face detection tracking method and device based on binocular camera and electronic equipment |
CN110619656B (en) * | 2019-09-05 | 2022-12-02 | 杭州宇泛智能科技有限公司 | Face detection tracking method and device based on binocular camera and electronic equipment |
CN110688967A (en) * | 2019-09-30 | 2020-01-14 | 上海依图信息技术有限公司 | System and method for static human face living body detection |
CN113132615A (en) * | 2019-12-31 | 2021-07-16 | 深圳云天励飞技术有限公司 | Object image acquisition method and device, electronic equipment and storage medium |
CN111405249A (en) * | 2020-03-20 | 2020-07-10 | 腾讯云计算(北京)有限责任公司 | Monitoring method, monitoring device, server and computer-readable storage medium |
CN113628243A (en) * | 2020-05-08 | 2021-11-09 | 广州海格通信集团股份有限公司 | Motion trajectory acquisition method and device, computer equipment and storage medium |
CN114004779A (en) * | 2020-07-27 | 2022-02-01 | 中移物联网有限公司 | Face quality evaluation method and device based on deep learning |
CN111860394A (en) * | 2020-07-28 | 2020-10-30 | 成都新希望金融信息有限公司 | Gesture estimation and gesture detection-based action living body recognition method |
WO2022047492A1 (en) * | 2020-08-27 | 2022-03-03 | Sensormatic Electronics, LLC | Method and system for facial feature information generation |
US11763595B2 (en) | 2020-08-27 | 2023-09-19 | Sensormatic Electronics, LLC | Method and system for identifying, tracking, and collecting data on a person of interest |
CN113060094A (en) * | 2021-04-29 | 2021-07-02 | 北京车和家信息技术有限公司 | Vehicle control method and device and vehicle-mounted equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108875485A (en) | A kind of base map input method, apparatus and system | |
CN108875452A (en) | Face identification method, device, system and computer-readable medium | |
CN108319953B (en) | Occlusion detection method and device, electronic equipment and the storage medium of target object | |
CN106778518B (en) | Face living body detection method and device | |
CN105426827B (en) | Living body verification method, device and system | |
CN106407914B (en) | Method and device for detecting human face and remote teller machine system | |
CN105243386B (en) | Face living body judgment method and system | |
CN104143086B (en) | Portrait compares the application process on mobile terminal operating system | |
JP4692526B2 (en) | Gaze direction estimation apparatus, gaze direction estimation method, and program for causing computer to execute gaze direction estimation method | |
KR101569268B1 (en) | Acquisition System and Method of Iris image for iris recognition by using facial component distance | |
CN105659200B (en) | For showing the method, apparatus and system of graphic user interface | |
CN106056064B (en) | A kind of face identification method and face identification device | |
CN109558764A (en) | Face identification method and device, computer equipment | |
CN109766785B (en) | Living body detection method and device for human face | |
CN108985210A (en) | A kind of Eye-controlling focus method and system based on human eye geometrical characteristic | |
JP4936491B2 (en) | Gaze direction estimation apparatus, gaze direction estimation method, and program for causing computer to execute gaze direction estimation method | |
CN111524080A (en) | Face skin feature identification method, terminal and computer equipment | |
CN106372629A (en) | Living body detection method and device | |
CN112487921B (en) | Face image preprocessing method and system for living body detection | |
JP6822482B2 (en) | Line-of-sight estimation device, line-of-sight estimation method, and program recording medium | |
Weidenbacher et al. | A comprehensive head pose and gaze database | |
CN109858375A (en) | Living body faces detection method, terminal and computer readable storage medium | |
CN108875469A (en) | In vivo detection and identity authentication method, device and computer storage medium | |
US20210256244A1 (en) | Method for authentication or identification of an individual | |
KR20160009972A (en) | Iris recognition apparatus for detecting false face image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20181123 |