CN107657218B - Face recognition method and related product - Google Patents
- Publication number
- CN107657218B CN107657218B CN201710817259.9A CN201710817259A CN107657218B CN 107657218 B CN107657218 B CN 107657218B CN 201710817259 A CN201710817259 A CN 201710817259A CN 107657218 B CN107657218 B CN 107657218B
- Authority
- CN
- China
- Prior art keywords
- face
- image
- recognition result
- target
- peripheral contour
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
Abstract
The embodiment of the invention discloses a face recognition method and a related product. The method comprises the following steps: shooting a target object during face image input to obtain a first image; performing face recognition on the first image to obtain a recognition result; and if the recognition result meets a preset condition, performing a prompt operation at a specified prompt frequency. The embodiment of the invention can perform face recognition on the captured image and, if the image meets the input-failure condition, remind the user at the specified frequency: on one hand, the user is not reminded too frequently; on the other hand, the user is still alerted, so that the user can adjust the face input posture in time, improving face recognition efficiency.
Description
Technical Field
The invention relates to the technical field of mobile terminals, in particular to a face recognition method and a related product.
Background
With the widespread use of mobile terminals (mobile phones, tablet computers, etc.), the applications and functions they support keep growing, and mobile terminals are developing toward diversification and personalization, becoming indispensable electronic products in users' lives.
At present, face unlocking is increasingly favored by mobile terminal manufacturers because it requires no physical contact with the terminal: a face image can be captured at a distance, which makes acquisition very convenient. Face image acquisition is the key to face unlocking, and the quality of the face image directly determines whether unlocking succeeds. In particular, when the face is entered incorrectly, face recognition efficiency drops. Therefore, how to reasonably remind the user when face input fails is a problem in urgent need of a solution.
Disclosure of Invention
The embodiment of the invention provides a face recognition method and a related product, which can reasonably remind a user when face input fails.
In a first aspect, an embodiment of the present invention provides a mobile terminal, including an Application Processor (AP), and a face recognition device and a memory, which are connected to the AP, wherein,
the face recognition device is used for shooting a target object in the face image input process to obtain a first image;
the memory is used for storing preset conditions;
the AP is used for carrying out face recognition on the first image to obtain a recognition result; and if the recognition result meets the preset condition, performing prompt operation according to a specified prompt frequency.
In a second aspect, an embodiment of the present invention provides a face recognition method, which is applied to a mobile terminal including an application processor AP, and a face recognition device and a memory connected to the AP, where the method includes:
the face recognition device shoots a target object in a face image input process to obtain a first image;
the memory stores preset conditions;
the AP carries out face recognition on the first image to obtain a recognition result; and if the recognition result meets the preset condition, performing prompt operation according to a specified prompt frequency.
In a third aspect, an embodiment of the present invention provides a face recognition method, including:
shooting a target object in a face image input process to obtain a first image;
carrying out face recognition on the first image to obtain a recognition result;
and if the recognition result meets the preset condition, performing prompt operation according to a specified prompt frequency.
In a fourth aspect, an embodiment of the present invention provides a face recognition apparatus, including:
the shooting unit is used for shooting a target object in the process of inputting the face image to obtain a first image;
the recognition unit is used for carrying out face recognition on the first image to obtain a recognition result;
and the prompting unit is used for performing prompting operation according to a specified prompting frequency if the identification result meets a preset condition.
In a fifth aspect, an embodiment of the present invention provides a mobile terminal, including: an application processor AP and a memory; and one or more programs stored in the memory and configured to be executed by the AP, the programs including instructions for some or all of the steps as described in the third aspect.
In a sixth aspect, the present invention provides a computer-readable storage medium, where the computer-readable storage medium is used for storing a computer program, where the computer program is used to make a computer execute some or all of the steps described in the third aspect of the present invention.
In a seventh aspect, embodiments of the present invention provide a computer program product, where the computer program product comprises a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps as described in the third aspect of embodiments of the present invention. The computer program product may be a software installation package.
The embodiment of the invention has the following beneficial effects:
It can be seen that, in the embodiment of the present invention, during face image input a target object is photographed to obtain a first image, face recognition is performed on the first image to obtain a recognition result, and if the recognition result meets a preset condition, a prompt operation is performed at a specified prompt frequency. Thus the captured image undergoes face recognition, and if it meets the input-failure condition the user is reminded at the specified frequency: on one hand, the user is not prompted too often; on the other hand, the user is still warned, so that the user can adjust the face input posture in time, improving face recognition efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1A is a schematic diagram of an architecture of an exemplary mobile terminal according to an embodiment of the present invention;
fig. 1B is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention;
fig. 1C is another schematic structural diagram of a mobile terminal according to an embodiment of the present invention;
fig. 1D is a schematic flow chart of a face recognition method disclosed in the embodiment of the present invention;
fig. 1E is another schematic flow chart of a face recognition method disclosed in the embodiment of the present invention;
FIG. 2 is a schematic flow chart of another face recognition method disclosed in the embodiment of the present invention;
fig. 3 is another schematic structural diagram of a mobile terminal according to an embodiment of the present invention;
fig. 4A is a schematic structural diagram of a face recognition apparatus according to an embodiment of the present invention;
fig. 4B is a schematic structural diagram of a recognition unit of the face recognition apparatus depicted in fig. 4A according to an embodiment of the present invention;
fig. 4C is a schematic structural diagram of a prompt unit of the face recognition apparatus depicted in fig. 4A according to an embodiment of the present invention;
FIG. 4D is a schematic diagram of another structure of the face recognition apparatus depicted in FIG. 4A according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of another mobile terminal disclosed in the embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," and the like in the description and claims of the present invention and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The mobile terminal according to the embodiment of the present invention may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, as well as various forms of User Equipment (UE), Mobile Stations (MS), terminal devices, and the like, which have wireless communication functions. For convenience of description, the above-mentioned devices are collectively referred to as a mobile terminal.
The following describes embodiments of the present invention in detail. As shown in fig. 1A, in an exemplary mobile terminal 1000, the face recognition device of the mobile terminal 1000 may include a front camera 21, which may be at least one of: an infrared camera, a dual camera, a visible-light camera, etc. The dual camera may be at least one of: an infrared camera plus a visible-light camera, two visible-light cameras, etc. A face image can be collected through the face recognition device. During face recognition, the front camera may have a zoom function and can shoot the same target at different focal lengths to obtain multiple images; the target may be a human face.
Referring to fig. 1B, fig. 1B is a schematic structural diagram of a mobile terminal 100. The mobile terminal 100 includes an application processor AP110, a face recognition device 130, and a memory 140, where the AP110 is connected to the face recognition device 130 and the memory 140 through a bus 150. Further, referring to fig. 1C, fig. 1C is a modified structure of the mobile terminal 100 described in fig. 1B; compared with fig. 1B, fig. 1C further includes an acceleration sensor 160.
The mobile terminal described based on fig. 1A-1C can be used to implement the following functions:
the face recognition device 130 is configured to shoot a target object in a face image input process to obtain a first image;
the memory 140 is used for storing preset conditions;
the AP110 is configured to perform face recognition on the first image to obtain a recognition result; and if the recognition result meets the preset condition, performing prompt operation according to a specified prompt frequency.
Optionally, the preset condition is at least one of the following:
the recognition result is that the integrity of the face in the first image is lower than a first preset threshold;
or,
the identification result is that the image quality evaluation value of the first image is lower than a second preset threshold;
or,
the identification result is that the first image is from a non-living body.
Optionally, when the recognition result is the face integrity;
in the aspect of performing face recognition on the first image to obtain a recognition result, the AP110 is specifically configured to:
carrying out image segmentation on the first image to obtain a face region;
detecting whether the peripheral outline of the face area is complete;
when the peripheral contour of the face region is incomplete, perfecting the peripheral contour of the face region according to the symmetry principle to obtain a first target peripheral contour, determining the break points of the peripheral contour of the face region, and connecting adjacent break points to obtain a second target peripheral contour;
and taking the area ratio between the second target peripheral outline and the first target peripheral outline as the face integrity.
Optionally, in the aspect of performing the prompt operation according to the designated prompt frequency, the AP110 is specifically configured to:
determining a target deviation degree corresponding to the recognition result according to the recognition result and a preset recognition result;
and determining the designated prompt frequency corresponding to the target deviation according to the corresponding relation between the preset deviation and the prompt frequency.
Optionally, the acceleration sensor 160 is configured to detect an acceleration of the mobile terminal, and when the acceleration is lower than a third preset threshold, the face recognition device performs the step of shooting the target object.
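The acceleration gating above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the magnitude formula, the use of deviation from standard gravity, and the threshold value are all assumptions.

```python
import math

# Hypothetical "third preset threshold": allowed deviation from gravity, in m/s^2.
ACCEL_THRESHOLD = 1.5


def should_capture(ax: float, ay: float, az: float,
                   threshold: float = ACCEL_THRESHOLD) -> bool:
    """Return True when the device is steady enough for the face
    recognition device to shoot the target object."""
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    # A stationary device reads roughly standard gravity (9.81 m/s^2);
    # a large deviation suggests the device is moving or shaking.
    return abs(magnitude - 9.81) < threshold
```

Usage: the capture loop would poll the accelerometer and call `should_capture` before each shot, skipping frames taken while the terminal is in motion.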
Further optionally, based on the mobile terminal described in fig. 1A to 1C, a face recognition method described in the following may be performed, specifically as follows:
the face recognition device 130 shoots a target object in a face image input process to obtain a first image;
the memory 140 stores preset conditions;
the AP110 performs face recognition on the first image to obtain a recognition result; and if the recognition result meets the preset condition, performing prompt operation according to a specified prompt frequency.
Fig. 1D is a schematic flowchart illustrating an embodiment of a face recognition method according to an embodiment of the present invention. The face recognition method described in this embodiment is applied to a mobile terminal including a face recognition device and an application processor AP, and its physical diagram and structure diagram can be seen in fig. 1A to 1C, which includes the following steps:
101. in the face image input process, a target object is shot to obtain a first image.
The target object may be a human face or other objects.
102. And carrying out face recognition on the first image to obtain a recognition result.
In the step 102, the performing face recognition on the first image may include, but is not limited to: determining whether the first image contains a face, determining the face integrity of the face in the first image, determining the image quality of the first image, determining whether the first image is from a living body, determining the angle of the face in the first image, and the like. Furthermore, after the face recognition is performed on the first image, a recognition result can be obtained.
Optionally, in step 102, when the recognition result is the face integrity;
the face recognition of the first image to obtain a recognition result includes:
a1, carrying out image segmentation on the first image to obtain a face region;
a2, detecting whether the peripheral outline of the face area is complete;
a3, when the peripheral contour of the face region is incomplete, perfecting the peripheral contour of the face region according to the symmetry principle to obtain a first target peripheral contour, determining the break points of the peripheral contour of the face region, and connecting adjacent break points to obtain a second target peripheral contour;
and A4, taking the area ratio between the second target peripheral outline and the first target peripheral outline as the face integrity.
Specifically, image segmentation is performed on the first image to determine the face region, i.e., the background region of the first image is removed. It is then detected whether the peripheral contour of the face region is complete. If the contour is incomplete, the peripheral contour of the face region is completed according to the symmetry principle of the face to obtain a first target peripheral contour that satisfies symmetry. In addition, the break points of the peripheral contour of the face region are determined, and adjacent break points are connected by a line segment or straight line to obtain a second target peripheral contour. The area ratio between the second target peripheral contour and the first target peripheral contour is taken as the face integrity.
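The final step, A4, reduces to an area ratio between two closed contours. A minimal sketch, assuming the contours are available as ordered vertex lists (the segmentation and symmetry-completion steps are outside its scope):

```python
from typing import List, Tuple

Point = Tuple[float, float]


def polygon_area(points: List[Point]) -> float:
    """Shoelace formula: area of a closed polygon from its ordered vertices."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the polygon
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0


def face_integrity(second_contour: List[Point],
                   first_contour: List[Point]) -> float:
    """Area ratio between the second target peripheral contour (break points
    joined by straight segments) and the first target peripheral contour
    (completed by the symmetry principle). 1.0 means a fully visible face."""
    return polygon_area(second_contour) / polygon_area(first_contour)
```

The resulting value in [0, 1] would then be compared against the first preset threshold of step 103.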
Optionally, in step 102, when the recognition result is an image quality evaluation value;
the face recognition of the first image to obtain a recognition result includes:
and evaluating the image quality of the first image to obtain an image evaluation value.
The quality of the face image directly determines the face unlocking efficiency, so image quality can serve as an important index for screening the N face images. Therefore, image quality evaluation can be performed on the N face images using at least one image quality evaluation index, which may be at least one of: mean, standard deviation, entropy, sharpness, signal-to-noise ratio, etc.
It should be noted that, since a single evaluation index has certain limitations, multiple image quality evaluation indexes can be used instead. Of course, more indexes are not always better: each additional index raises the computational complexity of the evaluation process without necessarily improving the result. Therefore, when the image quality evaluation requirement is high, 2 to 10 image quality evaluation indexes may be used. The number of indexes, and which ones are selected, depend on the specific implementation. Naturally, the selected indexes should suit the specific scene; the indexes used for evaluation in a dark environment may differ from those used in a bright environment.
Therefore, in the process of executing step 102, when the image quality evaluation can be performed on the N face images, a plurality of image quality evaluation indexes may be included, and each image quality evaluation index also corresponds to one weight, so that when each image quality evaluation index performs the image quality evaluation on the image, one evaluation result can be obtained, and finally, a weighting operation is performed, so as to obtain a final image quality evaluation value.
For example, when the requirement on the accuracy of the image quality evaluation is not high, the evaluation may be performed by using one image quality evaluation index, for example, the image quality evaluation value is performed on the image to be processed by using entropy, and it is considered that the larger the entropy is, the better the image quality is, and conversely, the smaller the entropy is, the worse the image quality is.
For example, when the requirement for image quality evaluation accuracy is high, multiple image quality evaluation indexes may be used, each with its own weight, so that multiple per-index evaluation values are obtained and a final image quality evaluation value is computed from those values and their corresponding weights. For example, take three image quality evaluation indexes A, B, and C with weights a1, a2, and a3: if evaluating an image with A, B, and C yields evaluation values b1, b2, and b3 respectively, the final image quality evaluation value is a1·b1 + a2·b2 + a3·b3. In general, the larger the image quality evaluation value, the better the image quality.
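The weighted combination in the worked example above can be sketched directly; the threshold check against the "second preset threshold" is an assumption about how the score would be consumed downstream:

```python
from typing import Sequence


def image_quality_score(values: Sequence[float],
                        weights: Sequence[float]) -> float:
    """Weighted combination a1*b1 + a2*b2 + ... + an*bn of per-index
    evaluation values, matching the patent's three-index example."""
    if len(values) != len(weights):
        raise ValueError("each evaluation index needs exactly one weight")
    return sum(w * v for w, v in zip(weights, values))


def passes_quality(values: Sequence[float], weights: Sequence[float],
                   threshold: float) -> bool:
    """True when the combined evaluation value clears the (assumed)
    second preset threshold; larger scores mean better image quality."""
    return image_quality_score(values, weights) >= threshold
```

With weights (0.5, 0.3, 0.2) and per-index values (0.8, 0.6, 0.9), the combined score is 0.5·0.8 + 0.3·0.6 + 0.2·0.9 = 0.76.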
Optionally, in step 102, when the recognition result is a non-living body;
the face recognition of the first image to obtain a recognition result includes:
acquiring an infrared image corresponding to the first image; carrying out image segmentation on the infrared image to obtain a contour image; and judging whether the contour image contains a face contour or not, and when the contour image does not contain the face contour, determining that the identification result is that the first image is from a non-living body.
The non-living body may be at least one of: a photo, a 3D face model, a face mask, etc. The infrared image corresponding to the first image can be acquired through an infrared camera, i.e., the first image and the infrared image are both of the same target object. Further, image segmentation can be performed on the infrared image to obtain a contour image, and it is judged whether the contour image contains a face contour: if so, the recognition result is that the first image is from a living body; if not, the recognition result is that the first image is from a non-living body.
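The intuition behind the infrared check is that a living face emits heat while a photo or mask does not. The patent segments the infrared image and looks for a face contour; the sketch below substitutes a much cruder stand-in for illustration only (a warm-pixel fraction instead of contour analysis), and both thresholds are invented:

```python
from typing import Sequence


def is_from_living_body(ir_image: Sequence[Sequence[int]],
                        warm_threshold: int = 128,
                        min_warm_fraction: float = 0.05) -> bool:
    """Simplified liveness proxy on a grayscale infrared image (rows of
    pixel intensities): require a minimum fraction of 'warm' pixels.
    A real implementation would segment the image and test whether the
    resulting contour image contains a face contour."""
    total = sum(len(row) for row in ir_image)
    if total == 0:
        return False
    warm = sum(1 for row in ir_image for px in row if px >= warm_threshold)
    return warm / total >= min_warm_fraction
```

A cold scene (e.g. a printed photo held up to the camera) yields almost no warm pixels, so the recognition result would be "first image is from a non-living body".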
103. And if the recognition result meets the preset condition, performing prompt operation according to a specified prompt frequency.
The specified prompt frequency can be set by the user or by system default. If the recognition result meets the preset condition, the input face image does not satisfy the face recognition condition, so the user needs to be prompted, at the specified prompt frequency, to input the face image again. In general, face images are captured continuously and each captured image enters the face recognition stage; if every failed recognition triggered a prompt, the user would be prompted far too frequently, degrading the user experience. Since the camera collects face images at a fixed face acquisition frequency, the specified prompt frequency should be lower than the face acquisition frequency. The prompt mode of the prompt operation may be one of: a voice prompt, a vibration prompt, a text prompt, a flashlight prompt, etc. The prompt content may be one of: prompting to present a living body, prompting to adjust the face angle, prompting the user not to shake, etc.
Optionally, in the step 103, the preset condition is at least one of the following:
the recognition result is that the integrity of the face in the first image is lower than a first preset threshold;
or,
the identification result is that the image quality evaluation value of the first image is lower than a second preset threshold;
or,
the identification result is that the first image is from a non-living body.
The first preset threshold and the second preset threshold can be set by a user or defaulted by a system, and the preset condition can be at least one of the following conditions: the recognition result is that the integrity of the face in the first image is lower than a first preset threshold, the recognition result is that the image quality evaluation value of the first image is lower than a second preset threshold, the recognition result is that the first image is from a non-living body, the recognition result is that the face angle in the first image is not in a preset range, and the like, wherein the preset range can be set by a user or is default by a system.
Optionally, in the step 103, performing a prompt operation according to a specified prompt frequency may include the following steps:
31. determining a target deviation degree corresponding to the recognition result according to the recognition result and a preset recognition result;
32. and determining the designated prompt frequency corresponding to the target deviation according to the corresponding relation between the preset deviation and the prompt frequency.
The preset recognition result is the recognition result obtained when entry succeeds, so the degree to which the current recognition result deviates from it can be determined by comparing the two. For example, when the recognition result concerns image quality, the difference between the image quality evaluation value of the recognition result and a preset image quality evaluation value is taken as the target deviation degree; the preset image quality evaluation value can be set by the user or defaulted by the system. The mobile terminal can pre-store a correspondence between deviation degree and prompt frequency, so the specified prompt frequency corresponding to the target deviation degree can be looked up directly from that correspondence, and the prompt operation is then executed at the specified prompt frequency.
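Steps 31 and 32 amount to computing a deviation and looking it up in a stored correspondence table. A minimal sketch, assuming image quality is the quantity being compared; the band boundaries and frequencies are invented values, not from the patent:

```python
import bisect

# Hypothetical correspondence pre-stored on the terminal: the larger the
# target deviation degree, the more frequent the prompt. Values are invented.
DEVIATION_BANDS = [10.0, 30.0, 60.0]          # band upper bounds
PROMPT_FREQUENCIES_HZ = [0.2, 0.5, 1.0, 2.0]  # one frequency per band

def specified_prompt_frequency(quality_value: float,
                               preset_quality_value: float) -> float:
    # Step 31: target deviation degree = difference between the image quality
    # evaluation value of the result and the preset evaluation value.
    deviation = abs(quality_value - preset_quality_value)
    # Step 32: look up which band the deviation falls into.
    return PROMPT_FREQUENCIES_HZ[bisect.bisect_right(DEVIATION_BANDS, deviation)]
```

For example, a quality value of 45 against a preset of 80 gives a deviation of 35, which falls in the third band and maps to a 1.0 Hz prompt frequency.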
Optionally, as shown in fig. 1E, fig. 1E is another embodiment of the face recognition method described in fig. 1D according to the embodiment of the present invention, and compared with the face recognition method described in fig. 1D, the method may further include the following steps:
104. matching the first image with a preset face template, and executing an unlocking operation when the first image is successfully matched with the preset face template.
The preset face template may be stored in advance, before step 101 is executed; it may be obtained by acquiring a face image of the user through the face recognition device, and may be stored in a face template library.
Optionally, in executing step 104, the first image is matched with the preset face template. When the matching value between the first image and the preset face template is greater than a face recognition threshold, the matching succeeds and the subsequent unlocking process is executed; when the matching value is less than or equal to the face recognition threshold, the whole face recognition process may be ended, or the user may be prompted to perform face recognition again.
Specifically, in executing step 104, feature extraction may be performed on both the first image and the preset face template, and the extracted features may then be matched. Feature extraction can be implemented by algorithms such as the Harris corner detector, the Scale-Invariant Feature Transform (SIFT), or the SUSAN corner detector, which are not described again here. In executing step 104, the face image may first be preprocessed; the preprocessing may include, but is not limited to: image enhancement, binarization, smoothing, and conversion of a color image to a grayscale image. Feature extraction is then performed on the preprocessed first image to obtain a feature set of the face image. Next, at least one face template is selected from the face template library, where a face template may be an original face image or a set of features; the feature set of the face image is matched against the feature set of the face template to obtain a matching result, and whether the matching succeeds is judged from the matching result.
When the matching value between the first image and the preset face template is greater than the face recognition threshold, the next unlocking process may be executed, which may include, but is not limited to: unlocking to enter the home page, entering a designated page of an application, or proceeding to the next biometric step.
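The preprocessing, feature matching, and threshold decision described above can be sketched as follows. This is a toy nearest-neighbour matcher over generic descriptor vectors, not the Harris/SIFT/SUSAN extractors named in the text, and all thresholds and names are assumptions:

```python
import numpy as np

def preprocess(img: np.ndarray) -> np.ndarray:
    """Grayscale conversion plus normalization, standing in for the
    preprocessing step (enhancement and binarization omitted)."""
    gray = img.mean(axis=2) if img.ndim == 3 else img.astype(float)
    return (gray - gray.mean()) / (gray.std() + 1e-8)

def match_features(feats_a: np.ndarray, feats_b: np.ndarray,
                   threshold: float) -> float:
    """Matching value in [0, 1]: fraction of descriptors in feats_a whose
    nearest neighbour in feats_b lies closer than `threshold`."""
    dists = np.linalg.norm(feats_a[:, None, :] - feats_b[None, :, :], axis=2)
    return float(np.mean(dists.min(axis=1) < threshold))

def unlock_decision(match_value: float, face_threshold: float) -> str:
    # Matching value above the face recognition threshold -> unlock;
    # otherwise end the flow or prompt the user to try again.
    return "unlock" if match_value > face_threshold else "retry"

# Identical descriptor sets match perfectly and pass the threshold.
feats = np.array([[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]])
decision = unlock_decision(match_features(feats, feats, threshold=0.1),
                           face_threshold=0.8)
```

A real implementation would extract `feats` from the preprocessed first image and from each template in the face template library.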
Optionally, in the step 104, matching the first image with a preset face template may include the following steps:
d1, performing multi-scale decomposition on the first image by adopting a multi-scale decomposition algorithm to obtain a first high-frequency component image of the first image, and performing feature extraction on the first high-frequency component image to obtain a first feature set;
d2, performing multi-scale decomposition on the preset face template by adopting the multi-scale decomposition algorithm to obtain a second high-frequency component image of the preset face template, and performing feature extraction on the second high-frequency component image to obtain a second feature set;
d3, screening the first characteristic set and the second characteristic set to obtain a first stable characteristic set and a second stable characteristic set;
d4, performing feature matching on the first stable feature set and the second stable feature set, and confirming that the first image is successfully matched with the preset face template when the number of matched feature points between the two stable feature sets is greater than a preset quantity threshold.
The first image may be subjected to multi-scale decomposition by a multi-scale decomposition algorithm to obtain a low-frequency component image and a plurality of high-frequency component images, and the first high-frequency component image may be one of the plurality of high-frequency component images. The multi-scale decomposition algorithm may include, but is not limited to: wavelet transform, Laplacian transform, the Contourlet Transform (CT), the Non-Subsampled Contourlet Transform (NSCT), shearlet transform, and the like. Taking the Contourlet transform as an example, it decomposes the face image into a low-frequency component image and a plurality of high-frequency component images; NSCT likewise yields a low-frequency component image and a plurality of high-frequency component images, and with NSCT each of the high-frequency component images has the same size. A high-frequency component image contains more of the detail information of the original image.
Similarly, the multi-scale decomposition algorithm may be applied to the preset face template to obtain a low-frequency component image and a plurality of high-frequency component images, with the second high-frequency component image being one of them. The first high-frequency component image corresponds in position to the second high-frequency component image; that is, they occupy the same level and the same scale. For example, if the first high-frequency component image is at the 2nd level and 3rd scale, the second high-frequency component image is also at the 2nd level and 3rd scale. In step d3, the first feature set and the second feature set are screened to obtain the first stable feature set and the second stable feature set. The screening may be implemented as follows: the first feature set contains a plurality of feature points, as does the second feature set; each feature point is a vector with a magnitude and a direction, so the modulus of each feature point can be computed, and a feature point is retained only if its modulus exceeds a certain threshold. Steps d1 to d4 match mainly the fine features between the first image and the preset face template, which improves the accuracy of face recognition; in general, finer features are harder to forge, so the security of face unlocking is also improved.
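A minimal stand-in for steps d1 to d3: one level of a 2-D Haar wavelet decomposition (the simplest of the multi-scale transforms listed; Contourlet or NSCT would be used in practice) plus modulus-based screening of feature points. The example image, threshold, and feature vectors are assumptions:

```python
import numpy as np

def haar_decompose(img: np.ndarray):
    """One level of 2-D Haar decomposition: returns the low-frequency
    approximation and three high-frequency detail images. The 'first
    high-frequency component image' of step d1 would be one of the latter."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    low = (a + b + c + d) / 4          # approximation (low frequency)
    lh  = (a + b - c - d) / 4          # horizontal detail
    hl  = (a - b + c - d) / 4          # vertical detail
    hh  = (a - b - c + d) / 4          # diagonal detail
    return low, (lh, hl, hh)

def screen_features(features: np.ndarray, modulus_threshold: float) -> np.ndarray:
    """Step d3: retain only feature points whose modulus (vector norm)
    exceeds the threshold, yielding the 'stable' feature set."""
    moduli = np.linalg.norm(features, axis=1)
    return features[moduli > modulus_threshold]

img = np.arange(16, dtype=float).reshape(4, 4)
low, (lh, hl, hh) = haar_decompose(img)
stable = screen_features(np.array([[3.0, 4.0], [0.1, 0.1]]),
                         modulus_threshold=1.0)
```

Here the weak feature point (modulus about 0.14) is discarded and only the strong one (modulus 5) survives screening.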
Optionally, between the step 103 and the step 104, the following steps may be further included:
performing image enhancement processing on the first image.
The image enhancement processing may include, but is not limited to: image denoising (e.g., wavelet-transform denoising), image restoration (e.g., Wiener filtering), and dark-vision enhancement algorithms (e.g., histogram equalization, gray-scale stretching). After image enhancement processing, the quality of the face image can be improved to some extent.
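Two of the enhancement techniques mentioned, gray-scale stretching and histogram equalization, can be sketched for an 8-bit grayscale image as follows (a minimal sketch; a real pipeline would also handle flat images and color channels):

```python
import numpy as np

def gray_stretch(img: np.ndarray) -> np.ndarray:
    """Linear gray-scale stretching to the full [0, 255] range, one of the
    dark-vision enhancement techniques mentioned in the text."""
    lo, hi = img.min(), img.max()
    if hi == lo:
        return np.zeros_like(img)
    return ((img - lo) * 255.0 / (hi - lo)).astype(np.uint8)

def histogram_equalize(img: np.ndarray) -> np.ndarray:
    """Histogram equalization: map gray levels through the normalized
    cumulative histogram to spread out the intensity values."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    return cdf[img].astype(np.uint8)

img = np.array([[50, 100], [100, 150]], dtype=np.uint8)
stretched = gray_stretch(img)
equalized = histogram_equalize(img)
```

Both transforms expand a narrow intensity range toward the full 0 to 255 span, which is what improves dark or low-contrast face images.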
It can be seen that the face recognition method described in this embodiment of the present invention shoots a target object during the face image entry process to obtain a first image, performs face recognition on the first image to obtain a recognition result, and, if the recognition result meets the preset condition, performs a prompt operation at the specified prompt frequency. The shot image is thus checked by face recognition, and if the result indicates entry failure the user is prompted at the specified frequency. On one hand the user is not prompted too frequently; on the other hand the user is still alerted, can adjust the face entry posture in time, and face recognition efficiency is improved.
In accordance with the above, please refer to fig. 2, which is a flowchart illustrating an embodiment of a face recognition method according to an embodiment of the present invention. The face recognition method described in this embodiment is applied to a mobile terminal including a face recognition device and an application processor AP, and its physical diagram and structure diagram can be seen in fig. 1A to 1C, which includes the following steps:
201. in the face image input process, detecting the acceleration of the mobile terminal through an acceleration sensor.
In the process of inputting the face image, entry may fail if the mobile terminal is in motion; for example, if the user moves the terminal too fast while picking it up, entry of the face image may fail. Therefore, the acceleration of the mobile terminal can be detected through the acceleration sensor. When the acceleration of the mobile terminal is small, it indicates that the user is settling in for face recognition, and an image shot at this moment will have better face quality.
202. when the acceleration is lower than a third preset threshold, shooting the target object to obtain a first image.
The third preset threshold can be set by the user or defaulted by the system. When the acceleration is lower than the third preset threshold, it indicates that the user's handling of the mobile terminal has become steady and the user is likely attempting face unlocking. Shooting the target object at this moment yields a first image with good image quality and a low probability of blur.
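A sketch of the stillness gate in steps 201 and 202: compare the magnitude of the accelerometer reading against gravity and capture only when the residual is below the third preset threshold. The 0.5 m/s² value is an invented placeholder, since the patent leaves the threshold user- or system-defined:

```python
import math

GRAVITY = 9.81           # m/s^2
ACCEL_THRESHOLD = 0.5    # invented stand-in for the third preset threshold

def should_capture(ax: float, ay: float, az: float) -> bool:
    """Trigger capture of the first image only when the handset is nearly
    still, i.e. the accelerometer magnitude deviates little from gravity."""
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    return abs(magnitude - GRAVITY) < ACCEL_THRESHOLD

# A phone at rest reads roughly (0, 0, 9.81); one being picked up does not.
```

Subtracting gravity from the raw magnitude is one simple way to detect "nearly still" without separating the gravity vector from linear acceleration.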
203. carrying out face recognition on the first image to obtain a recognition result.
204. if the recognition result meets the preset condition, performing a prompt operation according to a specified prompt frequency.
The specific description of the steps 203 to 204 may refer to the corresponding steps of the face recognition method described in fig. 1D, and will not be described herein again.
It can be seen that, in this embodiment of the present invention, the acceleration of the mobile terminal is detected by the acceleration sensor during face image entry; when the acceleration is lower than the third preset threshold, the target object is shot to obtain a first image; face recognition is performed on the first image to obtain a recognition result; and if the recognition result meets the preset condition, a prompt operation is performed at the specified prompt frequency. The shot image is thus checked by face recognition, and if the result indicates entry failure the user is prompted at the specified frequency. On one hand the user is not prompted too frequently; on the other hand the user is still alerted, can adjust the face entry posture in time, and face recognition efficiency is improved.
Referring to fig. 3, fig. 3 is a mobile terminal according to an embodiment of the present invention, including: an application processor AP and a memory; and one or more programs stored in the memory and configured for execution by the AP, the programs including instructions for performing the steps of:
shooting a target object in a face image input process to obtain a first image;
carrying out face recognition on the first image to obtain a recognition result;
and if the recognition result meets the preset condition, performing prompt operation according to a specified prompt frequency.
In one possible example, the preset condition is at least one of:
the recognition result is that the integrity of the face in the first image is lower than a first preset threshold;
or,
the recognition result is that the image quality evaluation value of the first image is lower than a second preset threshold;
or,
the recognition result is that the first image is from a non-living body.
In one possible example, when the recognition result is the face integrity,
in the aspect of performing face recognition on the first image to obtain a recognition result, the program includes instructions for performing the following steps:
carrying out image segmentation on the first image to obtain a face region;
detecting whether the peripheral outline of the face area is complete;
when the peripheral contour of the face region is incomplete, perfecting the peripheral contour of the face region according to the symmetry principle to obtain a first target peripheral contour, determining the break points of the peripheral contour of the face region, and connecting adjacent break points to obtain a second target peripheral contour;
and taking the area ratio between the second target peripheral outline and the first target peripheral outline as the face integrity.
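Given the two target peripheral contours as ordered point lists, the final area-ratio step can be sketched with the shoelace formula. The contour completion by symmetry and the break-point joining are omitted here, and the toy contours are assumptions:

```python
import numpy as np

def polygon_area(points: np.ndarray) -> float:
    """Shoelace formula for the area enclosed by an ordered contour."""
    x, y = points[:, 0], points[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def face_integrity(second_contour: np.ndarray, first_contour: np.ndarray) -> float:
    """Face integrity = area of the second target peripheral contour (the
    detected partial contour closed by joining break points) divided by the
    area of the first target peripheral contour (the contour completed by
    symmetry). A lower ratio indicates a more occluded face."""
    return polygon_area(second_contour) / polygon_area(first_contour)

# Toy contours: a fully completed square face region vs. a visible lower half.
first = np.array([[0, 0], [2, 0], [2, 2], [0, 2]], dtype=float)
second = np.array([[0, 0], [2, 0], [2, 1], [0, 1]], dtype=float)
integrity = face_integrity(second, first)
```

Here half the completed contour area is actually visible, so the face integrity is 0.5 and would fall below, for example, a first preset threshold of 0.8.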
In one possible example, in the aspect of performing the prompt operation according to the specified prompt frequency, the program includes instructions for performing the following steps:
determining a target deviation degree corresponding to the recognition result according to the recognition result and a preset recognition result;
and determining the specified prompt frequency corresponding to the target deviation degree according to a preset correspondence between deviation degree and prompt frequency.
In one possible example, the program further comprises instructions for performing the steps of:
detecting the acceleration of the mobile terminal through an acceleration sensor;
and when the acceleration is lower than a third preset threshold value, executing the step of shooting the target object.
Referring to fig. 4A, fig. 4A is a schematic structural diagram of a face recognition device according to the present embodiment. The face recognition apparatus is applied to a mobile terminal, and comprises a shooting unit 401, a recognition unit 402 and a prompt unit 403, wherein,
the shooting unit 401 is configured to shoot a target object in a face image input process to obtain a first image;
an identifying unit 402, configured to perform face identification on the first image to obtain an identification result;
a prompt unit 403, configured to perform a prompt operation according to a specified prompt frequency if the recognition result meets a preset condition.
Optionally, the preset condition is at least one of the following:
the recognition result is that the integrity of the face in the first image is lower than a first preset threshold;
or,
the recognition result is that the image quality evaluation value of the first image is lower than a second preset threshold;
or,
the recognition result is that the first image is from a non-living body.
Optionally, when the recognition result is the face integrity, as shown in fig. 4B, which is a detailed structure of the recognition unit 402 of the face recognition apparatus depicted in fig. 4A, the recognition unit 402 may include: a segmentation module 4021, a detection module 4022, a processing module 4023, and a first determining module 4024, specifically as follows:
the segmentation module 4021 is configured to perform image segmentation on the first image to obtain a face region;
a detection module 4022, configured to detect whether a peripheral contour of the face region is complete;
the processing module 4023 is configured to, when the peripheral contour of the face region is incomplete, perfect the peripheral contour of the face region according to a symmetry principle to obtain a first target peripheral contour, determine break points of the peripheral contour of the face region, and connect adjacent break points to obtain a second target peripheral contour;
a first determining module 4024, configured to determine an area ratio between the second target peripheral contour and the first target peripheral contour as the face integrity.
Alternatively, as shown in fig. 4C, fig. 4C is a detailed structure of the prompting unit 403 of the face recognition apparatus depicted in fig. 4A, where the prompting unit 403 may include: a second determination module 4031 and a third determination module 4032, specifically as follows;
a second determining module 4031, configured to determine, according to the recognition result and a preset recognition result, a target deviation degree corresponding to the recognition result;
a third determining module 4032, configured to determine the specified prompt frequency corresponding to the target deviation degree according to a preset correspondence between deviation degree and prompt frequency.
Optionally, as shown in fig. 4D, fig. 4D is a modified structure of the face recognition apparatus depicted in fig. 4A, the apparatus may further include: the detecting unit 404 is specifically as follows:
a detection unit 404 for detecting an acceleration of the mobile terminal by an acceleration sensor; when the acceleration is lower than a third preset threshold, the step of photographing the target object is performed by the photographing unit 401.
It can be seen that the face recognition apparatus described in this embodiment of the present invention shoots a target object during the face image entry process to obtain a first image, performs face recognition on the first image to obtain a recognition result, and, if the recognition result meets the preset condition, performs a prompt operation at the specified prompt frequency. The shot image is thus checked by face recognition, and if the result indicates entry failure the user is prompted at the specified frequency. On one hand the user is not prompted too frequently; on the other hand the user is still alerted, can adjust the face entry posture in time, and face recognition efficiency is improved.
It can be understood that the functions of each program module of the face recognition apparatus in this embodiment may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the related description of the foregoing method embodiment, which is not described herein again.
As shown in fig. 5, for convenience of description, only the parts related to the embodiment of the present invention are shown; for specific technical details not disclosed, please refer to the method part of the embodiment of the present invention. The mobile terminal may be any terminal device, including a mobile phone, a tablet computer, a personal digital assistant (PDA), a point-of-sale (POS) terminal, a vehicle-mounted computer, and the like. The following takes a mobile phone as an example:
fig. 5 is a block diagram illustrating a partial structure of a mobile phone related to the mobile terminal according to an embodiment of the present invention. Referring to fig. 5, the handset includes: Radio Frequency (RF) circuit 910, memory 920, input unit 930, sensor 950, audio circuit 960, Wireless Fidelity (WiFi) module 970, application processor AP980, and power supply 990. Those skilled in the art will appreciate that the handset structure shown in fig. 5 is not limiting; the handset may include more or fewer components than shown, combine certain components, or arrange the components differently.
The following describes each component of the mobile phone in detail with reference to fig. 5:
the input unit 930 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone. Specifically, the input unit 930 may include a touch display 933, a face recognition device 931, and other input devices 932. The specific structure and composition of the face recognition device 931 can refer to the above description, and will not be described in detail herein. The input unit 930 may also include other input devices 932. In particular, other input devices 932 may include, but are not limited to, one or more of physical keys, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
Wherein, the AP980 is configured to perform the following steps:
shooting a target object in a face image input process to obtain a first image;
carrying out face recognition on the first image to obtain a recognition result;
and if the recognition result meets the preset condition, performing prompt operation according to a specified prompt frequency.
The AP980 is the control center of the mobile phone. It connects the various parts of the entire phone through various interfaces and lines, and performs the phone's functions and processes its data by running or executing software programs and/or modules stored in the memory 920 and calling data stored in the memory 920, thereby monitoring the phone as a whole. Optionally, the AP980 may include one or more processing units, which may be artificial intelligence chips or quantum chips. Preferably, the AP980 may integrate an application processor, which mainly handles the operating system, user interfaces, and applications, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the AP980.
Further, the memory 920 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The handset may also include at least one sensor 950, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the touch display screen according to the brightness of ambient light, and the proximity sensor may turn off the touch display screen and/or the backlight when the mobile phone moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
WiFi is a short-range wireless transmission technology. Through the WiFi module 970, the mobile phone can help the user receive and send e-mails, browse web pages, access streaming media, and so on, providing the user with wireless broadband Internet access. Although fig. 5 shows the WiFi module 970, it is understood that the module is not an essential part of the handset and can be omitted as needed without changing the essence of the invention.
The handset also includes a power supply 990 (e.g., a battery) for supplying power to the various components, and preferably, the power supply may be logically connected to the AP980 via a power management system, so that functions such as managing charging, discharging, and power consumption may be performed via the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which are not described herein.
In the embodiments shown in fig. 1D, fig. 1E, or fig. 2, the method flows of the steps may be implemented based on the structure of the mobile phone.
In the embodiments shown in fig. 3 and fig. 4A to fig. 4D, the functions of the units may be implemented based on the structure of the mobile phone.
An embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the face recognition methods described in the above method embodiments.
Embodiments of the present invention also provide a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to make a computer execute part or all of the steps of any one of the face recognition methods as described in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The above embodiments of the present invention are described in detail, and the principle and the implementation of the present invention are explained by applying specific embodiments, and the above description of the embodiments is only used to help understanding the method of the present invention and the core idea thereof; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.
Claims (12)
1. A mobile terminal comprising an application processor AP, and a face recognition device and a memory connected to the AP, wherein,
the face recognition device is used for shooting a target object in the face image input process to obtain a first image;
the memory is used for storing preset conditions;
the AP is configured to perform face recognition on the first image to obtain a recognition result, and includes: if the recognition result is the face integrity, when the peripheral contour of the face region of the first image is incomplete, perfecting the peripheral contour of the face region according to the symmetry principle to obtain a first target peripheral contour, determining the break points of the peripheral contour of the face region, and connecting adjacent break points to obtain a second target peripheral contour; taking the area ratio between the second target peripheral outline and the first target peripheral outline as the face integrity; if the recognition result meets the preset condition, determining a target deviation degree corresponding to the recognition result according to the recognition result and a preset recognition result; determining a designated prompt frequency corresponding to the target deviation according to a preset corresponding relation between the deviation and the prompt frequency, and performing prompt operation according to the designated prompt frequency; the specified prompt frequency is less than the face acquisition frequency.
2. The mobile terminal according to claim 1, wherein the preset condition is at least one of the following:
the recognition result is that the face integrity in the first image is lower than a first preset threshold;
or,
the recognition result is that the image quality evaluation value of the first image is lower than a second preset threshold;
or,
the recognition result is that the first image is from a non-living body.
3. The mobile terminal according to claim 1 or 2, wherein, before completing the peripheral contour of the face region according to the symmetry principle to obtain the first target peripheral contour when the peripheral contour of the face region is incomplete, the AP is further configured to:
perform image segmentation on the first image to obtain the face region;
and detect whether the peripheral contour of the face region is complete.
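The completeness check of claim 3 can be sketched as a gap test over the segmented contour: if any two consecutive contour points are farther apart than a tolerance, the contour is treated as broken at that point. The point-list representation and the gap tolerance are assumptions for illustration only.

```python
# Illustrative completeness check: a closed contour should have no large gap
# between consecutive points; a large gap marks a break point (claim 3).

import math

def contour_is_complete(contour, max_gap=2.0):
    """Return True if the closed contour has no gap larger than max_gap."""
    n = len(contour)
    for i in range(n):
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]
        if math.hypot(x2 - x1, y2 - y1) > max_gap:
            return False  # break point found: the peripheral contour is incomplete
    return True

closed = [(0, 0), (1, 0), (1, 1), (0, 1)]  # unit square, all gaps of length 1
broken = [(0, 0), (1, 0), (1, 1), (0, 5)]  # last point jumps far away
print(contour_is_complete(closed))  # True
print(contour_is_complete(broken))  # False
```

Only when this check returns False would the symmetry-based completion and break-point connection of claim 1 be performed.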
4. The mobile terminal of claim 1, wherein the mobile terminal further comprises: an acceleration sensor;
the acceleration sensor is configured to detect an acceleration of the mobile terminal; when the acceleration is lower than a third preset threshold, the face recognition device executes the step of photographing the target object.
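The gating logic of claim 4 is a simple threshold test on the sensor reading: capture only while the terminal is steady. This is a minimal sketch; the threshold value, units, and function name are assumptions, and a real implementation would read the platform's accelerometer API rather than take a float argument.

```python
# Hypothetical sketch of the acceleration gate in claim 4: the photographing
# step runs only when the device's acceleration is below the third preset
# threshold (value assumed for illustration).

ACCEL_THRESHOLD = 0.5  # "third preset threshold", arbitrary units (assumed)

def should_capture(acceleration: float, threshold: float = ACCEL_THRESHOLD) -> bool:
    """Return True when the terminal is steady enough to photograph the target."""
    return acceleration < threshold

print(should_capture(0.1))  # True: device is steady, proceed to capture
print(should_capture(2.3))  # False: device is moving, skip this frame
```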
5. A face recognition method, applied to a mobile terminal comprising an application processor (AP), and a face recognition device and a memory connected to the AP, the method comprising the following steps:
the face recognition device photographs a target object during a face image enrollment process to obtain a first image;
the memory stores a preset condition;
the AP performs face recognition on the first image to obtain a recognition result, wherein: when the recognition result includes a face integrity and the peripheral contour of the face region of the first image is incomplete, the AP completes the peripheral contour of the face region according to a symmetry principle to obtain a first target peripheral contour, determines break points of the peripheral contour of the face region, and connects adjacent break points to obtain a second target peripheral contour; the area ratio between the second target peripheral contour and the first target peripheral contour is taken as the face integrity; if the recognition result satisfies the preset condition, the AP determines a target deviation degree corresponding to the recognition result according to the recognition result and a preset recognition result, determines a specified prompt frequency corresponding to the target deviation degree according to a preset correspondence between deviation degrees and prompt frequencies, and performs a prompt operation at the specified prompt frequency; the specified prompt frequency is lower than the face acquisition frequency.
6. A face recognition method, comprising:
photographing a target object during a face image enrollment process to obtain a first image;
performing face recognition on the first image to obtain a recognition result, wherein: when the recognition result includes a face integrity and the peripheral contour of the face region of the first image is incomplete, completing the peripheral contour of the face region according to a symmetry principle to obtain a first target peripheral contour, determining break points of the peripheral contour of the face region, and connecting adjacent break points to obtain a second target peripheral contour; and taking the area ratio between the second target peripheral contour and the first target peripheral contour as the face integrity;
if the recognition result satisfies a preset condition, determining a target deviation degree corresponding to the recognition result according to the recognition result and a preset recognition result; determining a specified prompt frequency corresponding to the target deviation degree according to a preset correspondence between deviation degrees and prompt frequencies, and performing a prompt operation at the specified prompt frequency; the specified prompt frequency is lower than the face acquisition frequency.
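The deviation-to-frequency step in the claims is a table lookup: a preset correspondence maps deviation-degree ranges to prompt frequencies, and the selected frequency must stay below the face acquisition frequency. The table values, the range boundaries, and the helper names below are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch of the preset correspondence between deviation degree
# and prompt frequency described in the claims. A larger deviation maps to a
# higher (but still sub-acquisition-rate) prompt frequency.

import bisect

FACE_ACQUISITION_HZ = 10.0  # assumed face acquisition frequency

# Preset table: (upper bound of deviation-degree range, prompt frequency in Hz)
DEVIATION_TO_FREQ = [(0.2, 0.5), (0.5, 1.0), (1.0, 2.0)]

def prompt_frequency(deviation: float) -> float:
    """Pick the prompt frequency whose deviation range contains `deviation`."""
    bounds = [b for b, _ in DEVIATION_TO_FREQ]
    idx = min(bisect.bisect_left(bounds, deviation), len(bounds) - 1)
    freq = DEVIATION_TO_FREQ[idx][1]
    # Claim requirement: the specified prompt frequency is lower than the
    # face acquisition frequency.
    assert freq < FACE_ACQUISITION_HZ
    return freq

print(prompt_frequency(0.1))  # 0.5
print(prompt_frequency(0.7))  # 2.0
```

The prompt operation itself (vibration, tone, on-screen hint) is left abstract, as the claims only constrain its frequency.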
7. The method of claim 6, wherein the preset condition is at least one of the following:
the recognition result is that the face integrity in the first image is lower than a first preset threshold;
or,
the recognition result is that the image quality evaluation value of the first image is lower than a second preset threshold;
or,
the recognition result is that the first image is from a non-living body.
8. The method according to claim 6 or 7, wherein, before completing the peripheral contour of the face region according to the symmetry principle to obtain the first target peripheral contour when the peripheral contour of the face region is incomplete, the method further comprises:
performing image segmentation on the first image to obtain the face region;
and detecting whether the peripheral contour of the face region is complete.
9. The method of claim 6, further comprising:
detecting an acceleration of the mobile terminal through an acceleration sensor;
and when the acceleration is lower than a third preset threshold, executing the step of photographing the target object.
10. A face recognition apparatus, comprising:
a photographing unit, configured to photograph a target object during a face image enrollment process to obtain a first image;
a recognition unit, configured to perform face recognition on the first image to obtain a recognition result, wherein: when the recognition result includes a face integrity and the peripheral contour of the face region of the first image is incomplete, the recognition unit completes the peripheral contour of the face region according to a symmetry principle to obtain a first target peripheral contour, determines break points of the peripheral contour of the face region, and connects adjacent break points to obtain a second target peripheral contour; and takes the area ratio between the second target peripheral contour and the first target peripheral contour as the face integrity;
and a prompting unit, configured to: if the recognition result satisfies a preset condition, determine a target deviation degree corresponding to the recognition result according to the recognition result and a preset recognition result; determine a specified prompt frequency corresponding to the target deviation degree according to a preset correspondence between deviation degrees and prompt frequencies, and perform a prompt operation at the specified prompt frequency; the specified prompt frequency is lower than the face acquisition frequency.
11. A mobile terminal, comprising: an application processor (AP) and a memory; and one or more programs stored in the memory and configured to be executed by the AP, the programs comprising instructions for performing the method of any one of claims 6-9.
12. A computer-readable storage medium storing a computer program, wherein the computer program causes a computer to perform the method according to any one of claims 6-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710817259.9A CN107657218B (en) | 2017-09-12 | 2017-09-12 | Face recognition method and related product |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107657218A CN107657218A (en) | 2018-02-02 |
CN107657218B true CN107657218B (en) | 2021-03-09 |
Family
ID=61129616
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710817259.9A Expired - Fee Related CN107657218B (en) | 2017-09-12 | 2017-09-12 | Face recognition method and related product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107657218B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108090340B (en) * | 2018-02-09 | 2020-01-10 | Oppo广东移动通信有限公司 | Face recognition processing method, face recognition processing device and intelligent terminal |
CN108491798A (en) * | 2018-03-23 | 2018-09-04 | 四川意高汇智科技有限公司 | Face identification method based on individualized feature |
CN108446642A (en) * | 2018-03-23 | 2018-08-24 | 四川意高汇智科技有限公司 | A kind of Distributive System of Face Recognition |
CN108737733B (en) * | 2018-06-08 | 2020-08-04 | Oppo广东移动通信有限公司 | Information prompting method and device, electronic equipment and computer readable storage medium |
CN108985212B (en) * | 2018-07-06 | 2021-06-04 | 深圳市科脉技术股份有限公司 | Face recognition method and device |
CN110874876B (en) * | 2018-08-30 | 2022-07-05 | 阿里巴巴集团控股有限公司 | Unlocking method and device |
CN109859112B (en) * | 2018-12-21 | 2023-09-26 | 航天信息股份有限公司 | Method and system for realizing face completion |
CN110059607B (en) * | 2019-04-11 | 2023-07-11 | 深圳华付技术股份有限公司 | Living body multiplex detection method, living body multiplex detection device, computer equipment and storage medium |
CN112395436B (en) * | 2019-08-14 | 2024-07-02 | 天津极豪科技有限公司 | Bottom library input method and device |
CN112487921B (en) * | 2020-11-25 | 2023-09-08 | 奥比中光科技集团股份有限公司 | Face image preprocessing method and system for living body detection |
CN113240428B (en) * | 2021-05-27 | 2023-09-08 | 支付宝(杭州)信息技术有限公司 | Payment processing method and device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103139480A (en) * | 2013-02-28 | 2013-06-05 | 华为终端有限公司 | Image acquisition method and image acquisition device |
CN104935698A (en) * | 2015-06-23 | 2015-09-23 | 上海卓易科技股份有限公司 | Photographing method of smart terminal, photographing device and smart phone |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2428927A (en) * | 2005-08-05 | 2007-02-07 | Hewlett Packard Development Co | Accurate positioning of a time lapse camera |
CN101526997A (en) * | 2009-04-22 | 2009-09-09 | 无锡名鹰科技发展有限公司 | Embedded infrared face image identifying method and identifying device |
CN102930257B (en) * | 2012-11-14 | 2016-04-20 | 汉王科技股份有限公司 | Face identification device |
CN104780308A (en) * | 2014-01-09 | 2015-07-15 | 联想(北京)有限公司 | Information processing method and electronic device |
US10736517B2 (en) * | 2014-10-09 | 2020-08-11 | Panasonic Intellectual Property Management Co., Ltd. | Non-contact blood-pressure measuring device and non-contact blood-pressure measuring method |
CN105187719A (en) * | 2015-08-21 | 2015-12-23 | 深圳市金立通信设备有限公司 | Shooting method and terminal |
CN105611142B (en) * | 2015-09-11 | 2018-03-27 | 广东欧珀移动通信有限公司 | A kind of photographic method and device |
CN105868613A (en) * | 2016-06-08 | 2016-08-17 | 广东欧珀移动通信有限公司 | Biometric feature recognition method, biometric feature recognition device and mobile terminal |
CN106650635B (en) * | 2016-11-30 | 2019-12-13 | 厦门理工学院 | Method and system for detecting viewing behavior of rearview mirror of driver |
CN107147799A (en) * | 2017-05-31 | 2017-09-08 | 东莞市联臣电子科技股份有限公司 | A kind of electronic equipment, eyes protecting system and eye care method |
- 2017-09-12: CN application CN201710817259.9A granted as patent CN107657218B (en); legal status: not active, Expired - Fee Related
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103139480A (en) * | 2013-02-28 | 2013-06-05 | 华为终端有限公司 | Image acquisition method and image acquisition device |
CN104935698A (en) * | 2015-06-23 | 2015-09-23 | 上海卓易科技股份有限公司 | Photographing method of smart terminal, photographing device and smart phone |
Non-Patent Citations (3)
Title |
---|
Maximum Correntropy Criterion for Robust Face Recognition; Ran He et al.; IEEE Transactions on Pattern Analysis and Machine Intelligence; 31 August 2011; Vol. 33, No. 8; pp. 1561-1576 *
Effective Face Feature Extraction in a Face Recognition Attendance System; Zhang Ting et al.; Journal of Shanghai University (Natural Science Edition); 30 June 2006; Vol. 12, No. 3; pp. 244-247, 255 *
Face Recognition Based on Complete Two-Dimensional Symmetric Principal Component Analysis; Wang Lihua et al.; Computer Engineering; 30 June 2010; Vol. 36, No. 12; pp. 207-208, 212 *
Also Published As
Publication number | Publication date |
---|---|
CN107657218A (en) | 2018-02-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107657218B (en) | Face recognition method and related product | |
CN107590461B (en) | Face recognition method and related product | |
CN107292285B (en) | Iris living body detection method and related product | |
CN107609514B (en) | Face recognition method and related product | |
CN107679482B (en) | Unlocking control method and related product | |
CN107480496B (en) | Unlocking control method and related product | |
CN108985212B (en) | Face recognition method and device | |
CN107463818B (en) | Unlocking control method and related product | |
CN107403147B (en) | Iris living body detection method and related product | |
CN107862265B (en) | Image processing method and related product | |
CN107679481B (en) | Unlocking control method and related product | |
CN107451446B (en) | Unlocking control method and related product | |
CN107506687B (en) | Living body detection method and related product | |
CN107423699B (en) | Living body detection method and related product | |
CN107451454B (en) | Unlocking control method and related product | |
CN107480488B (en) | Unlocking control method and related product | |
CN107633499B (en) | Image processing method and related product | |
CN107506708B (en) | Unlocking control method and related product | |
CN107784271B (en) | Fingerprint identification method and related product | |
CN107613550B (en) | Unlocking control method and related product | |
CN107644219B (en) | Face registration method and related product | |
CN107506697B (en) | Anti-counterfeiting processing method and related product | |
CN108345848A (en) | The recognition methods of user's direction of gaze and Related product | |
WO2019001254A1 (en) | Method for iris liveness detection and related product | |
CN107451444A (en) | Unlocking control method and related product |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| CB02 | Change of applicant information | ||
| GR01 | Patent grant | ||
| CF01 | Termination of patent right due to non-payment of annual fee | ||

CB02 details:
Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province
Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.
Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province
Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

CF01 details:
Granted publication date: 20210309