CN107506708B - Unlocking control method and related product
- Publication number: CN107506708B
- Application number: CN201710693352.3A
- Authority: CN (China)
- Prior art keywords: image, face image, face, glasses, area
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Security & Cryptography (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
- Telephone Function (AREA)
- Image Processing (AREA)
Abstract
The embodiment of the invention discloses an unlocking control method and a related product. The method comprises the following steps: acquiring a face image of a target object; judging whether the target object is in a glasses wearing state according to the face image; when the target object is in the glasses wearing state, determining a glasses area from the face image; and removing the glasses area from the face image to obtain a target face image, and performing a face recognition operation according to the target face image. With the embodiment of the invention, when a user wears glasses, the face image of the user can be acquired, the part of the face image where the glasses are located can be removed, and the face images of the other areas can be used for face recognition, thereby improving the success rate of face recognition.
Description
Technical Field
The invention relates to the technical field of mobile terminals, in particular to an unlocking control method and a related product.
Background
With the widespread application of mobile terminals (mobile phones, tablet computers, etc.), the applications a mobile terminal can support keep increasing and its functions keep multiplying; mobile terminals are developing towards diversification and personalization and have become indispensable electronic products in users' lives.
At present, biometric recognition is increasingly favored by mobile terminal manufacturers, face recognition in particular. However, when the user wears glasses, the information of the human eyes is distorted by the glasses, which lowers the recognition efficiency of face recognition.
Disclosure of Invention
The embodiment of the invention provides an unlocking control method and a related product, which can improve the face recognition efficiency under the state of wearing glasses.
In a first aspect, an embodiment of the present invention provides a mobile terminal, including an application processor (AP) and a face recognition device connected to the AP, wherein:
the face recognition device is used for acquiring a face image of the target object;
the AP is used for judging whether the target object is in a glasses wearing state or not according to the face image; when the target object is in a glasses wearing state, determining a glasses area from the face image; and removing the glasses area from the face image to obtain a target face image, and performing face recognition operation according to the target face image.
In a second aspect, an embodiment of the present invention provides an unlocking control method, which is applied to a mobile terminal including an application processor AP and a face recognition device connected to the AP, and the method includes:
the face recognition device acquires a face image of a target object;
the AP judges whether the target object is in a glasses wearing state or not according to the face image; when the target object is in a glasses wearing state, determining a glasses area from the face image; and removing the glasses area from the face image to obtain a target face image, and performing face recognition operation according to the target face image.
In a third aspect, an embodiment of the present invention provides an unlocking control method, including:
acquiring a face image of a target object;
judging whether the target object is in a glasses wearing state or not according to the face image;
when the target object is in a glasses wearing state, determining a glasses area from the face image;
and removing the glasses area from the face image to obtain a target face image, and performing face recognition operation according to the target face image.
In a fourth aspect, an embodiment of the present invention provides an unlocking control apparatus, including:
an acquisition unit configured to acquire a face image of a target object;
the judging unit is used for judging whether the target object is in a glasses wearing state according to the face image;
the determining unit is used for determining a glasses area from the face image when the judging result of the judging unit is that the target object is in a glasses wearing state;
and the recognition unit is used for removing the glasses area from the face image to obtain a target face image and carrying out face recognition operation according to the target face image.
In a fifth aspect, an embodiment of the present invention provides a mobile terminal, including: an application processor AP and a memory; and one or more programs stored in the memory and configured to be executed by the AP, the programs including instructions for some or all of the steps as described in the third aspect.
In a sixth aspect, the present invention provides a computer-readable storage medium, where the computer-readable storage medium is used for storing a computer program, where the computer program is used to make a computer execute some or all of the steps described in the third aspect of the present invention.
In a seventh aspect, embodiments of the present invention provide a computer program product, where the computer program product comprises a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps as described in the third aspect of embodiments of the present invention. The computer program product may be a software installation package.
The embodiment of the invention has the following beneficial effects:
it can be seen that, in the embodiment of the present invention, the mobile terminal may obtain a face image of the target object and judge whether the target object is in a glasses wearing state according to the face image. When the target object is in the glasses wearing state, a glasses area is determined from the face image and subtracted from it to obtain the target face image; that is, the area of the face image other than the glasses area is taken as the target face image, and the face recognition operation is performed according to the target face image. In this way, when a user wears glasses, the part of the face image containing the glasses can be scratched off and the remaining areas used for face recognition, which improves the success rate of face recognition.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1A is a schematic diagram of an architecture of an exemplary mobile terminal according to an embodiment of the present invention;
fig. 1B is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention;
fig. 1C is a schematic flowchart of an unlocking control method according to an embodiment of the present invention;
FIG. 1D is a schematic diagram of a face image in a glasses wearing state according to an embodiment of the present invention;
FIG. 1E is a schematic diagram of determining a horizontal cross-sectional area based on FIG. 1D according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of another unlocking control method disclosed in the embodiment of the present invention;
fig. 3 is another schematic structural diagram of a mobile terminal according to an embodiment of the present invention;
fig. 4A is a schematic structural diagram of an unlocking control device according to an embodiment of the present invention;
fig. 4B is a schematic structural diagram of a judging unit of the unlocking control apparatus depicted in fig. 4A according to an embodiment of the present invention;
fig. 4C is a schematic structural diagram of a determining unit of the unlocking control apparatus depicted in fig. 4A according to an embodiment of the present invention;
fig. 4D is a schematic structural diagram of a judging unit of the unlocking control apparatus depicted in fig. 4A according to an embodiment of the present invention;
fig. 4E is a schematic structural diagram of an identification unit of the unlocking control apparatus depicted in fig. 4A according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of another mobile terminal disclosed in the embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," and the like in the description and claims of the present invention and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The Mobile terminal according to the embodiment of the present invention may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, and various forms of User Equipment (UE), Mobile Stations (MS), terminal devices (terminal device), and the like, which have wireless communication functions. For convenience of description, the above-mentioned devices are collectively referred to as a mobile terminal.
It should be noted that the mobile terminal in the embodiment of the present invention may be equipped with multiple biometric devices, which may include, but are not limited to: a face recognition device, a fingerprint recognition device, an iris recognition device, a vein recognition device, a brain wave recognition device, an electrocardiogram recognition device, and the like. Each biometric device has a corresponding recognition algorithm and recognition threshold, and each stores a corresponding template entered by the user in advance; for example, the face recognition device stores a corresponding preset face template. The face recognition device can then collect a face image, and when the matching value between the face image and the preset face template is greater than the corresponding recognition threshold, the recognition is passed.
The following describes embodiments of the present invention in detail. As shown in fig. 1A, an exemplary mobile terminal 1000 may include an infrared light supplement lamp 21 and an infrared camera 22. In the working process of the iris recognition device, light from the infrared light supplement lamp 21 strikes the iris and is reflected back to the infrared camera 22, and the iris recognition device thereby acquires an iris image; the front camera 23 may serve as the face recognition device.
Referring to fig. 1B, fig. 1B is a schematic structural diagram of a mobile terminal 100. The mobile terminal 100 includes an application processor AP110 and a face recognition device 120, where the AP110 is connected to the face recognition device 120 through a bus 150.
In one possible example, the mobile terminal described in fig. 1A or fig. 1B above may have the following functions:
the face recognition device 120 is configured to obtain a face image of the target object;
the AP110 is used for judging whether the target object is in a glasses wearing state according to the face image; when the target object is in a glasses wearing state, determining a glasses area from the face image; and removing the glasses area from the face image to obtain a target face image, and performing face recognition operation according to the target face image.
In one possible example, in the aspect of determining whether the target object is in a state of wearing glasses according to the face image, the AP110 is specifically configured to:
determining the eye position of the target object according to the face image;
acquiring a region image within a preset radius range by taking the position of the human eye as a center;
determining a specular reflection area from the area image;
and judging whether the target object is in a glasses wearing state according to the specular reflection area, and confirming that the target object is in the glasses wearing state when the area ratio between the specular reflection area and the face image is greater than a first preset threshold.
In one possible example, in the aspect of determining the glasses area from the face image, the AP110 is specifically configured to:
determining the maximum vertical distance corresponding to the specular reflection area;
and determining a horizontal cross section area covering the face image by taking the maximum vertical distance as the width, and taking the horizontal cross section area as the glasses area.
In one possible example, in the aspect of determining whether the target object is in a state of wearing glasses according to the face image, the AP110 is specifically configured to:
extracting the contour of the face image to obtain a face contour image;
filtering out a first contour image under a state of not wearing glasses from the face contour image to obtain a second contour image;
and judging whether the similarity between the second contour image and a preset glasses image is greater than a second preset threshold value or not, and confirming that the target object is in a glasses wearing state when the similarity between the second contour image and the preset glasses image is greater than the second preset threshold value.
In one possible example, in the aspect of determining the glasses area from the face image, the AP110 is specifically configured to:
and taking the area formed by the second contour image as the glasses area.
In a possible example, in performing the face recognition operation according to the target face image, the AP110 is specifically configured to:
determining an area ratio between the target face image and the face image;
and reducing a first face recognition threshold value according to the area ratio to obtain a second face recognition threshold value, judging whether a matching value between the target face image and a preset face template is greater than the second face recognition threshold value, and confirming that the face recognition is successful when the matching value is greater than the second face recognition threshold value.
In a possible example, the mobile terminal described in fig. 1A or fig. 1B may be configured to execute an unlocking control method as follows:
the face recognition device 120 obtains a face image of a target object;
the AP110 judges whether the target object is in a glasses wearing state according to the face image; when the target object is in a glasses wearing state, determining a glasses area from the face image; and removing the glasses area from the face image to obtain a target face image, and performing face recognition operation according to the target face image.
It can be seen that, in the embodiment of the present invention, the mobile terminal may obtain a face image of the target object and judge whether the target object is in a glasses wearing state according to the face image. When the target object is in the glasses wearing state, a glasses area is determined from the face image and subtracted from it to obtain the target face image; that is, the area of the face image other than the glasses area is taken as the target face image, and the face recognition operation is performed according to the target face image.
Fig. 1C is a schematic flowchart of an unlocking control method according to an embodiment of the present invention. The unlocking control method described in this embodiment is applied to a mobile terminal; for its physical and structural diagrams, refer to fig. 1A or fig. 1B. The method includes the following steps:
101. and acquiring a face image of the target object.
The target object can be a person, and the mobile terminal can shoot the target object through the face recognition device to obtain a face image.
102. And judging whether the target object is in a glasses wearing state or not according to the face image.
After the user wears glasses, the human eye area may be distorted to some extent. The mobile terminal can judge whether the target object is in a glasses wearing state through the face image; for details, refer to the description below.
103. And when the target object is in a glasses wearing state, determining a glasses area from the face image.
When the target object is in a glasses wearing state, the face image comprises a glasses area and a non-glasses area, and the glasses area can be determined from the face image.
Optionally, in the step 102, determining whether the target object is in a glasses wearing state according to the face image includes:
A1, determining the human eye position of the target object according to the face image;
A2, acquiring an area image within a preset radius range centered on the human eye position;
A3, determining a specular reflection area from the area image;
A4, judging whether the target object is in a glasses wearing state according to the specular reflection area, and confirming that the target object is in the glasses wearing state when the area ratio between the specular reflection area and the face image is greater than a first preset threshold.
The preset radius range can be set by the user or by system default. For example, when entering a face template, the user can first enter an image without glasses and then enter a face template while wearing glasses; the glasses image can then be segmented from the bespectacled face image, and the preset radius range can be determined according to the glasses image. Since the sizes of the glasses and of the face are proportional to the real objects, in a specific implementation the preset radius range may be determined according to this proportional relationship; for example, the width of a lens of the glasses may be taken as the preset radius range. Of course, the preset radius range is a dynamic range: the size of the captured face image varies with shooting distance, so the preset radius range can be adjusted according to the size of the face image. The first preset threshold may be set by the user or by system default.
In a specific application, the mobile terminal can determine the human eye position of the target object according to the geometric structure of the face image, obtain from the face image an area image within the preset radius range centered on that eye position, and determine the specular reflection area from the area image. It can then judge whether the target object is in a glasses wearing state according to the specular reflection area, and confirm that the target object is in the glasses wearing state when the area ratio between the specular reflection area and the face image is greater than the first preset threshold.
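As an illustration of steps A1 and A2, the sketch below crops the area image around a detected eye position. How the eye position itself is obtained (the description mentions the geometric structure of the face image) is left to an external detector; the function and parameter names are illustrative, not taken from the patent.

```python
import numpy as np

def eye_region(face: np.ndarray, eye_center: tuple[int, int], radius: int) -> np.ndarray:
    """Crop the area image within the preset radius range centered on the
    detected human eye position (steps A1-A2); clamps to the image borders."""
    cx, cy = eye_center                      # eye position from the face geometry
    h, w = face.shape[:2]
    top, bottom = max(cy - radius, 0), min(cy + radius, h)
    left, right = max(cx - radius, 0), min(cx + radius, w)
    return face[top:bottom, left:right]
```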
Alternatively, in step A3 described above, the mobile terminal may perform specular reflection area detection on the area image and thereby determine the specular reflection area in the area image.
Step A3 can be implemented based on the dark channel theory as follows. According to the dark channel theory, a dark channel region with higher brightness is more likely to be a specular reflection region; thus, detection of the specular reflection region can be realized by the following formula:

$$I^{dark}(x) = \min_{y \in \Omega(x)} \left( \min_{c \in \{r, g, b\}} I^{c}(y) \right)$$

where Ω(x) is the sub-image of the face image centered at x, x is the center coordinate of the sub-image, y ranges over the pixel coordinates within the sub-image, c denotes any channel, r, g and b are respectively the red, green and blue channels of the face image, I^dark is the dark channel image (extracted from the region image of step A2 above), and I^c is the image of channel c; high-brightness regions of the dark channel image indicate the specular reflection area.
Further, the area size of the specular reflection region can be determined according to the dark channel theory.
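As a concrete illustration of this dark-channel screening, a minimal sketch follows, assuming the region image is an 8-bit RGB NumPy array. The patch size and brightness threshold are illustrative assumptions; the patent specifies neither.

```python
import numpy as np

def dark_channel(image: np.ndarray, patch: int = 15) -> np.ndarray:
    """I_dark(x): minimum over channels c in {r, g, b} and over the
    neighborhood Omega(x) of each pixel x."""
    min_rgb = image.min(axis=2)              # per-pixel minimum over r, g, b
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    out = np.empty_like(min_rgb)
    for i in range(min_rgb.shape[0]):
        for j in range(min_rgb.shape[1]):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def specular_mask(region: np.ndarray, brightness_thresh: int = 220) -> np.ndarray:
    """High-brightness dark-channel pixels are treated as likely specular
    reflection, per the dark channel theory cited above."""
    return dark_channel(region) >= brightness_thresh
```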
Further, in step 103, determining the glasses area from the face image may include the following steps:
31. determining the maximum vertical distance corresponding to the specular reflection area;
32. and determining a horizontal cross section area covering the face image by taking the maximum vertical distance as the width, and taking the horizontal cross section area as the glasses area.
The mobile terminal may determine the maximum vertical distance corresponding to the specular reflection area, determine a horizontal cross-sectional area covering the face image according to that maximum vertical distance, and take the horizontal cross-sectional area as the glasses area; the maximum vertical distance is parallel to the symmetry axis of the face image. As shown in fig. 1D, a face image of a user wearing glasses, and further in fig. 1E, the maximum vertical distance is parallel to the symmetry axis and the horizontal cross-sectional area spans the face image; this horizontal cross-sectional area may be taken as the glasses area.
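Continuing the sketch above under the same NumPy assumptions, the specular mask fixes the rows of the horizontal cross-sectional area; the function name and error handling are illustrative.

```python
import numpy as np

def glasses_band(mask: np.ndarray) -> tuple[int, int]:
    """Top and bottom rows spanned by the specular mask: their distance is the
    maximum vertical distance, and the rows between them form the horizontal
    cross-sectional area taken as the glasses area."""
    rows = np.where(mask.any(axis=1))[0]     # rows containing specular pixels
    if rows.size == 0:
        raise ValueError("no specular reflection region found")
    return int(rows.min()), int(rows.max())
```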
Optionally, in the step 102, determining whether the target object is in a glasses wearing state according to the face image may include the following steps:
b1, extracting the contour of the face image to obtain a face contour image;
b2, filtering out the first contour image under the state of not wearing glasses from the face contour image to obtain a second contour image;
b3, judging whether the similarity between the second contour image and a preset glasses image is larger than a second preset threshold value or not, and confirming that the target object is in a glasses wearing state when the similarity between the second contour image and the preset glasses image is larger than the second preset threshold value.
The mobile terminal can extract the contour of the face image to obtain a face contour image; the contour extraction method may be one of the following: Hough transform, principal component analysis, morphological methods, and the like. The face contour image contains both the contours of the glasses and the contours the face image would have in a no-glasses state. The first contour image in the no-glasses state can therefore be filtered out of the face contour image to obtain the second contour image. Specifically, the first contour image may include the peripheral contour of the face image (the outermost contour of the face image), other contours of non-eye regions produced by image segmentation, and contours that are close to the eye region but do not belong to the glasses region. When implementing step B2, the scope of the glasses area may be defined in advance; as long as only that scope is retained, the resulting second contour image can be regarded as the glasses area. It can then be judged whether the similarity between the second contour image and a preset glasses image is greater than the second preset threshold, and when it is, the target object is confirmed to be in the glasses wearing state. The second preset threshold can be set by the user or by system default.
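The sketch below shows one way such contour filtering and similarity comparison could be realized with OpenCV, assuming the glasses scope has been pre-defined as a band of rows as discussed above; the Canny thresholds, the filtering rule, and the mapping from shape distance to a similarity score are illustrative assumptions.

```python
import cv2
import numpy as np

def glasses_similarity(face_gray: np.ndarray, template_contour: np.ndarray,
                       glasses_scope: tuple[int, int]) -> float:
    """Extract contours, keep only those inside the pre-defined glasses scope
    (filtering out the face outline and other first-contour-image content),
    and compare the largest survivor against a stored glasses template."""
    edges = cv2.Canny(face_gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    top, bottom = glasses_scope
    second = [c for c in contours
              if top <= c[:, 0, 1].min() and c[:, 0, 1].max() <= bottom]
    if not second:
        return 0.0
    second_contour = max(second, key=cv2.contourArea)
    # matchShapes returns a distance: smaller means more similar
    d = cv2.matchShapes(second_contour, template_contour, cv2.CONTOURS_MATCH_I1, 0.0)
    return 1.0 / (1.0 + d)                   # map distance to a similarity in (0, 1]
```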
Further, in step 103, the glasses area is determined from the face image, which can be implemented as follows:
and taking the area formed by the second contour image as the glasses area.
The area formed by the second contour image may be a closed area.
104. And removing the glasses area from the face image to obtain a target face image, and performing face recognition operation according to the target face image.
The target face image can be understood as the face image with the glasses area removed. The mobile terminal performs a matting operation on the face image, that is, deducts the glasses area from the face image and takes the face image outside the glasses area as the target face image; a face recognition operation can then be performed according to the target face image, that is, the target face image is matched against a preset face template.
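Under the horizontal-band reading of the glasses area, the matting operation can be sketched as dropping the band's rows; keeping the remaining rows contiguous is an illustrative choice rather than something the patent prescribes.

```python
import numpy as np

def remove_glasses_area(face: np.ndarray, band: tuple[int, int]) -> np.ndarray:
    """Deduct the glasses band from the face image; the rows outside the band
    form the target face image used for matching."""
    top, bottom = band
    return np.concatenate([face[:top], face[bottom + 1:]], axis=0)
```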
Optionally, in the step 104, performing a face recognition operation according to the target face image may include the following steps:
41. determining an area ratio between the target face image and the face image;
42. and reducing a first face recognition threshold value according to the area ratio to obtain a second face recognition threshold value, judging whether a matching value between the target face image and a preset face template is greater than the second face recognition threshold value, and confirming that the face recognition is successful when the matching value is greater than the second face recognition threshold value.
The first face recognition threshold is the recognition threshold used when the user is not wearing glasses, and is pre-stored in a memory of the mobile terminal; the preset face template can also be pre-stored in the memory. Since the area of the target face image is smaller than that of the face image, the area ratio between them lies between 0 and 1, and the first face recognition threshold can be reduced according to this area ratio to obtain the second face recognition threshold. For example, if the first face recognition threshold is A and the area ratio between the target face image and the face image is a, the second face recognition threshold can be A·a. The target face image is then matched against the preset face template to obtain a matching value; if the matching value is greater than the second face recognition threshold, the face recognition is confirmed to be successful.
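A sketch of this threshold scaling, assuming the multiplicative reduction of the A·a example above; the names and the use of plain floats are illustrative.

```python
def face_recognition_passes(match_value: float, first_threshold: float,
                            target_area: float, face_area: float) -> bool:
    """Scale the no-glasses threshold by the surviving area ratio, then compare
    the matching value against the reduced (second) threshold."""
    area_ratio = target_area / face_area         # strictly between 0 and 1
    second_threshold = first_threshold * area_ratio
    return match_value > second_threshold
```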
It can be seen that, in the embodiment of the present invention, the mobile terminal may obtain a face image of the target object and judge whether the target object is in a glasses wearing state according to the face image. When the target object is in the glasses wearing state, a glasses area is determined from the face image and subtracted from it to obtain the target face image; that is, the area of the face image other than the glasses area is taken as the target face image, and the face recognition operation is performed according to the target face image.
Fig. 2 is a schematic flowchart illustrating an unlocking control method according to an embodiment of the present invention. The unlocking control method described in this embodiment is applied to a mobile terminal, and its physical diagram and structure diagram can be referred to fig. 1A or fig. 1B, which includes the following steps:
201. and acquiring a face image of the target object.
202. And evaluating the image quality of the face image to obtain an image quality evaluation value.
When the image quality is poor, for example in a dark or overexposed environment, the process of determining the glasses wearing state is affected; therefore, image quality evaluation may first be performed on the face image to obtain an image quality evaluation value.
The preset quality threshold can be set by the user or by system default. Image quality evaluation is performed on the face image to obtain an image quality evaluation value, and the quality of the face image is judged from this value: when the image quality evaluation value is greater than or equal to the preset quality threshold, the face image quality can be considered good; when it is lower than the preset quality threshold, the face image quality can be considered poor, and image enhancement processing can be performed on the face image.
In step 202, at least one image quality evaluation index may be used to perform image quality evaluation on the face image, so as to obtain an image quality evaluation value.
In a specific implementation, when the face image is evaluated using multiple image quality evaluation indexes, each index corresponds to a weight: each index produces an evaluation result, and a weighted operation over these results yields the final image quality evaluation value. The image quality evaluation indexes may include, but are not limited to: mean, standard deviation, entropy, sharpness, signal-to-noise ratio, and the like.
It should be noted that a single evaluation index has certain limitations in evaluating image quality, so multiple image quality evaluation indexes can be used instead. Of course, more indexes are not always better: more indexes mean higher computational complexity for the evaluation process, and not necessarily a better evaluation result. Therefore, when the image quality evaluation requirement is high, 2 to 10 image quality evaluation indexes may be used. The number of indexes and which indexes are selected depend on the specific implementation situation, and the indexes may also be chosen per scene; for example, image quality evaluation in a dark environment may use different indexes than in a bright environment.
Alternatively, when the requirement on image quality evaluation accuracy is not high, a single image quality evaluation index may be used; for example, the image to be processed may be evaluated by entropy, where larger entropy indicates better image quality and smaller entropy indicates worse image quality.
Alternatively, when the requirement on image quality evaluation accuracy is high, multiple image quality evaluation indexes may be used. In that case a weight may be set for each of the indexes, each index yields an evaluation value, and the final image quality evaluation value is obtained from these values and their corresponding weights. For example, with three image quality evaluation indexes A, B and C whose weights are a1, a2 and a3, if evaluating an image with A, B and C gives evaluation values b1, b2 and b3 respectively, the final image quality evaluation value is a1·b1 + a2·b2 + a3·b3. In general, a larger image quality evaluation value indicates better image quality.
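A minimal sketch of this weighted combination; the index names in the example and the normalization check are illustrative assumptions.

```python
def quality_score(values: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted combination of per-index evaluation values, e.g.
    values = {"entropy": b1, "sharpness": b2, "snr": b3}."""
    assert abs(sum(weights.values()) - 1.0) < 1e-6   # weights assumed to sum to 1
    return sum(weights[name] * value for name, value in values.items())
```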
203. And when the image quality evaluation value is lower than a preset quality threshold value, performing image enhancement processing on the face image.
The preset quality threshold value can be set by the user or defaulted by the system.
The image enhancement processing may include, but is not limited to: image denoising (e.g., wavelet-transform denoising), image restoration (e.g., Wiener filtering), and dark-vision enhancement algorithms (e.g., histogram equalization, gray-scale stretching). After image enhancement processing is performed on the face image, the quality of the face image can be improved to some extent.
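As a sketch, two of the listed options applied in sequence with OpenCV, assuming a grayscale input; the denoising strength is an illustrative parameter.

```python
import cv2
import numpy as np

def enhance(face_gray: np.ndarray) -> np.ndarray:
    """Denoise, then equalize the histogram: two of the enhancement options
    listed above (denoising and dark-vision enhancement)."""
    denoised = cv2.fastNlMeansDenoising(face_gray, None, h=10)
    return cv2.equalizeHist(denoised)
```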
204. And judging whether the target object is in a glasses wearing state or not according to the face image subjected to the image enhancement processing.
After the image enhancement processing, the quality of the face image can be improved, and whether the target object is in a glasses wearing state or not can be judged more favorably according to the face image.
205. When the target object is in a glasses wearing state, determining a glasses area from the face image;
206. and removing the glasses area from the face image to obtain a target face image, and performing face recognition operation according to the target face image.
For the specific descriptions of steps 201, 205 and 206, refer to the corresponding steps of the unlocking control method described in fig. 1C; details are not repeated here.
It can be seen that, in the embodiment of the present invention, the mobile terminal may obtain a face image of the target object, perform image quality evaluation on the face image to obtain an image quality evaluation value, and perform image enhancement processing on the face image when the image quality evaluation value is lower than the preset quality threshold. It then judges whether the target object is in a glasses wearing state according to the enhanced face image; when the target object is in the glasses wearing state, a glasses area is determined from the face image and subtracted from it to obtain the target face image, that is, the area other than the glasses area is taken as the target face image, and the face recognition operation is performed according to the target face image. Thus, when a user wears glasses, the face image can be acquired, the part containing the area where the glasses are located can be scratched off, and the face images of the other areas can be used for face recognition, which can improve the success rate of face recognition.
Referring to fig. 3, fig. 3 is a mobile terminal according to an embodiment of the present invention, including: an application processor AP and a memory; and one or more programs stored in the memory and configured for execution by the AP, the programs including instructions for performing the steps of:
acquiring a face image of a target object;
judging whether the target object is in a glasses wearing state or not according to the face image;
when the target object is in a glasses wearing state, determining a glasses area from the face image;
and removing the glasses area from the face image to obtain a target face image, and performing face recognition operation according to the target face image.
In one possible example, in the aspect of determining whether the target object is in a state of wearing glasses from the face image, the program includes instructions for performing the steps of:
determining the eye position of the target object according to the face image;
acquiring a region image within a preset radius range by taking the position of the human eye as a center;
determining a specular reflection area from the area image;
and judging whether the target object is in a glasses wearing state according to the specular reflection area, and confirming that the target object is in the glasses wearing state when the area ratio between the specular reflection area and the face image is greater than a first preset threshold.
In one possible example, in the determining of the glasses area from the face image, the program includes instructions for:
determining the maximum vertical distance corresponding to the specular reflection area;
and determining a horizontal cross section area covering the face image by taking the maximum vertical distance as the width, and taking the horizontal cross section area as the glasses area.
In one possible example, in the aspect of determining whether the target object is in a state of wearing glasses from the face image, the program includes instructions for performing the steps of:
extracting the contour of the face image to obtain a face contour image;
filtering out a first contour image under a state of not wearing glasses from the face contour image to obtain a second contour image;
and judging whether the similarity between the second contour image and a preset glasses image is greater than a second preset threshold value or not, and confirming that the target object is in a glasses wearing state when the similarity between the second contour image and the preset glasses image is greater than the second preset threshold value.
In one possible example, in the determining of the glasses area from the face image, the program includes instructions for:
and taking the area formed by the second contour image as the glasses area.
In one possible example, in performing the face recognition operation according to the target face image, the program includes instructions for performing the steps of:
determining an area ratio between the target face image and the face image;
and reducing a first face recognition threshold value according to the area ratio to obtain a second face recognition threshold value, judging whether a matching value between the target face image and a preset face template is greater than the second face recognition threshold value, and confirming that the face recognition is successful when the matching value is greater than the second face recognition threshold value.
Referring to fig. 4A, fig. 4A is a schematic structural diagram of an unlocking control device according to the present embodiment. The unlocking control apparatus is applied to a mobile terminal, and includes an acquisition unit 401, a judgment unit 402, a determination unit 403, and an identification unit 404, wherein,
an acquisition unit 401 configured to acquire a face image of a target object;
a judging unit 402, configured to judge whether the target object is in a glasses wearing state according to the face image;
a determining unit 403, configured to determine a glasses area from the face image when the target object is in a glasses wearing state as a result of the determination by the determining unit 402;
and the identifying unit 404 is configured to remove the glasses area from the face image to obtain a target face image, and perform a face identifying operation according to the target face image.
Alternatively, as shown in fig. 4B, fig. 4B is a detailed structure of the judging unit 402 of the unlocking control device described in fig. 4A. The judging unit 402 may include a first determining module 4021, an obtaining module 4022 and a first judging module 4023, as follows:
a first determining module 4021, configured to determine a position of a human eye of the target object according to the face image;
an obtaining module 4022, configured to obtain an area image within a preset radius range with the position of the human eye as a center;
the first determining module 4021 is further specifically configured to:
determining a specular reflection area from the area image;
the first judging module 4023 is configured to judge whether the target object is in a glasses wearing state according to the specular reflection area, and confirm that the target object is in the glasses wearing state when an area ratio between the specular reflection area and the face image is greater than a first preset threshold.
Alternatively, as shown in fig. 4C, fig. 4C is a detailed structure of the determining unit 403 of the unlocking control device depicted in fig. 4A. The determining unit 403 may include a second determining module 4031 and a third determining module 4032, as follows:
a second determining module 4031, configured to determine a maximum vertical distance corresponding to the specular reflection area;
a third determining module 4032, configured to determine a horizontal cross-sectional area covering the facial image by using the maximum vertical distance as a width, and use the horizontal cross-sectional area as the glasses area.
Alternatively, as shown in fig. 4D, fig. 4D is a detailed structure of the judging unit 402 of the unlocking control device depicted in fig. 4A. The judging unit 402 may include an extracting module 4024, a filtering module 4025 and a second judging module 4026, as follows:
the extraction module 4024 is configured to perform contour extraction on the face image to obtain a face contour image;
the filtering unit 4025 is configured to filter the first contour image in a state of not wearing glasses from the face contour image to obtain a second contour image;
a second determining module 4026, configured to determine whether a similarity between the second contour image and a preset glasses image is greater than a second preset threshold, and when the similarity between the second contour image and the preset glasses image is greater than the second preset threshold, determine that the target object is in a glasses wearing state.
Further, the determining unit 403 may specifically be configured to:
and taking the area formed by the second contour image as the glasses area.
Alternatively, as shown in fig. 4E, fig. 4E is a detailed structure of the identifying unit 404 of the unlocking control device depicted in fig. 4A. The identifying unit 404 may include a fourth determining module 4041 and an identifying module 4042, as follows:
a fourth determining module 4041, configured to determine an area ratio between the target face image and the face image;
the identification module 4042 is configured to reduce a first face recognition threshold according to the area ratio to obtain a second face recognition threshold, determine whether a matching value between the target face image and a preset face template is greater than the second face recognition threshold, and confirm that face recognition is successful when the matching value is greater than the second face recognition threshold.
It can be seen that the unlocking control device described in the embodiment of the present invention can obtain a face image of a target object, judge whether the target object is in a glasses wearing state according to the face image, determine a glasses area from the face image when it is, and subtract the glasses area from the face image to obtain a target face image; that is, the area of the face image other than the glasses area is taken as the target face image, and the face recognition operation is performed according to it. Thus, when a user wears glasses, the face image can be acquired, the part containing the area where the glasses are located can be scratched off, and the face images of the other areas can be used for face recognition, which can improve the success rate of face recognition.
It can be understood that the functions of each program module of the unlocking control device in this embodiment may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the related description of the foregoing method embodiment, which is not described herein again.
As shown in fig. 5, for convenience of description, only the parts related to the embodiment of the present invention are shown; for specific technical details not disclosed, refer to the method part of the embodiment of the present invention. The mobile terminal may be any terminal device, including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, and the like; the following takes a mobile phone as an example:
fig. 5 is a block diagram illustrating a partial structure of a mobile phone related to a mobile terminal according to an embodiment of the present invention. Referring to fig. 5, the handset includes: radio Frequency (RF) circuit 910, memory 920, input unit 930, sensor 950, audio circuit 960, Wireless Fidelity (WiFi) module 970, application processor AP980, and power supply 990. Those skilled in the art will appreciate that the handset configuration shown in fig. 5 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile phone in detail with reference to fig. 5:
the input unit 930 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone. Specifically, the input unit 930 may include a touch display 933, a face recognition device 931, and other input devices 932. The specific structure and composition of the face recognition device 931 can refer to the above description, and will not be described in detail herein. The input unit 930 may also include other input devices 932. In particular, other input devices 932 may include, but are not limited to, one or more of physical keys, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
Wherein, the AP980 is configured to perform the following steps:
acquiring a face image of a target object;
judging whether the target object is in a glasses wearing state or not according to the face image;
when the target object is in a glasses wearing state, determining a glasses area from the face image;
and removing the glasses area from the face image to obtain a target face image, and performing face recognition operation according to the target face image.
The AP980 is a control center of the mobile phone, connects various parts of the entire mobile phone by using various interfaces and lines, and performs various functions and processes of the mobile phone by operating or executing software programs and/or modules stored in the memory 920 and calling data stored in the memory 920, thereby integrally monitoring the mobile phone. Optionally, the AP980 may include one or more processing units, which may be artificial intelligence chips, quantum chips; preferably, the AP980 may integrate an application processor, which mainly handles operating systems, user interfaces, applications, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the AP 980.
Further, the memory 920 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The handset may also include at least one sensor 950, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the touch display screen according to the brightness of ambient light, and the proximity sensor may turn off the touch display screen and/or the backlight when the mobile phone moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
WiFi belongs to short-distance wireless transmission technology, and the mobile phone can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 970, and provides wireless broadband Internet access for the user. Although fig. 5 shows the WiFi module 970, it is understood that it does not belong to the essential constitution of the handset, and can be omitted entirely as needed within the scope not changing the essence of the invention.
The handset also includes a power supply 990 (e.g., a battery) for supplying power to the various components, and preferably, the power supply may be logically connected to the AP980 via a power management system, so that functions such as managing charging, discharging, and power consumption may be performed via the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which are not described herein.
In the foregoing embodiment shown in fig. 1C or fig. 2, the method flow of each step may be implemented based on the structure of the mobile phone.
In the embodiments shown in fig. 3 and fig. 4A to fig. 4E, the functions of the units may be implemented based on the structure of the mobile phone.
An embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the unlocking control methods described in the above method embodiments.
Embodiments of the present invention also provide a computer program product including a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform part or all of the steps of any one of the unlock control methods as recited in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated unit, if implemented in the form of a software program module and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing associated hardware. The program may be stored in a computer-readable memory, which may include a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The embodiments of the present invention are described in detail above, and specific examples are used herein to explain the principle and implementation of the present invention; the above description of the embodiments is intended only to help in understanding the method of the present invention and its core idea. Meanwhile, a person skilled in the art may, following the idea of the present invention, make changes to the specific embodiments and the scope of application. In summary, the contents of this specification should not be construed as limiting the present invention.
Claims (14)
1. A mobile terminal, comprising an application processor (AP) and a face recognition device connected to the AP, wherein:
the face recognition device is configured to acquire a face image of a target object;
the mobile terminal is further configured to perform image quality evaluation on the face image to obtain an image quality evaluation value and, when the image quality evaluation value is lower than a preset quality threshold, to perform image enhancement processing on the face image;
the AP is configured to: determine, according to the face image after image enhancement processing, whether the target object is in a glasses-wearing state; when the target object is in the glasses-wearing state, determine a glasses area from the face image; remove the glasses area from the face image to obtain a target face image; and perform a face recognition operation according to the target face image, wherein the part of the face image outside the glasses area is the target face image;
wherein, in performing the face recognition operation according to the target face image, the AP is specifically configured to:
determine an area ratio between the target face image and the face image; and
reduce a first face recognition threshold according to the area ratio to obtain a second face recognition threshold, determine whether a matching value between the target face image and a preset face template is greater than the second face recognition threshold, and confirm that face recognition succeeds when the matching value is greater than the second face recognition threshold, wherein the first face recognition threshold is the recognition threshold used when glasses are not worn.
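For illustration only, the threshold-reduction step recited in claim 1 can be read as a simple proportional scaling. The Python sketch below assumes a linear reduction rule and invented names (second_recognition_threshold, face_recognition_succeeds); the claim itself does not fix any particular reduction function:

```python
# Hypothetical sketch of the threshold reduction in claim 1; the linear
# scaling rule is an assumption, not mandated by the claim.

def second_recognition_threshold(first_threshold: float, area_ratio: float) -> float:
    """Reduce the no-glasses (first) threshold in proportion to how much
    of the face image survives removal of the glasses area.
    area_ratio = target_face_image_area / face_image_area, in (0, 1]."""
    return first_threshold * area_ratio

def face_recognition_succeeds(match_value: float, first_threshold: float,
                              area_ratio: float) -> bool:
    """Succeed when the match against the preset face template exceeds
    the reduced (second) threshold."""
    return match_value > second_recognition_threshold(first_threshold, area_ratio)

# With a no-glasses threshold of 0.90 and 80% of the face remaining,
# the second threshold becomes 0.72, so a 0.75 match succeeds.
assert face_recognition_succeeds(0.75, 0.90, 0.80)
```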
2. The mobile terminal according to claim 1, wherein, in determining whether the target object is in a glasses-wearing state according to the face image, the AP is specifically configured to:
determine the human eye position of the target object according to the face image;
acquire a region image within a preset radius centered on the human eye position;
determine a specular reflection area from the region image; and
determine, according to the specular reflection area, whether the target object is in a glasses-wearing state, confirming that the target object is in the glasses-wearing state when the area ratio between the specular reflection area and the face image is greater than a first preset threshold.
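A minimal OpenCV sketch of the specular-reflection test of claim 2, assuming a near-saturation brightness cutoff of 240 and a circular region image; the claim does not specify how the specular reflection area is segmented:

```python
import cv2
import numpy as np

# Minimal sketch of the specular-reflection test in claim 2. The
# brightness cutoff of 240 and the circular region are assumptions; the
# claim only requires a region image around the eye position and an
# area-ratio comparison against the whole face image.

def is_wearing_glasses(gray_face: np.ndarray, eye_center, radius: int,
                       first_preset_threshold: float) -> bool:
    # Region image of the preset radius centered on the human eye
    # position; eye_center is an (x, y) pixel coordinate.
    mask = np.zeros(gray_face.shape, dtype=np.uint8)
    cv2.circle(mask, eye_center, radius, 255, thickness=-1)

    # Treat near-saturated pixels inside the region as lens glare.
    specular = (gray_face >= 240) & (mask > 0)

    # Area ratio between the specular reflection area and the face image.
    return specular.sum() / gray_face.size > first_preset_threshold
```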
3. The mobile terminal of claim 2, wherein, in determining the glasses area from the face image, the AP is specifically configured to:
determine the maximum vertical distance of the specular reflection area; and
determine a horizontal cross-sectional area spanning the face image with the maximum vertical distance as its width, and take the horizontal cross-sectional area as the glasses area.
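The horizontal-band construction of claim 3 might be sketched as follows; taking the band's vertical extent from a binary specular mask is an assumption, and the function names are invented:

```python
import numpy as np

# Illustrative sketch of claim 3: the maximum vertical extent of the
# specular mask becomes the width of a strip cut across the whole face
# image. Variable names are assumptions.

def remove_glasses_band(face: np.ndarray, specular_mask: np.ndarray) -> np.ndarray:
    rows = np.where(specular_mask.any(axis=1))[0]
    if rows.size == 0:
        return face  # no specular evidence; nothing to remove
    top, bottom = rows.min(), rows.max()  # maximum vertical distance of the specular area
    # The target face image is everything above and below the band.
    return np.concatenate([face[:top], face[bottom + 1:]], axis=0)
```

Cutting a full-width strip rather than a tight glasses outline trades some usable face area for robustness to frames that extend past the lenses.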
4. The mobile terminal according to claim 1, wherein, in determining whether the target object is in a glasses-wearing state according to the face image, the AP is specifically configured to:
perform contour extraction on the face image to obtain a face contour image;
filter out, from the face contour image, a first contour image corresponding to the no-glasses state, to obtain a second contour image; and
determine whether the similarity between the second contour image and a preset glasses image is greater than a second preset threshold, confirming that the target object is in the glasses-wearing state when the similarity is greater than the second preset threshold.
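One hedged reading of the contour route in claim 4, with Canny edges standing in for contour extraction and normalized cross-correlation for the unspecified similarity measure; both choices, and all names, are assumptions:

```python
import cv2
import numpy as np

# Illustrative sketch of claim 4. Canny edges stand in for "contour
# extraction" and normalized cross-correlation for the unspecified
# similarity measure. The preset glasses image must be no larger than
# the contour image for cv2.matchTemplate to apply.

def wearing_glasses_by_contour(gray_face: np.ndarray,
                               no_glasses_contour: np.ndarray,
                               preset_glasses_image: np.ndarray,
                               second_preset_threshold: float) -> bool:
    face_contour = cv2.Canny(gray_face, 50, 150)
    # Subtract edges the face produces even without glasses,
    # leaving the second contour image.
    second_contour = cv2.subtract(face_contour, no_glasses_contour)
    # Similarity between the second contour image and the preset glasses image.
    similarity = cv2.matchTemplate(second_contour, preset_glasses_image,
                                   cv2.TM_CCORR_NORMED).max()
    return float(similarity) > second_preset_threshold
```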
5. The mobile terminal of claim 4, wherein, in determining the glasses area from the face image, the AP is specifically configured to:
take the area formed by the second contour image as the glasses area.
6. An unlocking control method, applied to a mobile terminal comprising an application processor (AP) and a face recognition device connected to the AP, the method comprising:
acquiring, by the face recognition device, a face image of a target object;
performing, by the mobile terminal, image quality evaluation on the face image to obtain an image quality evaluation value and, when the image quality evaluation value is lower than a preset quality threshold, performing image enhancement processing on the face image;
determining, by the AP according to the face image after image enhancement processing, whether the target object is in a glasses-wearing state; when the target object is in the glasses-wearing state, determining a glasses area from the face image; removing the glasses area from the face image to obtain a target face image; and performing a face recognition operation according to the target face image, wherein the part of the face image outside the glasses area is the target face image;
wherein the performing the face recognition operation according to the target face image comprises:
determining an area ratio between the target face image and the face image; and
reducing a first face recognition threshold according to the area ratio to obtain a second face recognition threshold, determining whether a matching value between the target face image and a preset face template is greater than the second face recognition threshold, and confirming that face recognition succeeds when the matching value is greater than the second face recognition threshold, wherein the first face recognition threshold is the recognition threshold used when glasses are not worn.
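The quality gate recited in claims 1, 6, and 7 leaves both the metric and the enhancement open. The sketch below assumes variance of the Laplacian as the quality score and histogram equalization as the enhancement; these are illustrative stand-ins only:

```python
import cv2
import numpy as np

# Sketch of the quality gate in claims 1, 6 and 7. Variance of the
# Laplacian (a common sharpness score) and histogram equalization are
# assumptions standing in for "image quality evaluation" and "image
# enhancement processing".

def image_quality(gray_face: np.ndarray) -> float:
    # Sharper, better-exposed images score a higher Laplacian variance.
    return cv2.Laplacian(gray_face, cv2.CV_64F).var()

def enhance_if_needed(gray_face: np.ndarray, preset_quality_threshold: float) -> np.ndarray:
    if image_quality(gray_face) < preset_quality_threshold:
        # One possible enhancement step for dim or low-contrast captures.
        return cv2.equalizeHist(gray_face)
    return gray_face
```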
7. An unlocking control method, comprising:
acquiring a face image of a target object;
performing image quality evaluation on the face image to obtain an image quality evaluation value;
when the image quality evaluation value is lower than a preset quality threshold, performing image enhancement processing on the face image;
determining, according to the face image after image enhancement processing, whether the target object is in a glasses-wearing state;
when the target object is in the glasses-wearing state, determining a glasses area from the face image; and
removing the glasses area from the face image to obtain a target face image, and performing a face recognition operation according to the target face image, wherein the part of the face image outside the glasses area is the target face image;
wherein the performing the face recognition operation according to the target face image comprises:
determining an area ratio between the target face image and the face image; and
reducing a first face recognition threshold according to the area ratio to obtain a second face recognition threshold, determining whether a matching value between the target face image and a preset face template is greater than the second face recognition threshold, and confirming that face recognition succeeds when the matching value is greater than the second face recognition threshold, wherein the first face recognition threshold is the recognition threshold used when glasses are not worn.
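Composing the sketches above, the end-to-end flow of claim 7 might be wired as follows. specular_mask is a hypothetical helper, match_fn abstracts the unspecified template matcher, and the whole pipeline is an illustration rather than the patented implementation:

```python
import cv2
import numpy as np

# Illustrative wiring of the full method of claim 7, reusing the
# sketches above (enhance_if_needed, is_wearing_glasses,
# remove_glasses_band, face_recognition_succeeds), which are assumed to
# be in scope. specular_mask is a hypothetical helper; match_fn returns
# a match value against the preset face template.

def specular_mask(gray_face: np.ndarray, eye_center, radius: int) -> np.ndarray:
    # Boolean mask of near-saturated pixels inside the eye region.
    circle = np.zeros(gray_face.shape, dtype=np.uint8)
    cv2.circle(circle, eye_center, radius, 255, thickness=-1)
    return (gray_face >= 240) & (circle > 0)

def unlock(gray_face: np.ndarray, eye_center, radius: int,
           quality_threshold: float, first_preset_threshold: float,
           first_recognition_threshold: float, match_fn) -> bool:
    face = enhance_if_needed(gray_face, quality_threshold)
    if not is_wearing_glasses(face, eye_center, radius, first_preset_threshold):
        # No glasses detected: match the whole face at the first threshold.
        return match_fn(face) > first_recognition_threshold
    target = remove_glasses_band(face, specular_mask(face, eye_center, radius))
    area_ratio = target.size / face.size
    return face_recognition_succeeds(match_fn(target),
                                     first_recognition_threshold, area_ratio)
```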
8. The method according to claim 7, wherein the determining whether the target object is in a glasses-wearing state according to the face image comprises:
determining the human eye position of the target object according to the face image;
acquiring a region image within a preset radius centered on the human eye position;
determining a specular reflection area from the region image; and
determining, according to the specular reflection area, whether the target object is in a glasses-wearing state, and confirming that the target object is in the glasses-wearing state when the area ratio between the specular reflection area and the face image is greater than a first preset threshold.
9. The method of claim 8, wherein the determining the glasses area from the face image comprises:
determining the maximum vertical distance of the specular reflection area; and
determining a horizontal cross-sectional area spanning the face image with the maximum vertical distance as its width, and taking the horizontal cross-sectional area as the glasses area.
10. The method according to claim 7, wherein the determining whether the target object is in a glasses-wearing state according to the face image comprises:
performing contour extraction on the face image to obtain a face contour image;
filtering out, from the face contour image, a first contour image corresponding to the no-glasses state, to obtain a second contour image; and
determining whether the similarity between the second contour image and a preset glasses image is greater than a second preset threshold, and confirming that the target object is in the glasses-wearing state when the similarity is greater than the second preset threshold.
11. The method of claim 10, wherein the determining the glasses area from the face image comprises:
taking the area formed by the second contour image as the glasses area.
12. An unlocking control device, comprising:
an acquisition unit configured to acquire a face image of a target object;
the device being further configured to perform image quality evaluation on the face image to obtain an image quality evaluation value and, when the image quality evaluation value is lower than a preset quality threshold, to perform image enhancement processing on the face image;
a judging unit configured to determine, according to the face image after image enhancement processing, whether the target object is in a glasses-wearing state;
a determining unit configured to determine a glasses area from the face image when the judging unit determines that the target object is in the glasses-wearing state; and
a recognition unit configured to remove the glasses area from the face image to obtain a target face image and to perform a face recognition operation according to the target face image, wherein the part of the face image outside the glasses area is the target face image;
wherein, in performing the face recognition operation according to the target face image, the recognition unit is specifically configured to:
determine an area ratio between the target face image and the face image; and
reduce a first face recognition threshold according to the area ratio to obtain a second face recognition threshold, determine whether a matching value between the target face image and a preset face template is greater than the second face recognition threshold, and confirm that face recognition succeeds when the matching value is greater than the second face recognition threshold, wherein the first face recognition threshold is the recognition threshold used when glasses are not worn.
13. A mobile terminal, comprising: an application processor (AP) and a memory; and one or more programs stored in the memory and configured to be executed by the AP, the programs comprising instructions for performing the method of any one of claims 7-11.
14. A computer-readable storage medium storing a computer program, wherein the computer program causes a computer to perform the method according to any one of claims 7-11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---
CN201710693352.3A (granted as CN107506708B) | 2017-08-14 | 2017-08-14 | Unlocking control method and related product
Publications (2)
Publication Number | Publication Date |
---|---
CN107506708A (en) | 2017-12-22
CN107506708B (en) | 2021-03-09
Family
ID=60691633
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---
CN201710693352.3A (granted as CN107506708B, active) | Unlocking control method and related product | 2017-08-14 | 2017-08-14
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107506708B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108830062B (en) * | 2018-05-29 | 2022-10-04 | 浙江水科文化集团有限公司 | Face recognition method, mobile terminal and computer readable storage medium |
CN108875989B (en) * | 2018-06-29 | 2022-12-27 | 北京金山安全软件有限公司 | Reservation method and device based on face recognition, computer equipment and storage medium |
CN108932758A (en) * | 2018-06-29 | 2018-12-04 | 北京金山安全软件有限公司 | Sign-in method and device based on face recognition, computer equipment and storage medium |
CN111507202B (en) * | 2020-03-27 | 2023-04-18 | 北京万里红科技有限公司 | Image processing method, device and storage medium |
CN112102623A (en) * | 2020-08-24 | 2020-12-18 | 深圳云天励飞技术股份有限公司 | Traffic violation identification method and device and intelligent wearable device |
CN112733722B (en) * | 2021-01-11 | 2024-06-21 | 深圳力维智联技术有限公司 | Gesture recognition method, device, system and computer readable storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103020579B * | 2011-09-22 | 2015-11-25 | 上海银晨智能识别科技有限公司 | Face recognition method and system, and spectacle-frame removal method and device for face images |
CN104091163A * | 2014-07-19 | 2014-10-08 | 福州大学 | LBP face recognition method capable of eliminating the influence of occlusion |
CN104156700A * | 2014-07-26 | 2014-11-19 | 佳都新太科技股份有限公司 | Face image glasses removal method based on an active shape model and weighted interpolation |
CN105046250B * | 2015-09-06 | 2018-04-20 | 广州广电运通金融电子股份有限公司 | Glasses removal method for face recognition |
2017-08-14: Application CN201710693352.3A filed in China; granted as CN107506708B (status: Active).
Also Published As
Publication number | Publication date |
---|---|
CN107506708A (en) | 2017-12-22 |
Similar Documents
Publication | Title
---|---
CN107590461B (en) | Face recognition method and related product
CN107862265B (en) | Image processing method and related product
CN107609514B (en) | Face recognition method and related product
CN107480496B (en) | Unlocking control method and related product
CN107506708B (en) | Unlocking control method and related product
CN107292285B (en) | Iris living body detection method and related product
CN107463818B (en) | Unlocking control method and related product
CN107679482B (en) | Unlocking control method and related product
CN107657218B (en) | Face recognition method and related product
CN107451446B (en) | Unlocking control method and related product
CN107506687B (en) | Living body detection method and related product
CN107679481B (en) | Unlocking control method and related product
CN107403147B (en) | Iris living body detection method and related product
CN107451454B (en) | Unlocking control method and related product
CN107451449B (en) | Biometric unlocking method and related product
CN107480488B (en) | Unlocking control method and related product
CN107423699B (en) | Living body detection method and related product
WO2019024717A1 (en) | Anti-counterfeiting processing method and related product
CN107784271B (en) | Fingerprint identification method and related product
CN107633499B (en) | Image processing method and related product
CN107506697B (en) | Anti-counterfeiting processing method and related product
CN107613550B (en) | Unlocking control method and related product
CN108345848 (en) | User gaze direction recognition method and related product
WO2019001254A1 (en) | Method for iris liveness detection and related product
CN107451444 (en) | Unlocking control method and related product
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 
 | CB02 | Change of applicant information | Address after: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan, Guangdong 523860. Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd. Address before: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan, Guangdong 523860. Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.
 | GR01 | Patent grant | 