CN107609514B - Face recognition method and related product - Google Patents

Face recognition method and related product

Info

Publication number
CN107609514B
Authority
CN
China
Prior art keywords
face
display parameters
color
face image
preset
Prior art date
Legal status
Active
Application number
CN201710818693.9A
Other languages
Chinese (zh)
Other versions
CN107609514A (en)
Inventor
周海涛
王健
郭子青
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710818693.9A priority Critical patent/CN107609514B/en
Publication of CN107609514A publication Critical patent/CN107609514A/en
Priority to PCT/CN2018/102278 priority patent/WO2019052329A1/en
Application granted granted Critical
Publication of CN107609514B publication Critical patent/CN107609514B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The embodiment of the invention discloses a face recognition method and a related product. The method comprises the following steps: acquiring ambient light, and shooting a face according to the ambient light to obtain a first face image; if the color of the first face image is cast toward a designated color, adjusting display parameters of a screen to obtain target display parameters; and lighting the screen according to the target display parameters to supplement light for the face, and shooting to obtain a second face image. According to the embodiment of the invention, when a face image shot under ambient light shows color cast, the display parameters of the screen can be adjusted and the screen can then be used to supplement light for the face, so that the face image shot afterwards exhibits as little color cast as possible. This improves the quality of the face image and, in turn, the efficiency of face unlocking.

Description

Face recognition method and related product
Technical Field
The invention relates to the technical field of mobile terminals, in particular to a face recognition method and a related product.
Background
With the widespread use of mobile terminals (mobile phones, tablet computers, etc.), the applications they can support and the functions they provide keep increasing. Mobile terminals are developing towards diversification and individuation, and have become indispensable electronic products in users' lives.
At present, face unlocking is increasingly favored by mobile terminal manufacturers because it does not require the user to touch the mobile terminal: a face image can be acquired from a distance, which makes acquisition very convenient. Acquiring the face image is the key step of face unlocking, and the quality of the face image directly determines whether unlocking succeeds, so how to improve the efficiency of face image acquisition is a problem that urgently needs to be solved.
Disclosure of Invention
The embodiment of the invention provides a face recognition method and a related product, aiming at improving face recognition efficiency when the captured face image suffers from color cast.
In a first aspect, an embodiment of the present invention provides a mobile terminal, including an Application Processor (AP), and a face recognition device connected to the AP, wherein,
the face recognition device is used for acquiring ambient light and shooting a face according to the ambient light to obtain a first face image;
the AP is used for adjusting the display parameters of the screen to obtain target display parameters if the color of the first face image is color cast in a designated color; and lighting the screen according to the target display parameters to supplement light for the human face;
the face recognition device is used for shooting to obtain a second face image.
In a second aspect, an embodiment of the present invention provides a face recognition method, which is applied to a mobile terminal including an application processor AP and a face recognition device connected to the AP, where the method includes:
the face recognition device acquires ambient light and shoots a face according to the ambient light to obtain a first face image;
when the color of the first face image is color cast to the designated color, the AP adjusts the display parameters of the screen to obtain target display parameters, and lights the screen according to the target display parameters to supplement light for the human face;
and the face recognition device shoots to obtain a second face image.
In a third aspect, an embodiment of the present invention provides a face recognition method, including:
acquiring ambient light, and shooting a face according to the ambient light to obtain a first face image;
if the color of the first face image is color cast in the designated color, adjusting display parameters of a screen to obtain target display parameters;
and lighting the screen according to the target display parameters to supplement light for the face, and shooting to obtain a second face image.
In a fourth aspect, an embodiment of the present invention provides a face recognition apparatus, including:
the first shooting unit is used for acquiring ambient light and shooting a face according to the ambient light to obtain a first face image;
the adjusting unit is used for adjusting the display parameters of the screen to obtain target display parameters if the color of the first face image is color cast in the designated color;
and the second shooting unit is used for lighting the screen according to the target display parameters so as to supplement light to the human face, and shooting to obtain a second human face image.
In a fifth aspect, an embodiment of the present invention provides a mobile terminal, including: an application processor AP and a memory; and one or more programs stored in the memory and configured to be executed by the AP, the programs including instructions for performing some or all of the steps described in the third aspect.
In a sixth aspect, an embodiment of the present invention provides a computer-readable storage medium for storing a computer program, where the computer program causes a computer to execute some or all of the steps described in the third aspect of the embodiments of the present invention.
In a seventh aspect, embodiments of the present invention provide a computer program product, where the computer program product comprises a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps as described in the third aspect of embodiments of the present invention. The computer program product may be a software installation package.
The embodiment of the invention has the following beneficial effects:
It can be seen that, with the face recognition method described in the embodiment of the present invention, ambient light can be acquired and a face can be shot under the ambient light to obtain a first face image. If the color of the first face image is cast toward a designated color, the display parameters of the screen are adjusted to obtain target display parameters, the screen is lit according to the target display parameters to supplement light for the face, and a second face image is shot. In this way, when a face image shot under ambient light shows color cast, the display parameters of the screen can be adjusted and the screen can be used to supplement light for the face, so that the newly shot face image exhibits as little color cast as possible. This improves the quality of the face image and, in turn, the efficiency of face unlocking.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1A is a schematic diagram of an architecture of an exemplary mobile terminal according to an embodiment of the present invention;
fig. 1B is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention;
fig. 1C is a schematic flow chart of a face recognition method disclosed in the embodiment of the present invention;
fig. 1D is another schematic flow chart of a face recognition method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of another face recognition method disclosed in the embodiment of the present invention;
fig. 3 is another schematic structural diagram of a mobile terminal according to an embodiment of the present invention;
fig. 4A is a schematic structural diagram of a face recognition apparatus according to an embodiment of the present invention;
FIG. 4B is a schematic diagram of another structure of the face recognition apparatus depicted in FIG. 4A according to an embodiment of the present invention;
fig. 4C is a schematic structural diagram of an adjusting unit of the face recognition apparatus depicted in fig. 4B according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of another mobile terminal disclosed in the embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The terms "first," "second," and the like in the description and claims of the present invention and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The mobile terminal according to the embodiments of the present invention may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem that have wireless communication functions, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and the like. For convenience of description, the above-mentioned devices are collectively referred to as mobile terminals.
The following describes embodiments of the present invention in detail. As shown in fig. 1A, in an exemplary mobile terminal 1000, the face recognition device of the mobile terminal 1000 may include a front camera 21, which may be at least one of: an infrared camera, a dual camera, a visible light camera, and the like. The dual camera may be at least one of the following: an infrared camera plus a visible light camera, two visible light cameras, and the like. A face image can be collected through the face recognition device. In the face recognition process, the front camera may have a zoom function and can shoot the same target at different focal lengths to obtain multiple images, and the target may be a human face.
Referring to fig. 1B, fig. 1B is a schematic structural diagram of a mobile terminal 100. The mobile terminal 100 includes an application processor (AP) 110 and a face recognition device 130, where the AP 110 is connected to the face recognition device 130 through a bus 150.
The mobile terminal described in fig. 1A or fig. 1B may be configured to implement the following functions:
the face recognition device 130 is configured to acquire ambient light, and capture a face according to the ambient light to obtain a first face image;
the AP110 is configured to adjust a display parameter of a screen to obtain a target display parameter if the color of the first face image is color cast in a designated color, and to light the screen according to the target display parameters to supplement light for the human face;
the face recognition device 130 is configured to perform shooting to obtain a second face image.
Optionally, the AP110 is further specifically configured to:
performing spectrum analysis on the first face image to obtain a color component of the first face image;
comparing the color components with preset color components to obtain a color deviation degree;
and when the color deviation degree is in a preset range, confirming that the color of the first face image is deviated from the designated color.
Optionally, in terms of adjusting the display parameters of the screen to obtain the target display parameters, the AP110 is specifically configured to:
and determining the target display parameters corresponding to the color deviation according to a mapping relation between a preset deviation and the display parameters of the screen.
Optionally, in terms of adjusting the display parameters of the screen, the AP110 is specifically configured to:
acquiring a first color spectrogram of the ambient light;
acquiring a preset second color spectrogram;
determining a spectral difference map between the second color spectrogram and the first color spectrogram;
and determining display parameters corresponding to the frequency spectrum difference map as the target display parameters.
Optionally, in the aspect of lighting the screen according to the target display parameter, the AP110 is specifically configured to:
obtaining pre-stored display parameters of N pieces of wallpaper to obtain N groups of display parameters, wherein N is an integer greater than 1;
selecting a group of display parameters closest to the target display parameters from the N groups of display parameters, and acquiring wallpaper corresponding to the closest group of display parameters to obtain target wallpaper;
and lighting the screen according to the target wallpaper.
Further optionally, based on the mobile terminal described in fig. 1A or fig. 1B, a face recognition method described in the following may be executed, specifically as follows:
the face recognition device 130 acquires ambient light, and shoots a face according to the ambient light to obtain a first face image;
when the color of the first face image is color cast to the designated color, the AP110 adjusts the display parameters of the screen to obtain target display parameters, and lights the screen according to the target display parameters to supplement light for the human face;
the face recognition device 130 performs shooting to obtain a second face image.
It can be seen that, with the face recognition method described in the embodiment of the present invention, ambient light can be acquired and a face can be shot under the ambient light to obtain a first face image. If the color of the first face image is cast toward a designated color, the display parameters of the screen are adjusted to obtain target display parameters, the screen is lit according to the target display parameters to supplement light for the face, and a second face image is shot. In this way, when a face image shot under ambient light shows color cast, the display parameters of the screen can be adjusted and the screen can be used to supplement light for the face, so that the newly shot face image exhibits as little color cast as possible. This improves the quality of the face image and, in turn, the efficiency of face unlocking.
Fig. 1C is a schematic flow chart of an embodiment of a face recognition method according to an embodiment of the present invention. The face recognition method described in this embodiment is applied to a mobile terminal including a face recognition device and an application processor AP, and its physical diagram and structure diagram can be seen in fig. 1A or fig. 1B, which includes the following steps:
101. and acquiring ambient light, and shooting the face according to the ambient light to obtain a first face image.
Since different light-emitting objects and differently reflected light exist in the environment, the face recognition device can capture this ambient light and, with the face illuminated by it, shoot the face to obtain the first face image. Because light in the environment is complex, the face image may exhibit color cast. Color cast is especially likely in complex environments, for example in a KTV environment, a scotopic (dim-light) environment, or an over-exposure environment.
102. And if the color of the first face image is color cast in the designated color, adjusting the display parameters of the screen to obtain the target display parameters.
The designated color may be one of: red, green, blue, etc. If the color of the first face image is cast toward the designated color, the display parameters of the screen can be adjusted to neutralize the color cast, and shooting is then carried out again, so that the color cast in the newly obtained face image is relieved. The first face image being cast toward the designated color may be understood as the color of the entire first face image being cast toward the designated color, or the color of the face region in the first face image being cast toward the designated color.
Optionally, the display parameters of the screen may include at least one of: the color temperature of the screen, the brightness of the screen, the color of the screen, the resolution of the screen, and the like. For example, when the color of the first face image is cast toward the designated color, the color temperature and color of the screen may be adjusted.
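By way of illustration only (none of the following names appear in the patent), these adjustable parameters could be grouped into a simple structure; a minimal Python sketch, assuming a normalized brightness level and an RGB screen color:

```python
from dataclasses import dataclass

@dataclass
class DisplayParameters:
    """Illustrative container for the screen display parameters named above.
    All field names are assumptions for this sketch; the patent only states that
    color temperature, brightness, color and resolution may be adjusted."""
    color_temperature_k: float   # correlated color temperature in kelvin
    brightness: float            # normalized backlight level, 0.0-1.0
    color_rgb: tuple             # dominant screen color as (R, G, B), 0-255
    resolution: tuple            # (width, height) in pixels

# Example target parameters biasing the screen toward a warmer, brighter fill light.
target = DisplayParameters(color_temperature_k=4500.0, brightness=0.8,
                           color_rgb=(255, 235, 210), resolution=(1080, 1920))
```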
Optionally, in the step 102, adjusting the display parameters of the screen may include the following steps:
21. acquiring a first color spectrogram of the ambient light;
22. acquiring a preset second color spectrogram;
23. determining a spectral difference map between the second color spectrogram and the first color spectrogram;
24. and determining display parameters corresponding to the frequency spectrum difference map as the target display parameters.
The first color spectrogram is obtained by capturing the ambient light with the face recognition device and analyzing it. The preset second color spectrogram may be a color spectrogram under non-color-cast conditions. A spectrum difference map between the second color spectrogram and the first color spectrogram can then be determined, for example by performing a difference operation, or a difference followed by an absolute-value operation, between the two spectrograms. A correspondence between spectrum difference maps and display parameters can be pre-stored in the mobile terminal; after the spectrum difference map is determined, the display parameters corresponding to it can be determined according to this correspondence and used as the target display parameters.
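A minimal sketch of steps 21-24, assuming a "color spectrogram" is represented as normalized per-channel histograms and that the pre-stored correspondence from spectrum difference maps to display parameters is queried by nearest match; all function and variable names are illustrative assumptions, not terms from the patent:

```python
import numpy as np

def color_spectrogram(image_bgr, bins=32):
    """Approximate a 'color spectrogram' as normalized per-channel histograms."""
    hists = [np.histogram(image_bgr[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    spec = np.stack(hists).astype(float)
    return spec / spec.sum()

def target_display_parameters(ambient_frame, reference_spec, correspondence):
    """Steps 21-24: build the spectrum difference map and look up display parameters.
    correspondence: list of (difference_map, display_parameters) pairs assumed to be
    pre-stored on the terminal, as described in the paragraph above."""
    first_spec = color_spectrogram(ambient_frame)       # step 21: spectrogram of ambient light
    diff_map = np.abs(reference_spec - first_spec)      # steps 22-23: absolute difference
    # step 24: the nearest pre-stored difference map decides the target display parameters
    best = min(correspondence, key=lambda item: np.linalg.norm(item[0] - diff_map))
    return best[1]
```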
103. And lighting the screen according to the target display parameters to supplement light for the face, and shooting to obtain a second face image.
The screen can be lit according to the target display parameters, and the face is then supplemented with light through the screen; at this point the screen and the ambient light illuminate the face together, and the face recognition device shoots the face to obtain the second face image.
Optionally, in step 103, lighting the screen according to the target display parameter may include the following steps:
31. obtaining pre-stored display parameters of N pieces of wallpaper to obtain N groups of display parameters, wherein N is an integer greater than 1;
32. selecting a group of display parameters closest to the target display parameters from the N groups of display parameters, and acquiring wallpaper corresponding to the closest group of display parameters to obtain target wallpaper;
33. and lighting the screen according to the target wallpaper.
N pieces of wallpaper can be stored in the mobile terminal in advance, each piece of wallpaper corresponding to one group of display parameters, so that N groups of display parameters are obtained, where N is an integer larger than 1.
For example, in the process of performing step 32, one of the N sets of display parameters having a color temperature closest to the target display parameter may be selected, and the wallpaper corresponding to the closest set of display parameter, that is, the target wallpaper, may be obtained, and the screen may be lit to display the target wallpaper.
For example, in the process of performing step 32, one of the N sets of display parameters having a color closest to the target display parameter may be selected, and the wallpaper corresponding to the closest set of display parameter, that is, the target wallpaper, may be obtained, and the screen may be lit to display the target wallpaper.
For another example, a weight may be set for each display parameter, and a display effect value may then be computed for the target display parameters (a first target display effect value) and for each of the N groups of display parameters, yielding N display effect values. The display effect value closest to the first target display effect value is selected from the N display effect values as a second target display effect value, and the wallpaper corresponding to it is taken as the target wallpaper.
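A sketch of steps 31-33 under the weighted "display effect value" reading described above; the parameter names, weights, and example wallpapers are assumptions for illustration:

```python
def pick_target_wallpaper(wallpapers, target_params, weights):
    """Steps 31-33 as a sketch: choose the pre-stored wallpaper whose display
    parameters are closest to the target display parameters.

    wallpapers: list of dicts such as {"name": ..., "color_temperature": ..., "brightness": ...}
    target_params: dict with the same numeric keys
    weights: per-parameter weights used to form a single 'display effect value'
    """
    def effect_value(params):
        # weighted sum over the numeric display parameters (assumed scoring rule)
        return sum(weights[k] * params[k] for k in weights)

    target_effect = effect_value(target_params)                  # first target display effect value
    best = min(wallpapers, key=lambda w: abs(effect_value(w) - target_effect))
    return best                                                  # target wallpaper to light the screen with

# Hypothetical usage:
wallpapers = [
    {"name": "warm_sunset", "color_temperature": 4200, "brightness": 0.7},
    {"name": "cool_sky",    "color_temperature": 6800, "brightness": 0.9},
]
target = {"color_temperature": 4500, "brightness": 0.8}
chosen = pick_target_wallpaper(wallpapers, target,
                               {"color_temperature": 1.0, "brightness": 100.0})
```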
Optionally, as shown in fig. 1D, fig. 1D is another embodiment of the face recognition method described in fig. 1C according to the embodiment of the present invention, and compared with the face recognition method described in fig. 1C, the method may further include the following steps:
104. and matching the second face image with a preset face template, and executing unlocking operation when the second face image is successfully matched with the preset face template.
The preset face template may be stored in advance, before step 101 is executed, by acquiring a face image of the user through the face recognition device; the preset face template may be stored in a face template library.
Optionally, in the process of executing step 104, the second face image is matched with the preset face template. When the matching value between the second face image and the preset face template is greater than a face recognition threshold, the matching is considered successful and the subsequent unlocking process is executed. When the matching value is less than or equal to the face recognition threshold, the whole face recognition process may be ended, or the user may be prompted to perform face recognition again.
Specifically, in the process of executing step 104, feature extraction may be performed on the second face image and on the preset face template, and feature matching may be performed on the extracted features. The feature extraction may be implemented by the following algorithms: the Harris corner detection algorithm, Scale Invariant Feature Transform (SIFT), the SUSAN corner detection algorithm, and the like, which are not described again here. When step 104 is executed, the face image may first be preprocessed; the preprocessing may include, but is not limited to, image enhancement, binarization, smoothing, and conversion of a color image into a grayscale image. Feature extraction is then performed on the preprocessed second face image to obtain a feature set of the face image. Next, at least one face template is selected from the face template library; the face template may be an original face image or a set of features. The feature set of the face image is matched against the feature set of the face template to obtain a matching result, and whether the matching succeeds is judged according to the matching result.
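A sketch of this preprocessing and feature-matching flow, using OpenCV's SIFT (one of the feature extraction algorithms named above) and a Lowe ratio test; the match score, ratio, and face recognition threshold values are assumptions, since the patent does not fix them:

```python
import cv2

FACE_RECOGNITION_THRESHOLD = 0.35   # assumed threshold; the patent does not specify a value

def preprocess(image_bgr):
    """Preprocessing named above: grayscale conversion, enhancement, smoothing."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)               # simple image enhancement
    return cv2.GaussianBlur(gray, (3, 3), 0)    # smoothing

def match_value(second_face_bgr, template_bgr):
    """Extract SIFT features from both images and return a normalized match value."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(preprocess(second_face_bgr), None)
    kp2, des2 = sift.detectAndCompute(preprocess(template_bgr), None)
    if des1 is None or des2 is None:
        return 0.0
    pairs = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]  # Lowe ratio test
    return len(good) / max(len(kp1), 1)

def try_unlock(second_face_bgr, template_bgr):
    """Step 104: unlock only when the match value exceeds the face recognition threshold."""
    return match_value(second_face_bgr, template_bgr) > FACE_RECOGNITION_THRESHOLD
```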
When the matching value between the second face image and the preset face template is greater than the face recognition threshold, the next unlocking process may be executed. The next unlocking process may include, but is not limited to: unlocking to enter the main page, unlocking to enter a designated page of an application, or entering the next biometric recognition step.
Optionally, in the step 104, matching the second face image with a preset face template may include the following steps:
d1, performing multi-scale decomposition on the second face image by adopting a multi-scale decomposition algorithm to obtain a first high-frequency component image of the second face image, and performing feature extraction on the first high-frequency component image to obtain a first feature set;
d2, performing multi-scale decomposition on the preset face template by adopting the multi-scale decomposition algorithm to obtain a second high-frequency component image of the preset face template, and performing feature extraction on the second high-frequency component image to obtain a second feature set;
d3, screening the first characteristic set and the second characteristic set to obtain a first stable characteristic set and a second stable characteristic set;
d4, performing feature matching on the first stable feature set and the second stable feature set, and confirming that the second face image is successfully matched with a preset face template when the number of matched feature points between the first stable feature set and the second stable feature set is greater than a preset quantity threshold.
A multi-scale decomposition algorithm may be used to decompose the second face image into a low-frequency component image and a plurality of high-frequency component images, and the first high-frequency component image may be one of the plurality of high-frequency component images. The multi-scale decomposition algorithm may include, but is not limited to: wavelet transform, Laplacian transform, Contourlet Transform (CT), Non-Subsampled Contourlet Transform (NSCT), shearlet transform, and the like. Taking the Contourlet transform as an example, multi-scale decomposition of the face image by the Contourlet transform yields a low-frequency component image and a plurality of high-frequency component images; taking the NSCT as an example, multi-scale decomposition of the face image by the NSCT yields a low-frequency component image and a plurality of high-frequency component images, where each of the plurality of high-frequency component images has the same size. A high-frequency component image contains more detail information of the original image. Similarly, the multi-scale decomposition algorithm may be used to decompose the preset face template into a low-frequency component image and a plurality of high-frequency component images, and the second high-frequency component image may be one of them. The first high-frequency component image corresponds in position to the second high-frequency component image, that is, they occupy the same decomposition level and the same scale; for example, if the first high-frequency component image is located at the 2nd level and the 3rd scale, the second high-frequency component image is also located at the 2nd level and the 3rd scale. In step D3, the first feature set and the second feature set are screened to obtain the first stable feature set and the second stable feature set. The screening may be implemented as follows: the first feature set may include a plurality of feature points and the second feature set also includes a plurality of feature points; each feature point is a vector with a magnitude and a direction, so the modulus of each feature point can be calculated, and a feature point is retained if its modulus is greater than a certain threshold. In this way the feature points are screened. Steps D1-D4 mainly match the fine features between the second face image and the preset face template, which can improve the accuracy of face recognition; in general, finer features are more difficult to forge, so the security of face unlocking is also improved.
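A sketch of steps D1-D4 under stated assumptions: a one-level wavelet transform (one of the multi-scale decompositions listed above) stands in for the decomposition, SIFT descriptors stand in for the feature point vectors, the "modulus" screening is applied to descriptor norms, and both thresholds are placeholder values:

```python
import cv2
import numpy as np
import pywt  # PyWavelets; the wavelet transform is one of the decompositions listed above

MODULUS_THRESHOLD = 100.0     # assumed screening threshold for the feature modulus
MATCH_COUNT_THRESHOLD = 20    # assumed "preset quantity threshold"

def high_frequency_component(gray):
    """One-level wavelet decomposition; keep one high-frequency (detail) sub-band."""
    _, (horizontal, _vertical, _diagonal) = pywt.dwt2(gray.astype(float), "haar")
    band = cv2.normalize(horizontal, None, 0, 255, cv2.NORM_MINMAX)
    return band.astype(np.uint8)

def stable_features(gray):
    """Steps D1/D2 + D3: extract features on the high-frequency image and keep only
    the 'stable' ones whose modulus exceeds the threshold."""
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(high_frequency_component(gray), None)
    if descriptors is None:
        return np.empty((0, 128), dtype=np.float32)
    keep = np.linalg.norm(descriptors, axis=1) > MODULUS_THRESHOLD
    return descriptors[keep]

def matches_template(face_gray, template_gray):
    """Step D4: match the two stable feature sets and compare the match count."""
    d1, d2 = stable_features(face_gray), stable_features(template_gray)
    if len(d1) == 0 or len(d2) == 0:
        return False
    matches = cv2.BFMatcher().match(d1, d2)
    return len(matches) > MATCH_COUNT_THRESHOLD
```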
Optionally, between the step 103 and the step 104, the following steps may be further included:
and carrying out image enhancement processing on the second face image.
The image enhancement processing may include, but is not limited to: image denoising (e.g., wavelet-transform denoising), image restoration (e.g., Wiener filtering), and dark-vision enhancement algorithms (e.g., histogram equalization, gray-scale stretching, etc.). After the image enhancement processing is performed on the face image, the quality of the face image can be improved to some extent.
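A sketch of the enhancement options listed above, combining non-local-means denoising with histogram equalization of the luminance channel as the dark-vision enhancement; the parameter values are assumptions:

```python
import cv2

def enhance_face_image(image_bgr):
    """Illustrative enhancement chain: denoise, then equalize the luminance channel."""
    denoised = cv2.fastNlMeansDenoisingColored(image_bgr, None, 5, 5, 7, 21)
    ycrcb = cv2.cvtColor(denoised, cv2.COLOR_BGR2YCrCb)
    ycrcb[..., 0] = cv2.equalizeHist(ycrcb[..., 0])   # equalize luminance only
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```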
It can be seen that, with the face recognition method described in the embodiment of the present invention, ambient light can be acquired and a face can be shot under the ambient light to obtain a first face image. If the color of the first face image is cast toward a designated color, the display parameters of the screen are adjusted to obtain target display parameters, the screen is lit according to the target display parameters to supplement light for the face, and a second face image is shot. In this way, when a face image shot under ambient light shows color cast, the display parameters of the screen can be adjusted and the screen can be used to supplement light for the face, so that the newly shot face image exhibits as little color cast as possible. This improves the quality of the face image and, in turn, the efficiency of face unlocking.
In accordance with the above, please refer to fig. 2, which is a flowchart illustrating an embodiment of a face recognition method according to an embodiment of the present invention. The face recognition method described in this embodiment is applied to a mobile terminal including a face recognition device and an application processor AP, and its physical diagram and structure diagram can be seen in fig. 1A or fig. 1B, which includes the following steps:
201. acquiring ambient light, and shooting a face according to the ambient light to obtain a first face image;
202. and carrying out spectrum analysis on the first face image to obtain the color component of the first face image.
In step 202, performing spectrum analysis on the first face image may include: performing spectrum analysis on the entire first face image; or performing spectrum analysis on the face region in the first face image. In this way, the color components of the first face image can be obtained.
203. And comparing the color components with preset color components to obtain the color deviation.
The color components reflect, to a certain extent, whether the image is color cast. The preset color components may be pre-stored in the mobile terminal and correspond to the non-color-cast case. The color components of the first face image can be compared with the preset color components to obtain the color deviation degree. Since color components can be expressed as ratios, the color deviation degree is easy to calculate.
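A sketch of steps 202-204 (determining the deviation degree and whether it falls in the preset range), assuming the color components are expressed as per-channel ratios and that the preset components and preset range take placeholder values:

```python
import numpy as np

PRESET_COMPONENTS = np.array([0.33, 0.33, 0.34])   # assumed non-color-cast R/G/B ratios
PRESET_RANGE = (0.05, 1.0)                         # assumed "preset range" for the deviation degree

def color_components(face_bgr):
    """Step 202: express the color components as per-channel ratios of total intensity."""
    means = face_bgr.reshape(-1, 3).mean(axis=0)[::-1]   # BGR -> RGB channel means
    return means / means.sum()

def color_deviation(face_bgr):
    """Step 203: compare against the preset components to get the color deviation degree."""
    return float(np.abs(color_components(face_bgr) - PRESET_COMPONENTS).sum())

def is_color_cast(face_bgr):
    """Step 204 (first half): the image is treated as cast toward the designated color
    when the deviation degree falls inside the preset range."""
    low, high = PRESET_RANGE
    return low <= color_deviation(face_bgr) <= high
```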
204. And when the color deviation degree is in a preset range, confirming that the color of the first face image deviates to a designated color, and adjusting display parameters of a screen to obtain target display parameters.
The preset range can be set by default by the system, or set by the user. When the color deviation degree is within the preset range, it is confirmed that the color of the first face image is cast toward the designated color, and the display parameters of the screen are then adjusted to obtain the target display parameters.
Optionally, in step 204, the display parameters of the screen and the target display parameters may be adjusted as follows:
and determining the target display parameters corresponding to the color deviation according to a mapping relation between a preset deviation and the display parameters of the screen.
Each deviation degree corresponds to a set of screen display parameters, which can be obtained in advance through experiments. Therefore, before the embodiment of the present invention is implemented, a mapping relation between preset deviation degrees and display parameters of the screen can be obtained, and the target display parameters corresponding to the color deviation degree in step 203 are determined according to this mapping relation. In this way, different screen display parameters can be used under different color cast conditions, so that the screen supplements light to the face in a targeted manner and the acquisition efficiency of the face image is improved.
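A sketch of the mapping lookup described above; the deviation-degree intervals and the display parameter values are placeholder assumptions, not values from the patent:

```python
# Assumed pre-stored mapping between deviation-degree intervals and screen display parameters.
DEVIATION_TO_DISPLAY = [
    ((0.05, 0.10), {"color_temperature": 5200, "brightness": 0.6}),
    ((0.10, 0.20), {"color_temperature": 4600, "brightness": 0.8}),
    ((0.20, 1.00), {"color_temperature": 4000, "brightness": 1.0}),
]

def lookup_target_display_parameters(deviation_degree):
    """Step 204 (second half): read the target display parameters from the mapping."""
    for (low, high), params in DEVIATION_TO_DISPLAY:
        if low <= deviation_degree < high:
            return params
    return None   # deviation outside the stored mapping; no adjustment derived
```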
205. And lighting the screen according to the target display parameters to supplement light for the face, and shooting to obtain a second face image.
The specific description of the steps 201 and 205 may refer to the corresponding steps of the face recognition method described in fig. 1C, and will not be described herein again.
It can be seen that, with the face recognition method described in the embodiment of the present invention, ambient light can be acquired and a face can be shot under the ambient light to obtain a first face image. If the color of the first face image is cast toward a designated color, the display parameters of the screen are adjusted to obtain target display parameters, the screen is lit according to the target display parameters to supplement light for the face, and a second face image is shot. In this way, when a face image shot under ambient light shows color cast, the display parameters of the screen can be adjusted and the screen can be used to supplement light for the face, so that the newly shot face image exhibits as little color cast as possible. This improves the quality of the face image and, in turn, the efficiency of face unlocking.
Referring to fig. 3, fig. 3 is a mobile terminal according to an embodiment of the present invention, including: an application processor AP and a memory; and one or more programs stored in the memory and configured for execution by the AP, the programs including instructions for performing the steps of:
acquiring ambient light, and shooting a face according to the ambient light to obtain a first face image;
if the color of the first face image is color cast in the designated color, adjusting display parameters of a screen to obtain target display parameters;
and lighting the screen according to the target display parameters to supplement light for the face, and shooting to obtain a second face image.
In one possible example, the program further comprises instructions for performing the steps of:
performing spectrum analysis on the first face image to obtain a color component of the first face image;
comparing the color components with preset color components to obtain a color deviation degree;
and when the color deviation degree is in a preset range, confirming that the color of the first face image is deviated from the designated color.
In one possible example, in terms of adjusting the display parameters of the screen to obtain the target display parameters, the program includes instructions for performing the following step:
and determining the target display parameters corresponding to the color deviation according to a mapping relation between a preset deviation and the display parameters of the screen.
In one possible example, in terms of adjusting display parameters of the screen, the program includes instructions for performing the steps of:
acquiring a first color spectrogram of the ambient light;
acquiring a preset second color spectrogram;
determining a spectral difference map between the second color spectrogram and the first color spectrogram;
and determining display parameters corresponding to the frequency spectrum difference map as the target display parameters.
In one possible example, in said illuminating said screen according to said target display parameters, said program comprises instructions for performing the steps of:
obtaining pre-stored display parameters of N pieces of wallpaper to obtain N groups of display parameters, wherein N is an integer greater than 1;
selecting a group of display parameters closest to the target display parameters from the N groups of display parameters, and acquiring wallpaper corresponding to the closest group of display parameters to obtain target wallpaper;
and lighting the screen according to the target wallpaper.
Referring to fig. 4A, fig. 4A is a schematic structural diagram of a face recognition apparatus according to the present embodiment. The face recognition apparatus is applied to a mobile terminal and includes a first shooting unit 401, an adjusting unit 402, and a second shooting unit 403, wherein,
a first shooting unit 401, configured to acquire ambient light, and shoot a face according to the ambient light to obtain a first face image;
an adjusting unit 402, configured to adjust a display parameter of a screen to obtain a target display parameter if the color of the first face image is color cast in a designated color;
and a second shooting unit 403, configured to light up the screen according to the target display parameter, so as to supplement light to the face, and shoot the face to obtain a second face image.
Optionally, as shown in fig. 4B, fig. 4B is a modified structure of the face recognition apparatus depicted in fig. 4A. The apparatus may further include an analysis unit 404 and a comparison unit 405, as follows:
an analysis unit 404, configured to perform spectrum analysis on the first face image to obtain a color component of the first face image;
a comparison unit 405, configured to compare the color component with a preset color component to obtain a color deviation degree, and when the color deviation degree is within a preset range, determine that the color of the first face image is color-shifted from the designated color.
Optionally, a specific implementation manner in which the adjusting unit 402 adjusts the display parameters of the screen to obtain the target display parameters is as follows:
and determining the target display parameters corresponding to the color deviation according to a mapping relation between a preset deviation and the display parameters of the screen.
Alternatively, as shown in fig. 4C, fig. 4C is a detailed structure of the adjusting unit 402 of the face recognition apparatus depicted in fig. 4A. The adjusting unit 402 may include an obtaining module 4021 and a determining module 4022, as follows:
an obtaining module 4021, configured to obtain a first color spectrogram of the ambient light; acquiring a preset second color spectrogram;
a determining module 4022, configured to determine a spectrum difference map between the second color spectrogram and the first color spectrogram; and determining a display parameter corresponding to the spectrum difference map as the target display parameter.
Optionally, a specific implementation manner in which the second shooting unit 403 lights the screen according to the target display parameters is as follows:
obtaining pre-stored display parameters of N pieces of wallpaper to obtain N groups of display parameters, wherein N is an integer greater than 1;
selecting a group of display parameters closest to the target display parameters from the N groups of display parameters, and acquiring wallpaper corresponding to the closest group of display parameters to obtain target wallpaper;
and lighting the screen according to the target wallpaper.
It can be seen that, with the face recognition apparatus described in the embodiment of the present invention, ambient light can be acquired and a face can be shot under the ambient light to obtain a first face image. If the color of the first face image is cast toward a designated color, the display parameters of the screen are adjusted to obtain target display parameters, the screen is lit according to the target display parameters to supplement light for the face, and a second face image is shot. In this way, when a face image shot under ambient light shows color cast, the display parameters of the screen can be adjusted and the screen can be used to supplement light for the face, so that the newly shot face image exhibits as little color cast as possible. This improves the quality of the face image and, in turn, the efficiency of face unlocking.
It can be understood that the functions of each program module of the face recognition apparatus in this embodiment may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the related description of the foregoing method embodiment, which is not described herein again.
As shown in fig. 5, for convenience of description, only the parts related to the embodiment of the present invention are shown; for specific technical details that are not disclosed, please refer to the method part of the embodiment of the present invention. The mobile terminal may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, and the like. The mobile terminal being a mobile phone is taken as an example:
Fig. 5 is a block diagram of a partial structure of a mobile phone related to the mobile terminal according to an embodiment of the present invention. Referring to fig. 5, the mobile phone includes: a Radio Frequency (RF) circuit 910, a memory 920, an input unit 930, a sensor 950, an audio circuit 960, a Wireless Fidelity (WiFi) module 970, an application processor AP980, and a power supply 990. Those skilled in the art will appreciate that the mobile phone structure shown in fig. 5 is not limiting; the mobile phone may include more or fewer components than those shown, combine some components, or arrange the components differently.
The following describes each component of the mobile phone in detail with reference to fig. 5:
the input unit 930 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone. Specifically, the input unit 930 may include a touch display 933, a face recognition device 931, and other input devices 932. The specific structure and composition of the face recognition device 931 can refer to the above description, and will not be described in detail herein. The input unit 930 may also include other input devices 932. In particular, other input devices 932 may include, but are not limited to, one or more of physical keys, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
Wherein, the AP980 is configured to perform the following steps:
acquiring ambient light, and shooting a face according to the ambient light to obtain a first face image;
if the color of the first face image is color cast in the designated color, adjusting display parameters of a screen to obtain target display parameters;
and lighting the screen according to the target display parameters to supplement light for the face, and shooting to obtain a second face image.
The AP980 is a control center of the mobile phone, connects various parts of the entire mobile phone by using various interfaces and lines, and performs various functions and processes of the mobile phone by operating or executing software programs and/or modules stored in the memory 920 and calling data stored in the memory 920, thereby integrally monitoring the mobile phone. Optionally, the AP980 may include one or more processing units, which may be artificial intelligence chips, quantum chips; preferably, the AP980 may integrate an application processor, which mainly handles operating systems, user interfaces, applications, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the AP 980.
Further, the memory 920 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
RF circuitry 910 may be used for the reception and transmission of information. In general, the RF circuit 910 includes, but is not limited to, an antenna, at least one Amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 910 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The handset may also include at least one sensor 950, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the touch display screen according to the brightness of ambient light, and the proximity sensor may turn off the touch display screen and/or the backlight when the mobile phone moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The audio circuit 960, the speaker 961, and the microphone 962 may provide an audio interface between the user and the mobile phone. The audio circuit 960 may transmit the electrical signal converted from received audio data to the speaker 961, which converts it into a sound signal for playing; on the other hand, the microphone 962 converts a collected sound signal into an electrical signal, which is received by the audio circuit 960 and converted into audio data. The audio data is processed by the AP980 and then either sent to another mobile phone via the RF circuit 910 or output to the memory 920 for further processing.
WiFi belongs to short-distance wireless transmission technology, and the mobile phone can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 970, and provides wireless broadband Internet access for the user. Although fig. 5 shows the WiFi module 970, it is understood that it does not belong to the essential constitution of the handset, and can be omitted entirely as needed within the scope not changing the essence of the invention.
The handset also includes a power supply 990 (e.g., a battery) for supplying power to the various components, and preferably, the power supply may be logically connected to the AP980 via a power management system, so that functions such as managing charging, discharging, and power consumption may be performed via the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which are not described herein.
In the embodiments shown in fig. 1C, fig. 1D, or fig. 2, the method flows of the steps may be implemented based on the structure of the mobile phone.
In the embodiments shown in fig. 3 and fig. 4A to fig. 4C, the functions of the units may be implemented based on the structure of the mobile phone.
An embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the face recognition methods described in the above method embodiments.
Embodiments of the present invention also provide a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to make a computer execute part or all of the steps of any one of the face recognition methods as described in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative. For instance, the division of the units is only a division of logical functions, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware, and the program may be stored in a computer-readable memory, which may include: a flash drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
The embodiments of the present invention are described in detail above; the principles and implementations of the present invention are explained herein using specific examples, and the above description of the embodiments is only intended to help understand the method and core idea of the present invention. Meanwhile, a person skilled in the art may, based on the idea of the present invention, make changes to the specific implementations and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (14)

1. A mobile terminal, comprising an application processor AP, and a face recognition device connected to the AP, the face recognition device being a front-facing camera, wherein,
the face recognition device is used for acquiring ambient light and shooting a face according to the ambient light to obtain a first face image;
the AP is used for adjusting the display parameters of the screen to obtain target display parameters if the color of the first face image has a color cast toward a designated color, and for lighting up the screen according to the target display parameters to supplement light for the face;
the face recognition device is used for shooting to obtain a second face image;
the mobile terminal is used for matching the second face image with a preset face template, and for executing an unlocking operation when the second face image is successfully matched with the preset face template;
in the aspect of matching the second face image with a preset face template, the mobile terminal is specifically configured to:
performing multi-scale decomposition on the second face image by adopting a multi-scale decomposition algorithm to obtain a first high-frequency component image of the second face image, and performing feature extraction on the first high-frequency component image to obtain a first feature set;
performing multi-scale decomposition on the preset face template by adopting the multi-scale decomposition algorithm to obtain a second high-frequency component image of the preset face template, and performing feature extraction on the second high-frequency component image to obtain a second feature set;
screening the first feature set and the second feature set to obtain a first stable feature set and a second stable feature set;
and performing feature matching on the first stable feature set and the second stable feature set, and confirming that the second face image is successfully matched with a preset face template when the number of matched feature points between the first stable feature set and the second stable feature set is greater than a preset quantity threshold.
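A minimal Python sketch of the matching flow recited in claim 1 above. The claim names neither a specific multi-scale decomposition nor a feature descriptor, so a Laplacian-style high-frequency residual, ORB features, a screening-by-response rule, and all thresholds below are illustrative assumptions rather than the patented implementation.

```python
# Sketch of the claim 1 matching flow, assuming OpenCV is available.
# Decomposition, descriptor, screening rule and thresholds are placeholders.
import cv2
import numpy as np

def high_frequency_component(gray):
    # One decomposition level: subtract a downsampled-then-upsampled copy,
    # leaving the high-frequency detail of the face image.
    small = cv2.pyrDown(gray)
    low_pass = cv2.resize(small, (gray.shape[1], gray.shape[0]))
    return cv2.subtract(gray, low_pass)

def stable_feature_set(hf_image, keep=200):
    # Extract ORB features from the high-frequency image, then "screen" them
    # by keeping only the strongest responses as the stable feature set.
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints, descriptors = orb.detectAndCompute(hf_image, None)
    if descriptors is None:
        return np.empty((0, 32), dtype=np.uint8)
    order = np.argsort([-kp.response for kp in keypoints])[:keep]
    return descriptors[order]

def faces_match(second_face_gray, template_gray, count_threshold=40, distance_limit=48):
    first_set = stable_feature_set(high_frequency_component(second_face_gray))
    second_set = stable_feature_set(high_frequency_component(template_gray))
    if len(first_set) == 0 or len(second_set) == 0:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    good = [m for m in matcher.match(first_set, second_set) if m.distance < distance_limit]
    # "Successfully matched" when the number of matched feature points exceeds a preset threshold.
    return len(good) > count_threshold
```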
2. The mobile terminal of claim 1, wherein the AP is further specifically configured to:
performing spectrum analysis on the first face image to obtain a color component of the first face image;
comparing the color components with preset color components to obtain a color deviation;
and when the color deviation is within a preset range, confirming that the color of the first face image has a color cast toward the designated color.
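A hedged sketch of the color-cast check in claim 2 above: per-channel mean values stand in for the unspecified "spectrum analysis", and the preset color components and deviation range are invented placeholders, not values disclosed in the patent.

```python
# Sketch of the claim 2 color-cast check; PRESET_COMPONENTS and DEVIATION_RANGE
# are assumed placeholders.
import numpy as np

PRESET_COMPONENTS = np.array([1 / 3, 1 / 3, 1 / 3])  # assumed neutral R/G/B shares
DEVIATION_RANGE = (0.05, 0.5)                        # assumed preset range

def color_components(face_bgr):
    means = face_bgr.reshape(-1, 3).mean(axis=0)[::-1]  # BGR means reordered to RGB
    return means / means.sum()                           # normalized color components

def color_deviation(face_bgr):
    return float(np.abs(color_components(face_bgr) - PRESET_COMPONENTS).max())

def has_color_cast(face_bgr):
    low, high = DEVIATION_RANGE
    return low <= color_deviation(face_bgr) <= high  # inside the preset range -> color cast
```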
3. The mobile terminal of claim 2, wherein in adjusting the display parameters of the screen to obtain the target display parameters, the AP is specifically configured to:
and determining the target display parameters corresponding to the color deviation according to a preset mapping relation between the color deviation and the display parameters of the screen.
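One way to realize the preset mapping of claim 3 is a small lookup table; the deviation buckets and the brightness / color-temperature values below are illustrative placeholders only, not parameters taken from the patent.

```python
# Illustrative lookup table for the claim 3 mapping between color deviation and
# screen display parameters; every bucket boundary and parameter value is assumed.
DEVIATION_TO_DISPLAY = [
    (0.10, {"brightness": 180, "color_temp_k": 6500}),
    (0.25, {"brightness": 220, "color_temp_k": 5500}),
    (1.00, {"brightness": 255, "color_temp_k": 4500}),
]

def target_display_parameters(deviation):
    # Return the display parameters of the first bucket whose upper bound covers the deviation.
    for upper_bound, params in DEVIATION_TO_DISPLAY:
        if deviation <= upper_bound:
            return params
    return DEVIATION_TO_DISPLAY[-1][1]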
4. The mobile terminal according to claim 1, wherein, in adjusting the display parameters of the screen, the AP is specifically configured to:
acquiring a first color spectrogram of the ambient light;
acquiring a preset second color spectrogram;
determining a spectral difference map between the second color spectrogram and the first color spectrogram;
and determining display parameters corresponding to the spectral difference map as the target display parameters.
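A sketch of the claim 4 flow: per-channel histograms stand in for the "color spectrograms", and turning the spectral difference map into per-channel screen gains is an assumed rule, not one stated in the claim.

```python
# Sketch of claim 4; the histogram-based "spectrogram" and the gain rule are assumptions.
import numpy as np

def color_spectrogram(bgr_image, bins=32):
    # Normalized per-channel histogram used here as a stand-in color spectrogram.
    return np.stack([
        np.histogram(bgr_image[..., c], bins=bins, range=(0, 256), density=True)[0]
        for c in range(3)
    ])

def target_display_parameters_from_spectra(ambient_bgr, preset_spectrogram):
    first = color_spectrogram(ambient_bgr)           # first color spectrogram (ambient light)
    difference_map = preset_spectrogram - first      # spectral difference map
    deficit = np.clip(difference_map, 0, None).sum(axis=1)
    gains = 1.0 + deficit / (deficit.sum() + 1e-6)   # boost channels the ambient light lacks
    return {"gain_b": float(gains[0]), "gain_g": float(gains[1]), "gain_r": float(gains[2])}
```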
5. A mobile terminal according to any one of claims 1-4, wherein, in said lighting up the screen according to the target display parameters, the AP is specifically configured to:
obtaining pre-stored display parameters of N wallpapers to obtain N groups of display parameters, wherein N is an integer greater than 1;
selecting, from the N groups of display parameters, a group of display parameters closest to the target display parameters, and acquiring the wallpaper corresponding to the closest group of display parameters to obtain a target wallpaper;
and lighting up the screen according to the target wallpaper.
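A sketch of the wallpaper selection in claim 5: each pre-stored wallpaper carries a group of display parameters, and the group nearest the target (here, Euclidean distance over an assumed parameter vector) determines the target wallpaper. All names, vectors, and values are illustrative.

```python
# Sketch of the claim 5 wallpaper selection; wallpaper entries and the distance
# metric over their parameter vectors are assumed placeholders.
import numpy as np

WALLPAPERS = [
    {"name": "warm",    "display_params": np.array([255.0, 200.0, 160.0])},
    {"name": "neutral", "display_params": np.array([230.0, 230.0, 230.0])},
    {"name": "cool",    "display_params": np.array([180.0, 210.0, 255.0])},
]

def select_target_wallpaper(target_display_params):
    target = np.asarray(target_display_params, dtype=float)
    return min(WALLPAPERS, key=lambda w: float(np.linalg.norm(w["display_params"] - target)))

# Example: select_target_wallpaper([250, 205, 170]) returns the "warm" wallpaper,
# which would then be shown full-screen to light the screen and fill-light the face.
```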
6. A face recognition method, applied to a mobile terminal comprising an application processor AP and a face recognition device connected with the AP, wherein the face recognition device is a front camera, and the method comprises the following steps:
the face recognition device acquires ambient light and shoots a face according to the ambient light to obtain a first face image;
when the color of the first face image has a color cast toward a designated color, the AP adjusts the display parameters of the screen to obtain target display parameters, and lights up the screen according to the target display parameters to supplement light for the face;
the face recognition device shoots to obtain a second face image;
the mobile terminal matches the second face image with a preset face template, and when the second face image is successfully matched with the preset face template, an unlocking operation is executed;
wherein, the matching of the second face image with a preset face template comprises:
performing multi-scale decomposition on the second face image by adopting a multi-scale decomposition algorithm to obtain a first high-frequency component image of the second face image, and performing feature extraction on the first high-frequency component image to obtain a first feature set;
performing multi-scale decomposition on the preset face template by adopting the multi-scale decomposition algorithm to obtain a second high-frequency component image of the preset face template, and performing feature extraction on the second high-frequency component image to obtain a second feature set;
screening the first feature set and the second feature set to obtain a first stable feature set and a second stable feature set;
and performing feature matching on the first stable feature set and the second stable feature set, and confirming that the second face image is successfully matched with a preset face template when the number of matched feature points between the first stable feature set and the second stable feature set is greater than a preset quantity threshold.
7. A face recognition method, applied to a mobile terminal comprising a front camera, wherein the method comprises the following steps:
acquiring ambient light through the front camera, and shooting a face according to the ambient light to obtain a first face image;
if the color of the first face image has a color cast toward a designated color, adjusting display parameters of a screen to obtain target display parameters;
lighting up the screen according to the target display parameters to supplement light for the face, and shooting through the front camera to obtain a second face image;
matching the second face image with a preset face template, and executing an unlocking operation when the second face image is successfully matched with the preset face template;
wherein, the matching of the second face image with a preset face template comprises:
performing multi-scale decomposition on the second face image by adopting a multi-scale decomposition algorithm to obtain a first high-frequency component image of the second face image, and performing feature extraction on the first high-frequency component image to obtain a first feature set;
performing multi-scale decomposition on the preset face template by adopting the multi-scale decomposition algorithm to obtain a second high-frequency component image of the preset face template, and performing feature extraction on the second high-frequency component image to obtain a second feature set;
screening the first feature set and the second feature set to obtain a first stable feature set and a second stable feature set;
and performing feature matching on the first stable feature set and the second stable feature set, and confirming that the second face image is successfully matched with a preset face template when the number of matched feature points between the first stable feature set and the second stable feature set is greater than a preset quantity threshold.
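Putting the pieces together, a compact end-to-end sketch of the claim 7 method, reusing the helper sketches above (has_color_cast, color_deviation, target_display_parameters, faces_match). The camera and screen objects are hypothetical stand-ins for platform APIs and carry no real signatures.

```python
# End-to-end sketch of the claim 7 unlock flow; `camera` and `screen` are
# hypothetical platform objects, and the helpers are the sketches given earlier.
import cv2

def face_unlock(camera, screen, face_template_gray):
    first_face = camera.capture()                        # first face image, ambient light only
    if has_color_cast(first_face):
        params = target_display_parameters(color_deviation(first_face))
        screen.light_up(params)                          # screen lights up to fill-light the face
    second_face = camera.capture()                       # second face image under fill light
    second_gray = cv2.cvtColor(second_face, cv2.COLOR_BGR2GRAY)
    return "unlock" if faces_match(second_gray, face_template_gray) else "keep_locked"
```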
8. The method of claim 7, further comprising:
performing spectrum analysis on the first face image to obtain a color component of the first face image;
comparing the color components with preset color components to obtain a color deviation;
and when the color deviation is within a preset range, confirming that the color of the first face image has a color cast toward the designated color.
9. The method of claim 8, wherein the adjusting the display parameters of the screen to obtain the target display parameters comprises:
and determining the target display parameters corresponding to the color deviation according to a preset mapping relation between the color deviation and the display parameters of the screen.
10. The method of claim 7, wherein adjusting the display parameters of the screen comprises:
acquiring a first color spectrogram of the ambient light;
acquiring a preset second color spectrogram;
determining a spectral difference map between the second color spectrogram and the first color spectrogram;
and determining display parameters corresponding to the spectral difference map as the target display parameters.
11. The method of any one of claims 7-10, wherein said lighting up the screen according to the target display parameters comprises:
obtaining pre-stored display parameters of N wallpapers to obtain N groups of display parameters, wherein N is an integer greater than 1;
selecting, from the N groups of display parameters, a group of display parameters closest to the target display parameters, and acquiring the wallpaper corresponding to the closest group of display parameters to obtain a target wallpaper;
and lighting up the screen according to the target wallpaper.
12. A face recognition device, applied to a mobile terminal comprising a front camera, wherein the face recognition device comprises:
the first shooting unit is used for acquiring ambient light through the front camera and shooting a face according to the ambient light to obtain a first face image;
the adjusting unit is used for adjusting the display parameters of the screen to obtain target display parameters if the color of the first face image has a color cast toward a designated color;
the second shooting unit is used for lighting up the screen according to the target display parameters so as to supplement light for the face, and shooting through the front camera to obtain a second face image;
the matching unit is used for matching the second face image with a preset face template;
the unlocking unit is used for executing unlocking operation when the second face image is successfully matched with the preset face template;
in the aspect of matching the second face image with a preset face template, the matching unit is specifically configured to:
performing multi-scale decomposition on the second face image by adopting a multi-scale decomposition algorithm to obtain a first high-frequency component image of the second face image, and performing feature extraction on the first high-frequency component image to obtain a first feature set;
performing multi-scale decomposition on the preset face template by adopting the multi-scale decomposition algorithm to obtain a second high-frequency component image of the preset face template, and performing feature extraction on the second high-frequency component image to obtain a second feature set;
screening the first feature set and the second feature set to obtain a first stable feature set and a second stable feature set;
and performing feature matching on the first stable feature set and the second stable feature set, and confirming that the second face image is successfully matched with a preset face template when the number of matched feature points between the first stable feature set and the second stable feature set is greater than a preset quantity threshold.
13. A mobile terminal, comprising: an application processor AP and a memory; and one or more programs stored in the memory and configured to be executed by the AP, the programs comprising instructions for performing the method of any of claims 7-11.
14. A computer-readable storage medium for storing a computer program, wherein the computer program causes a computer to perform the method according to any one of claims 7-11.
CN201710818693.9A 2017-09-12 2017-09-12 Face recognition method and related product Active CN107609514B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710818693.9A CN107609514B (en) 2017-09-12 2017-09-12 Face recognition method and related product
PCT/CN2018/102278 WO2019052329A1 (en) 2017-09-12 2018-08-24 Facial recognition method and related product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710818693.9A CN107609514B (en) 2017-09-12 2017-09-12 Face recognition method and related product

Publications (2)

Publication Number Publication Date
CN107609514A CN107609514A (en) 2018-01-19
CN107609514B true CN107609514B (en) 2021-08-06

Family

ID=61063366

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710818693.9A Active CN107609514B (en) 2017-09-12 2017-09-12 Face recognition method and related product

Country Status (2)

Country Link
CN (1) CN107609514B (en)
WO (1) WO2019052329A1 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107609514B (en) * 2017-09-12 2021-08-06 Oppo广东移动通信有限公司 Face recognition method and related product
CN108288044B (en) * 2018-01-31 2020-11-20 Oppo广东移动通信有限公司 Electronic device, face recognition method and related product
CN110472459B (en) * 2018-05-11 2022-12-27 华为技术有限公司 Method and device for extracting feature points
US11123505B2 (en) 2018-12-05 2021-09-21 Aires Medical LLC Breathing apparatus with breath detection software
CN110738250B (en) * 2019-10-09 2024-02-27 陈浩能 Fruit and vegetable freshness identification method and related products
CN113379609B (en) * 2020-03-10 2023-08-04 Tcl科技集团股份有限公司 Image processing method, storage medium and terminal equipment
CN111486950B (en) * 2020-04-20 2022-04-19 Oppo广东移动通信有限公司 Ambient light detection method, ambient light detection device, electronic apparatus, and storage medium
CN111696058A (en) * 2020-05-27 2020-09-22 重庆邮电大学移通学院 Image processing method, device and storage medium
CN111652131A (en) * 2020-06-02 2020-09-11 浙江大华技术股份有限公司 Face recognition device, light supplementing method thereof and readable storage medium
CN111694530B (en) * 2020-06-09 2023-05-23 阿波罗智联(北京)科技有限公司 Screen adaptation method and device, electronic equipment and storage medium
CN111752516A (en) * 2020-06-10 2020-10-09 Oppo(重庆)智能科技有限公司 Screen adjustment method and device for terminal equipment, terminal equipment and storage medium
CN113361349B (en) * 2021-05-25 2023-08-04 北京百度网讯科技有限公司 Face living body detection method, device, electronic equipment and storage medium
CN113359734B (en) * 2021-06-15 2022-02-22 苏州工业园区报关有限公司 Logistics auxiliary robot based on AI
CN113609958A (en) * 2021-08-02 2021-11-05 金茂智慧科技(广州)有限公司 Light adjusting method and related device
CN113630535B (en) * 2021-08-06 2023-08-11 深圳创维-Rgb电子有限公司 Face camera light supplementing method, equipment, storage medium and device based on television
CN116168658B (en) * 2023-02-20 2023-08-15 深圳新视光电科技有限公司 LCD color difference adjusting method, device, equipment and medium based on radial reflection

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104320578A (en) * 2014-10-22 2015-01-28 厦门美图之家科技有限公司 Method for performing self-shot soft light compensation on basis of screen luminance
CN104683701A (en) * 2015-03-12 2015-06-03 成都品果科技有限公司 Method and system for optimizing human face colors in self-shot photos of front camera
CN105657267A (en) * 2016-04-22 2016-06-08 上海斐讯数据通信技术有限公司 Light-supplementing device and method for self-photography
CN106454081A (en) * 2016-09-29 2017-02-22 宇龙计算机通信科技(深圳)有限公司 Photographing method and device
CN106899768A (en) * 2017-03-22 2017-06-27 广东小天才科技有限公司 A kind of terminal photographic method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8913005B2 (en) * 2011-04-08 2014-12-16 Fotonation Limited Methods and systems for ergonomic feedback using an image analysis module
CN105744174B (en) * 2016-02-15 2019-03-08 Oppo广东移动通信有限公司 A kind of self-timer method, device and mobile terminal
CN106469301B (en) * 2016-08-31 2019-05-07 北京天诚盛业科技有限公司 Adaptive adjustable face identification method and device
CN107609514B (en) * 2017-09-12 2021-08-06 Oppo广东移动通信有限公司 Face recognition method and related product

Also Published As

Publication number Publication date
CN107609514A (en) 2018-01-19
WO2019052329A1 (en) 2019-03-21

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province

Applicant after: OPPO Guangdong Mobile Communications Co.,Ltd.

Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant