CN107644159B - Face recognition method and related product


Info

Publication number
CN107644159B
CN107644159B (application number CN201710822114.8A)
Authority
CN
China
Prior art keywords
face image
face
target
initial
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710822114.8A
Other languages
Chinese (zh)
Other versions
CN107644159A (en)
Inventor
郭子青
周海涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710822114.8A
Publication of CN107644159A
Application granted
Publication of CN107644159B
Expired - Fee Related
Anticipated expiration

Abstract

The embodiment of the invention relates to the technical field of mobile terminals, and discloses a face recognition method and a related product. The method comprises the following steps: a mobile terminal obtains an initial face image at the current moment, removes a stain area in the initial face image according to a historical face image obtained by the mobile terminal before the current moment to obtain a target face image, and then carries out face recognition on the target face image. Therefore, by implementing the embodiment of the invention, the noise introduced into the obtained face image when the lens is stained can be eliminated, so that the accuracy and the success rate of face recognition are improved.

Description

Face recognition method and related product
Technical Field
The invention relates to the technical field of mobile terminals, in particular to a face recognition method and a related product.
Background
While the mobile terminal provides great convenience in people's daily lives, it also brings the risk of personal information leakage.
For example, the mobile terminal stores private information such as photo albums and chat records. In addition, the mobile terminal may run a mobile payment application, which is often bound to the user's virtual wallet or bank card. Therefore, if the mobile terminal is operated by users other than the owner, those users may view the chat records and photo albums in the mobile terminal, and may even use the mobile payment application to perform operations such as shopping and payment, which threatens the privacy and property security of the owner.
In order to ensure the personal information security of the user, the mobile terminal can be configured with a face recognition function. However, the face recognition function is affected by factors such as ambient light, and recognition failures often interfere with the user's normal use of the mobile terminal.
Disclosure of Invention
The embodiment of the invention provides a face recognition method and a related product, which can eliminate noise introduced to an obtained face image when stains exist on a lens, so that the accuracy and the success rate of face recognition are improved.
The first aspect of the embodiment of the invention discloses a face recognition method, which comprises the following steps:
the mobile terminal acquires an initial face image at the current moment;
removing a stain area in the initial face image according to a historical face image acquired by the mobile terminal before the current moment to acquire a target face image;
and carrying out face recognition on the target face image.
The second aspect of the embodiments of the present invention discloses a mobile terminal, which includes a processor, a front-facing camera connected to the processor, and a memory connected to the processor, wherein,
the memory is used for storing historical face images acquired by the front-facing camera before the current moment;
the front-facing camera is used for acquiring an initial face image at the current moment;
and the processor is used for removing the dirt area in the initial face image according to the historical face image to obtain a target face image and carrying out face recognition on the target face image.
A third aspect of the embodiments of the present invention discloses a face recognition apparatus, including:
the acquiring unit is used for acquiring an initial face image at the current moment;
the removing unit is used for removing a stain area in the initial face image according to a historical face image acquired by the mobile terminal before the current moment so as to acquire a target face image;
and the recognition unit is used for carrying out face recognition on the target face image.
The fourth aspect of the embodiments of the present invention discloses a face recognition method, which is applied to a mobile terminal including a processor, a memory and a front-facing camera, and the method includes:
the front camera acquires an initial face image at the current moment;
the processor removes dirty areas in the initial face image according to the historical face image stored in the memory to obtain a target face image;
and the processor performs face recognition on the target face image.
A fifth aspect of the embodiments of the present invention discloses a mobile terminal, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for performing some or all of the steps described in any of the methods of the first aspect of the embodiments of the present invention.
A sixth aspect of embodiments of the present invention discloses a computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program enables a computer, the computer comprising a mobile terminal, to execute some or all of the steps described in any of the methods of the first aspect of the embodiments of the present invention.
A seventh aspect of embodiments of the present invention discloses a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps as described in any one of the methods of the first aspect of embodiments of the present invention. The computer program product may be a software installation package, said computer comprising a mobile terminal.
According to the technical scheme, the embodiment of the invention has the following advantages:
in the embodiment of the invention, the mobile terminal acquires the initial face image at the current moment, removes the stain area in the initial face image according to the historical face image acquired by the mobile terminal before the current moment so as to acquire the target face image, and then carries out face recognition on the target face image. Therefore, by implementing the embodiment of the invention, the noise introduced to the obtained face image when the lens is stained can be eliminated, so that the accuracy and the success rate of face recognition are improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments will be briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of another mobile terminal disclosed in the embodiment of the present invention;
FIG. 3 is a schematic view of a scene in which a face image is obtained in the background according to an embodiment of the present invention;
FIG. 4 is a comparison diagram of a historical face image and an initial face image according to the embodiment of the present invention;
FIG. 5 is a comparison diagram of another historical face image and an initial face image disclosed in the embodiments of the present invention;
FIG. 6 is a comparison diagram of an initial face image and a target face image according to the embodiment of the present invention;
FIG. 7 is a schematic flow chart of a face recognition method disclosed in the embodiments of the present invention;
fig. 8 is a schematic structural diagram of a face recognition apparatus disclosed in the embodiment of the present invention;
fig. 9 is a schematic structural diagram of another mobile terminal disclosed in the embodiment of the present invention;
fig. 10 is a schematic structural diagram of another mobile terminal disclosed in the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," and the like in the description and claims of the present invention and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, or apparatus.
The mobile terminal according to the embodiments of the present invention may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem that have wireless communication functions, as well as various forms of User Equipment (UE), Mobile Stations (MS), terminal devices, and the like. For convenience of description, the above-mentioned devices are collectively referred to as a mobile terminal. Embodiments of the present invention will be described below with reference to the accompanying drawings.
The embodiment of the invention provides a face recognition method and a related product, which can eliminate noise introduced to an obtained face image when stains exist on a lens, so that the accuracy and the success rate of face recognition are improved. The following are detailed below.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a mobile terminal 100 according to an embodiment of the present invention. The mobile terminal 100 includes a processor 110, a front camera 120, and a memory 140, and the processor 110 connects the front camera 120 and the memory 140 through a bus 130, so that the processor 110, the front camera 120, and the memory 140 can communicate with each other.
In the embodiment of the present invention, the processor may be a Central Processing Unit (CPU), and in some embodiments, may also be referred to as an Application Processor (AP) to distinguish the processor from the baseband processor.
In the embodiment of the present invention, the memory 140 is configured to store a historical face image acquired by the front-facing camera 120 before the current time;
the front camera 120 is used for acquiring an initial face image at the current moment;
and the processor 110 is configured to remove a dirty region in the initial face image according to the historical face image to obtain a target face image, and then perform face recognition on the target face image.
In practical applications, the front-facing camera of the mobile terminal may be stained with dirt or oil, or the outermost glass cover plate may develop dents, cracks, and the like (in the embodiment of the present invention, dirt, oil, dents, and cracks are collectively referred to as "dirt"). In this case the front-facing camera can still acquire images normally, but the dirt on the front-facing camera produces a dirt area on each acquired image. If the front-facing camera acquires a face image for face recognition, the dirt area may block facial features and interfere with the normal operation of face recognition.
Thus, as an alternative embodiment, the processor 110 may perform the following operations to remove the dirty region in the initial face image acquired by the front-facing camera 120 to obtain the target face image:
the processor identifies a stain area in the initial face image, then calls a historical face image obtained before the front camera from the memory 140, and then judges whether the historical face image has the stain area with the same position and the same shape as the initial face image; and if so, removing the dirty area in the initial face image to obtain the target face image.
Specifically, in the above embodiment, there may be a plurality of historical face images.
Specifically, in the above embodiment, the edge of the dirty area may be determined from the initial face image by an edge extraction method, so as to obtain the dirty area.
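As a purely illustrative sketch (not part of the disclosed embodiments), the stain-area check described above could be implemented roughly as follows in Python with OpenCV and NumPy; the Canny thresholds, the morphological closing, the IoU criterion used to decide "same position and same shape", and the masking-based removal are all assumptions introduced here for illustration.

```python
import cv2
import numpy as np

def find_stain_mask(image_gray, low=50, high=150):
    """Locate candidate stain regions via edge extraction (Canny thresholds are assumed values)."""
    edges = cv2.Canny(image_gray, low, high)
    # Close small gaps so each stain outline becomes a filled blob.
    kernel = np.ones((5, 5), np.uint8)
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(image_gray)
    cv2.drawContours(mask, contours, -1, 255, thickness=cv2.FILLED)
    return mask

def same_position_and_shape(mask_a, mask_b, iou_threshold=0.8):
    """Treat two stain masks as 'same position, same shape' when their overlap (IoU) is high."""
    a, b = mask_a > 0, mask_b > 0
    union = np.logical_or(a, b).sum()
    if union == 0:
        return False
    return np.logical_and(a, b).sum() / union >= iou_threshold

def remove_stain(initial_image, stain_mask):
    """One possible reading of 'removing' the stain area: zero it out so it contributes no features."""
    target = initial_image.copy()
    target[stain_mask > 0] = 0
    return target
```

In this sketch a blob in the initial image is confirmed as lens dirt only when the historical image yields a mask that overlaps it almost completely, which mirrors the "same position and same shape" test described above.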
As another alternative, in addition to the historical face image, the memory 140 may also store other images acquired by the front-facing camera, and the other images may also be used to determine that there is dirt on the front-facing camera through the above-mentioned embodiment, and further remove the dirt area on the acquired face image.
In the embodiment of the present invention, the memory 140 further stores a face feature template. After the target face image is obtained, the processor is further configured to extract face feature information from the target face image and match the face feature information with the face feature template; if the matching succeeds, it is determined that the face recognition passes, and if the matching fails, it is determined that the face recognition fails.
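The embodiment does not specify how the matching itself is computed; purely for illustration, a common choice is to compare fixed-length feature vectors by cosine similarity against a threshold, as in the sketch below (the threshold value is an assumption, not a value from the patent).

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def recognize(face_features, template_features, threshold=0.8):
    """Recognition passes when the extracted features are close enough to the stored template."""
    return cosine_similarity(face_features, template_features) >= threshold
```

The terminal might instead use any other matcher; the point of the sketch is only the pass/fail decision against a stored template.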
Further, in order to improve the success rate of face recognition, the local face feature information at the position corresponding to the stain area can be removed from the face feature template, and matching can then be performed with the remaining part of the template. This avoids the situation in which the target face image lacks that part of the face feature information and the success rate of face recognition drops. This can be specifically realized by the following steps, as sketched in the code after the list:
acquiring a face region in an initial face image through edge extraction;
acquiring a target position of a stain area relative to a face area in an initial face image;
removing local feature information corresponding to the target position in the face feature template to obtain a target face feature template;
extracting the face feature information in the target face image, and matching the face feature information with the target face feature template.
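A minimal sketch of the steps above, under the assumption (made only for illustration) that the face feature template is stored as a mapping from normalized positions within the face region to local descriptor vectors; entries falling inside the stain area are dropped, and the remaining entries can then be compared, for example, with the cosine matcher sketched earlier.

```python
import numpy as np

def normalize_to_face(point, face_box):
    """Express a pixel coordinate (x, y) relative to the face bounding box (x, y, w, h)."""
    fx, fy, fw, fh = face_box
    return ((point[0] - fx) / fw, (point[1] - fy) / fh)

def build_target_template(template, stain_box, face_box):
    """Drop template entries whose normalized (x, y) position falls inside the stain area.

    `template` is assumed to be a dict: normalized (x, y) position -> local descriptor vector.
    """
    x0, y0 = normalize_to_face((stain_box[0], stain_box[1]), face_box)
    x1, y1 = normalize_to_face((stain_box[0] + stain_box[2], stain_box[1] + stain_box[3]), face_box)
    return {
        pos: desc
        for pos, desc in template.items()
        if not (x0 <= pos[0] <= x1 and y0 <= pos[1] <= y1)
    }

def match_against_target(features, target_template, threshold=0.9):
    """Average per-position cosine similarity over the positions kept in the target template."""
    sims = []
    for pos, tmpl_desc in target_template.items():
        feat = features.get(pos)
        if feat is None:
            continue
        a = np.asarray(feat, dtype=float)
        b = np.asarray(tmpl_desc, dtype=float)
        sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return bool(sims) and float(np.mean(sims)) >= threshold
```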
Therefore, by using the mobile terminal described in fig. 1, when dirt exists on the lens, the dirt area on the obtained face image is removed, and then face recognition is performed, so that noise introduced into the obtained face image when dirt exists on the lens can be eliminated, and the accuracy and the success rate of face recognition are improved.
Referring to fig. 2, fig. 2 is a schematic structural diagram of another mobile terminal 100 according to an embodiment of the disclosure. As shown in fig. 2, the front camera 120 may be configured above the display screen, so that when the user operates the mobile terminal, the front camera 120 may obtain a face image of the user.
As an optional implementation manner, the operation of face recognition may be triggered by user operations such as unlocking the mobile terminal, checking a chat record in the mobile terminal, checking an album in the mobile terminal, and performing a payment operation using the mobile terminal, and if the face recognition does not pass, the operation requested by the current user is rejected, so that the security of the personal information of the user in the mobile terminal is ensured.
As another optional implementation, the front-facing camera may further be configured with an infrared fill light, which on one hand helps acquire face feature information in a dark or dimly lit environment; on the other hand, the infrared light emitted by the infrared fill light cannot be seen by human eyes, so when the current user is using the mobile terminal, the mobile terminal can acquire a face image in the background and obtain face feature information even in a dark environment without triggering a flash, and the acquisition is therefore not perceived by the current user.
Referring to fig. 3, fig. 3 is a schematic view of a scene in which a face image is acquired in the background according to an embodiment of the present invention. As shown in fig. 3, when an illegal user (a user who is not an authorized user of the mobile terminal; the face feature template of the authorized user is stored in the mobile terminal, so that the authorized user can pass face recognition, for example to view pictures) is using the mobile terminal 100, the front-facing camera 120 is turned on in the background and continuously acquires images, and when face feature information is recognized in an acquired image, that image is determined to be the initial face image. In this way, the mobile terminal 100 can acquire the face image of the illegal user without the user being aware of it, perform face recognition, and reject further operations of the illegal user on the mobile terminal when the face recognition does not pass.
Referring to fig. 4, fig. 4 is a comparison diagram of a historical face image and an initial face image according to an embodiment of the present invention. As can be seen from fig. 4, if there is dirt on the front camera 120, the historical face image and the initial face image both have dirt areas corresponding to the dirt on the front camera 120 at the same position relative to the edge of the image, as shown at 401 and 402 in fig. 4.
On the other hand, if the acquired face image has a "noise region" due to dirt, flaw, or stain on the face, the "noise region" on the historical face image and the initial face image should be located at the same position relative to the face region. As shown in fig. 5, fig. 5 is a comparison diagram of another historical face image and an initial face image disclosed in the embodiment of the present invention, and 501 and 502 in fig. 5 are schematic diagrams of the above "noise region".
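To make the two cases concrete: a blob caused by lens dirt stays at roughly the same image coordinates across frames, while a blob caused by a mark on the face stays at roughly the same coordinates relative to the face region. Below is a hypothetical sketch of that distinction, assuming (x, y, w, h) bounding boxes are already available for the blob and the face in both images; the tolerances are illustrative assumptions.

```python
def center(box):
    """Center point of an (x, y, w, h) box."""
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def relative_center(box, face_box):
    """Blob center expressed in the face box's coordinate frame, normalized by face size."""
    cx, cy = center(box)
    fx, fy, fw, fh = face_box
    return ((cx - fx) / fw, (cy - fy) / fh)

def classify_blob(blob_hist, face_hist, blob_init, face_init, pix_tol=5.0, rel_tol=0.02):
    """Return 'lens_dirt' if the blob is fixed in image coordinates, 'face_noise' if it tracks the face."""
    cxh, cyh = center(blob_hist)
    cxi, cyi = center(blob_init)
    image_fixed = abs(cxh - cxi) <= pix_tol and abs(cyh - cyi) <= pix_tol
    rxh, ryh = relative_center(blob_hist, face_hist)
    rxi, ryi = relative_center(blob_init, face_init)
    face_fixed = abs(rxh - rxi) <= rel_tol and abs(ryh - ryi) <= rel_tol
    if image_fixed and not face_fixed:
        return "lens_dirt"
    if face_fixed and not image_fixed:
        return "face_noise"
    return "ambiguous"
```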
Referring to fig. 6, fig. 6 is a comparison diagram of an initial face image and a target face image according to an embodiment of the present invention. If there is dirt on the front camera 120, the dirt area on the initial face image is removed by the method described in fig. 1, and the resulting target face image is as shown in fig. 6.
Referring to fig. 7, fig. 7 is a schematic flow chart of a face recognition method according to an embodiment of the present invention. The face recognition method can be applied to the mobile terminal. The face recognition method can comprise the following steps:
701. the mobile terminal obtains an initial face image at the current moment.
In the embodiment of the invention, the mobile terminal can be provided with the front camera, so that the initial face image at the current moment can be obtained through the front camera. The front-facing camera can be configured above the display screen, so that when a user operates the mobile terminal, the front-facing camera can acquire a face image of the user.
As an optional implementation manner, the operation of face recognition may be triggered by user operations such as unlocking the mobile terminal, checking a chat record in the mobile terminal, checking an album in the mobile terminal, and performing a payment operation using the mobile terminal, and if the face recognition does not pass, the operation requested by the current user is rejected, so that the security of the personal information of the user in the mobile terminal is ensured.
As another optional implementation, the front-facing camera may further be configured with an infrared fill light, which on one hand helps acquire face feature information in a dark or dimly lit environment; on the other hand, the infrared light emitted by the infrared fill light cannot be seen by human eyes, so when the current user is using the mobile terminal, the mobile terminal can acquire a face image in the background and obtain face feature information even in a dark environment without triggering a flash, and the acquisition is therefore not perceived by the current user.
702. And removing a stain area in the initial face image according to the historical face image acquired by the mobile terminal before the current moment so as to acquire a target face image.
In the embodiment of the present invention, if the front-facing camera is stained, the historical face image and the initial face image both contain stain areas, corresponding to the stains on the front-facing camera, at the same position relative to the edge of the image. Therefore, the stain area in the initial face image can be removed to obtain the target face image as follows:
the processor identifies a stain area in the initial face image, and then judges whether the historical face image has the stain area with the same position and the same shape as the initial face image; and if so, removing the dirty area in the initial face image to obtain the target face image.
Specifically, in the above embodiment, there may be a plurality of historical face images.
Specifically, in the above embodiment, the edge of the dirty area may be determined from the initial face image by an edge extraction method, so as to obtain the dirty area.
703. And extracting the face feature information in the target face image.
704. And matching the face feature information with a preset face feature template.
In the embodiment of the present invention, a face feature template is stored in the mobile terminal. After the target face image is obtained, face feature information in the target face image is extracted and matched with the face feature template; if the matching succeeds, it is determined that the face recognition passes, and if the matching fails, it is determined that the face recognition fails.
Further, in order to improve the success rate of face recognition, the local face feature information at the position corresponding to the stain area can be removed from the face feature template, and matching can then be performed with the remaining part of the template. This avoids the situation in which the target face image lacks that part of the face feature information and the success rate of face recognition drops. This can be specifically realized as follows:
the method comprises the steps of obtaining a face area in an initial face image through edge extraction, then obtaining a target position of a stain area in the initial face image relative to the face area, removing local feature information corresponding to the target position in a face feature template to obtain a target face feature template, and then matching the face feature information with the target face feature template.
Therefore, by using the face recognition method described in fig. 7, when dirt exists on the lens, the dirt area on the obtained face image is removed, and then the face recognition is performed, so that noise introduced into the obtained face image when dirt exists on the lens can be eliminated, and the accuracy and the success rate of the face recognition are improved.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a face recognition device according to an embodiment of the present invention. As shown in fig. 8, the face recognition apparatus 800 may include:
an acquiring unit 801 is configured to acquire an initial face image at a current time.
In this embodiment of the present invention, the obtaining unit 801 may be a front-facing camera, so as to obtain an initial face image at the current time. The obtaining unit 801 may be configured above the display screen of the mobile terminal, so that when the user operates the mobile terminal, the obtaining unit 801 may obtain the face image of the user.
A removing unit 802, configured to remove a dirty area in the initial face image according to a historical face image acquired by the mobile terminal before the current time to obtain a target face image.
In the embodiment of the present invention, if the front-facing camera is stained, the historical face image and the initial face image both contain stain areas, corresponding to the stains on the front-facing camera, at the same position relative to the edge of the image. Therefore, the stain area in the initial face image can be removed to obtain the target face image as follows:
the removing unit 802 identifies a dirty region in the initial face image, and then judges whether the historical face image has a dirty region with the same position and the same shape as the initial face image; and if so, removing the dirty area in the initial face image to obtain the target face image.
A recognition unit 803, configured to perform face recognition on the target face image.
In the embodiment of the present invention, a face feature template is stored in the face recognition apparatus 800. After the target face image is obtained, the recognition unit 803 extracts face feature information from the target face image and matches it with the face feature template; if the matching succeeds, it is determined that the face recognition passes, and if the matching fails, it is determined that the face recognition fails.
It is understood that the mobile terminal includes hardware structures and/or software modules for performing the respective functions in order to implement the above-described functions. Those of skill in the art will readily appreciate that the present invention can be implemented in hardware or a combination of hardware and computer software, with the exemplary elements and algorithm steps described in connection with the embodiments disclosed herein. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The embodiment of the present invention may perform the division of the functional units for the mobile terminal according to the method example described above, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present invention is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
As an alternative embodiment, the removing unit 802 and the recognition unit 803 may be a Central Processing Unit (CPU), a general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The acquiring unit 801 may be a front-facing camera.
Therefore, by using the face recognition device described in fig. 8, when dirt exists on the lens, the dirt area on the obtained face image is removed, and then the face recognition is performed, so that noise introduced into the obtained face image when dirt exists on the lens can be eliminated, and the accuracy and the success rate of the face recognition are improved.
Referring to fig. 9, fig. 9 is a schematic structural diagram of another mobile terminal 900 according to an embodiment of the disclosure. As shown, the mobile terminal comprises a processor 901, a memory 902, a communication interface 903 and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the above-described method embodiments.
For example, the program includes instructions for performing the steps of:
acquiring an initial face image at the current moment;
removing a stain area in the initial face image according to a historical face image acquired by the mobile terminal before the current moment to acquire a target face image;
and carrying out face recognition on the target face image.
As an optional implementation manner, in terms of removing a dirty area in the initial face image according to a historical face image acquired by the mobile terminal before the current time to obtain a target face image, the program includes instructions specifically configured to perform the following steps:
identifying the dirty region in the initial face image;
judging whether the historical face image has the dirt area with the same position and the same shape as the initial face image;
if so, removing the dirty area in the initial face image to obtain the target face image.
As an alternative embodiment, in terms of identifying said dirty region in said initial face image, the program comprises instructions specifically for carrying out the following steps:
and acquiring the dirty area in the initial face image through edge extraction.
As an alternative implementation, in the aspect of face recognition of the target face image, the program includes instructions specifically configured to perform the following steps:
extracting face feature information in the target face image;
and matching the face feature information with a preset face feature template.
As an alternative implementation, in the aspect of face recognition of the target face image, the program includes instructions specifically configured to perform the following steps:
acquiring a face region in the initial face image through edge extraction;
acquiring a target position of the stain area relative to the face area in the initial face image;
removing local feature information corresponding to the target position in a preset face feature template to obtain a target face feature template;
extracting the face feature information in the target face image, and matching the face feature information with the target face feature template.
Therefore, by using the mobile terminal described in fig. 9, when dirt exists on the lens, the dirt area on the obtained face image is removed, and then the face recognition is performed, so that noise introduced into the obtained face image when dirt exists on the lens can be eliminated, and the accuracy and the success rate of the face recognition are improved.
Referring to fig. 10, fig. 10 is a schematic structural diagram of another mobile terminal 1000 according to an embodiment of the disclosure. As shown in fig. 10, for convenience of illustration, only the portion related to the embodiment of the present invention is shown, and the detailed technical details are not disclosed, please refer to the method portion of the embodiment of the present invention. The terminal may be any mobile terminal including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales), a vehicle-mounted computer, etc., taking the mobile terminal as a mobile phone as an example:
fig. 10 is a block diagram showing a partial structure of a cellular phone related to a mobile terminal provided by an embodiment of the present invention. Referring to fig. 10, the cellular phone includes: radio Frequency (RF) circuit 1001, memory 1002, input unit 1003, display unit 1004, sensor 1005, audio circuit 1006, wireless fidelity (WiFi) module 1007, processor 1008, and power supply 1009. Those skilled in the art will appreciate that the handset configuration shown in fig. 10 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile phone in detail with reference to fig. 10:
the RF circuit 1001 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, receives downlink information from a base station and then processes the received downlink information to the processor 1008; in addition, the data for designing uplink is transmitted to the base station. In general, the RF circuit 1001 includes, but is not limited to, an antenna, at least one Amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 1001 may also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 1002 may be used to store software programs and modules, and the processor 1008 executes various functional applications and data processing of the mobile phone by operating the software programs and modules stored in the memory 1002. The memory 1002 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 1002 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The input unit 1003 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone. Specifically, the input unit 1003 may include a touch panel 10031 and a biometric module 10032. The touch panel 10031, also referred to as a touch screen, can collect touch operations performed by a user on or near the touch panel 10031 (e.g., operations performed by the user on or near the touch panel 10031 by using a finger, a stylus, or any other suitable object or accessory), and drive the corresponding connection device according to a preset program. Alternatively, the touch panel 10031 may include two parts, namely, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts it to touch point coordinates, and sends the touch point coordinates to the processor 1008, and can receive and execute commands from the processor 1008. In addition, the touch panel 10031 may be implemented by various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 1003 may include a front camera 10032 in addition to the touch panel 10031.
The display unit 1004 may be used to display information input by the user or information provided to the user and various menus of the mobile phone. The display unit 1004 may include a display panel 10041; optionally, the display panel 10041 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 10031 can cover the display panel 10041; when the touch panel 10031 detects a touch operation on or near it, the touch operation is transmitted to the processor 1008 to determine the type of the touch event, and the processor 1008 then provides a corresponding visual output on the display panel 10041 according to the type of the touch event. Although in fig. 10 the touch panel 10031 and the display panel 10041 are two independent components for implementing the input and output functions of the mobile phone, in some embodiments the touch panel 10031 and the display panel 10041 may be integrated to implement the input and output functions of the mobile phone.
The handset may also include at least one sensor 1005, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel 10041 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 10041 and/or the backlight when the mobile phone is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The audio circuit 1006, the speaker 10061, and the microphone 10062 can provide an audio interface between the user and the mobile phone. The audio circuit 1006 may transmit the electrical signal converted from the received audio data to the speaker 10061, where it is converted into a sound signal for output; on the other hand, the microphone 10062 converts the collected sound signals into electrical signals, which are received by the audio circuit 1006 and converted into audio data, and the audio data is then output to the processor 1008 for processing and subsequently transmitted to, for example, another mobile phone through the RF circuit 1001, or output to the memory 1002 for further processing.
WiFi is a short-distance wireless transmission technology. Through the WiFi module 1007 the mobile phone can help the user send and receive e-mails, browse webpages, access streaming media, and the like, providing wireless broadband Internet access for the user. Although fig. 10 shows the WiFi module 1007, it is understood that it is not an essential part of the mobile phone and may be omitted as needed without changing the essence of the invention.
The processor 1008 is a control center of the mobile phone, and the processor 1008 connects various parts of the entire mobile phone by using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 1002 and calling data stored in the memory 1002, thereby performing overall monitoring of the mobile phone. Optionally, processor 1008 may include one or more processing units; preferably, the processor 1008 may integrate an application processor, which handles primarily the operating system, user interface, applications, etc., and a modem processor, which handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 1008.
The handset also includes a power source 1009 (e.g., a battery) for providing power to the various components, and preferably the power source is logically connected to the processor 1008 via a power management system, so that functions such as managing charging, discharging, and power consumption are performed via the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which are not described herein.
In the embodiment shown in fig. 7, the steps and the method flow can be implemented based on the structure of the mobile phone.
In the embodiment shown in fig. 8, the functions of the units can be implemented based on the structure of the mobile phone.
For example, the processor 1008 may invoke a computer program stored in the memory 1002 to perform the following operations:
acquiring an initial face image at the current moment;
removing a stain area in the initial face image according to a historical face image acquired by the mobile terminal before the current moment to acquire a target face image;
and carrying out face recognition on the target face image.
An embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes a mobile terminal.
Embodiments of the present invention also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as recited in the above method embodiments. The computer program product may be a software installation package, said computer comprising a mobile terminal.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The above embodiments of the present invention are described in detail, and the principle and the implementation of the present invention are explained by applying specific embodiments, and the above description of the embodiments is only used to help understanding the method of the present invention and the core idea thereof; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A face recognition method, comprising:
the mobile terminal acquires an initial face image at the current moment through a front camera;
when the front-facing camera is stained, removing a stain area in the initial face image according to a historical face image acquired by the mobile terminal before the current moment to acquire a target face image, wherein the method comprises the following steps: identifying the dirty area in the initial face image, judging whether the historical face image has the dirty area with the same position and the same shape as the initial face image, and if so, removing the dirty area in the initial face image to obtain the target face image;
performing face recognition on the target face image, wherein the performing face recognition on the target face image comprises: the method comprises the steps of obtaining a face area in an initial face image through edge extraction, obtaining a target position of a stain area in the initial face image relative to the face area, removing local feature information corresponding to the target position in a preset face feature template to obtain a target face feature template, extracting face feature information in the target face image, and matching the face feature information with the target face feature template.
2. The method of claim 1, wherein the identifying the dirty region in the initial face image comprises:
and acquiring the dirty area in the initial face image through edge extraction.
3. The method according to claim 1 or 2, wherein the performing face recognition on the target face image comprises:
extracting face feature information in the target face image;
and matching the face feature information with a preset face feature template.
4. A mobile terminal, characterized in that the mobile terminal comprises a processor, a front-facing camera connected to the processor, and a memory connected to the processor, wherein,
the memory is used for storing historical face images acquired by the front-facing camera before the current moment;
the front-facing camera is used for acquiring an initial face image at the current moment;
the processor is configured to, when the front-facing camera is stained, remove a stained area in the initial face image according to the historical face image to obtain a target face image, and perform face recognition on the target face image, wherein the processor is further configured to recognize the stained area in the initial face image, determine whether the historical face image has the stained area with the same position and the same shape as the initial face image, and if so, remove the stained area in the initial face image to obtain the target face image;
the processor is further configured to acquire a face region in the initial face image through edge extraction, acquire a target position of the stain region in the initial face image relative to the face region, remove local feature information corresponding to the target position in a preset face feature template to obtain a target face feature template, extract face feature information in the target face image, and match the face feature information with the target face feature template.
5. The mobile terminal of claim 4, wherein in said identifying a dirty region in the initial face image, the processor is specifically configured to:
and acquiring the dirty area in the initial face image through edge extraction.
6. A mobile terminal according to claim 4 or 5,
the memory is also used for storing a face feature template;
in the aspect of performing face recognition on the target face image, the processor is specifically configured to: extracting the face feature information in the target face image, and matching the face feature information with the face feature template.
7. A face recognition apparatus, comprising:
the acquisition unit is used for acquiring an initial face image at the current moment through the front camera;
the removing unit is used for removing a stain area in the initial face image according to a historical face image acquired by a mobile terminal before the current moment to acquire a target face image when the front-facing camera is stained, identifying the stain area in the initial face image, judging whether the historical face image has the stain area with the same position and the same shape as the initial face image, and if so, removing the stain area in the initial face image to acquire the target face image;
the identification unit is used for carrying out face identification on the target face image, and is also used for acquiring a face region in the initial face image through edge extraction, acquiring a target position of the stain region in the initial face image relative to the face region, removing local feature information corresponding to the target position in a preset face feature template to acquire a target face feature template, extracting face feature information in the target face image, and matching the face feature information with the target face feature template.
8. A face recognition method is applied to a mobile terminal comprising a processor, a memory and a front camera, and comprises the following steps:
the front camera acquires an initial face image at the current moment;
when the front-facing camera is stained, the processor removes a stained area in the initial face image according to the historical face image stored in the memory to obtain a target face image, and the method comprises the following steps: identifying the dirty area in the initial face image, judging whether the historical face image has the dirty area with the same position and the same shape as the initial face image, and if so, removing the dirty area in the initial face image to obtain the target face image;
the processor performs face recognition on the target face image, and the face recognition includes: the method comprises the steps of obtaining a face area in an initial face image through edge extraction, obtaining a target position of a stain area in the initial face image relative to the face area, removing local feature information corresponding to the target position in a preset face feature template to obtain a target face feature template, extracting face feature information in the target face image, and matching the face feature information with the target face feature template.
9. A mobile terminal comprising a processor, memory, a communications interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps of the method of any of claims 1 to 3.
10. A computer-readable storage medium, characterized in that it stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1 to 3, the computer comprising a mobile terminal.
CN201710822114.8A 2017-09-12 2017-09-12 Face recognition method and related product Expired - Fee Related CN107644159B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710822114.8A CN107644159B (en) 2017-09-12 2017-09-12 Face recognition method and related product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710822114.8A CN107644159B (en) 2017-09-12 2017-09-12 Face recognition method and related product

Publications (2)

Publication Number Publication Date
CN107644159A CN107644159A (en) 2018-01-30
CN107644159B true CN107644159B (en) 2021-04-09

Family

ID=61110519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710822114.8A Expired - Fee Related CN107644159B (en) 2017-09-12 2017-09-12 Face recognition method and related product

Country Status (1)

Country Link
CN (1) CN107644159B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109063604A (en) * 2018-07-16 2018-12-21 阿里巴巴集团控股有限公司 A kind of face identification method and terminal device
CN111275649A (en) * 2020-02-03 2020-06-12 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103167149A (en) * 2012-09-20 2013-06-19 深圳市金立通信设备有限公司 System and method of safety of mobile phone based on face recognition
CN107122761A (en) * 2017-05-16 2017-09-01 广东欧珀移动通信有限公司 Fingerprint image processing method and Related product

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4752918B2 (en) * 2009-01-16 2011-08-17 カシオ計算機株式会社 Image processing apparatus, image collation method, and program
CN102509086B (en) * 2011-11-22 2015-02-18 西安理工大学 Pedestrian object detection method based on object posture projection and multi-features fusion
CN102855496B (en) * 2012-08-24 2016-05-25 苏州大学 Block face authentication method and system
CN103927719B (en) * 2014-04-04 2017-05-17 北京猎豹网络科技有限公司 Picture processing method and device
CN104091163A (en) * 2014-07-19 2014-10-08 福州大学 LBP face recognition method capable of eliminating influences of blocking

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103167149A (en) * 2012-09-20 2013-06-19 深圳市金立通信设备有限公司 System and method of safety of mobile phone based on face recognition
CN107122761A (en) * 2017-05-16 2017-09-01 广东欧珀移动通信有限公司 Fingerprint image processing method and Related product

Also Published As

Publication number Publication date
CN107644159A (en) 2018-01-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: No.18, Wusha Haibin Road, Chang'an Town, Dongguan City, Guangdong Province

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210409