CN107679384B - Unlocking processing method and related equipment - Google Patents

Unlocking processing method and related equipment


Publication number
CN107679384B
CN107679384B (application CN201710945832.4A)
Authority
CN
China
Prior art keywords
face
recognition model
user
facial
processed
Prior art date
Legal status
Expired - Fee Related
Application number
CN201710945832.4A
Other languages
Chinese (zh)
Other versions
CN107679384A (en)
Inventor
王健
惠方方
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710945832.4A priority Critical patent/CN107679384B/en
Publication of CN107679384A publication Critical patent/CN107679384A/en
Application granted granted Critical
Publication of CN107679384B publication Critical patent/CN107679384B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/66Substation equipment, e.g. for use by subscribers with means for preventing unauthorised or fraudulent calling
    • H04M1/667Preventing unauthorised calls from a telephone set
    • H04M1/67Preventing unauthorised calls from a telephone set by electronic means
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72463User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions to restrict the functionality of the device

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Telephone Function (AREA)

Abstract

The embodiments of the application disclose an unlocking processing method and related equipment. The method includes: loading a facial recognition model; when it is detected that face unlocking is required for an event to be processed, invoking the facial recognition model to perform a facial image acquisition operation, so that a facial image of the user is acquired through the facial image capture device; invoking the facial recognition model to perform a facial image matching operation, so as to match the facial image of the user against a facial template; and when the facial image of the user matches the facial template, executing the event to be processed. By adopting the embodiments of the application, the speed of face unlocking can be improved.

Description

Unlocking processing method and related equipment
Technical Field
The present application relates to the field of electronic technologies, and in particular, to an unlocking processing method and a related device.
Background
Nowadays, with the rapid development of science and technology, face unlocking has been widely applied to terminal devices (such as smart phones and tablet computers); the technology relies on the fact that every human face is unique. Currently, the face unlocking process is usually as follows: the terminal device first enables the face unlocking function, then loads a face model, and then performs the face unlocking operation according to the face model.
Disclosure of Invention
The embodiments of the application provide an unlocking processing method and related equipment, so as to improve the speed of face unlocking.
In a first aspect, an embodiment of the present application provides an unlocking processing method, which is applied to a terminal device including a facial image capture device, and includes:
loading a facial recognition model;
when it is detected that face unlocking is required for an event to be processed, invoking the facial recognition model to perform a facial image acquisition operation, so that a facial image of the user is acquired through the facial image capture device;
calling the facial recognition model to execute facial image matching operation so as to match the facial image of the user with a facial template;
and when the face image of the user is matched with the face template, executing the event to be processed.
In a second aspect, an embodiment of the present application provides a terminal device, including a facial image capture device and a processor, wherein
the processor is configured to load a facial recognition model;
the processor is further configured to, when it is detected that face unlocking is required for an event to be processed, invoke the facial recognition model to perform a facial image acquisition operation, so that a facial image of the user is acquired through the facial image capture device;
the processor is further configured to invoke the facial recognition model to perform a facial image matching operation, so as to match the facial image of the user against a facial template;
the processor is further configured to execute the event to be processed when the facial image of the user matches the facial template.
In a third aspect, an embodiment of the present invention provides a terminal device, including:
a model loading unit for loading a face recognition model;
the invoking unit is configured to, when it is detected that face unlocking is required for an event to be processed, invoke the facial recognition model to perform a facial image acquisition operation, so that a facial image of the user is acquired through the facial image capture device; and to invoke the facial recognition model to perform a facial image matching operation, so as to match the facial image of the user against a facial template;
and the execution unit is used for executing the event to be processed when the face image of the user is matched with the face template.
In a fourth aspect, embodiments of the present application provide a terminal device, comprising one or more processors, one or more memories, one or more transceivers, and one or more programs stored in the memories and configured to be executed by the one or more processors, the programs including instructions for performing the steps in the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to execute the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform the method of the first aspect.
Currently, facial recognition models are dynamically loaded: loaded when in use and released when not in use. Because loading a facial recognition model is unusually time-consuming (generally about 750 ms in total), loading it at the time of use increases the total time of face unlocking and degrades the user experience. In this application, the facial recognition model is instead loaded statically: it is loaded before use and is not released when idle, so subsequent face unlocking does not need to load the model and can invoke it directly. This removes the model loading time from the overall face unlocking process and thereby improves the face unlocking speed.
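The static-loading idea above can be sketched as a load-once service. This is an illustrative sketch, not the patent's implementation; `load_model` and `match` are hypothetical stand-ins for the real (roughly 750 ms) model loader and the model's matching call.

```python
class FaceUnlockService:
    """Load the facial recognition model once and keep it resident."""

    def __init__(self, load_model):
        self._load_model = load_model  # expensive call (~750 ms per the text)
        self._model = None             # resident model, never released while idle

    def preload(self):
        # Static loading: performed once, e.g. at power-on or wake-up time.
        if self._model is None:
            self._model = self._load_model()

    def unlock(self, face_image):
        # Dynamic loading would pay the loading cost here, on every unlock;
        # with static loading this is a no-op after the first call.
        self.preload()
        return self._model.match(face_image)
```

With this structure, every unlock after the first invokes the already-loaded model directly.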
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flowchart of an unlocking processing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another unlocking processing method provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of a terminal device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of another terminal device provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of another terminal device provided in an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Details are set forth below.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "comprising" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Hereinafter, some terms in the present application are explained to facilitate understanding by those skilled in the art.
(1) A terminal device, also called a User Equipment (UE), is a device providing voice and/or data connectivity to a user, for example, a handheld device with a wireless connection function, a vehicle-mounted device, and so on. Common terminals include, for example: mobile phones, tablet computers, notebook computers, palmtop computers, Mobile Internet Devices (MIDs), and wearable devices such as smart watches, smart bracelets, and pedometers.
(2) The face recognition model is an algorithm used by the terminal device to perform face unlocking. The face recognition model includes a matching model and/or a living body recognition model.
(3) Parallel execution means that at least two actions are performed simultaneously on different processes; for example, for action A and action B, action A is performed on process 1, action B is performed on process 2, and action B is performed during the execution of action A.
(4) The events to be processed include: payment events, screen unlock events, video encrypted chat events, application login events, and the like.
Referring to fig. 1, fig. 1 is a schematic flowchart of an unlocking processing method provided in an embodiment of the present application, and is applied to a terminal device including a facial image capturing device, where the facial image capturing device may be a front camera of the terminal device or a common camera module, which is not limited herein, and the method includes:
step 101: the terminal device loads the face recognition model.
The number of face recognition models stored in the terminal device may be one or N, where N is an integer greater than 1, and is not limited herein.
In an embodiment, the specific implementation of loading the facial recognition model by the terminal device includes: loading the facial recognition model when the terminal device is powered on, or at the wake-up time of the user, or at a first set time.
Specifically, when the terminal device stores N face recognition models, the terminal device loads the N face recognition models at the same time when the terminal device is powered on, or the terminal device loads the N face recognition models at the same time when the user wakes up from sleep.
The wake-up time of the user may be user-defined. Alternatively, it is determined by the terminal device, specifically: the terminal device determines the user's wake-up time according to the alarm clocks set in the terminal device. For example, suppose the alarms set in the terminal device include a 7:00am wake-up alarm on working days (e.g., Monday to Friday), a 6:50am wake-up alarm on working days, and a 9:00am wake-up alarm on non-working days (e.g., Saturday and Sunday); the terminal device then takes 7:00am or 6:50am as the user's wake-up time on working days, and 9:00am as the user's wake-up time on non-working days.
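As an illustrative sketch (not from the patent), the alarm-based determination above could look like the following; taking the earliest applicable alarm is an assumption, since the example allows either weekday alarm to serve as the wake-up time.

```python
import datetime

def wake_up_time(alarms, is_workday):
    """Derive the user's wake-up time from the configured alarm clocks.

    alarms: list of (datetime.time, applies_on_workdays) tuples.
    Returns the earliest alarm for the given day type, or None if no
    alarm applies (an assumption for the no-alarm case).
    """
    candidates = [t for t, on_workday in alarms if on_workday == is_workday]
    return min(candidates) if candidates else None
```

For the example in the text, the weekday alarms at 7:00am and 6:50am yield 6:50am, and the non-working-day alarm yields 9:00am.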
In addition, when the terminal device stores N facial recognition models, it may load the N facial recognition models simultaneously at the first set time, or may load them at N first set times respectively, where the N first set times correspond to the N facial recognition models one-to-one.
In one embodiment, when N facial recognition models are stored in the terminal device, the terminal device may load the N facial recognition models at the same time at a first set time, which is user-defined. Alternatively, the first set time is determined by the terminal device, and is not limited herein.
In one embodiment, when N facial recognition models are stored in the terminal device and are loaded at N first set times respectively, different facial recognition models correspond to different events to be processed, and the first set time corresponding to each facial recognition model is determined according to its corresponding event to be processed.
Specifically, if the terminal device stores N facial recognition models used for processing different events to be processed, and loads all N models simultaneously, some loaded models may go unused for a long time, which increases the power consumption of the terminal device. Therefore, to reduce power consumption to some extent, in this application the terminal device loads the N facial recognition models at the N first set times, respectively.
The first set time corresponding to the face recognition model is determined according to the corresponding to-be-processed event, and specifically includes: the terminal device stores the mapping relation between the facial recognition model and the event to be processed and the mapping relation of the first set time corresponding to the event to be processed, and the terminal device can determine the first set time corresponding to each facial recognition model according to the two mapping relations.
For example, assuming that N is 4, the 4 facial recognition models are: facial recognition model 1, facial recognition model 2, facial recognition model 3, and facial recognition model 4, and the 4 events to be processed are: payment events, screen unlocking events, video encrypted chat events, and application login events. The mapping between the facial recognition models and the events to be processed, and the mapping between the events to be processed and their first set times, are shown in table 1. According to table 1, the first set time corresponding to facial recognition model 1 is 11:00am, the first set time corresponding to facial recognition model 2 is 7:00am, the first set time corresponding to facial recognition model 3 is 2:00pm, and the first set time corresponding to facial recognition model 4 is 8:00am.
TABLE 1
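The two stored mappings and the lookup composed from them can be sketched as follows. The model-to-event pairing below is illustrative only (Table 1 itself is not reproduced in the text, only the resulting first set times), so treat the dictionary contents as assumptions.

```python
# Mapping 1: facial recognition model -> event to be processed (illustrative).
MODEL_TO_EVENT = {
    "model_1": "payment",
    "model_2": "screen_unlock",
    "model_3": "video_chat",
    "model_4": "app_login",
}

# Mapping 2: event to be processed -> first set time (illustrative times).
EVENT_TO_FIRST_SET_TIME = {
    "payment": "11:00",
    "screen_unlock": "07:00",
    "video_chat": "14:00",
    "app_login": "08:00",
}

def first_set_time(model):
    # The terminal device composes the two mappings to find each model's
    # loading time, as described in the text.
    return EVENT_TO_FIRST_SET_TIME[MODEL_TO_EVENT[model]]
```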
In an embodiment, the method further comprises:
and at the sleeping time of the user or at a second set time, the terminal equipment releases the face recognition model.
Specifically, when the terminal device stores N facial recognition models: since many users now do not power off the terminal device for long periods, keeping the N facial recognition models loaded at all times brings extra power consumption. Therefore, in this application, to reduce the power consumption of the terminal device to some extent, the terminal device releases the N facial recognition models simultaneously when the user sleeps.
The sleep time of the user may be user-defined. Alternatively, it is determined by the terminal device, specifically: the terminal device determines the user's sleep time according to the alarm clocks set in the terminal device. For example, suppose the alarms set in the terminal device include an 11:00pm bedtime alarm on working days (e.g., Monday to Friday), a 10:30pm bedtime alarm on working days, and an 11:30pm bedtime alarm on non-working days (e.g., Saturday and Sunday); the terminal device then takes 11:00pm or 10:30pm as the user's sleep time on working days, and 11:30pm as the user's sleep time on non-working days.
In addition, when the terminal device stores N facial recognition models, it may release the N facial recognition models simultaneously at the second set time, or may release them at N second set times respectively, where the N second set times correspond to the N facial recognition models one-to-one.
In one embodiment, when N facial recognition models are stored in the terminal device, the terminal device may simultaneously release the N facial recognition models at a second set time, which is user-defined. Alternatively, the second set time is determined by the terminal device, and is not limited herein.
In one embodiment, when N facial recognition models are stored in the terminal device and are released at N second set times respectively, different facial recognition models correspond to different events to be processed, and the second set time corresponding to each facial recognition model is determined according to its corresponding event to be processed.
Specifically, if N facial recognition models are stored in the terminal device and are used for processing different events to be processed, and the terminal device releases all N models simultaneously, some facial recognition models may still be needed later, so that face unlocking for some events to be processed takes longer (the released model must be reloaded). Therefore, to further improve the performance of the terminal device, in this application the terminal device releases the N facial recognition models at the N second set times, respectively.
The second set time corresponding to the face recognition model is determined according to the corresponding to-be-processed event, and specifically includes: the terminal device stores the mapping relation between the facial recognition model and the event to be processed and the mapping relation of the second set time corresponding to the event to be processed, and the terminal device can determine the second set time corresponding to each facial recognition model according to the two mapping relations.
For example, assuming that N is 4, the 4 facial recognition models are: facial recognition model 1, facial recognition model 2, facial recognition model 3, and facial recognition model 4, and the 4 events to be processed are: payment events, screen unlocking events, video encrypted chat events, and application login events. The mapping between the facial recognition models and the events to be processed, and the mapping between the events to be processed and their second set times, are shown in table 2. According to table 2, the second set time corresponding to facial recognition model 1 is 10:00pm, the second set time corresponding to facial recognition model 2 is 11:00pm, the second set time corresponding to facial recognition model 3 is 10:30pm, and the second set time corresponding to facial recognition model 4 is 10:40pm.
TABLE 2
Step 102: when the terminal device detects that face unlocking is required for an event to be processed, it invokes the facial recognition model to perform a facial image acquisition operation, so as to acquire a facial image of the user through the facial image capture device.
When the event to be processed is a screen unlocking event and the terminal device is in a black-screen state, the terminal device needs to light up its touch display screen before invoking the facial recognition model to perform the facial image acquisition operation.
Further, when the facial image of the user is captured, the brightness of the touch display screen may be the same for different events to be processed. Alternatively, the brightness is determined according to the event to be processed, specifically: each event to be processed corresponds to a security level; the higher the security level, the higher the brightness of the touch display screen when the facial image is captured, and the lower the security level, the lower the brightness. Alternatively, the brightness is determined according to the ambient light, specifically: the brighter the ambient light, the lower the brightness of the touch display screen when the facial image is captured, and the dimmer the ambient light, the higher the brightness. Alternatively, different time periods correspond to different brightness levels, and the brightness of the touch display screen when the facial image is captured is determined according to the current system time.
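One of the brightness policies above (security level combined with ambient light) could be sketched as follows. The security-level values, lux scale, and output range are all illustrative assumptions; the patent only states the monotonic relationships.

```python
# Illustrative security levels per event to be processed (assumed values).
SECURITY_LEVEL = {"payment": 3, "video_chat": 2, "app_login": 2, "screen_unlock": 1}

def capture_brightness(event, ambient_lux, max_lux=1000):
    """Screen brightness during facial image capture.

    Higher security level -> brighter screen; brighter ambient light ->
    dimmer screen, matching the relationships described in the text.
    """
    base = 40 + 20 * SECURITY_LEVEL[event]                      # level drives the base
    ambient_factor = 1.0 - min(ambient_lux, max_lux) / (2 * max_lux)
    return round(base * ambient_factor)
```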
Step 103: and the terminal equipment calls the face recognition model to execute face image matching operation so as to match the face image of the user with the face template.
In an embodiment, when the facial image capture device successively captures M facial images of the user, where M is an integer greater than 1, the specific implementation of matching the facial image of the user against the facial template includes:
invoking the facial recognition model to match the M facial images of the user against the facial template in parallel;
when at least one of the M facial images of the user matches the facial template, determining that the facial image of the user matches the facial template;
when none of the M facial images of the user matches the facial template, determining that the facial image of the user does not match the facial template.
For example, assuming that M is 3, the facial image capture device successively captures 3 facial images of the user: facial image 1, facial image 2, and facial image 3. The terminal device matches facial image 1 against the facial template in a first process, facial image 2 in a second process, and facial image 3 in a third process. If none of the 3 facial images matches the facial template, the facial image of the user does not match the facial template; otherwise, it matches.
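The parallel any-match logic above can be sketched as follows; threads stand in for the separate processes described in the text, and `match_one` is a hypothetical stand-in for the facial recognition model's matching call.

```python
from concurrent.futures import ThreadPoolExecutor

def match_any(face_images, template, match_one):
    """Match M captured facial images against the template in parallel.

    Succeeds if at least one image matches; fails only when none do,
    as described for the M-image case above.
    """
    with ThreadPoolExecutor(max_workers=len(face_images)) as pool:
        results = pool.map(lambda img: match_one(img, template), face_images)
        return any(results)
```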
Further, the value of M may be the same for different events to be processed, such as 3, 4, 5, or another value. Alternatively, the value of M is determined according to the event to be processed, specifically: each event to be processed corresponds to a security level; the higher the security level, the larger the value of M, and the lower the security level, the smaller the value of M.
Step 104: and when the face image of the user is matched with the face template, the terminal equipment executes the event to be processed.
It should be noted that when the matching value between the facial image of the user and the facial template is greater than a set value, the facial image of the user matches the facial template; otherwise, it does not. In addition, the set values for different events to be processed may be the same or different, which is not limited herein.
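The per-event threshold comparison above can be sketched as follows; the numeric set values are illustrative assumptions, not values from the patent.

```python
# Illustrative per-event set values; a higher-stakes event may use a
# stricter (larger) threshold.
SET_VALUES = {"payment": 0.95, "screen_unlock": 0.80}

def is_match(match_value, event):
    """A facial image matches only when its matching value exceeds the
    set value configured for this event to be processed."""
    return match_value > SET_VALUES[event]
```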
In this application, the facial recognition model is loaded statically: it is loaded before use and is not released when idle, so subsequent face unlocking does not need to load the model and can invoke it directly. This removes the model loading time from the overall face unlocking process and thereby improves the face unlocking speed.
The embodiment of the present application further provides another more detailed method flow, as shown in fig. 2, including:
step 201: and when the terminal equipment is started, or at the sleep awakening time of the user, or at a first set time, the terminal equipment loads the facial recognition model.
Step 202: when it is detected that face unlocking is required for an event to be processed, the terminal device invokes the facial recognition model to perform a facial image acquisition operation, so that a facial image of the user is acquired through the facial image capture device.
Step 203: and the terminal equipment calls the face recognition model to execute face image matching operation so as to match the face image of the user with the face template.
Step 204: and when the face image of the user is matched with the face template, the terminal equipment executes the event to be processed.
Step 205: at the sleep time of the user or at a second set time, the terminal device releases the facial recognition model.
It should be noted that, for the specific implementation of the steps of the method shown in fig. 2, reference can be made to the specific implementations described in the above method, which are not repeated here.
The method of the embodiments of the present application is set forth above in detail and the apparatus of the embodiments of the present application is provided below.
Referring to fig. 3, fig. 3 is a terminal device 300 according to an embodiment of the present application, including: at least one processor, at least one memory, and at least one communication interface; and one or more programs;
the one or more programs are stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps of:
loading a facial recognition model;
when it is detected that face unlocking is required for an event to be processed, invoking the facial recognition model to perform a facial image acquisition operation, so that a facial image of the user is acquired through the facial image capture device;
calling the facial recognition model to execute facial image matching operation so as to match the facial image of the user with a facial template;
and when the face image of the user is matched with the face template, executing the event to be processed.
In an embodiment, in terms of loading the facial recognition model, the program includes instructions specifically for performing the following step:
loading the facial recognition model when the terminal device is powered on, or at the wake-up time of the user, or at a first set time.
In an embodiment, the program further includes instructions for performing the following step:
releasing the facial recognition model at the user's sleep time or at a second set time.
In an embodiment, the terminal device stores a plurality of face recognition models; different face recognition models correspond to different events to be processed, and the first set time corresponding to each face recognition model is determined according to its corresponding event to be processed.
In an embodiment, the second set time corresponding to a face recognition model is likewise determined according to its corresponding event to be processed.
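The per-event scheme above (one model per event type, each with its own set load and release times) can be sketched as a registry polled by a clock tick. The event names, the `HH:MM` schedule format, and the `tick` mechanism are invented for illustration; the patent leaves these details open.

```python
from dataclasses import dataclass

@dataclass
class ModelSchedule:
    """Hypothetical per-event schedule: load/release clock times as 'HH:MM'."""
    load_at: str
    release_at: str
    model: object = None

class ModelRegistry:
    """Maps each event type to its own face recognition model and schedule."""

    def __init__(self):
        self._by_event = {}

    def register(self, event_type, load_at, release_at):
        self._by_event[event_type] = ModelSchedule(load_at, release_at)

    def tick(self, now_hhmm, loader):
        # Called periodically; loads or releases each model at its set time.
        for sched in self._by_event.values():
            if now_hhmm == sched.load_at and sched.model is None:
                sched.model = loader()
            elif now_hhmm == sched.release_at:
                sched.model = None

    def model_for(self, event_type):
        # Returns the resident model for the event, or None if not loaded.
        return self._by_event[event_type].model
```

A payment event could thus keep its model resident only during waking hours, while a screen-unlock model follows a different schedule, matching the idea that the first and second set times are derived from the event itself.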
It should be noted that, for the specific implementation of the content described in this embodiment, reference may be made to the above method, which is not repeated here.
The above description has presented the solution of the embodiments of the present application mainly from the perspective of the method-side implementation. It is understood that, in order to implement the above functions, the terminal device includes corresponding hardware structures and/or software modules for performing each function. Those skilled in the art will readily appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or as a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments of the present application, the terminal device may be divided into functional units according to the above method example; for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit may be implemented in the form of hardware, or in the form of a software functional unit. It should be noted that the division of units in the embodiments of the present application is schematic and is only a division of logical functions; other division manners are possible in actual implementation.
In the case of integrated units, fig. 4 shows a block diagram of a possible functional unit composition of the terminal device involved in the above embodiments. The terminal device 400 includes: a processing unit 401, a communication unit 402, and a storage unit 403, where the processing unit 401 includes a model loading unit 4011, a calling unit 4012, an execution unit 4013, and a release unit 4014. The storage unit 403 is used to store program codes and data of the terminal device. The communication unit 402 is configured to support communication between the terminal device and other devices. The units (the model loading unit 4011, the calling unit 4012, the execution unit 4013, and the release unit 4014) are used to execute the relevant steps of the above method.
The model loading unit 4011 is configured to load a facial recognition model;
the calling unit 4012 is configured to, when it is detected that the event to be processed requires face unlocking, invoke the facial recognition model to perform a facial image acquisition operation, so as to acquire a facial image of the user through the facial image acquisition device, and to invoke the facial recognition model to perform a facial image matching operation, so as to match the facial image of the user with a facial template;
the execution unit 4013 is configured to execute the event to be processed when the facial image of the user matches the facial template.
In an embodiment, in terms of loading the facial recognition model, the model loading unit 4011 is specifically configured to:
load the facial recognition model when the terminal device is powered on, at the user's wake-up time, or at a first set time.
In an embodiment, the release unit 4014 is configured to release the facial recognition model at the user's sleep time or at a second set time.
In an embodiment, the terminal device stores a plurality of face recognition models; different face recognition models correspond to different events to be processed, and the first set time corresponding to each face recognition model is determined according to its corresponding event to be processed.
In an embodiment, the second set time corresponding to a face recognition model is likewise determined according to its corresponding event to be processed.
The processing unit 401 may be a processor or a controller, for example a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The storage unit 403 may be a memory, and the communication unit 402 may be a transceiver, a transceiver circuit, a radio-frequency chip, a communication interface, or the like.
As shown in fig. 5, for convenience of description, only the parts related to the embodiments of the present application are shown; for specific technical details that are not disclosed, please refer to the method part of the embodiments of the present application.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a terminal device 500 according to an embodiment of the present application. The terminal device 500 includes a housing 10, a main board 20, a touch display screen 30, a battery 40, and an auxiliary board 50. The main board 20 is provided with an infrared light source 21, an iris camera 22, a front camera 23, a processor 24, a memory 25, a SIM card slot 26, and the like; the auxiliary board 50 is provided with a vibrator 51, an integrated sound cavity 52, a VOOC flash charging interface 53, and a fingerprint module 54. The front camera 23 constitutes the facial information acquisition device of the terminal device 500.
The processor 24 is configured to load the facial recognition model;
the processor 24 is further configured to, when it is detected that the event to be processed requires face unlocking, invoke the facial recognition model to perform a facial image acquisition operation, so as to acquire a facial image of the user through the facial image acquisition device;
the processor 24 is further configured to invoke the facial recognition model to perform a facial image matching operation, so as to match the facial image of the user with a facial template;
the processor 24 is further configured to execute the event to be processed when the facial image of the user matches the facial template.
In an embodiment, in terms of loading the facial recognition model, the processor 24 is specifically configured to:
load the facial recognition model when the terminal device is powered on, at the user's wake-up time, or at a first set time.
In an embodiment, the processor 24 is further configured to:
release the facial recognition model at the user's sleep time or at a second set time.
In an embodiment, the terminal device stores a plurality of face recognition models; different face recognition models correspond to different events to be processed, and the first set time corresponding to each face recognition model is determined according to its corresponding event to be processed.
In an embodiment, the second set time corresponding to a face recognition model is likewise determined according to its corresponding event to be processed.
It should be noted that, for the specific implementation of the content described in this embodiment, reference may be made to the above method, which is not repeated here.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, where the computer includes a terminal device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps of any of the methods set out in the above method embodiments. The computer program product may be a software installation package, and the computer includes a terminal device.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of combinations of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently according to the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and that the acts and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; for instance, the division of units is only a division of logical functions, and other divisions are possible in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program codes, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by related hardware instructed by a program, which may be stored in a computer-readable memory, and the memory may include: a flash memory disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
The embodiments of the present application have been described in detail above to illustrate the principles and implementations of the present application; the above description of the embodiments is only provided to help understand the method and the core idea of the present application. Meanwhile, for a person skilled in the art, there may be variations in the specific implementation and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as a limitation on the present application.

Claims (9)

1. An unlocking processing method, applied to a terminal device comprising a facial image acquisition device, the method comprising:
loading a facial recognition model;
when it is detected that the event to be processed requires face unlocking, calling the facial recognition model to perform a facial image acquisition operation, so as to acquire a facial image of the user through the facial image acquisition device;
calling the facial recognition model to execute facial image matching operation so as to match the facial image of the user with a facial template;
when the facial image of the user matches a facial template, executing the event to be processed;
wherein the loading of the facial recognition model comprises:
loading a facial recognition model when the terminal device is powered on, at the user's wake-up time, or at a first set time;
wherein the terminal device stores a plurality of face recognition models, different face recognition models correspond to different events to be processed, and the first set time corresponding to each face recognition model is determined according to its corresponding event to be processed.
2. The method of claim 1, further comprising:
releasing the facial recognition model at the user's sleep time or at a second set time.
3. The method according to claim 1 or 2, wherein the second set time corresponding to the face recognition model is determined according to its corresponding event to be processed.
4. A terminal device, comprising a facial information acquisition device and a processor, wherein:
the processor is configured to load a facial recognition model;
the processor is further configured to, when it is detected that the event to be processed requires face unlocking, invoke the facial recognition model to perform a facial image acquisition operation, so as to acquire a facial image of the user through the facial image acquisition device;
the processor is further configured to invoke the facial recognition model to perform a facial image matching operation, so as to match the facial image of the user with a facial template;
the processor is further configured to execute the event to be processed when the facial image of the user matches the facial template;
wherein the processor is specifically configured to:
load a facial recognition model when the terminal device is powered on, at the user's wake-up time, or at a first set time;
wherein the terminal device stores a plurality of face recognition models, different face recognition models correspond to different events to be processed, and the first set time corresponding to each face recognition model is determined according to its corresponding event to be processed.
5. The terminal device of claim 4, wherein the processor is specifically configured to:
release the facial recognition model at the user's sleep time or at a second set time.
6. The terminal device according to claim 4 or 5, wherein the second set time corresponding to the face recognition model is determined according to its corresponding event to be processed.
7. A terminal device, comprising:
a model loading unit for loading a face recognition model;
a calling unit, configured to, when it is detected that the event to be processed requires face unlocking, invoke the facial recognition model to perform a facial image acquisition operation, so as to acquire a facial image of the user through the facial image acquisition device, and to invoke the facial recognition model to perform a facial image matching operation, so as to match the facial image of the user with a facial template;
an execution unit, configured to execute the event to be processed when the facial image of the user matches the facial template;
wherein the loading of the facial recognition model comprises: loading a facial recognition model when the terminal device is powered on, at the user's wake-up time, or at a first set time;
the terminal device stores a plurality of face recognition models, different face recognition models correspond to different events to be processed, and the first set time corresponding to each face recognition model is determined according to its corresponding event to be processed.
8. A terminal device comprising one or more processors, one or more memories, one or more transceivers, and one or more programs stored in the memories and configured to be executed by the one or more processors, the programs comprising instructions for performing the steps in the method of any of claims 1-3.
9. A computer-readable storage medium, characterized in that it stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-3.
CN201710945832.4A 2017-10-11 2017-10-11 Unlocking processing method and related equipment Expired - Fee Related CN107679384B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710945832.4A CN107679384B (en) 2017-10-11 2017-10-11 Unlocking processing method and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710945832.4A CN107679384B (en) 2017-10-11 2017-10-11 Unlocking processing method and related equipment

Publications (2)

Publication Number Publication Date
CN107679384A CN107679384A (en) 2018-02-09
CN107679384B true CN107679384B (en) 2020-01-14

Family

ID=61140530

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710945832.4A Expired - Fee Related CN107679384B (en) 2017-10-11 2017-10-11 Unlocking processing method and related equipment

Country Status (1)

Country Link
CN (1) CN107679384B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105760736A (en) * 2016-02-19 2016-07-13 北京奇虎科技有限公司 Unlocking method and unlocking device of application program
CN107122649A (en) * 2017-04-28 2017-09-01 广东欧珀移动通信有限公司 Solve lock control method and Related product

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140354405A1 (en) * 2013-05-31 2014-12-04 Secure Planet, Inc. Federated Biometric Identity Verifier


Also Published As

Publication number Publication date
CN107679384A (en) 2018-02-09

Similar Documents

Publication Publication Date Title
EP3355223B1 (en) Unlock method and mobile terminal
CN107566650B (en) Unlocking control method and related product
CN107832595B (en) Locking method and related equipment
CN108197450B (en) Face recognition method, face recognition device, storage medium and electronic equipment
CN104869305B (en) Method and apparatus for processing image data
MX2014008798A (en) Method and apparatus for image processing and terminal device.
US9998924B2 (en) Electronic device and method for acquiring biometric information thereof
CN108491526A (en) Daily record data processing method, device, electronic equipment and storage medium
US20150294108A1 (en) Method and apparatus for managing authentication
WO2020015259A1 (en) Data backup method and terminal
CN107729781B (en) Method for preventing loss of mobile terminal, mobile terminal and computer readable storage medium
US20240005695A1 (en) Fingerprint Recognition Method and Electronic Device
CN112532885B (en) Anti-shake method and device and electronic equipment
CN107808081B (en) Reminding method and related equipment
CN113971076A (en) Task processing method and related device
CN107480998B (en) Information processing method and related product
CN114077519B (en) System service recovery method and device and electronic equipment
CN114040048A (en) Privacy protection method and electronic equipment
CN107679384B (en) Unlocking processing method and related equipment
CN113282361B (en) Window processing method and electronic equipment
CN107463819B (en) Unlocking processing method and related product
CN115129143A (en) Display method and device of screen locking interface, wearable device and storage medium
CN112764824B (en) Method, device, equipment and storage medium for triggering identity verification in application program
CN107818015B (en) System resource calling method and related equipment
CN107862191B (en) Unlocking processing method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan, Guangdong 523860

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan, Guangdong 523860

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200114