CN116704586A - Face recognition method, electronic device, storage medium and program product


Info

Publication number: CN116704586A
Application number: CN202310955462.8A
Authority: CN (China)
Prior art keywords: face, face recognition, image, detected, eye gaze
Legal status: Pending (the listed status is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 林金峰 (Lin Jinfeng), 聂大伟 (Nie Dawei)
Current assignee: Honor Device Co Ltd
Original assignee: Honor Device Co Ltd
Application filed by Honor Device Co Ltd

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00 Payment architectures, schemes or protocols
    • G06Q20/38 Payment protocols; Details thereof
    • G06Q20/40 Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401 Transaction verification
    • G06Q20/4014 Identity check for transactions
    • G06Q20/40145 Biometric identity checks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Accounting & Taxation (AREA)
  • Computer Security & Cryptography (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The embodiment of the application provides a face recognition method, an electronic device, a storage medium and a program product, relating to the technical field of image processing. The method comprises the following steps: starting eye gaze detection in response to a face recognition instruction; acquiring a face image if eye gaze is detected; and recognizing the face image to obtain a face recognition result. Compared with the prior art, in which an image is captured immediately when the user opens an APP of a specific type, acquiring the face image when eye gaze is detected allows the image acquisition moment after the user opens the APP to be judged accurately. Because the face image is acquired when eye gaze is detected, conditions such as large-angle face deflection are absent, so the acquired image contains a more complete face of the user and the accuracy of face recognition can be increased.

Description

Face recognition method, electronic device, storage medium and program product
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a face recognition method, an electronic device, a storage medium, and a program product.
Background
APPs (applications) for payment, games, shopping and the like cannot identify and manage payments by minors, because their face recognition and detection capability is insufficient.
At present, after the user opens such an APP, the detection scheme the APP adopts is to obtain a photo of the user's face in the current state immediately, provided the user has agreed to the privacy notice, and then perform face recognition based on that photo. However, the obtained photo may capture the face incompletely, at too large an angle, or not at all, so the accuracy of the subsequent face recognition based on the photo is too low.
Disclosure of Invention
In view of the above, the present application provides a face recognition method, an electronic device, a storage medium and a program product, to solve the problem that face recognition accuracy is low because the face in the obtained picture is incomplete when the shooting moment is judged inaccurately.
In a first aspect, an embodiment of the present application provides a face recognition method, where the method includes:
starting eye gaze detection in response to a face recognition instruction;
acquiring a face image if eye gaze is detected;
and recognizing the face image to obtain a face recognition result.
In one embodiment of the application, the method further comprises:
after eye gaze detection has been started for a first preset duration, if eye gaze is not detected, re-executing the step of starting eye gaze detection after an interval of a second preset duration, until eye gaze is detected.
In one embodiment of the application, the method further comprises:
acquiring an initial face image in response to the face recognition instruction;
after eye gaze detection has been started for the first preset duration, if eye gaze is not detected, recognizing the initial face image to obtain an initial face recognition result.
In one embodiment of the application, the face recognition result characterizes a first verification result and a first confidence of user identity verification;
the method further comprises the steps of:
acquiring a voice recording segment;
performing voice biometric recognition on the voice recording segment to obtain a voice recognition result, where the voice recognition result characterizes a second verification result and a second confidence of user identity verification;
and making a decision by combining the voice recognition result and the face recognition result, to obtain a decision result characterizing the user identity verification result.
In one embodiment of the application, the face recognition instruction is triggered by detecting the opening of a payment-related application.
In one embodiment of the present application, the face recognition instruction is a minor face recognition instruction, and the face recognition result characterizes whether the user identity is a minor.
In a second aspect, an embodiment of the present application provides a face recognition device, where the device includes:
a starting module, configured to start eye gaze detection in response to a face recognition instruction;
a face acquisition module, configured to acquire a face image if eye gaze is detected;
and a face recognition module, configured to recognize the face image and obtain a face recognition result.
In one embodiment of the application, the apparatus further comprises:
a detection module, configured to, when eye gaze detection has been started for the first preset duration without eye gaze being detected, re-execute the step of starting eye gaze detection after an interval of the second preset duration, until eye gaze is detected.
In one embodiment of the application, the apparatus further comprises:
an initial face acquisition module, configured to acquire an initial face image in response to the face recognition instruction;
and an initial face recognition module, configured to, if eye gaze is not detected after eye gaze detection has been started for the first preset duration, recognize the initial face image to obtain an initial face recognition result.
In one embodiment of the present application, the face recognition result characterizes a first verification result and a first confidence of user authentication;
the apparatus further comprises:
a voice acquisition module, configured to acquire a voice recording segment;
a voice recognition module, configured to perform voice biometric recognition on the voice recording segment to obtain a voice recognition result, where the voice recognition result characterizes a second verification result and a second confidence of user identity verification;
and a decision module, configured to make a decision by combining the voice recognition result and the face recognition result, to obtain a decision result characterizing the user identity verification result.
In one embodiment of the application, the face recognition instruction is triggered by detecting the opening of a payment-related application.
In one embodiment of the present application, the face recognition instruction is a minor face recognition instruction, and the face recognition result characterizes whether the user identity is a minor.
In a third aspect, an embodiment of the present application provides an electronic device comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the electronic device to perform the steps of any of the first aspects.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium, where the computer readable storage medium includes a stored program, where the program when executed controls a device in which the computer readable storage medium is located to perform the method of any one of the first aspects.
In a fifth aspect, embodiments of the present application provide a computer program product comprising executable instructions which, when executed on a computer, cause the computer to perform the method of any of the first aspects.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1a is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 1b is a block diagram of a software architecture of an electronic device according to an embodiment of the present application;
FIG. 2 is a flow chart of a prior-art minor identification method;
FIG. 3 is a schematic diagram of a face image captured in the prior art;
fig. 4 is a schematic flow chart of a face recognition method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a privacy notification provided by an embodiment of the present application;
fig. 6a is a schematic diagram of an initial face image according to an embodiment of the present application;
fig. 6b is a schematic diagram of a face image obtained when an eye gaze is detected according to an embodiment of the present application;
fig. 7 is a flow chart of a method for detecting eye gaze according to an embodiment of the present application;
fig. 8 is a flow chart of a face recognition method for minors according to an embodiment of the present application;
fig. 9 is a schematic flow chart of a method for combining face image recognition and voice recognition according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a face recognition device according to an embodiment of the present application.
Detailed Description
For a better understanding of the technical solution of the present application, the following detailed description of the embodiments of the present application refers to the accompanying drawings.
In order to clearly describe the technical solution of the embodiments of the present application, words such as "first" and "second" are used to distinguish items that are identical or similar and have substantially the same function and effect. For example, a first instruction and a second instruction merely distinguish different user instructions, with no order implied. Those skilled in the art will appreciate that "first," "second," and the like do not limit quantity or execution order, and that the items they qualify are not necessarily different.
In the present application, the words "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
The embodiment of the application can be applied to terminals with communication functions, such as mobile phones, tablet computers, personal computers (PC), personal digital assistants (PDA), smart watches, netbooks, wearable electronic devices, augmented reality (AR) devices, virtual reality (VR) devices, vehicle-mounted devices, smart cars, robots, smart glasses, smart televisions, and the like.
By way of example, fig. 1a shows a schematic diagram of the structure of a terminal 100. The terminal 100 may include a processor 110, a display 120, a camera 130, an internal memory 140, a SIM (Subscriber Identity Module) card interface 150, a USB (Universal Serial Bus) interface 160, a charge management module 170, a power management module 171, a battery 172, a sensor module 180, a mobile communication module 190, a wireless communication module 200, an antenna 1, an antenna 2, and the like. The sensor module 180 may include, among other things, a pressure sensor 180A, a fingerprint sensor 180B, a touch sensor 180C, an ambient light sensor 180D, and the like.
It should be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the terminal 100. In other embodiments of the application, terminal 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include a central processing unit (CPU), an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be separate components or may be integrated in one or more processors. In some embodiments, terminal 100 may also include one or more processors 110. The controller can generate operation control signals according to instruction opcodes and timing signals, completing the control of instruction fetching and instruction execution. In other embodiments, a memory may also be provided in the processor 110 for storing instructions and data; illustratively, this memory may be a cache. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instruction or data again, it can be called directly from this memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving the efficiency of the terminal 100 in processing data or executing instructions.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a SIM card interface, and/or a USB interface, among others. The USB interface 160 is an interface conforming to the USB standard, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 160 may be used to connect a charger to charge the terminal 100, or to transfer data between the terminal 100 and a peripheral device. The USB interface 160 may also be used to connect headphones through which audio is played.
It should be understood that the interfacing relationship between the modules illustrated in the embodiment of the present application is for illustrative purposes, and is not limited to the structure of the terminal 100. In other embodiments of the present application, the terminal 100 may also use different interfacing manners in the above embodiments, or a combination of multiple interfacing manners.
The wireless communication function of the terminal 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 190, the wireless communication module 200, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in terminal 100 may be configured to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
Terminal 100 implements display functions through a GPU, display 120, and an application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display 120 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display 120 is used to display images, videos, and the like. The display 120 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, terminal 100 may include 1 or more displays 120.
In some embodiments of the present application, when the display panel is made of OLED, AMOLED or FLED material, the display 120 shown in fig. 1a may be folded. Here, a foldable display 120 means that the display may be folded at any angle at any portion and held at that angle; for example, the display 120 may be folded in half from the middle, either left to right or top to bottom.
The display 120 of the terminal 100 may be a flexible screen, which currently attracts much attention for its unique characteristics and great potential. Compared with a traditional screen, a flexible screen is highly flexible and bendable, can offer the user new interaction modes based on this bendability, and can meet more of the user's requirements for the terminal. For a terminal equipped with a foldable display, the foldable display can be switched at any time between a small screen in the folded configuration and a large screen in the unfolded configuration. Accordingly, users also use the split-screen function more and more frequently on terminals configured with a foldable display.
The terminal 100 may implement a photographing function through an ISP, a camera 130, a video codec, a GPU, a display 120, an application processor, and the like, wherein the camera 130 includes a front camera and a rear camera.
The ISP is used to process the data fed back by the camera 130. For example, when shooting, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing, so that the electric signal is converted into an image visible to naked eyes. The ISP can carry out algorithm optimization on noise, brightness and color of the image, and can optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 130.
The camera 130 is used to take pictures or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then passed to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB (red green blue) or YUV. In some embodiments, the terminal 100 may include 1 or N cameras 130, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals; besides digital image signals, it can process other digital signals. For example, when the terminal 100 selects a frequency bin, the digital signal processor is used to Fourier-transform the frequency bin energy, etc.
Video codecs are used to compress or decompress digital video. The terminal 100 may support one or more video codecs. In this way, the terminal 100 may play or record video in a variety of encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, and MPEG-4.
The NPU is a neural-network (NN) computing processor. By borrowing from the structure of biological neural networks, for example the transfer mode between human brain neurons, it processes input information rapidly and can also learn continuously. Applications such as intelligent cognition of the terminal 100 can be implemented by the NPU, for example image recognition, face recognition, speech recognition and text understanding.
The internal memory 140 may be used to store one or more computer programs, including instructions. By executing these instructions stored in the internal memory 140, the processor 110 may cause the terminal 100 to perform the face recognition method provided in some embodiments of the present application, as well as various applications, data processing, and the like. The internal memory 140 may include a program storage area and a data storage area. The program storage area can store an operating system, and may also store one or more applications (such as gallery or contacts), etc. The data storage area may store data (such as photos and contacts) created during use of the terminal 100, etc. In addition, the internal memory 140 may include high-speed random access memory, and may also include non-volatile memory, such as one or more disk storage units, flash memory units, or universal flash storage (UFS). In some embodiments, the processor 110 may cause the terminal 100 to perform the face recognition methods provided in embodiments of the present application, as well as other applications and data processing, by executing instructions stored in the internal memory 140 and/or instructions stored in a memory provided in the processor 110.
The internal memory 140 may be used to store the programs related to the face recognition method provided in the embodiment of the present application, and the processor 110 may call these programs stored in the internal memory 140 to perform the face recognition method of the embodiment of the present application.
The sensor module 180 may include a pressure sensor 180A, a fingerprint sensor 180B, a touch sensor 180C, an ambient light sensor 180D, and the like.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 120. The pressure sensor 180A may be of various types, such as a resistive pressure sensor, an inductive pressure sensor, or a capacitive pressure sensor. The capacitive pressure sensor may be a device comprising at least two parallel plates of conductive material, the capacitance between the electrodes changing as a force is applied to the pressure sensor 180A, the terminal 100 determining the strength of the pressure based on the change in capacitance. When a touch operation is applied to the display screen 120, the terminal 100 detects the touch operation according to the pressure sensor 180A. The terminal 100 may also calculate the location of the touch based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location, but at different touch operation strengths, may correspond to different operation instructions. For example: executing an instruction for checking the short message when the touch operation with the touch operation intensity smaller than the first pressure threshold acts on the short message application icon; and executing the instruction of newly creating the short message when the touch operation with the touch operation intensity being larger than or equal to the first pressure threshold acts on the short message application icon.
The fingerprint sensor 180B is used to collect a fingerprint. The terminal 100 can utilize the collected fingerprint characteristics to realize the functions of unlocking, accessing an application lock, shooting and receiving an incoming call, and the like.
The touch sensor 180C is also referred to as a touch device. The touch sensor 180C may be disposed on the display screen 120; together, the touch sensor 180C and the display screen 120 form what is commonly called a touch screen. The touch sensor 180C is used to detect a touch operation acting on or near it. The touch sensor 180C may pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display screen 120. In other embodiments, the touch sensor 180C may also be disposed on the surface of the terminal 100, at a location different from that of the display 120.
The ambient light sensor 180D is used to sense ambient light level. The terminal 100 may adaptively adjust the brightness of the display 120 according to the perceived ambient light level. The ambient light sensor 180D may also be used to automatically adjust white balance at the time of photographing. Ambient light sensor 180D may also communicate the ambient information in which the device is located to the GPU.
The ambient light sensor 180D is also used to acquire the brightness, light ratio, color temperature and other conditions of the environment in which the camera 130 captures an image.
Taking a smart phone as an example of the electronic device, the face recognition method in the embodiment of the application can be implemented through the smart phone system architecture shown in fig. 1b. Referring to fig. 1b, the architecture comprises a kernel part, a framework layer and an application layer. The kernel part comprises a driver layer and a real-time operating system; the driver layer includes the GPU (graphics processor), the display driver (specifically an LCD driver in the figure), the TP driver (touch screen driver), keys, and the like; the real-time operating system covers interrupt management, task scheduling and MEM (memory management). The framework layer includes system basic capabilities, underlying software services, hardware service capabilities, and the like. The application layer includes shooting applications, display applications, system applications, communication applications, and the like.
At present, in order to prevent minors from recharging games and tipping live streams, and to reduce the losses that payment-type APPs suffer from underage payments, user identification is required when a payment-type APP is started. When the user is found to be a minor, the payment-type APP triggers a specific operation to protect the security of the account's funds.
The existing minor recognition scheme is shown in fig. 2, which is a flow chart of a prior-art minor identification method. In step S201, after it is detected that the user opens a payment-type APP, step S202 is executed to acquire a face image, and step S203 is executed to start the minor identification algorithm and make the minor judgment. However, just after the payment APP is opened, the portrait captured by the camera may show an incomplete face, a skewed angle, or no face at all, as shown in fig. 3, which is a schematic diagram of a face image captured in the prior art. With such an image it is difficult to recognize the user's face, which limits use on the business side.
In order to solve the above technical problems, the present application provides a face recognition method, as shown in fig. 4, fig. 4 is a schematic flow chart of the face recognition method provided in the embodiment of the present application, and the method is applied to a terminal device. Specifically, the terminal device may be an electronic device with a camera function, such as a personal computer, a mobile phone, a tablet computer, a smart watch, a vehicle-mounted device, and the like, which is not limited in the embodiment of the present application. The face recognition method provided by the embodiment of the application comprises the following steps:
in step S401, eye gaze detection is initiated in response to a face recognition instruction.
In this embodiment, the face recognition instruction is triggered by detecting the opening of a payment-related application. That is, the face recognition instruction can be understood as an instruction triggered when the user is detected opening a specific type of APP or Activity.
The embodiment of the application does not limit the event that triggers the face recognition instruction. For example, the user may interact with the electronic device and actively issue a face recognition instruction to instruct the electronic device to perform face recognition. Alternatively, the face recognition instruction may be an instruction triggered when the electronic device detects that the user opens a specific type of APP or Activity (application component).
For example, when a user opens an APP or Activity component associated with payment, this indicates that the user may intend to pay. Considering that in practice the user usually looks at the screen during the payment process, the user's eye gaze can be detected in order to select a more appropriate photographing moment; that is, eye gaze detection is started in response to the face recognition instruction.
Specifically, the APP or Activity component related to payment may be a payment, game, shopping, or other type of APP or Activity component.
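For illustration only, the sketch below shows the shape of such a trigger; the PAYMENT_APPS set and the on_app_opened hook are hypothetical stand-ins, since the patent does not prescribe a concrete platform mechanism.

    # Hypothetical sketch: issue the face recognition instruction when a
    # payment-related APP comes to the foreground. All names are assumptions.
    PAYMENT_APPS = {"com.example.pay", "com.example.game", "com.example.shop"}

    def on_app_opened(package_name: str, start_face_recognition) -> None:
        # Opening a payment-type APP acts as the face recognition instruction,
        # which in turn starts eye gaze detection (step S401).
        if package_name in PAYMENT_APPS:
            start_face_recognition()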
Eye gaze detection determines the gaze point of the user's eyes by tracking and analyzing eye movement, with the purpose of detecting whether the user is gazing at the screen of the terminal device or at its camera area. The specific eye gaze detection method is not limited in the embodiment of the present application.
As one example, whether the user is looking at the screen or the camera area of the terminal device may be detected by a trained neural network model. In the process of training the neural network model, sample images containing the user's face are analyzed and processed; the relative coordinates of the left pupil within the left eye, the relative coordinates of the right pupil within the right eye, the distances from the two eyes to the nose, and the like can be used as input data of the model, and the label of each sample image is whether the user is looking at the screen or the camera area of the terminal device. The trained neural network model can then be used to judge whether a user is looking at the screen or the camera area of the terminal device.
As another example, a face image may be collected and the user's pupil position and head pose analyzed from it; combined with the camera's calibration parameters, it can then be calculated whether the user is looking at the screen or the camera area of the terminal device.
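For illustration only (the patent does not disclose a concrete model), the following is a minimal sketch of the first example above: a tiny binary gaze classifier over the hand-crafted geometric features named in the text. The feature layout, network size and training loop are assumptions chosen for brevity.

    # Illustrative sketch, not the patented implementation: a tiny binary
    # "gaze / non-gaze" classifier over geometric features such as pupil
    # coordinates relative to each eye and the eye-to-nose distances.
    import torch
    import torch.nn as nn

    class GazeClassifier(nn.Module):
        def __init__(self, num_features: int = 6):
            super().__init__()
            # Assumed 6 features: (left pupil x, y) within the left eye,
            # (right pupil x, y) within the right eye, two eye-to-nose distances.
            self.net = nn.Sequential(
                nn.Linear(num_features, 16),
                nn.ReLU(),
                nn.Linear(16, 1),  # logit for "user is gazing at the screen"
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)

    model = GazeClassifier()
    loss_fn = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Dummy batch standing in for features extracted from labelled sample images.
    features = torch.rand(32, 6)                   # normalised geometric features
    labels = torch.randint(0, 2, (32, 1)).float()  # 1 = gazing, 0 = not gazing

    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()

    # At inference time, a probability above a threshold is reported as "gaze".
    is_gazing = torch.sigmoid(model(features[:1])) > 0.5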
In step S402, if the eye gaze is detected, a face image is acquired.
It will be appreciated by those skilled in the art that when eye gaze is detected, the user's face is directed towards the terminal device without large angular deflection, such as the user lowering their head. Therefore, the face image the terminal device acquires upon detecting eye gaze is an image containing a relatively complete face.
For example, as shown in fig. 6b, which is a schematic view of the face image acquired when eye gaze is detected, an image with a more complete face can be understood as an image in which the user's facial features and outline are clearer.
Step S403, the face image is identified, and a face identification result is obtained.
In this embodiment, face recognition may be triggered immediately after the face image is acquired. Alternatively, the face image may be stored after acquisition and recognized once a face recognition instruction is received.
Because the face image is acquired when eye gaze is detected, the face in the image is complete, so face recognition can be performed based on this image.
In this embodiment, face recognition may refer to feature extraction and matching of the face image, so as to recognize and judge the user's face. Specifically, the face recognition step may proceed as follows. First, face detection: the face region is detected in the acquired face image. Then, feature extraction: the face image is converted into a digitized feature vector, to facilitate subsequent comparison and recognition. Finally, identity authentication: the extracted feature vector is compared with the data in a pre-stored face feature library to determine the identity information of the face. The identity information may include the person's sex, age, and so on. Common face recognition techniques include methods based on statistics and machine learning, convolutional neural networks based on deep learning, and the like.
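As an illustration of this detect, extract, authenticate pipeline (not the patented implementation), the sketch below uses the open-source face_recognition library as one possible realization; the library choice and the 0.6 distance threshold are assumptions.

    # Illustrative sketch of the detect -> extract -> authenticate pipeline.
    import face_recognition
    import numpy as np

    def recognize(image_path, feature_library, threshold=0.6):
        image = face_recognition.load_image_file(image_path)
        # 1) Face detection: locate the face region in the acquired image.
        locations = face_recognition.face_locations(image)
        if not locations:
            return None
        # 2) Feature extraction: convert the face into a digitized vector.
        encoding = face_recognition.face_encodings(image, locations)[0]
        # 3) Identity authentication: compare against the pre-stored library.
        names = list(feature_library)
        distances = face_recognition.face_distance(
            [feature_library[n] for n in names], encoding)
        best = int(np.argmin(distances))
        return names[best] if distances[best] < threshold else None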
In one embodiment of the present application, the face recognition result corresponds to the face recognition instruction. For example, if the face recognition instruction is a minor face recognition instruction, the face recognition result is accordingly a judgment of whether the user is a minor. If the face recognition instruction is a gender face recognition instruction, the corresponding face recognition result is a judgment of the user's gender.
In the embodiment of the application, eye gaze detection is started in response to the face recognition instruction, and the face image used for face recognition is an image acquired when eye gaze is detected, so the image contains a relatively complete face. The face image is recognized to obtain a face recognition result. Compared with the prior art, in which an image is captured immediately when the user opens an APP of the specific type, acquiring the face image when eye gaze is detected allows the image acquisition moment after the user opens the APP to be judged accurately. Because the face image is acquired when eye gaze is detected, the face in the image is not deflected at a large angle, so the face image acquired at this moment contains a more complete face of the user, and the accuracy of face recognition can be increased.
To avoid the case where, after eye gaze detection is started, no eye gaze is detected within a short time and the subsequent recognition step cannot proceed, an initial face image may first be acquired in response to the face recognition instruction. Specifically, the initial face image may be a face image captured immediately after the opening of the specific type of APP is detected. If eye gaze is not detected within a short period, face recognition can be performed based on the initial face image to obtain a face recognition result. Based on this, the present embodiment provides two schemes, each further described below.
Scheme 1: in this embodiment, a first preset duration may be set; if eye gaze has not been detected once eye gaze detection has run for the first preset duration, the initial face image is recognized to obtain an initial face recognition result.
For example, with the first preset duration set to 3 s: if no eye gaze has been detected 3 s after eye gaze detection is started, the initial face image can be recognized to obtain an initial face recognition result.
It can be understood that if eye gaze is detected at any moment within the first preset duration after eye gaze detection is started, detection can stop, a new face image is acquired, and subsequent face recognition is performed based on that face image.
Again taking a first preset duration of 3 s as an example: if eye gaze is detected at the 1st second, detection stops, the face image at the 1st second is acquired, and subsequent face recognition is performed. Similarly, if eye gaze is detected at the 2nd second, detection stops, the face image at the 2nd second is acquired, and subsequent face recognition is performed.
Scheme 2: after eye gaze detection has run for the first preset duration, if eye gaze is not detected, the step of starting eye gaze detection is re-executed after an interval of a second preset duration, until eye gaze is detected.
For example, with a first preset duration of 4 s and a second preset duration of 1 s: if no eye gaze is detected within 4 s of starting eye gaze detection, eye gaze detection is restarted after an interval of 1 s. Similarly, if eye gaze is detected at any moment within the 4 s after restarting, detection stops, a new face image is acquired, and subsequent face recognition is performed. If no eye gaze is detected, another interval of 1 s passes, and eye gaze detection is started again, until eye gaze is detected.
In addition, a relatively long third preset duration can be set. While scheme 2 is being repeated, if the total elapsed time exceeds the third preset duration and eye gaze has still not been detected, the initial face image can be recognized to obtain an initial face recognition result.
In this embodiment, the first, second and third preset durations may be set according to actual requirements. It can be understood that the third preset duration should be greater than the sum of the first and second preset durations. For example, the first preset duration may be set to 3 s, the second preset duration to 2 s, and the third preset duration to 20 s.
In addition, in the embodiment of the application, if the user exits the payment APP during eye gaze detection, eye gaze detection is stopped and the initial face image is used directly for recognition.
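A minimal sketch of this timing logic (schemes 1 and 2 plus the fallback) is given below; the helper functions are hypothetical stand-ins for the terminal's gaze-detection and camera services, and the durations are the example values above.

    # Sketch of the retry timing: run gaze detection for FIRST_PRESET seconds,
    # pause SECOND_PRESET seconds between rounds, and fall back to the cached
    # initial image once THIRD_PRESET seconds have elapsed overall.
    import time

    FIRST_PRESET = 3.0    # one round of gaze detection (scheme 1)
    SECOND_PRESET = 2.0   # pause between rounds (scheme 2)
    THIRD_PRESET = 20.0   # overall budget before the fallback

    def detect_gaze_once() -> bool:
        """Hypothetical probe: True when the gaze algorithm reports gaze."""
        return False

    def capture_image():
        """Hypothetical camera call returning a freshly captured frame."""
        return "new_photo"

    def acquire_face_image(initial_image):
        start = time.monotonic()
        while time.monotonic() - start < THIRD_PRESET:
            round_start = time.monotonic()
            while time.monotonic() - round_start < FIRST_PRESET:
                if detect_gaze_once():
                    return capture_image()   # gaze seen: shoot a new photo
            time.sleep(SECOND_PRESET)        # no gaze this round: wait, retry
        return initial_image                 # fall back to the cached image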
In the following, the face recognition method provided by the embodiment of the application is described in detail through an exemplary specific scenario, taking a mobile phone as the terminal device used by the user.
Scenario 1: the user opens the payment-type APP for the first time.
1) After the user opens the payment APP for the first time, a privacy notice pop-up box appears, and the flow waits for the user to agree to the privacy notice, as shown in fig. 5, which is a schematic diagram of a privacy notice according to an embodiment of the present application. If the user taps agree, go to step 2); if the user is not opening the payment-type APP for the first time, go directly to step 2). If the user taps reject, the payment flow exits.
In the embodiment of the application, use of the payment APP in the mobile phone is divided into first use and non-first use. For example, the user can be considered a first-time user whenever the payment APP is opened for the first time after being downloaded, or after the user upgrades the payment APP or the mobile phone system.
2) When the user opens the payment APP for the first time and agrees to the privacy notice, or when the user is not opening the payment APP for the first time, the mobile phone camera captures an initial image, as shown in fig. 6a, which is a schematic diagram of an initial face image provided by the embodiment of the present application; at the same time, the mobile phone starts eye gaze detection in response to the face recognition instruction.
3) When eye gaze is detected, the mobile phone camera shoots a new image, as shown in fig. 6b, which is a schematic diagram of the face image acquired when eye gaze is detected; this face image has a relatively complete face.
4) In response to the face recognition instruction, the new image is recognized to obtain a face recognition result.
In this embodiment, fig. 6a shows a photograph captured immediately after the user opens the payment APP; if this photograph were used directly for face recognition, the result would not be accurate enough. Fig. 6b shows the face image acquired when eye gaze is detected: this photograph exposes the full front of the user's face, and recognition based on it is more accurate. As the comparison of fig. 6a and 6b shows, the completeness of the face differs between images captured at different moments. By starting eye gaze detection and acquiring the face image when eye gaze is detected, the moment at which the terminal device takes the photo is more appropriate, the acquired face image contains a more complete face of the user, and the accuracy of face recognition is improved.
In order to further explain the specific steps of eye gaze detection, an embodiment of the present application provides an eye gaze detection method. As shown in fig. 7, the flow chart of the eye gaze detection method provided in the embodiment of the present application includes:
Step S701, an eye gaze fence is registered.
The eye gaze detection module is initialized, and the first and second preset durations are set. The first preset duration is how long eye gaze detection runs once started; the second preset duration is the interval between the end of one round of eye gaze detection and the start of the next.
Step S702, initializing the camera and loading a gaze detection algorithm.
The camera of the terminal device is initialized and the gaze detection algorithm is loaded. Specifically, the gaze detection algorithm may calculate the eye positions and the gaze point in the video or images captured by the terminal device's camera, so as to determine whether the user's eyes are gazing at the screen.
Step S703, determine whether there is a face.
In this embodiment, detecting the eyes on the basis of an already-detected face makes it easier to locate the position of the user's eyes. If a face is detected, step S704 is performed: determine whether there is a human eye. If no face is detected, step S706 is performed: report non-gaze.
Step S704, determine whether there is a human eye.
Because the face image acquired upon detecting eye gaze is an image with a relatively complete face, which is more convenient for face recognition, it is necessary to judge whether eye gaze is detected.
Step S705, gaze is reported.
Reporting eye gaze indicates that eye gaze was detected within the first preset duration; the face image at this moment is acquired for subsequent face recognition.
Step S706, non-gaze is reported.
Reporting non-gaze indicates that eye gaze was not detected within the first preset duration; after an interval of the second preset duration, eye gaze detection is started again.
In this embodiment, by further locating the position of the user's eyes once a face has been found, whether the user is looking at the screen can be detected more quickly and accurately. If eye gaze is not detected, non-gaze is reported and the flow returns to the face detection step. If eye gaze is detected, gaze is reported, so that a face image is acquired while eye gaze is detected and more accurate face recognition is performed based on that image.
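For illustration, the per-frame decision of fig. 7 can be sketched as below; the three helper predicates are hypothetical stand-ins for the loaded gaze detection algorithm.

    # Sketch of steps S703-S706: report "gaze" only when a face is found,
    # eyes are found within it, and the eyes are judged to be on the screen.
    def find_face(frame):
        """Hypothetical face detector; returns a face region or None."""
        return None

    def find_eyes(face):
        """Hypothetical eye locator; returns eye landmarks or None."""
        return None

    def eyes_on_screen(eyes) -> bool:
        """Hypothetical gaze predicate from the loaded algorithm."""
        return False

    def process_frame(frame) -> str:
        face = find_face(frame)
        if face is None:
            return "non-gaze"                # S706: no face
        eyes = find_eyes(face)
        if eyes is None:
            return "non-gaze"                # S706: face but no eyes
        return "gaze" if eyes_on_screen(eyes) else "non-gaze"  # S705 / S706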
In order to further explain the flow of minor detection, the embodiment of the application provides a minor face recognition method. As shown in fig. 8, the flow chart of the minor face recognition method provided by the embodiment of the application includes:
Step S801, a minor identification fence is registered.
The business side registers the minor identification fence.
Step S802, detect whether the payment-type APP is opened.
Whether the user opens the payment-type APP is detected.
Step S803, the payment-type APP is opened.
In this embodiment, the user opens the payment APP, which triggers the face recognition instruction.
In step S804, photographing is started.
In response to the face recognition instruction, the terminal device starts the camera to take a photo.
Step S805, the photo is cached.
After photographing is started and the terminal device has taken a photo, the photo is cached as the initial face image.
In step S806, eye gaze detection is started for 3 s.
In this embodiment, the first preset duration for eye gaze detection is set to 3 s, and whether the user's eye gaze is detected is judged continuously within the 3 s.
In step S807, a user gaze is detected.
If the user's gaze is detected through gaze detection, information of the detected user's gaze is sent to the perception layer.
Step S808, a gaze is detected and a new photograph is acquired.
If eye gaze is detected at any time within the 3 s, the camera is started to take a new photo.
Step S809, the photograph is returned.
The photograph taken in step S808 is used as the face image, and face recognition is performed based on the minor-recognition algorithm through AiPlugin (Artificial Intelligence Plugin). For example, face recognition can be performed by comparison and analysis against the known minor face models in the minor identification fence registered in step S801.
Step S810, the face minors recognition result is returned.
After AiPlugin obtains the face recognition result, it sends the result to the perception layer.
In step S811, no gaze is detected and the cached photo is used.
If no user gaze has been detected within 3 s of starting eye gaze detection in step S806, face recognition is performed using the cached initial face image.
Step S812, the face minors recognition result is returned.
Face recognition is performed based on the initial face image through AiPlugin, and the initial face recognition result is sent to the perception layer.
Step S813, the face detection result is returned.
The face detection result is returned to the business side. If the result is that the user is a minor, the payment action is stopped to avoid loss of funds. If the result is that the user is judged not to be a minor, the payment action can continue.
In this embodiment, the face recognition instruction is a minor face recognition instruction, and the face recognition result characterizes whether the user is a minor. By starting eye gaze detection, the photograph used for face recognition is one taken when eye gaze is detected, so the photo is a face image with a relatively complete face. The face image is recognized to obtain the face recognition result; the moment of image acquisition after the user opens the payment APP can thus be judged accurately, the acquired face image contains a more complete face of the user, and the accuracy of minor face recognition can be increased. In addition, an initial image is captured as a fallback when the user opens the payment-type APP: if no gaze is detected later, minor face recognition can still be performed using the initial image, so the subsequent recognition flow is never blocked by a failure to ever detect eye gaze.
In the embodiment of the application, in order to make the face recognition result more accurate, the terminal may further identify the user by combining sound, fingerprint, APP usage, sensors, and the like. Taking sound as an example, a voice recording segment can be acquired and voice biometric recognition performed on it to obtain a voice recognition result, where the voice recognition result characterizes a second verification result and a second confidence of user identity verification. A decision is then made by combining the voice recognition result and the face recognition result (which characterizes the first verification result and first confidence of user identity verification), to obtain a decision result characterizing the user identity verification result. The process of identifying the user by combining the face image and sound is further described below. As shown in fig. 9, the flow chart of the method combining face image recognition and voice recognition according to an embodiment of the present application includes:
step S901, a minor identification fence is registered.
The business side registers the minor identification fence, which may include known minor face models.
Step S902, detecting whether to open the payment class APP.
The perception layer detects whether a user starts the payment APP.
Step S903, open the payment class APP.
After the user opens the payment APP, the face recognition instruction is triggered.
Step S904, human eye gaze detection.
The perception layer of the terminal device starts eye gaze detection in response to the face recognition instruction.
Step S905, gaze is detected continuously for 3 s.
The first preset duration for eye gaze detection is set to 3 s, and whether the user's eye gaze is detected is judged continuously within the 3 s.
Step S906, a face is photographed and an age is recognized.
Specifically, if the user's eye gaze is detected, a photo is taken to obtain a face image, and the age of the face in the image is recognized through AiPlugin based on the minor-recognition algorithm.
Step S907, the recognition result is returned.
Specifically, after AiPlugin obtains the face recognition result, it sends the result to the decision model, where the face recognition result characterizes a first verification result and a first confidence of user identity verification.
Step S908, the face detection result is returned.
The face detection result is returned to the business side.
Step S909, query whether the user is currently a minor.
Based on the face detection result, whether the user is currently a minor is queried.
Step S910, initiate recording.
In order to make the recognition result more accurate, after the face recognition result based on the face image is obtained, an Audio task is triggered: recording is started for the user to obtain a voice recording segment, which is used to further identify the user. In this embodiment, noise removal, background-interference reduction and the like may be performed on the voice recording segment to improve the accuracy of subsequent processing. The content of the voice recording segment may be a phrase or sentence that the user is asked to speak.
Step S911, a recording result is returned.
Voice biometric recognition is performed on the voice recording segment to obtain a voice recognition result. Voice biometric recognition is a technique that verifies or identifies the recorded person by analyzing and comparing an individual's voice characteristics. The voice recognition result characterizes a second verification result and a second confidence of user identity verification.
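For illustration (the patent does not disclose a concrete voice model), a sketch of such a verification follows; extract_voice_embedding and the 0.75 threshold are assumptions.

    # Sketch of voice biometric verification: compare a speaker embedding of
    # the recording against an enrolled embedding by cosine similarity.
    import numpy as np

    def extract_voice_embedding(recording: np.ndarray) -> np.ndarray:
        """Hypothetical stand-in for a real speaker-recognition model."""
        return np.ones(128)

    def verify_voice(recording, enrolled, threshold=0.75):
        emb = extract_voice_embedding(recording)
        sim = float(np.dot(emb, enrolled)
                    / (np.linalg.norm(emb) * np.linalg.norm(enrolled)))
        # Returns the second verification result and the second confidence.
        return sim >= threshold, sim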
Step S912, obtain a fusion decision result.
A decision is made by combining the voice recognition result and the face recognition result; that is, the second verification result and second confidence coefficient of user identity verification are combined with the first verification result and first confidence coefficient, so as to obtain a decision result of user identity verification.
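The combination operation itself is not fixed by the application, so the Python sketch below shows only one plausible fusion rule: when the two modalities agree, report their mean confidence; when they disagree, follow the more confident modality. Other rules, such as confidence-weighted voting, would also satisfy the description.

```python
# One assumed fusion rule over (result, confidence) pairs.
def fuse_decisions(face_result, face_conf, voice_result, voice_conf):
    if face_result == voice_result:
        return face_result, (face_conf + voice_conf) / 2
    # disagreement: trust the more confident modality
    if face_conf >= voice_conf:
        return face_result, face_conf
    return voice_result, voice_conf
```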
Step S913, return the minor recognition result.
By combining the image-based face recognition result with the voice recognition result based on the voice recording segment, a judgment is made as to whether the user is a minor.
In this embodiment, the face image is acquired for face recognition only after eye gaze is detected; after the face recognition result based on the face image is obtained, the user's identity is further verified by acquiring a voice recording segment, and whether the user is a minor is detected by combining face image recognition with recording recognition, which can further improve the accuracy of user face recognition.
Based on the same inventive concept, an embodiment of the present application correspondingly provides a face recognition device. As shown in fig. 10, fig. 10 is a schematic structural diagram of the face recognition device provided in an embodiment of the present application, including:
a starting module 1001, configured to start eye gaze detection in response to a face recognition instruction;
a face acquisition module 1002, configured to acquire a face image if eye gaze is detected;
and a face recognition module 1003, configured to recognize the face image and obtain a face recognition result.
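Purely as a structural sketch, the device of fig. 10 could be organized as below; the three constructor arguments stand in for whatever gaze detector, camera, and recognizer the terminal supplies, none of which are detailed in the application.

```python
# Assumed skeleton mirroring modules 1001-1003 of fig. 10.
class FaceRecognitionDevice:
    def __init__(self, gaze_detector, camera, recognizer):
        self.gaze_detector = gaze_detector  # starting module 1001
        self.camera = camera                # face acquisition module 1002
        self.recognizer = recognizer        # face recognition module 1003

    def handle_instruction(self):
        """Run gaze detection, then capture and recognize on success."""
        if self.gaze_detector.wait_for_gaze():
            image = self.camera.capture()
            return self.recognizer.recognize(image)
        return None
```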
Compared with the prior art, in which an image is captured immediately when the user starts an APP of the specific type, acquiring the face image only when eye gaze is detected allows the acquisition moment after the APP is started to be judged accurately. Because the face image is obtained at the moment of eye gaze, conditions such as large-angle face deflection are avoided, so the acquired image contains a more complete view of the user's face, which increases the accuracy of face recognition.
In a specific implementation, the present application further provides a computer storage medium, where the computer storage medium may store a program, and when the program runs, the device where the computer readable storage medium is located is controlled to execute some or all of the steps in the foregoing embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
In a specific implementation, an embodiment of the present application further provides a computer program product, where the computer program product includes executable instructions that, when executed on a computer, cause the computer to perform some or all of the steps in the method embodiments described above.
Embodiments of the disclosed mechanisms may be implemented in hardware, software, firmware, or a combination of these implementations. Embodiments of the application may be implemented as a computer program or program code that is executed on a programmable system comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices in a known manner. For the purposes of this application, a processing system includes any system having a processor such as, for example, a digital signal processor (Digital Signal Processor, DSP), microcontroller, application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or microprocessor.
The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. Program code may also be implemented in assembly or machine language, if desired. Indeed, the mechanisms described in the present application are not limited in scope by any particular programming language. In either case, the language may be a compiled or interpreted language.
In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. For example, the instructions may be distributed over a network or through other computer readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or tangible machine-readable memory used to transmit information over the Internet in an electrical, optical, acoustical, or other form of propagated signal (e.g., carrier waves, infrared signals, digital signals, etc.). Thus, a machine-readable medium includes any type of machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
In the drawings, some structural or methodological features may be shown in a particular arrangement and/or order. However, it should be understood that such a particular arrangement and/or ordering may not be required. Rather, in some embodiments, these features may be arranged in a different manner and/or order than shown in the drawings of the specification. Additionally, the inclusion of structural or methodological features in a particular figure is not meant to imply that such features are required in all embodiments; in some embodiments, these features may not be included or may be combined with other features.
It should be noted that, in the embodiments of the present application, each unit/module mentioned in each device is a logic unit/module. In physical terms, one logic unit/module may be one physical unit/module, may be a part of one physical unit/module, or may be implemented by a combination of multiple physical units/modules; the physical implementation of the logic unit/module itself is not the most important aspect, and the combination of functions implemented by these logic units/modules is the key to solving the technical problem posed by the present application. Furthermore, in order to highlight the innovative part of the present application, the above device embodiments do not introduce units/modules that are less closely related to solving the technical problem posed by the present application; this does not indicate that the above device embodiments contain no other units/modules.
It should be noted that in the examples and descriptions of this patent, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
While the application has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the application.

Claims (10)

1. A method of face recognition, the method comprising:
in response to a face recognition instruction, starting eye gaze detection;
if eye gaze is detected, acquiring a face image;
and recognizing the face image to obtain a face recognition result.
2. The method according to claim 1, wherein the method further comprises:
and after eye gaze detection has been started for a first preset duration, if eye gaze is not detected, re-executing the step of starting eye gaze detection after an interval of a second preset duration, until eye gaze is detected.
3. The method according to claim 1, wherein the method further comprises:
in response to the face recognition instruction, acquiring an initial face image;
and after eye gaze detection has been started for the first preset duration, if eye gaze is not detected, recognizing the initial face image to obtain an initial face recognition result.
4. The method according to claim 1, wherein
the face recognition result characterizes a first verification result and a first confidence coefficient of user identity verification;
and the method further comprises:
acquiring a voice recording segment;
performing voice biometric recognition on the voice recording segment to obtain a voice recognition result, wherein the voice recognition result characterizes a second verification result and a second confidence coefficient of user identity verification;
and making a decision by combining the voice recognition result and the face recognition result, to obtain a decision result characterizing the user identity verification result.
5. The method according to claim 1, wherein
the face recognition instruction is triggered by detecting that a payment-related application program is opened.
6. The method according to claim 1, wherein the face recognition instruction is a minor face recognition instruction, and the face recognition result characterizes whether the user is a minor.
7. A face recognition device, the device comprising:
a starting module, configured to start eye gaze detection in response to a face recognition instruction;
a face acquisition module, configured to acquire a face image if eye gaze is detected;
and a face recognition module, configured to recognize the face image and obtain a face recognition result.
8. An electronic device comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the electronic device to perform the method of any one of claims 1-6.
9. A computer readable storage medium, characterized in that the computer readable storage medium comprises a stored program, wherein the program, when run, controls a device in which the computer readable storage medium is located to perform the method of any one of claims 1-6.
10. A computer program product comprising executable instructions which, when executed on a computer, cause the computer to perform the method of any of claims 1-6.
CN202310955462.8A 2023-08-01 2023-08-01 Face recognition method, electronic device, storage medium and program product Pending CN116704586A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310955462.8A CN116704586A (en) 2023-08-01 2023-08-01 Face recognition method, electronic device, storage medium and program product

Publications (1)

Publication Number Publication Date
CN116704586A true CN116704586A (en) 2023-09-05

Family

ID=87827979

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310955462.8A Pending CN116704586A (en) 2023-08-01 2023-08-01 Face recognition method, electronic device, storage medium and program product

Country Status (1)

Country Link
CN (1) CN116704586A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111104818A (en) * 2018-10-25 2020-05-05 珠海格力电器股份有限公司 Face recognition method and face recognition equipment
CN112308568A (en) * 2020-11-18 2021-02-02 支付宝(杭州)信息技术有限公司 Payment method, payment device, storage medium and computer equipment
CN113349460A (en) * 2021-05-26 2021-09-07 深圳麦克韦尔科技有限公司 Sound detection subassembly and electron atomizing device
CN114331457A (en) * 2021-12-31 2022-04-12 深圳市商汤科技有限公司 Payment method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US20200349551A1 (en) Code scanning method, code scanning device and mobile terminal
CN111782102B (en) Window display method and related device
JP7110412B2 (en) LIFE DETECTION METHOD AND DEVICE, ELECTRONIC DEVICE, AND STORAGE MEDIUM
WO2019080797A1 (en) Living body detection method, terminal, and storage medium
CN109635542B (en) Biological identification interaction method, graphical interaction interface and related device
WO2021219095A1 (en) Living body detection method, and related device
CN111242273B (en) Neural network model training method and electronic equipment
CN108513069B (en) Image processing method, image processing device, storage medium and electronic equipment
WO2021218695A1 (en) Monocular camera-based liveness detection method, device, and readable storage medium
CN113723144A (en) Face watching unlocking method and electronic equipment
CN107977636B (en) Face detection method and device, terminal and storage medium
CN115291724A (en) Man-machine interaction method and device, storage medium and electronic equipment
KR20200144196A (en) Electronic device and method for providing function using corneal image thereof
CN113031813A (en) Instruction information acquisition method and device, readable storage medium and electronic equipment
CN116704586A (en) Face recognition method, electronic device, storage medium and program product
CN111557007A (en) Method for detecting opening and closing states of eyes and electronic equipment
CN115546248A (en) Event data processing method, device and system
CN114255505A (en) Eyeball tracking processing method and related device
CN115988339B (en) Image processing method, electronic device, storage medium, and program product
CN117689611B (en) Quality prediction network model generation method, image processing method and electronic equipment
CN113676670B (en) Photographing method, electronic device, chip system and storage medium
CN116661630B (en) Detection method and electronic equipment
CN115079822B (en) Alternate gesture interaction method and device, electronic chip and electronic equipment
CN114296818B (en) Automatic application starting method, equipment terminal and storage medium
CN117615440B (en) Mode switching method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination