CN110784592A - Biological identification method and electronic equipment - Google Patents


Info

Publication number
CN110784592A
Authority
CN
China
Prior art keywords
camera
electronic device
display screen
user
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910936454.2A
Other languages
Chinese (zh)
Inventor
程国梁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority application: CN201910936454.2A
Publication: CN110784592A
PCT application: PCT/CN2020/115532 (published as WO2021057571A1)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/66 Substation equipment, e.g. for use by subscribers, with means for preventing unauthorised or fraudulent calling
    • H04M 1/667 Preventing unauthorised calls from a telephone set
    • H04M 1/67 Preventing unauthorised calls from a telephone set by electronic means
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M 1/72463 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions to restrict the functionality of the device

Abstract

The application discloses a biometric identification method applied to an electronic device configured with a first camera and a first display screen, where the shooting direction of the first camera faces a first direction, the display surface of the first display screen faces a second direction, and the first direction and the second direction are different. The method includes: the electronic device displays a first interface on the first display screen; in response to a received first instruction, the electronic device collects face information of a first user through the first camera; and when the face information of the first user matches a stored face information template, the electronic device displays a second interface on the first display screen. In this way, a helper can use the rear camera to assist the user with face unlocking, which eases the helper's operation, reduces misoperation, and saves the operating resources of the electronic device.

Description

Biological identification method and electronic equipment
Technical Field
The present application relates to the field of electronic technologies, and in particular, to a biometric identification method and an electronic device.
Background
Most existing mobile phones are equipped with both a front camera and a rear camera, yet only the front camera is used for face enrollment and face recognition. The rear camera usually has higher resolution and better imaging components, so leaving it unused for these purposes is a waste.
In the prior art, face enrollment on a mobile phone uses the front camera to record face information, and face recognition likewise uses the front camera to collect face information and authenticate against the recorded data. However, during enrollment, if the operator is an elderly person or a user unfamiliar with operating a smartphone, another person may need to assist. Because the display screen and the front camera must both face the person being enrolled, the helper cannot see the prompts and images on the display screen, which makes the operation difficult, easily causes misoperation, and wastes both time and the phone's operating resources. Similarly, when a user wants to unlock the phone by face recognition but it is inconvenient to pick the phone up, using the front camera easily leads to misoperation and again wastes time and the phone's operating resources.
Therefore, how to avoid wasting the operating resources of an electronic device because of such difficult operation is a technical problem being studied by those skilled in the art.
Disclosure of Invention
The embodiments of the present application provide a biometric identification method and an electronic device, which simplify the operation by which an operator triggers face recognition and save the operating resources of the electronic device.
In a first aspect, the present application provides a biometric identification method applied to an electronic device configured with a first camera and a first display screen, where the shooting direction of the first camera faces a first direction, the display surface of the first display screen faces a second direction, and the first direction and the second direction are different. The method includes: the electronic device displays a first interface on the first display screen; in response to a received first instruction, the electronic device collects face information of a first user through the first camera; and when the face information of the first user matches a stored face information template, the electronic device displays a second interface on the first display screen.
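The first-aspect flow can be sketched as a small state machine. This is a minimal illustration, not the patented implementation: the class and method names, the similarity metric, and the match threshold are all hypothetical.

```python
MATCH_THRESHOLD = 0.8  # assumed similarity cutoff; the patent specifies none

class BiometricUnlock:
    def __init__(self, template_store):
        self.template_store = template_store   # stored face information templates
        self.current_interface = "first"       # e.g. the lock screen

    def on_first_instruction(self, rear_camera):
        """Collect face information through the first (rear-facing) camera."""
        face_info = rear_camera.capture_face()
        if face_info is None:
            return self.current_interface       # nothing captured; interface unchanged
        if self.matches_template(face_info):
            self.current_interface = "second"   # e.g. the unlocked home screen
        return self.current_interface

    def matches_template(self, face_info):
        # Compare against every stored template; any sufficient match succeeds.
        return any(self.similarity(face_info, t) >= MATCH_THRESHOLD
                   for t in self.template_store)

    def similarity(self, a, b):
        # Placeholder metric; a real system would compare feature embeddings.
        return 1.0 if a == b else 0.0
```

A matching face switches the first interface to the second; a non-matching face leaves the first interface displayed.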
With the method provided by the embodiments of the present application, when a user is using the first display screen, face information can be collected through the first camera, which is equivalent to a rear camera. When the collected face information matches the stored face information template, the first interface on the first display screen is switched to the second interface. When another person helps with face unlocking, the rear camera is used, which eases the helper's operation, reduces misoperation, and saves the operating resources of the electronic device.
In one possible implementation, before the electronic device displays the first interface on the first display screen, the method may further include: in response to a received second instruction, the electronic device records the face information template through the first camera. In this way, when a user is using the first display screen, face information can be collected through the first camera (equivalent to the rear camera) and stored as the face information template. When another person helps with enrollment, the rear camera is used, which eases the helper's operation, reduces misoperation, and saves the operating resources of the electronic device.
In a possible implementation, the electronic device further includes a second camera whose shooting direction faces the second direction, and the first interface includes a first control and a second control; the first control is used to trigger the electronic device to start the first camera, and the second control is used to trigger the electronic device to start the second camera. That the electronic device, in response to the received first instruction, collects the face information of the first user through the first camera specifically includes: in response to an operation on the first control, the electronic device collects the face information of the first user through the first camera. This implementation provides a first interface; on receiving an operation on the first control, the first camera is used to collect the face information, where the operation may be a tap, a long press, a double tap, a hover operation, or the like.
In one possible implementation, the method further includes: the electronic device detects the distance between the electronic device and the first user. That the electronic device displays the second interface on the first display screen when the face information of the first user matches the stored face information template specifically includes: when the distance between the electronic device and the first user does not exceed a first threshold and the face information of the first user matches the stored face information template, the electronic device displays the second interface on the first display screen. The electronic device judges the distance to the first user before matching the face information and compares the face information only if the distance does not exceed the first threshold, which can prevent malicious authentication and improves the security of the electronic device.
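The distance-gated matching described above can be sketched as follows. The threshold value and the callback shape are assumptions; the patent only states that matching proceeds when the distance does not exceed the first threshold.

```python
FIRST_THRESHOLD_CM = 80  # assumed upper bound for a plausible unlock attempt

def gated_match(distance_cm, face_info, templates, matcher):
    """Show the second interface only if the first user is close enough
    AND the captured face matches a stored template."""
    if distance_cm > FIRST_THRESHOLD_CM:
        # Too far away: treat as a possibly malicious authentication attempt.
        return False
    return any(matcher(face_info, t) for t in templates)
```

Note that the distance check happens before any template comparison, so a far-away capture never reaches the matcher.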
In a possible implementation, the electronic device further includes a second camera whose shooting direction faces the second direction. The method further includes: when the distance between the electronic device and the first user exceeds the first threshold, the electronic device collects and stores face information of a second user through the second camera. The second camera is equivalent to a front camera, and the second user is equivalent to the current operator. When matching face information, the electronic device first judges the distance to the first user; if the distance exceeds the first threshold, the current environment is deemed unsafe, and the face information of the current operator is collected through the front camera and stored locally and in the cloud, which improves the security of the electronic device.
In one possible implementation, the method further includes: when the face information of the first user matches the stored face information template, displaying a third interface on the first display screen, where the third interface includes prompt information used to prompt the first user to complete a first action. After the face information is successfully matched, the electronic device prompts the first user to complete the first action as a second recognition step, which can prevent malicious authentication and improves the security of the electronic device.
In one possible implementation, the electronic device further includes a second display screen whose display surface faces the first direction. While the electronic device collects the face information of the first user through the first camera, the method further includes: the electronic device displays the picture captured by the first camera on the second display screen. Here the electronic device is a dual-display device: when a user on the first display screen collects the face of the first user with the first camera, the electronic device simultaneously shows the captured picture on the second display screen, so that the first user can see the enrollment picture, which improves enrollment efficiency and saves the operating resources of the electronic device.
In one possible implementation, the electronic device further includes a second display screen, a display surface of the second display screen facing the first direction, and the method further includes: in the process that the electronic equipment collects the face information of the first user through the first camera, when the electronic equipment detects that the light intensity of the picture captured by the first camera is smaller than a specified light intensity threshold value, the electronic equipment lights the second display screen. Here, electronic equipment is two display screen devices, and when the user used first camera to carry out people's face to first user and gathered on first display screen, if electronic equipment detected that the required light of gathering people's face is not enough, then lighted the second display screen with supplementary light, improved the efficiency of gathering people's face information.
In one possible implementation, the electronic device further includes a second display screen whose display surface faces the first direction. Before the electronic device displays the first interface on the first display screen, the method further includes: the electronic device displays the first interface on the second display screen; and in response to a received third instruction, the electronic device prompts the second user to use the first display screen. Here the electronic device is a dual-display device, and the third instruction may be an instruction to recognize face information through the first camera. When the electronic device is currently using the second display screen and receives the third instruction, it outputs prompt information to prompt the current operator to use the first display screen, which improves operating efficiency.
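The screen-switching behaviour above can be sketched as a routing decision. The instruction and screen names are hypothetical; the patent does not prescribe an API.

```python
def route_instruction(active_screen, instruction):
    """If recognition via the first camera is requested while the second
    display screen is active, prompt the operator to switch screens;
    otherwise proceed on the current screen."""
    if instruction == "recognize_via_first_camera" and active_screen == "second":
        return "prompt: please use the first display screen"
    return "proceed"
```

The prompt replaces a trial-and-error search by the operator, which is where the claimed efficiency gain comes from.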
In a second aspect, the present application provides an electronic device, including: one or more processors, a memory, one or more display screens, and one or more cameras, where the one or more display screens include a first display screen, the one or more cameras include a first camera, the shooting direction of the first camera faces a first direction, the display surface of the first display screen faces a second direction, and the first direction and the second direction are different. The memory, the display screens, and the one or more cameras are coupled to the one or more processors; the memory is used to store computer program code, and the computer program code includes computer instructions. The one or more processors execute the computer instructions to perform: displaying a first interface on the first display screen; in response to a received first instruction, collecting face information of a first user through the first camera; and when the face information of the first user matches a stored face information template, displaying a second interface on the first display screen.
In one possible implementation, before displaying the first interface on the first display screen, the processor is further configured to: in response to a received second instruction, record the face information template through the first camera.
In a possible implementation, the electronic device further includes a second camera whose shooting direction faces the second direction, and the first interface includes a first control and a second control; the first control is used to trigger the processor to start the first camera, and the second control is used to trigger the processor to start the second camera. That the processor, in response to the received first instruction, collects the face information of the first user through the first camera specifically includes: in response to an operation on the first control, collecting the face information of the first user through the first camera.
In one possible implementation, the processor is further configured to: detect the distance between the electronic device and the first user. Displaying the second interface on the first display screen when the face information of the first user matches the stored face information template specifically includes: when the distance between the electronic device and the first user does not exceed a first threshold and the face information of the first user matches the stored face information template, displaying the second interface on the first display screen.
In a possible implementation, the electronic device further includes a second camera whose shooting direction faces the second direction. The processor is further configured to: when the distance between the electronic device and the first user exceeds the first threshold, collect and store face information of a second user through the second camera.
In one possible implementation, the processor is further configured to: when the face information of the first user matches the stored face information template, display a third interface on the first display screen, where the third interface includes prompt information used to prompt the first user to complete a first action.
In one possible implementation, the electronic device further includes a second display screen whose display surface faces the first direction. While collecting the face information of the first user through the first camera, the processor is further configured to: display the picture captured by the first camera on the second display screen.
In one possible implementation, the electronic device further includes a second display screen whose display surface faces the first direction, and the processor is further configured to: while collecting the face information of the first user through the first camera, light the second display screen when detecting that the light intensity of the picture captured by the first camera is lower than the specified light-intensity threshold.
In one possible implementation, the electronic device further includes a second display screen whose display surface faces the first direction. Before displaying the first interface on the first display screen, the processor is further configured to: display the first interface on the second display screen; and in response to a received third instruction, prompt the second user to use the first display screen.
In a third aspect, an embodiment of the present application provides a computer storage medium, which includes computer instructions that, when executed on an electronic device, cause the electronic device to perform a biometric identification method provided in the first aspect of the present application or any one implementation manner of the first aspect.
In a fourth aspect, the present application provides a computer program product, which when run on an electronic device, causes the electronic device to perform the biometric identification method provided in the first aspect of the present application or any implementation manner of the first aspect.
It can be understood that the electronic device provided by the second aspect, the computer storage medium provided by the third aspect, and the computer program product provided by the fourth aspect are all configured to perform the biometric identification method provided by the first aspect. Therefore, for the beneficial effects they achieve, reference may be made to the beneficial effects of the method provided by the first aspect, and details are not repeated here.
Drawings
FIG. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 2 is a diagram of a software architecture according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of another electronic device according to an embodiment of the present application;
FIGS. 4a to 4e are schematic diagrams of an interface according to an embodiment of the present application;
FIG. 5 is a schematic diagram of another set of interfaces according to an embodiment of the present application;
FIG. 6 is a schematic flowchart of a method according to an embodiment of the present application;
FIGS. 7a to 7c are schematic diagrams of another set of interfaces according to an embodiment of the present application;
FIGS. 8a to 8c are schematic diagrams of another set of interfaces according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of another electronic device according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of another electronic device according to an embodiment of the present application;
FIGS. 11a to 11d are schematic structural diagrams of another electronic device according to an embodiment of the present application;
FIG. 12 is a schematic structural diagram of another electronic device according to an embodiment of the present application;
FIG. 13 is a schematic diagram of another software architecture according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below in detail and clearly with reference to the accompanying drawings. In the description of the embodiments, unless otherwise specified, "/" means "or"; for example, A/B may mean A or B. "And/or" in this text merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "a plurality of" means two or more.
In the following, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present application, unless stated otherwise, "a plurality of" means two or more.
The electronic device in the embodiments of the present application may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a notebook computer, an ultra-mobile personal computer (UMPC), a handheld computer, a netbook, a personal digital assistant (PDA), a wearable electronic device, a virtual reality device, or the like.
Next, exemplary electronic devices provided in the following embodiments of the present application are described.
Fig. 1 shows a schematic structural diagram of an electronic device 100.
The following describes an embodiment specifically by taking the electronic device 100 as an example. It should be understood that electronic device 100 may have more or fewer components than shown, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processor (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only illustrative, and is not limited to the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of the electronic device 100 is coupled to the mobile communication module 150 and antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted through the lens to the camera's photosensitive element, the optical signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP, which processes it and converts it into an image visible to the naked eye. The ISP can also perform algorithm optimization on image noise, brightness, and skin tone, and can optimize parameters such as the exposure and color temperature of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which it passes to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1. The N cameras may be located on different sides of the electronic device 100; for example, the electronic device 100 may have a front camera and a rear camera, a front camera and a plurality of rear cameras, a front camera and a side camera, or a rear camera and a side camera.
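Purely as an illustrative sketch (not part of the disclosed implementation), the DSP's conversion of a YUV image signal into RGB can be modeled per sample using the common BT.601 full-range coefficients; the coefficient values are an assumption, since the patent does not specify a color standard.

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range YUV (YCbCr) sample to RGB using BT.601
    coefficients. Illustrative of the DSP format-conversion step only."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    # Clamp each channel to the valid 8-bit range.
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)
```

A neutral gray sample (Y=128, U=V=128) maps to equal R, G, and B values, which is a quick sanity check for such a conversion stage.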
The digital signal processor is used to process digital signals; it can process digital image signals as well as other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform or the like on the frequency-point energy.
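As a minimal sketch of the kind of computation described here (the actual DSP algorithm is not disclosed), the energy of a single DFT frequency bin can be computed directly from its definition:

```python
import cmath

def dft_bin_energy(samples, k):
    """Return the energy |X[k]|^2 of the k-th DFT bin of a real or complex
    sample sequence. Illustrative of a frequency-point energy computation."""
    n = len(samples)
    xk = sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
             for t in range(n))
    return abs(xk) ** 2
```

For a pure cosine at bin 1 of an 8-sample window, the energy concentrates in bin 1 (and its mirror) and is essentially zero elsewhere.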
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor. By referencing the structure of biological neural networks, for example the transfer mode between neurons of the human brain, it processes input information quickly and can also learn continuously by itself. Applications such as intelligent cognition of the electronic device 100, for example image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also called a "mic" or "mouthpiece", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a sound signal into the microphone 170C by speaking with the mouth close to it. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement directional recording, and so on.
The headphone interface 170D is used to connect wired headphones. The headphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and convert it into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates of electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the pressure intensity from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic device 100 detects the intensity of the touch operation by means of the pressure sensor 180A, and may also calculate the touch position from the sensor's detection signal. In some embodiments, touch operations applied to the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction for viewing the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the icon, an instruction for creating a new short message is executed.
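The threshold dispatch described in the example above can be sketched as follows; the threshold value and instruction names are illustrative placeholders, not values from the disclosure.

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # illustrative value; the patent fixes no number

def dispatch_sms_icon_touch(intensity):
    """Map the intensity of a touch on the short-message icon to an
    operation instruction, as in the pressure-sensor example above."""
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_sms"   # light press: view the short message
    return "new_sms"        # firm press (>= threshold): create a new message
```

Note the boundary case: an intensity exactly equal to the threshold falls into the "greater than or equal" branch, matching the text.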
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100.
The air pressure sensor 180C is used to measure air pressure.
The magnetic sensor 180D includes a hall sensor.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically along three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. The sensor can also be used to recognize the posture of the electronic device, and is applied to landscape/portrait switching, pedometers, and other applications.
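A minimal sketch of the landscape/portrait recognition mentioned here: when the device is stationary, whichever axis carries the larger gravity component indicates the orientation. The axis convention (x across the screen, y along it) is an assumption for illustration.

```python
def screen_orientation(ax, ay):
    """Infer portrait vs. landscape posture from the gravity components
    measured along the device's x and y axes (values in m/s^2)."""
    # Gravity mostly along y -> device held upright (portrait);
    # gravity mostly along x -> device on its side (landscape).
    return "portrait" if abs(ay) >= abs(ax) else "landscape"
```

Real implementations add hysteresis and a tilt dead zone so the screen does not flip while the device lies flat; that is omitted here for brevity.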
A distance sensor 180F for measuring a distance.
The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode.
The ambient light sensor 180L is used to sense the ambient light level.
The fingerprint sensor 180H is used to collect fingerprints. The electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access an application lock, take a photo with a fingerprint, answer an incoming call with a fingerprint, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, electronic device 100 implements a temperature processing strategy using the temperature detected by temperature sensor 180J.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194; together they form a touchscreen, also called a "touch-controlled screen". The touch sensor 180K is used to detect a touch operation applied on or near it, and can pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100 at a position different from that of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire a vibration signal of the human vocal part vibrating the bone mass. The bone conduction sensor 180M may also contact the human pulse to receive the blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset, integrated into a bone conduction headset. The audio module 170 may analyze a voice signal based on the vibration signal of the bone mass vibrated by the sound part acquired by the bone conduction sensor 180M, so as to implement a voice function. The application processor can analyze heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so as to realize the heart rate detection function.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming-call vibration cues as well as touch vibration feedback. For example, touch operations applied in different applications (e.g., photographing, audio playing) may correspond to different vibration feedback effects, and so may touch operations applied to different areas of the display screen 194. Different application scenarios (such as time reminders, received messages, alarm clocks, and games) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. A SIM card can be brought into and out of contact with the electronic device 100 by being inserted into or pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano-SIM card, a Micro-SIM card, a SIM card, and the like. Multiple cards may be inserted into the same SIM card interface 195 at the same time; the types of the cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards as well as with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, that is, an embedded SIM card; the eSIM card can be embedded in the electronic device 100 and cannot be separated from it.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present invention uses an Android system with a layered architecture as an example to exemplarily illustrate a software structure of the electronic device 100.
Fig. 2 is a block diagram of a software configuration of the electronic apparatus 100 according to the embodiment of the present invention.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, event manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide communication functions of the electronic device 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages, which can disappear automatically after a short stay without requiring user interaction. For example, the notification manager is used to notify of download completion, message alerts, and the like. The notification manager may also present notifications in the top status bar of the system in the form of a chart or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone sounds, the electronic device vibrates, or an indicator light flashes.
The event manager can be used to judge, when the first control mode is enabled, whether the touch coordinates of a user's touch operation fall within the first area. If so, the touch operation event is reported to the application layer; if not, the touch operation is not processed.
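The event manager's filtering rule can be sketched as a small predicate; the event and region representations here are illustrative, not the structures actually used by the framework.

```python
def filter_touch_event(event, first_region, first_control_mode_on):
    """Return the event if it should be reported to the application layer,
    or None if it should be dropped, per the first-control-mode rule above.

    event        -- dict with touch coordinates, e.g. {"x": 50, "y": 50}
    first_region -- (left, top, right, bottom) bounds of the first area
    """
    if not first_control_mode_on:
        return event                      # mode off: report everything
    x, y = event["x"], event["y"]
    left, top, right, bottom = first_region
    if left <= x <= right and top <= y <= bottom:
        return event                      # inside the first area: report
    return None                           # outside: do not process
```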
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part is the function interfaces that the Java language needs to call, and the other part is the core libraries of Android.

The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life-cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports a variety of commonly used audio, video format playback and recording, and still image files, among others. The media library may support a variety of audio-video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, and the like.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.

The kernel layer is the layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
The following exemplarily describes, with reference to a face recognition scenario, the software and hardware workflow of the electronic device 100 in the case where the electronic device 100 has front and rear cameras.
In the case where the electronic device 100 has front and rear cameras, when the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and the timestamp of the touch operation). The raw input event is stored at the kernel layer. The application framework layer acquires the raw input event from the kernel layer, and the event manager judges whether the touch coordinates fall within the first area. If so, the control corresponding to the raw input event is identified. Taking the touch operation as a tap and the tapped control as the camera application icon as an example, the camera application calls an interface of the application framework layer to start, in turn starts the camera driver by calling the kernel layer, and captures a still image or video through the camera 193. If not, the raw input event is not processed.
In the embodiment of the application, the front-facing camera may be a camera on a display screen currently used by a user, the non-front-facing camera may include a rear-facing camera or a side-facing camera, and the rear-facing camera may be a camera on a side opposite to the display screen currently used by the user; the side camera may be a camera on the side of the display screen currently in use by the user.
Several embodiments of the present application, and user interface (UI) embodiments in their application scenarios, are described below to explain in detail how the embodiments of the present application solve the technical problems of inconvenient operation and wasted operating resources of an electronic device.
Example 1: the electronic equipment is provided with a displayable screen, and cameras are arranged on the displayable screen and the back cover opposite to the displayable screen.
Fig. 3 exemplarily shows an external view of the electronic device 30, which includes: a display screen 301, a rear cover 302, a camera 303, and a camera 304. The display screen 301 may be the display screen 194; the rear cover 302 has neither a display screen nor a touch function. The shooting direction of the camera 303 points in a first direction, and the display surface of the display screen 301 faces a second direction, the first direction and the second direction being opposite; in fig. 3, the camera 303 is the camera on the rear cover 302 side, and the shooting direction of the camera 304 points in the second direction, consistent with the display surface of the display screen 301. In the embodiment of the present application, the camera 303 may be referred to as a first camera or a rear camera, and the camera 304 may be referred to as a second camera or a front camera. The hardware structure and software architecture of the electronic device 30 are the same as those of the electronic device 100.
The camera 303 or 304 may include modules such as an infrared camera, a dot matrix projector, a floodlight, an infrared image sensor, and the aforementioned proximity light sensor 180G. The dot matrix projector includes a high-power laser (such as a VCSEL), diffractive optical components, and the like; that is, it is a structured light emitter that uses the high-power laser to emit "structured" infrared laser light and project it onto the surface of an object. The resolutions of the camera 303 and the camera 304 can be adapted according to actual conditions.
When an object (e.g., a human face) approaches the electronic device 30, the proximity light sensor 180G senses the approach and sends an object-approach signal to the processor 110 of the electronic device 30. The processor 110 receives the signal and controls the floodlight to start, and a low-power laser in the floodlight projects infrared laser light onto the surface of the object. The infrared camera captures the infrared laser light reflected by the object surface, thereby acquiring image information of the object surface, and uploads the acquired image information to the processor 110. The processor 110 determines, according to the uploaded image information, whether the object approaching the electronic device 30 is a human face. When the processor 110 judges that the approaching object is a human face, it controls the dot matrix projector to start. The high-power laser in the dot matrix projector emits infrared laser light which, through the action of structures such as the diffractive optical elements in the projector, produces a large number (e.g., about 3 thousand) of light spots of "structured" light that are projected onto the surface of the shooting target. The array formed by the light spots of the structured light is reflected by different positions on the surface of the shooting target; the infrared camera captures the reflected light spots, thereby acquiring depth data for different positions on the target surface, and uploads the acquired depth data to the processor 110. The processor 110 compares the uploaded depth data with the owner's face depth data stored in the internal memory 121 and identifies whether the face approaching the electronic device 30 is the owner's face.
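The final comparison step can be sketched as below. This is only an illustrative stand-in for the undisclosed matching algorithm: the tolerance, match ratio, and flat-list depth representation are all assumptions made for the example.

```python
def depth_match(measured, enrolled, tol=5.0, min_ratio=0.9):
    """Compare measured structured-light depth samples against enrolled
    face depth data. Returns True if enough sample points agree within
    a tolerance. Illustrative only; not the actual recognition algorithm.

    measured, enrolled -- equal-length lists of per-point depth values
    tol                -- max per-point depth difference counted as a match
    min_ratio          -- fraction of matching points required to accept
    """
    if len(measured) != len(enrolled) or not measured:
        return False
    close = sum(1 for m, e in zip(measured, enrolled) if abs(m - e) <= tol)
    return close / len(measured) >= min_ratio
```

A production system would instead compare learned embeddings with liveness checks; the sketch only shows the shape of "compare uploaded depth data with stored depth data".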
It can be understood that the face information entered through the camera 303 can be stored in the internal memory 121; during face recognition unlocking, an image can be acquired by the camera 303 and compared with the face information stored in the internal memory. Similarly, during face recognition unlocking, an image may be acquired by the camera 304 and compared with the face information previously entered through the camera 303.
Based on the electronic device 30, some application scenarios implemented on the electronic device 30 are described below.
Scene one: and (5) inputting the human face.
Fig. 4a illustrates a user interface 40 for face entry. The user interface 40 may include: a prompt area 401, an operation area 402, a function control 403, a status bar 404 and a navigation bar 405. Wherein:
the prompt area 401 may be used to display prompts for the face entry operation steps. By viewing the operation step prompts in the prompt area 401, the user can conveniently and quickly complete the correct face entry operation. In some possible embodiments, the prompt area 401 may be referred to as a first display area.
In some possible embodiments, the face entry may include a first type entry, which may be a helper entry, and a second type entry, which may be a self entry.
The operation area 402 may include two controls: one for helper entry (entering another person's face) and one for self entry. The self-entry control is used to start the camera on the side of the electronic device 30 facing the current operator, so that the camera can shoot the current operator and enter the current operator's face; for example, when the current operator holds the electronic device 30 with the screen in view 3a of fig. 3 facing himself, the camera 304 on that screen may be the camera on the side facing the current operator. The helper-entry control is used to start a camera on a side of the electronic device 30 not facing the current operator, so that the camera can shoot the face opposite the current operator; for example, when the current operator holds the electronic device 30 with the screen in view 3a of fig. 3 facing himself, the camera 303 in view 3b may be a camera on the side not facing the current operator. In some possible embodiments, the operation area 402 may be referred to as a second display area.
In one embodiment, the camera on the side not facing the current operator in the embodiment of the present application may also be a camera on a side face of the electronic device 30. For example, if the screen in view 3a of fig. 3 is the front of the electronic device 30 and the screen in view 3b is its rear, the remaining four faces of the electronic device 30 are its side faces.
In one embodiment, the function control 403 may be configured to provide a raise-to-light-screen function: when the electronic device 30 senses that it is being raised (e.g., tilted toward being perpendicular to the ground), it lights up the screen to provide light and improve the face recognition unlocking experience. The electronic device 30 may detect a touch operation applied to the function control 403 (e.g., a tap on the function control 403), and in response, the electronic device 30 enables lighting up the screen whenever it senses that it is raised.
Status bar 404 may include: an operator indicator (e.g., the operator's name "china mobile"), one or more signal strength indicators for wireless fidelity (Wi-Fi) signals, one or more signal strength indicators for mobile communication signals (which may also be referred to as cellular signals), a time indicator, and a battery status indicator.
Navigation bar 405 may include: a return button 406, a home interface button 407, a menu setting button 408, and other system navigation keys. The home interface is the interface displayed by the electronic device 100 after a user operation on the home interface button 407 is detected on any user interface. When it is detected that the user taps the return button 406, the electronic device 100 may display the user interface previous to the current one. When it is detected that the user taps the home interface button 407, the electronic device 100 may display the home interface. When it is detected that the user taps the menu setting button 408, the electronic device 100 may display, for example, other shortcut functions. The navigation keys may also have other names; for example, 406 may be called the Back Button, 407 the Home Button, and 408 the Menu Button, which is not limited in this application. The navigation keys in the navigation bar 405 are not limited to virtual keys and may also be implemented as physical keys.
For the face entry user interface 40 shown in fig. 4a, after the electronic device 30 enters the face entry user interface 40, the user may select a first type of entry (for example, the help-others entry control shown in fig. 4a) or a second type of entry (for example, the self entry control shown in fig. 4a) in the operation area 402. When the user selects self entry (for example, clicks the "self entry" virtual button to trigger the self entry control), as shown in fig. 4b, the electronic device 30 receives a self entry instruction triggered by the user and starts the camera on the screen facing the current user (for example, the camera 304), so that the camera can shoot the current user and enter the current user's face. When the user selects help-others entry (for example, clicks the "help others enter" virtual button to trigger the help-others entry control), as shown in fig. 4c, the electronic device 30 receives a help-others entry instruction triggered by the user and starts the camera on the screen facing away from the current user (i.e., the camera 303), so that the camera 303 can shoot the face opposite to the current user and enter that face; during entry, the display screen 301 may display a preview of the face image acquired by the camera 303, so as to ensure that the face entry is correct. The self entry instruction and the help-others entry instruction may also be voice instructions; for example, the electronic device 30 may start a voice assistant to receive the user's voice input. In some possible embodiments, the help-others entry instruction may also be referred to as a second instruction.
In one embodiment, if a rear device (e.g., the camera 303, a light sensor, etc.) detects that the light does not reach the required brightness while the electronic device is acquiring a face image, the rear flash is automatically turned on as a fill light.
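The fill-light behavior above can be sketched as a simple brightness check. This is an illustrative sketch only: the threshold value, the `Flash` class, and the function name are assumptions, since the patent does not specify a sensor API or a brightness limit.

```python
# Hypothetical sketch of the low-light fill-light check: if the ambient light
# measured during rear-camera capture falls below a required brightness, the
# rear flash is turned on. LIGHT_THRESHOLD_LUX and Flash are illustrative.

LIGHT_THRESHOLD_LUX = 50.0  # assumed minimum brightness for face capture


class Flash:
    def __init__(self):
        self.on = False

    def turn_on(self):
        self.on = True


def ensure_fill_light(ambient_lux: float, flash: Flash) -> bool:
    """Turn on the flash when ambient light is below the required brightness.

    Returns True if the flash ends up on (i.e., was used as a fill light)."""
    if ambient_lux < LIGHT_THRESHOLD_LUX:
        flash.turn_on()
    return flash.on
```

The same check would apply wherever the rear camera captures a face in this document; only the trigger (image acquisition, distance recognition, information collection) differs.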
In one embodiment, during the entry process, the display screen 301 may display a preview of the acquired face image to ensure that the face entry is correct, and the current operator may be prompted to rotate the phone left and right to enter complete face information. Fig. 4d and 4e exemplarily show user interfaces during the face entry process. The user interface shown in fig. 4d may include: a face recognition area 409 and a prompt area 410, where the face recognition area 409 is used to acquire a face image and synchronously display the currently acquired face image in real time, and the prompt area 410 is used to prompt the current operator on the correct entry operation; at this time, the prompt displayed in the prompt area 410 is "Please ensure your entire face is displayed in the recognition area, and rotate the phone to the left to enter complete face information". The user interface shown in fig. 4e may include: the face recognition area 409 and a prompt area 411, where the prompt displayed in the prompt area 411 is "Please ensure your entire face is displayed in the recognition area, and rotate the phone to the right to enter complete face information". In the case where the current operator selects self entry, the face recognition area 409 displays a dynamic real-time face image of the user; the current operator may first be prompted to rotate the phone to the left to completely acquire the left side of the user's face, and then be prompted to rotate the phone to the right to completely acquire the right side of the user's face.
Scene two: and unlocking the screen.
Fig. 5 illustrates a user interface 50 for screen unlocking. In some possible embodiments, the user interface 50 may also be referred to as the first interface, in which case the second interface may be the home screen interface after the unlocking has been successful. The user interface 50 may include: a time zone 501 and an operation zone 502. Wherein:
the time zone 501 is used to display characters or numbers that can describe the time, such as the current time, date, year, etc.
In some possible embodiments, the screen unlock may include a first type of unlock, which may be an others unlock, and a second type of unlock, which may be a self-unlock.
The operation area 502 may include two controls: a help-others unlocking control and a self unlocking control. The self unlocking control is used to start the camera on the side of the electronic device 30 facing the current operator, so that the camera can shoot the current operator and recognize the current operator's face to unlock the screen; for example, when the current operator holds the electronic device 30 with the screen shown in fig. 3a facing himself, the camera 304 on that screen is the camera facing the current operator. The help-others unlocking control is used to start the camera on the side of the electronic device 30 facing away from the current operator, so that the camera can shoot the face opposite to the current operator and recognize that face to unlock the screen; for example, when the current operator holds the electronic device 30 with the screen shown in fig. 3a facing himself, the camera 303 on the screen shown in fig. 3b is the camera facing away from the current operator. In some possible embodiments, the help-others unlocking control may be referred to as a first control, and the self unlocking control may be referred to as a second control.
The user interface 50 may be triggered for display in several ways: the electronic device 30 may receive an instruction from the user triggering the power key and display the user interface 50; the electronic device 30 may display the user interface 50 when it detects that the phone is lifted or held vertically; or the user interface 50 may be displayed via the always-on display (AOD), that is, when the phone is in the locked state, part of the display screen remains lit and displays the user interface 50.
When the current operator holds the electronic device 30 with the screen shown in fig. 3a facing himself, the electronic device 30 displays the user interface 50 shown in fig. 5. After the electronic device 30 receives a selection instruction for self unlocking, it triggers the camera on the side facing the current operator (i.e., the camera 304 in fig. 3) to start, so that the camera can capture the current operator's face. The selection instruction may be a touch operation, for example, the user manually clicking the self unlocking control in the operation area 502 of the user interface 50; it may also be a voice instruction, for example, the electronic device 30 may start a voice assistant and receive the user's voice input; it may also be the detection that the phone is held vertically while in the locked state, which is not limited herein. If unlocking fails, an unlock failure message is output, or the camera 303 may be triggered to start and be used to retry unlocking.
After the electronic device 30 receives a selection instruction for assisting the unlocking of the other person, as shown in fig. 6, the screen unlocking method provided by the embodiment of the present application may include the following steps, where an assistor (current operator) may be referred to as a first user, and an assisted person may be referred to as a second user:
S601: the first camera is started.
Specifically, after receiving a first instruction, the electronic device 30 triggers the camera on the side facing away from the current operator (i.e., the camera 303 in fig. 3) to start, so that the camera can capture the face opposite to the current operator. The first instruction may be a selection instruction for help-others unlocking. It may be a touch operation (e.g., a click operation, a long press operation, a hover operation, etc.), for example, the user manually selecting the help-others unlocking control on the screen unlock user interface 50. The first instruction may also be a voice instruction; for example, the electronic device 30 may start a voice assistant and receive the user's voice input. The first instruction may also be an instruction with which the electronic device 30 automatically triggers the camera 303 to start after unlocking with the camera 304 fails, or with which the electronic device 30 automatically triggers the camera 303 to start when it detects that it is held vertically in the locked state and unlocking with the camera 304 fails, which is not limited herein. Illustratively, fig. 7a shows an interface 70 displayed by the electronic device 30 after screen unlocking with the camera 304 fails; it includes a time zone 701 and a prompt zone 702, where the time zone 701 is used to display characters or numbers describing the time, such as the current time, date, and year. The prompt zone 702 displays "Recognition failed, retry with the rear camera" to prompt the user that unlocking with the camera 304 failed and that the camera 303, which is then started, will be used to retry.
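The trigger conditions for S601 can be summarized in a small dispatch sketch. The event names below are illustrative assumptions (the patent describes the triggers in prose, not as an API); the logic shown is that the rear camera starts either on an explicit help-others selection or as an automatic fallback after a front-camera failure.

```python
# Hypothetical dispatch for starting the rear camera (camera 303).
# Event labels are invented for illustration.

def should_start_rear_camera(event: str, front_unlock_failed: bool) -> bool:
    """Decide whether the first instruction starts the rear camera."""
    # explicit helper selection (touch or voice) always starts the rear camera
    if event in ("tap_help_others_control", "voice_help_others"):
        return True
    # automatic fallback: unlocking with the front camera (camera 304)
    # already failed, including the held-vertically-while-locked variant
    return front_unlock_failed
```

A caller would feed in the most recent user event plus the front-camera unlock result, mirroring the "which is not limited herein" list of triggers in the text.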
S602: the distance between the face of the first user and the electronic equipment is recognized through the first camera.
Specifically, after the electronic device 30 starts the camera 303, the distance between the face of the first user and the electronic device 30 is identified through the camera 303, as shown in fig. 4c, the electronic device 30 starts the camera 303, so that the camera 303 can shoot the face opposite to the current operator, and the distance between the face of the first user and the electronic device 30 is identified.
In one embodiment, if a rear device (e.g., the camera 303, a light sensor, etc.) detects that the light intensity of a picture captured by the camera 303 does not reach a required brightness in a process that the electronic device recognizes the distance between the face of the first user and the electronic device through the camera 303, a flash lamp is automatically turned on to supplement light.
S603: judging whether the distance between the first user's face and the electronic device exceeds a preset distance.
Specifically, whether the recognized distance between the first user's face and the electronic device 30 exceeds a preset distance is judged. If it does, the current shooting environment is judged to be unsafe: the electronic device 30 starts the camera 304 to collect the face information within its shooting range (i.e., the face information of the second user) and stores it locally and/or in the cloud, so as to guard against malicious unlocking without the owner's knowledge, and outputs a prompt message asking the first user to bring the face closer to the electronic device 30, together with an authentication failure prompt. The prompt message may be a text prompt, a voice prompt, a light prompt, and the like. For example, fig. 7b shows the interface 70 displayed after the electronic device 30 determines that the distance between the face and the phone exceeds the preset distance; it includes a time zone 701 and a prompt zone 702, where the time zone 701 is used to display characters or numbers describing the time, such as the current time, date, and year. The prompt zone 702 displays "Recognition failed, please move the face closer and retry" to prompt the user that recognition failed because the face was too far from the device, and that unlocking with the camera 303 can be retried after moving the device closer to the face; this unlock attempt with the camera 303 has failed. The subsequent steps are not continued.
In one embodiment, the trigger condition for the camera 304 to acquire the face information within its shooting range (i.e., the face information of the second user) and store it locally and/or in the cloud is an unlocking failure: after the first unlocking attempt fails because the camera 303 is too far from the first user's face, the prompt zone 702 displays "Recognition failed, please move the face closer and retry", and at the same time the camera 304 acquires the face information within its shooting range and stores it locally and/or in the cloud.
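The distance check of S603, including the evidence capture on failure, can be sketched as follows. The 50 cm threshold and all names are assumptions for illustration; the patent only says "a preset distance" and does not specify storage details.

```python
# Minimal sketch of S603: reject a too-distant face, and on rejection have
# the front camera (camera 304) record the operator-side face as evidence.
# PRESET_DISTANCE_CM is an assumed value, not from the patent.

PRESET_DISTANCE_CM = 50.0


def check_distance(distance_cm: float, evidence_log: list) -> str:
    """Return 'ok' or the failure prompt; on failure also log an evidence frame.

    evidence_log stands in for the local and/or cloud storage of the
    second user's face information described in the text."""
    if distance_cm > PRESET_DISTANCE_CM:
        evidence_log.append("front_camera_frame")  # second user's face saved
        return "Recognition failed, please move the face closer and retry"
    return "ok"
```

Only the "ok" branch continues to the template comparison of S604; the failure branch ends the flow, as the text states.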
S604: if the distance between the face and the electronic device does not exceed the preset distance, comparing whether the first user's face information acquired by the first camera matches the face information template.
Specifically, if the distance between the face and the phone does not exceed the preset distance, the current shooting environment is judged to be safe. After the electronic device collects the first user's face information through the camera 303, it may perform some necessary processing and match the processed face information against the stored face information template. The face information template may be entered by a user before face recognition is performed on the electronic device. The embodiment of the present application does not limit the devices or specific algorithms used for face recognition, as long as face recognition can be realized. If the face information comparison fails, an authentication failure is output directly. Illustratively, fig. 7c shows the interface 70 displayed by the electronic device 30 after the face information comparison fails; it includes a time zone 701 and a prompt zone 702, where the time zone 701 is used to display characters or numbers describing the time, such as the current time, date, and year. The prompt zone 702 displays "Authentication failed" to prompt the user that unlocking failed because the currently acquired face information does not match the entered face information template. The subsequent steps are not continued.
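Since the patent explicitly leaves the face recognition algorithm open ("as long as face recognition can be realized"), the template comparison of S604 can be illustrated with any similarity measure over feature vectors. The cosine-similarity approach and the 0.8 threshold below are assumptions standing in for whatever algorithm an implementation chooses.

```python
# Illustrative stand-in for the S604 template match: a probe feature vector
# is compared against enrolled templates; the similarity metric and the
# MATCH_THRESHOLD value are assumptions, not specified by the patent.
import math

MATCH_THRESHOLD = 0.8


def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


def match_face(probe, enrolled_templates):
    """True if the probe vector matches any stored face information template."""
    return any(cosine_similarity(probe, t) >= MATCH_THRESHOLD
               for t in enrolled_templates)
```

On a match the flow proceeds to the action check of S605; otherwise authentication fails immediately, as described above.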
In one embodiment, if a rear device (e.g., the camera 303, the light sensor, etc.) detects that the light intensity of the picture captured by the camera 303 does not reach the required brightness in the process that the electronic device acquires the face information of the first user through the camera 303, the flash lamp is automatically turned on to supplement light.
S605: if the first user's face information is successfully matched with the face information template, comparing whether the action image acquired by the first camera matches the preset action.
Specifically, to prevent malicious authentication without the owner's awareness, if the first user's face information is successfully matched with the face information template, the electronic device 30 displays an action recognition interface and displays prompt information in it, where the prompt information is used to prompt the first user to make a specified action. The first user makes the specified action (e.g., blinking, shaking the head, opening the mouth, etc.) according to the prompt information. The action recognition interface is further configured to display the action image captured by the camera 303, to help the second user aim at the camera 303. After the electronic device collects the first user's action image data through the camera 303, it may perform some necessary processing and match the processed action image data against the stored action information template. The action information template may be entered by the user before the electronic device performs action recognition. The embodiment of the present application does not limit the devices or specific algorithms used for action recognition, as long as action recognition can be realized. If the acquired action image data fails to match the action information template, an authentication failure is output directly. In some possible embodiments, the action recognition interface may be referred to as a third interface, and the specified action may be referred to as a first action.
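The liveness step of S605 reduces to "prompt a random action, then verify the detected action equals the prompt". A minimal sketch, with the action labels taken from the examples in the text and everything else assumed:

```python
# Hypothetical sketch of the S605 liveness check. The recognizer that turns
# camera frames into a detected-action label is out of scope here; only the
# prompt/verify logic is shown.
import random

ACTIONS = ("blink", "shake_head", "open_mouth")  # examples from the text


def prompt_action(rng: random.Random) -> str:
    """Pick the specified action the user is asked to perform."""
    return rng.choice(ACTIONS)


def verify_action(prompted: str, detected: str) -> bool:
    """Liveness passes only when the detected action equals the prompted one."""
    return prompted in ACTIONS and prompted == detected
```

Randomizing the prompt is what makes the check resistant to replay: a pre-recorded action only matches one of the possible prompts.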
S606: if the comparison of the preset action is successful, the authentication is successful.
Specifically, if the preset action comparison is successful, the electronic device 30 is successfully unlocked and enters the interface of the main screen.
In one embodiment, the display screen 301 may remain unlit during the screen unlocking scenario described in scene two. By default, the electronic device 30 first starts the camera 304 for face recognition with the display screen 301 unlit; if face unlocking with the camera 304 fails, the camera 303 is automatically started for face recognition, still without lighting the display screen 301, and the electronic device performs the actions described in steps S602 to S606, which are not repeated here.
Scene three: application unlock/payment.
Illustratively, as shown in fig. 8a, the electronic device 30 displays a home screen interface 80 on the display screen 301. The interface displays a page in which application icons are placed, the page including a plurality of application icons (e.g., a gallery application icon 801, a payment application icon 802, a weather application icon, a stock application icon, a calculator application icon, a settings application icon, a mail application icon, a facebook application icon, a browser application icon, a music application icon, a video application icon, and an application store icon). The interface further comprises a page indicator to indicate the positional relation of the currently displayed page to other pages, and a plurality of tray icons (such as a dialing application icon, an information application icon, a contacts application icon, and a camera application icon) arranged below the page indicator, which remain displayed when pages are switched. The page may include a plurality of application icons and a page indicator; the page indicator may also not be part of the page and exist alone. The tray icons are likewise optional, and the embodiment of the present application is not limited thereto.
The electronic device 30 may receive an input operation 803 (e.g., a click) by the user on the gallery application icon 801, and in response to the input operation 803, the electronic device may turn on the face recognition module and display the face unlock interface 81 shown in fig. 8b. The face unlock interface 81 comprises a face display area 811, an operation area 812, and a password area 813, where the face display area 811 is used to display the face image captured by the camera, to help the user aim at the camera. After the electronic device acquires face information through the camera, it may perform some necessary processing and match the processed face information against the stored face information template. In some possible embodiments, the face unlock interface 81 may also be referred to as a first interface, in which case the second interface is the primary interface of the gallery application after unlocking succeeds. The electronic device 30 may likewise receive an input operation on the payment application icon 802; after the user enters the payment application, a payment interface of the payment application may also be referred to as a first interface, in which case the second interface is the interface displayed after payment succeeds.
The operation area 812 is the same as the operation area 402, and the description of the operation area 402 also applies to the operation area 812, which is not repeated here.
The password area 813 includes a use-password control, and the electronic device 30 can receive an input operation (e.g., a click) on the password area 813; in response to the input operation, the electronic device switches to a password input interface, to help the user unlock with a password when unlocking with face recognition is inconvenient.
The electronic device 30 may also display a gesture unlock interface 82 on the display screen 301, as shown in fig. 8c. The gesture unlock interface includes a recognition area 821 for displaying the gesture image captured by the gesture recognition module, to help the user aim at the camera of the gesture recognition module. After the electronic device collects gesture image data through the camera, it may perform some necessary processing and match the processed gesture image data against the stored gesture information template. The gesture information template may be entered by a user before gesture recognition is performed on the electronic device. The embodiment of the present application does not limit the devices or specific algorithms used for gesture recognition, as long as gesture recognition can be realized. If the acquired gesture image data fails to match the gesture information template, an unlocking failure is output directly.
Similarly, the face unlock interface 81 and the gesture unlock interface 82 can also be applied to payment operations of the payment application icon 802 and to unlocking operations of various encrypted applications.
In the embodiment of the present application, the electronic device 30 has a displayable screen, a front camera, and a rear camera, and both cameras support face entry and recognition. Thus, when it is inconvenient for a user to operate the phone for face entry, unlocking, or payment, an assistant can use the rear camera to help the user complete the related operation, reducing the risk of misoperation and saving the running resources of the electronic device 30.
Example 2: the electronic equipment is provided with two opposite displayable screens, and only one of the displayable screens is provided with the camera.
Fig. 9 exemplarily shows an external view of the electronic device 90, which includes: a first display 901, a second display 902, and a camera 903. In the embodiment of the present invention, when an operator uses the first display screen 901 currently, the camera 903 may be referred to as a front camera, and when an operator uses the second display screen 902 currently, the camera 903 may be referred to as a rear camera; the hardware structure and software architecture of the electronic device 90 are the same as those of the electronic device 100.
The camera 903 may include an infrared camera, a dot matrix projector, a floodlight, an infrared image sensor, the proximity light sensor 180G, and other modules. The dot matrix projector includes a high-power laser (such as a VCSEL), a diffractive optical component, and the like, i.e., a structured light emitter that uses the high-power laser to project "structured" infrared laser light onto the surface of an object. The text description of the camera 303 or the camera 304 in the electronic device 30 also applies to the camera 903 in the electronic device 90 and is not repeated here.
Based on the electronic device 90, some application scenarios implemented on the electronic device 90 are described below.
Scene one: and (5) inputting the human face.
The user interface for inputting the face of the electronic device 90 is the same as the user interface 40 for inputting the face of the electronic device 30, and the description of fig. 4a is also applicable to the user interface for inputting the face of the electronic device 90, and is not repeated here.
For the face entry user interface of the electronic device 90, after the electronic device 90 enters that interface, the user may select the help-others entry control or the self entry control in the operation area. The electronic device 90 may identify whether the display screen currently used by the user is the first display screen 901 or the second display screen 902. If the currently used display screen is the first display screen 901 and the camera 903 is on the first display screen 901, then when the user selects the self entry control, as shown in fig. 4b, the electronic device 90 receives a self entry selection instruction and starts the camera (the camera 903) on the currently used screen (the first display screen 901); at this time the camera may be regarded as a front camera, so that it can shoot the currently operating user and enter that user's face.
When the user selects the help-others entry control, after the electronic device 90 receives a help-others entry selection instruction (which may be referred to as a third instruction), the electronic device 90 may output a prompt message prompting the current operator to use the second display screen 902. For example, when the screen opposite to the currently used screen (the first display screen 901) is dark, the second display screen 902 may be automatically lit; alternatively, the user may be prompted by voice or by text displayed on the first display screen 901 to use the second display screen 902. When the user flips the electronic device 90 so that the second display screen 902 faces the user, the camera 903 on the first display screen 901 is started; at this time it can be regarded as a rear camera, as shown in fig. 4c, so that it can shoot the face opposite to the currently operating user and enter that face. At this time, both the first display screen 901 and the second display screen 902 may be lit, so that both users can see the face image being entered. The selection instruction may also be a voice instruction; for example, the electronic device 90 may start a voice assistant to receive the user's voice input.
If the display screen currently used by the user is the second display screen 902 and the camera 903 is on the first display screen 901, then when the user selects the self entry control, the electronic device 90 receives the self entry selection instruction and may output a prompt message prompting the current operator to use the first display screen 901. For example, when the screen opposite to the currently used screen (the second display screen 902) is dark, the first display screen 901 may be automatically lit; alternatively, the user may be prompted by voice or by text displayed on the second display screen 902 to use the first display screen 901. When the user flips the electronic device 90 so that the first display screen 901 faces the user, the camera 903 on the first display screen 901 is started; at this time it can be regarded as a front camera, as shown in fig. 4b, so that it can shoot the currently operating user's face and enter it.
When the user selects the help-others entry control, after the electronic device 90 receives a help-others entry selection instruction, as shown in fig. 4c, the electronic device 90 starts the camera (the camera 903) on the screen (the first display screen 901) opposite to the currently used screen (the second display screen 902); at this time it can be regarded as a rear camera, so that it can shoot the face opposite to the currently operating user and enter that face. At this time, both the first display screen 901 and the second display screen 902 may be lit, so that both users can see the face image being entered. The selection instruction may also be a voice instruction; for example, the electronic device 90 may start a voice assistant to receive the user's voice input. In a possible embodiment, the help-others entry selection instruction may be referred to as a second instruction.
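Because the electronic device 90 has one camera fixed beside the first display screen 901, the entry flow above is really a screen-selection rule: self entry needs the camera-side screen active, help-others entry needs the opposite screen, and a mismatch produces a flip prompt. A sketch with illustrative labels (the string values are assumptions):

```python
# Hypothetical screen/camera selection for the single-camera, dual-screen
# device 90. The camera 903 sits beside the first display screen (901).

CAMERA_SCREEN = "display_901"
OTHER_SCREEN = "display_902"


def entry_action(active_screen: str, mode: str) -> str:
    """mode is 'self' or 'help_others'; returns the next step to take."""
    needed = CAMERA_SCREEN if mode == "self" else OTHER_SCREEN
    if active_screen != needed:
        # prompt the operator to flip the phone to the required screen
        return "prompt_flip_to_" + needed
    return "start_camera_903"
```

All four combinations in the text reduce to this rule: two start the camera immediately and two first prompt a flip.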
Scene two: and unlocking the screen.
The user interface for unlocking the screen of the electronic device 90 is the same as the user interface 50 for unlocking the screen of the electronic device 30, and the description of fig. 5 also applies to the user interface for unlocking the screen of the electronic device 90, which is not repeated here.
For the user interface of the electronic device 90 with the unlocked screen, when the camera 903 is on the first display 901, the electronic device 90 enters the user interface with the unlocked screen, and there are four cases as follows:
in the first situation, the user currently uses the first display 901, and selects the own unlocking control, after the electronic device 90 receives the own unlocking selection instruction, the camera on the opposite side of the electronic device 90 from the current operator is turned on, that is, the camera 903 is turned on, at this time, the camera 903 may be regarded as a front camera, after the electronic device collects face information through the camera 903, the electronic device may perform some necessary processing to match the processed face information with the stored face information template, and the electronic device 90 then performs the actions performed by the electronic device 30 described in the foregoing steps S605 to S606, which is not described herein again.
In case two, the user currently uses the first display 901, and selects the unlocking control for the other person, after receiving the selection instruction for unlocking the other person, the electronic device 90 prompts the user to turn over the mobile phone to use the second display 902, and when the electronic device 90 senses that the mobile phone is turned over by the sensor, the electronic device opens the second display 902 and starts the camera 903 (i.e., starts the camera on the side of the electronic device 90 not opposite to the current operator), at this time, the camera 903 may be regarded as a rear camera, and the electronic device 90 then executes the actions executed by the electronic device 30 described in the foregoing steps S602 to S606, which is not described herein again. At this time, both the first display 901 and the second display 902 may be turned on, so that both users can see the face images entered.
And in a third case, the user currently uses the second display screen 902, selects the self-unlocking control, and after receiving a selection instruction of self-unlocking, the electronic device 90 prompts the user to turn over the mobile phone to use the first display screen 901, and when the electronic device 90 senses that the user turns over by using the sensor, the electronic device opens the first display screen 901 and opens the camera 903 (i.e., opens the camera on the opposite side of the electronic device 90 from the current operator), at this time, the camera 903 may be regarded as a front-facing camera, after the electronic device collects face information by using the camera, the electronic device may perform some necessary processing to match the processed face information with the stored face information template, and the electronic device 90 then performs the actions performed by the electronic device 30 described in the foregoing steps S605 to S606, which are not described herein again.
In the fourth case, the user currently uses the second display screen 902 and selects the control for helping another person unlock. After the electronic device 90 receives the selection instruction for assisted unlocking, the camera on the side of the electronic device 90 facing away from the current operator is turned on, that is, the camera 903 is started; at this time, the camera 903 may be regarded as a rear-facing camera, and the electronic device 90 then performs the actions performed by the electronic device 30 described in the foregoing steps S602 to S606, which are not described herein again. At this time, both the first display screen 901 and the second display screen 902 may be turned on, so that both users can see the entered face image.
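The four cases above amount to a small decision table. The sketch below is illustrative only (the function and field names are assumptions, not part of the patent): it maps the operator's current screen and the unlock mode to whether the device prompts a flip, whether the single camera 903 acts as a front-facing or rear-facing camera, and whether both screens are lit.

```python
# Hypothetical sketch of the four screen-unlock cases for a dual-screen,
# single-camera device such as the electronic device 90 (camera 903 on the
# side of the first display screen 901). All names are illustrative.

def plan_unlock(current_screen: str, mode: str) -> dict:
    """current_screen: "first" (901) or "second" (902); mode: "self" or "assist"."""
    if current_screen == "first" and mode == "self":
        # Case one: camera 903 already faces the operator -> front camera.
        return {"prompt_flip": False, "camera_role": "front", "light_both_screens": False}
    if current_screen == "first" and mode == "assist":
        # Case two: prompt the user to flip to screen 902; 903 becomes a rear camera.
        return {"prompt_flip": True, "camera_role": "rear", "light_both_screens": True}
    if current_screen == "second" and mode == "self":
        # Case three: prompt the user to flip to screen 901; 903 becomes a front camera.
        return {"prompt_flip": True, "camera_role": "front", "light_both_screens": False}
    # Case four: camera 903 faces away from the operator -> rear camera.
    return {"prompt_flip": False, "camera_role": "rear", "light_both_screens": True}
```

In the assisted cases both screens are lit so that both users can see the entered face image, matching the description above.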
In one embodiment, for the fourth case, if, in the process in which the electronic device acquires the face information of the first user through the camera 903, a rear-side device (e.g., the camera 903, a light sensor, etc.) detects that the light intensity of the picture captured by the camera 903 does not reach the required brightness, the second display screen 902 may be automatically lit up to supplement the light.
In one embodiment, if, in the process in which the electronic device acquires the face information of the first user through the camera 903, a rear-side device (e.g., the camera 903, a light sensor, etc.) detects that the light intensity of the picture captured by the camera 903 does not reach the required brightness, the flash is automatically turned on to supplement the light.
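The two light-supplement embodiments above (lighting the screen facing the captured face versus turning on the flash) can be sketched as a simple fallback. The threshold value and all names below are assumptions for illustration, not values from the patent.

```python
# Illustrative fill-light fallback: if the sensed scene brightness is below a
# required threshold while the rear-facing camera captures a face, light the
# screen facing the captured face if one exists, otherwise use the flash.

REQUIRED_LUX = 50.0  # assumed minimum brightness for reliable face capture

def choose_fill_light(measured_lux: float, has_screen_on_capture_side: bool) -> str:
    if measured_lux >= REQUIRED_LUX:
        return "none"      # scene is bright enough, no supplement needed
    if has_screen_on_capture_side:
        return "screen"    # light the display facing the captured face
    return "flash"         # otherwise fall back to the flash
```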
In one embodiment, during screen unlocking the first display screen 901 and the second display screen 902 may remain unlit, and the electronic device 90 directly turns on the camera 903 for face recognition; for the foregoing case two and case four, after turning on the camera 903, the electronic device 90 performs the actions described in steps S602 to S606, which are not repeated here.
The same processing applies when the camera 903 is disposed on the side of the second display screen 902.
Scene three: application unlock/payment.
The interface of the main screen of the electronic device 90 is the same as the interface 80 of the main screen of the electronic device 30, and the description of fig. 8a is also applicable to the interface of the main screen of the electronic device 90, and will not be repeated here. The face unlocking interface and the gesture unlocking interface of the electronic device 90 are the same as the face unlocking interface 81 and the gesture unlocking interface 82 of the electronic device 30, and the descriptions in fig. 8b and 8c are also applicable to the face unlocking interface and the gesture unlocking interface of the electronic device 90, and are not repeated here.
In the embodiment of the present application, the camera 903 may have two software communication links that do not interfere with each other in the internal logic. For example, in the above application scenario of this embodiment, a front-facing camera is used for the first type entry or the first type unlocking (self-entry or self-unlocking), a rear-facing camera is used for the second type entry or the second type unlocking (others-assisting entry or others-assisting unlocking), communication links of the front-facing camera and the rear-facing camera in the software architecture are two independent links, and both the front-facing camera and the rear-facing camera on the hardware device are the camera 903, that is, the front-facing camera and the rear-facing camera are the same physical camera.
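The "one physical camera, two independent software links" idea described above can be sketched as follows. This is a minimal illustration with hypothetical names; the actual link implementation in the patent's software architecture is described later in connection with fig. 13.

```python
# Minimal sketch: the front-facing and rear-facing logical cameras route
# through independent software links but resolve to the same physical sensor.

class PhysicalCamera:
    def __init__(self, sensor_id: str):
        self.sensor_id = sensor_id

    def capture(self) -> str:
        return f"frame-from-{self.sensor_id}"

class LogicalCameraLink:
    """One independent software communication link to a physical camera."""
    def __init__(self, role: str, physical: PhysicalCamera):
        self.role = role          # "front" or "rear"
        self.physical = physical

    def capture(self):
        return (self.role, self.physical.capture())

# Both logical links share the single physical camera 903.
sensor = PhysicalCamera("camera903")
front_link = LogicalCameraLink("front", sensor)
rear_link = LogicalCameraLink("rear", sensor)
```

Either link yields frames from the same sensor, which is why the first-type and second-type entry/unlocking paths can stay separate in software while sharing one piece of hardware.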
In this embodiment of the application, the electronic device 90 has two displayable screens, and a camera is provided on only one of them; the camera has both face entry and face recognition functions. When it is inconvenient for a user to use the mobile phone to perform operations such as face entry/unlocking/payment, the electronic device 90 prompts the helper to turn the screen over, so that the helper can use the camera to help the user complete the related operations, which reduces the risk of misoperation and saves resources.
Example 3: the electronic device has two back-to-back displayable screens with cameras on both of the two displayable screens.
Fig. 10 exemplarily shows an external view of the electronic device 101, which includes: a first display screen 1011, a second display screen 1012, a camera 1013, and a camera 1014. Here, the camera 1013 may be referred to as a first camera, and the camera 1014 may be referred to as a second camera; the first display screen 1011 and the second display screen 1012 may be the aforementioned display screen 194. The shooting direction of the camera 1013 is toward a first direction, the shooting direction of the camera 1014 is toward a second direction, the display surface of the first display screen 1011 is toward the second direction, and the display surface of the second display screen 1012 is toward the first direction. When the operator currently uses the first display screen 1011, the camera 1013 may be referred to as a rear-facing camera and the camera 1014 may be referred to as a front-facing camera; when the operator currently uses the second display screen 1012, the camera 1013 may be referred to as a front-facing camera and the camera 1014 may be referred to as a rear-facing camera. The hardware structure and the software architecture of the electronic device 101 are the same as those of the electronic device 100. Note that the description of the camera 303 or the camera 304 in the electronic device 30 also applies to the camera 1013 or the camera 1014 in the electronic device 101, and the description is not repeated here.
Based on the electronic device 101, some application scenarios implemented on the electronic device 101 are described below.
Scene one: and (5) inputting the human face.
The user interface for inputting the face of the electronic device 101 is the same as the user interface 40 for inputting the face of the electronic device 30, and the description of fig. 4a is also applicable to the user interface for inputting the face of the electronic device 101, and is not repeated here.
For the user interface of the face entry of the electronic device 101, after the electronic device 101 enters the user interface of the face entry, the user may select a helper entry control or a self-entry control in the operation area, and the electronic device 101 may identify whether the display screen currently used by the user is the first display screen 1011 or the second display screen 1012;
If the display screen currently used by the user is the first display screen 1011: when the user selects the self-entry control, the electronic device 101 receives the selection instruction for self-entry and starts the camera on the side of the currently used screen (the first display screen 1011), namely the camera 1014, so that the camera can shoot the current operating user and enter the face of the current operating user; when the user selects the control for helping another person enter a face, the electronic device 101 receives the selection instruction for assisted entry and starts the camera on the side of the second display screen 1012, namely the camera 1013, so that the camera can shoot the face on the side opposite to the current operating user and enter that face. At this time, both the first display screen 1011 and the second display screen 1012 may be turned on, so that both users can see the entered face image.
If the display screen currently used by the user is the second display screen 1012: when the user selects the self-entry control, the electronic device 101 receives the selection instruction for self-entry and starts the camera on the side of the currently used screen (the second display screen 1012), namely the camera 1013, so that the camera can shoot the current operating user and enter the face of the current operating user. When the user selects the control for helping another person enter a face, the electronic device 101 receives the selection instruction for assisted entry and starts the camera on the side of the first display screen 1011, namely the camera 1014, so that the camera can shoot the face on the side opposite to the current operating user and enter that face. At this time, both the first display screen 1011 and the second display screen 1012 may be turned on, so that both users can see the entered face image. The selection instruction may also be a voice instruction; for example, the electronic device 101 may turn on a voice assistant to receive the voice input of the user.
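The entry-camera selection described in the two paragraphs above reduces to a lookup: self-entry uses the camera on the current screen's side, assisted entry uses the camera on the opposite screen's side. The sketch below is illustrative; the identifiers are assumed names, not a real device API.

```python
# Illustrative camera selection for the dual-screen, dual-camera device 101:
# camera 1014 sits on the first-display (1011) side, camera 1013 on the
# second-display (1012) side.

def select_entry_camera(current_screen: str, mode: str) -> str:
    """current_screen: "first" (1011) or "second" (1012); mode: "self" or "assist"."""
    same_side = {"first": "camera1014", "second": "camera1013"}
    other_side = {"first": "camera1013", "second": "camera1014"}
    return same_side[current_screen] if mode == "self" else other_side[current_screen]
```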
Scene two: and unlocking the screen.
The user interface for unlocking the screen of the electronic device 101 is the same as the user interface 50 for unlocking the screen of the electronic device 30, and the description of fig. 5 is also applicable to the user interface for unlocking the screen of the electronic device 101, and is not repeated here.
For the user interface with the unlocked screen of the electronic device 101, after the electronic device 101 enters the user interface with the unlocked screen, there are four cases as follows:
In the first case, the user currently uses the first display screen 1011 and selects the self-unlocking control. After the electronic device 101 receives the selection instruction for self-unlocking, the camera on the side of the electronic device 101 facing the current operator is turned on, that is, the camera 1014 is started; at this time, the camera 1014 may be regarded as a front-facing camera. After the electronic device collects face information through the camera, it may perform some necessary processing to match the processed face information with the stored face information template, and the electronic device 101 then performs the actions performed by the electronic device 30 described in the foregoing steps S605 to S606, which are not described herein again.
In the second case, the user currently uses the first display screen 1011 and selects the control for helping another person unlock. After the electronic device 101 receives the selection instruction for assisted unlocking, the camera on the side of the electronic device 101 facing away from the current operator is turned on, that is, the camera 1013 is started; at this time, the camera 1013 may be regarded as a rear-facing camera, and the electronic device 101 then performs the actions performed by the electronic device 30 described in the foregoing steps S602 to S606, which are not described herein again. In one embodiment, if a rear-side device (e.g., the camera 1013, a light sensor, etc.) detects that the light does not reach the required brightness while the electronic device 101 is helping the other person unlock, the electronic device 101 automatically lights up the screen facing the face being captured (i.e., the second display screen 1012) for light supplement.
In the third case, the user currently uses the second display screen 1012 and selects the self-unlocking control. After receiving the selection instruction for self-unlocking, the electronic device 101 turns on the camera on the side facing the current operator, that is, starts the camera 1013; at this time, the camera 1013 may be regarded as a front-facing camera. After the electronic device collects face information through the camera, it may perform some necessary processing to match the processed face information with the stored face information template, and the electronic device 101 then performs the actions performed by the electronic device 30 described in the foregoing steps S605 to S606, which are not described herein again.
In the fourth case, the user currently uses the second display screen 1012 and selects the control for helping another person unlock. After the electronic device 101 receives the selection instruction for assisted unlocking, the camera on the side facing away from the current operator is turned on, that is, the camera 1014 is started; at this time, the camera 1014 may be regarded as a rear-facing camera, and the electronic device 101 then performs the actions performed by the electronic device 30 described in the foregoing steps S602 to S606, which are not described herein again. The selection instruction for helping another person unlock may also be referred to as a first instruction. In one embodiment, if a rear-side device (e.g., the camera 1014) detects that the light does not reach the required brightness during the assisted unlocking, the electronic device 101 automatically lights up the screen facing the face being captured (i.e., the first display screen 1011) for light supplement.
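In the assisted cases (two and four) the fill-light screen is always the display on the same side as the rear camera in use, i.e. the one facing the captured face. A minimal sketch of that mapping, with assumed identifiers:

```python
# Illustrative fill-light screen choice for the dual-screen device 101:
# operator on the first screen 1011 -> rear camera 1013 on the second-screen
# side, so light the second display 1012; and vice versa.

def fill_light_screen(operator_screen: str) -> str:
    """operator_screen: "first" (1011) or "second" (1012)."""
    return {"first": "display1012", "second": "display1011"}[operator_screen]
```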
In one embodiment, during screen unlocking the first display screen 1011 and the second display screen 1012 may remain unlit; the electronic device 101 may by default start the camera 1013 for face recognition, and automatically start the camera 1014 for face recognition after face unlocking through the camera 1013 fails. After the camera 1014 is started, the display screens remain unlit, and the electronic device 101 performs the actions described in steps S602 to S606, which are not repeated here.
Scene three: application unlock/payment.
The interface of the main screen of the electronic device 101 is the same as the interface 80 of the main screen of the electronic device 30, and the description of fig. 8a is also applicable to the interface of the main screen of the electronic device 101, and will not be repeated here. The face unlocking interface and the gesture unlocking interface of the electronic device 101 are the same as the face unlocking interface 81 and the gesture unlocking interface 82 of the electronic device 30, and the descriptions in fig. 8b and 8c are also applicable to the face unlocking interface and the gesture unlocking interface of the electronic device 101, and are not repeated here.
In this embodiment of the application, the electronic device 101 has two displayable screens facing opposite directions, each provided with a camera, and both cameras have face entry and face recognition functions. When it is inconvenient for a user to use the mobile phone to perform operations such as face entry/unlocking/payment, the helper can use the camera on the screen opposite to the currently operated screen to help the user complete the related operations, which reduces the risk of misoperation and saves resources.
Example 4: the electronic device has a foldable and displayable screen and a camera is provided only on the displayable screen or on a back cover opposite to the displayable screen.
Fig. 11a illustrates an external view of the electronic device 110, which includes: a first display 1101, a second display 1102, a back cover 1103, and a camera 1104. When the electronic device 110 is unfolded (i.e., in an unfolded state), the first display 1101 and the second display 1102 may be regarded as one display 1105, the display 1105 is opposite to the back cover 1103, and the display 1105 may be the display 194; when the electronic device 110 is folded (in a folded state), the first display screen 1101 and the second display screen 1102 are opposite, and the first display screen 1101 and the second display screen 1102 may be the display screen 194; the hardware structure and the software architecture of the electronic device 110 are the same as those of the electronic device 100. The text description of the camera 303 or the camera 304 in the electronic device 30 also applies to the camera 1104 in the electronic device 110, and the camera 1104 may also be referred to as a first camera, which is not repeated here.
For example, when the angle between the first display screen 1101 and the second display screen 1102 is 180 degrees, the electronic device 110 is in the unfolded state (specifically, the completely unfolded state); when the angle is 360 degrees, the electronic device 110 is in the folded state (specifically, the completely folded state); when the angle is between 180 degrees and 270 degrees, the electronic device 110 is in the unfolded state (specifically, an incompletely unfolded state); and when the angle is between 270 degrees and 360 degrees, the electronic device 110 is in the folded state (specifically, an incompletely folded state).
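The angle ranges above can be written as a small classifier over the hinge angle. This is an illustrative sketch; in particular, the assignment of the exact 270-degree boundary to the folded side is an assumption, since the source does not specify it.

```python
# Illustrative fold-posture classification from the hinge angle (in degrees)
# between the two display halves, following the ranges described above.

def fold_state(angle_deg: float) -> str:
    if angle_deg == 180:
        return "fully-unfolded"
    if 180 < angle_deg < 270:
        return "incompletely-unfolded"
    if 270 <= angle_deg < 360:          # 270 boundary placement is assumed
        return "incompletely-folded"
    if angle_deg == 360:
        return "fully-folded"
    raise ValueError("angle outside the hinge range described")
```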
The appearance of the electronic device 110 is described in detail below, by way of example, in the following several cases according to the position of the camera.
In a first case, as shown in fig. 11b, diagram a in fig. 11b shows the unfolded state (incompletely unfolded state) of the electronic device 110, which includes: a back cover 1103, a camera 1104, and a display screen 1105, where the camera 1104 is on the same surface as the display screen 1105. The electronic device 110 has only one display screen 1105, and the camera 1104 may be referred to as a front-facing camera. Diagram b in fig. 11b shows the folded state (completely folded state) of the electronic device, which includes: a first display screen 1101, a second display screen 1102, and the camera 1104. The camera 1104 is on the first display screen 1101; when the operator currently uses the first display screen 1101, the camera 1104 may be referred to as a front-facing camera, and when the operator currently uses the second display screen 1102, the camera 1104 may be referred to as a rear-facing camera.
In a second case, as shown in fig. 11c, diagram a in fig. 11c shows the unfolded state (incompletely unfolded state) of the electronic device, which includes: a back cover 1103, a camera 1104, and a display screen 1105, where the camera 1104 is on the same surface as the display screen 1105, the display screen 1105 is opposite to the back cover 1103, and the display screen 1105 may be the display screen 194. The electronic device 110 has only one display screen 1105, and the camera 1104 may be referred to as a front-facing camera. Diagram b in fig. 11c shows the folded state (completely folded state) of the electronic device, which includes: a first display screen 1101, a second display screen 1102, and the camera 1104. The camera 1104 is on the first display screen 1101; when the operator currently uses the first display screen 1101, the camera 1104 may be referred to as a front-facing camera, and when the operator currently uses the second display screen 1102, the camera 1104 may be referred to as a rear-facing camera.
In a third case, as shown in fig. 11d, diagram a in fig. 11d shows the unfolded state (incompletely unfolded state) of the electronic device, which includes: a rear cover 1103, a camera 1104, and a display screen 1105, where the camera 1104 is disposed on the rear cover 1103. In this state the electronic device is equivalent to having only one display screen 1105, and the camera 1104 may be referred to as a rear-facing camera. Diagram b in fig. 11d shows the folded state (completely folded state) of the electronic device, in which the electronic device 110 is equivalent to having two display screens, namely the first display screen 1101 and the second display screen 1102. When the operator currently uses the first display screen 1101, the camera 1104 may be referred to as a rear-facing camera; when the operator currently uses the second display screen 1102, the camera 1104 may be referred to as a front-facing camera.
For the foregoing case one, case two, or case three, when the electronic device 110 is in the folded state, the electronic device 110 is equivalent to having two display screens, namely the first display screen 1101 and the second display screen 1102, and one camera 1104; the electronic device 110 then corresponds to the electronic device 90 in embodiment 2, that is, the first display screen 901 in the electronic device 90 corresponds to the first display screen 1101 of the electronic device 110, the second display screen 902 in the electronic device 90 corresponds to the second display screen 1102 of the electronic device 110, and the camera 903 in the electronic device 90 corresponds to the camera 1104 of the electronic device 110. The text description of the application scenarios of the electronic device 90 also applies to the electronic device 110, and is not repeated here.
When the electronic device 110 is in the unfolded state, the electronic device 110 is equivalent to having one display screen 1105 and one camera 1104. When the camera 1104 is on the back cover 1103 and the electronic device 110 receives a selection instruction for self-entry/unlocking/payment, the electronic device 110 may prompt the user, by voice or by displaying text on the display screen 1105, to fold the device, and open the camera 1104 so that the camera 1104 can photograph the face of the user who is currently operating.
When the electronic device 110 is in the unfolded state, the camera 1104 is on the rear cover 1103, and the electronic device 110 receives a selection instruction for helping another person enter a face/unlock/pay, the electronic device 110 turns on the camera 1104 on the side opposite to the currently used screen (the display screen 1105) so that the camera can shoot the face on the side opposite to the current operating user. In one embodiment, if a rear-side device (e.g., the camera 1104) detects that the light does not reach the required brightness during the entry/unlocking/payment, the electronic device 110 automatically turns on the rear flash for light supplement.
When the electronic device 110 is in the unfolded state, and the camera 1104 is on the same side as the display screen 1105, that is, the shooting direction of the camera 1104 is the same as the orientation of the display surface of the display screen 1105, and when the electronic device 110 receives a selection instruction of self-entry/unlocking/payment, the electronic device 110 turns on the camera (the camera 1104) on the currently used screen (the display screen 1105) so that the camera can shoot the currently operated user.
When the electronic device 110 is in the unfolded state, the camera 1104 is on the same side as the display screen 1105 (that is, the shooting direction of the camera 1104 is the same as the orientation of the display surface of the display screen 1105), and the electronic device 110 receives a selection instruction for helping another person enter a face/unlock/pay, the electronic device 110 may prompt the user, by voice or by displaying text on the display screen 1105, to fold the device, and open the camera 1104 so that the camera 1104 can shoot the face on the side opposite to the current operating user (i.e., the face of the second user). In one embodiment, if the electronic device 110 detects that the light does not reach the required brightness during the assisted entry/unlocking/payment, the electronic device 110 automatically lights up the screen facing the face of the second user for light supplement.
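The four unfolded-state situations above reduce to one question: does the single camera 1104 currently point at the face that needs to be captured, or must the device be folded first? A minimal sketch of that decision, with assumed names:

```python
# Illustrative fold-prompt decision for the unfolded device 110 with a single
# camera 1104, per the four situations described above.

def needs_fold_prompt(camera_side: str, mode: str) -> bool:
    """camera_side: "display" or "back_cover"; mode: "self" or "assist"."""
    if camera_side == "back_cover":
        # Back camera faces away from the operator: fold only for self-operations.
        return mode == "self"
    # Camera on the display side faces the operator: fold only to reach the
    # other person's face in assisted operations.
    return mode == "assist"
```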
In this embodiment of the application, the electronic device 110 has a foldable displayable screen, with a camera provided only on the displayable screen or on the rear cover opposite to the displayable screen; the camera has both face entry and face recognition functions. When it is inconvenient for a user to use the mobile phone to perform operations such as face entry/unlocking/payment, the electronic device 110 prompts the user whether to turn over or fold the screen, so that the helper can use the camera to help the user complete the related operations, which reduces the risk of misoperation and saves resources.
Example 5: the electronic device has a foldable and displayable screen with cameras on both the one screen and the back cover opposite the screen.
Fig. 12 exemplarily shows an external view of the electronic device 120, which includes: a first display screen 1201, a second display screen 1202, a back cover 1203, a camera 1204, and a camera 1205. The camera 1204 may be referred to as a first camera, and the camera 1205 may be referred to as a second camera. When the electronic device 120 is in the unfolded state, the first display screen 1201 and the second display screen 1202 may be regarded as one display screen 1206; the display screen 1206 is opposite to the back cover 1203, and the display screen 1206 may be the aforementioned display screen 194. When the electronic device 120 is in the folded state, the first display screen 1201 and the second display screen 1202 face opposite directions, and each may be the display screen 194 described above. The camera 1204 is disposed on the first display screen 1201, and the camera 1205 is disposed on the back cover 1203; when the electronic device 120 is in the folded state, the camera 1205 on the back cover 1203 is not covered, and the camera 1205 is on the same side as the second display screen 1202. The hardware structure and the software architecture of the electronic device 120 are the same as those of the electronic device 100. The text description of the camera 303 or the camera 304 in the electronic device 30 also applies to the camera 1204 and the camera 1205 in the electronic device 120, and is not repeated here.
Based on the electronic device 120, when the electronic device 120 is in the unfolded state, the electronic device 120 is equivalent to having a display screen 1206, a back cover 1203, a camera 1204, and a camera 1205, and the electronic device 120 corresponds to the electronic device 30 in embodiment 1; the text description of the application scenarios of the electronic device 30 is also applicable to the electronic device 120, and is not repeated here.
When the electronic device 120 is in the folded state, the electronic device 120 is equivalent to having two display screens, namely the first display screen 1201 and the second display screen 1202, together with the camera 1204 and the camera 1205, and the electronic device 120 corresponds to the electronic device 101 in embodiment 3; the text description of the application scenarios of the electronic device 101 is also applicable to the electronic device 120, and is not repeated here.
In this embodiment of the application, the electronic device 120 has a foldable displayable screen, with cameras provided both on the screen and on the rear cover opposite to the screen; the two cameras have face entry and face recognition functions. When it is inconvenient for a user to use the mobile phone to perform operations such as face entry/unlocking/payment, the electronic device 120 prompts the helper to turn the screen over, so that the helper can use the camera to help the user complete the related operations, which reduces the risk of misoperation and saves resources.
In this embodiment of the application, the first camera may be a telescopic camera, a shooting direction of the telescopic camera may be toward the first direction, the telescopic camera may be hidden in the electronic device, and when a start instruction (for example, the first instruction and the like) is received, the electronic device may control the telescopic camera to extend out of the electronic device. Or, the telescopic camera may be configured with physical keys, and the user may perform a single-click or double-click operation on the physical keys to start the telescopic camera. Similarly, the second camera may be a telescopic camera, and the shooting direction of the telescopic camera may be toward the second direction.
In this embodiment, the first camera may be a rotatable camera, whose shooting direction may be toward the first direction, the second direction, or both the first direction and the second direction. The rotatable camera may be hidden in the electronic device, and when a start instruction (for example, the first instruction) is received, the electronic device may control the rotatable camera to extend out of the electronic device and perform a rotation operation manually or automatically. Alternatively, the rotatable camera may be located on one side of the electronic device and may be configured with a physical key; the user may perform a single-click or double-click operation on the physical key to start the rotatable camera, which may then perform a rotation operation manually or automatically. The same applies to the second camera.
The software architecture of the embodiments of the present application is described in detail below.
As shown in fig. 13, third-party applications, screen unlocking, settings, and the like are at the application layer; the third-party applications may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, and short message.
The portrait service provides a portrait service interface and a programming framework for applications at the application layer. The application framework layer includes a number of predefined functions.
The local device provides a first interface and a second interface for the first camera and the second camera respectively, where the first interface is connected to the first camera client application through a first camera hardware abstraction layer, and the second interface is connected to the second camera client application through a second camera hardware abstraction layer.
The real-time operating system runs on the virtual machine. It can receive external requests and process them at a sufficiently high speed, such that the processing result can, within a specified time, control a production process or make a quick response to the processing system; it schedules all available resources to complete real-time tasks and controls all real-time tasks to run in a coordinated and consistent manner. The real-time operating system may include a first camera driver and a second camera driver; it receives a request message sent by the first camera client application or the second camera client application on the local device, and drives the corresponding computer hardware in the trusted execution environment operating system to start.
The operating system of the trusted execution environment is a general operating system on a server corresponding to the local device, the first camera drive corresponds to the first computer hardware, the second camera drive corresponds to the second computer hardware, and the first camera drive and the second camera drive are respectively connected to the trusted application of the first camera and the trusted application of the second camera through the serial peripheral interface of the first trusted execution environment and the serial peripheral interface of the second trusted execution environment, so that the function call of the first camera and the function call of the second camera are realized.
It can be understood that the first camera and the second camera have two software connection channels that do not interfere with each other. For the electronic device in this embodiment of the application, one piece of face information can be entered through each of the first camera and the second camera, and during face recognition unlocking, both the first camera and the second camera can recognize the two stored face information templates.
In this application embodiment, the first camera and the second camera can be understood as logical cameras, the cameras have two software connection channels that do not interfere with each other in internal logic, and the first camera and the second camera can be the same physical camera in hardware.
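The dual-template behavior described above (one face template enrolled per logical camera, with either camera able to match against both stored templates at unlock time) can be sketched as follows. The matching step here is a placeholder equality check, not a real face-recognition algorithm, and all names are illustrative.

```python
# Illustrative dual-template store: each logical camera enrolls one face
# template, and an unlock capture from either logical camera is matched
# against all stored templates.

templates = {}  # one template slot per logical camera, e.g. "front"/"rear"

def enroll(logical_camera: str, features) -> None:
    templates[logical_camera] = features

def unlock(captured_features) -> bool:
    # Either stored template may authorize the unlock, regardless of which
    # logical camera produced the capture.
    return any(stored == captured_features for stored in templates.values())
```

For example, after enrolling one face through the front link and another through the rear link, a capture matching either template unlocks the device.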
The embodiment of the application also provides a computer readable storage medium. All or part of the processes in the above method embodiments may be performed by relevant hardware instructed by a computer program, which may be stored in the above computer storage medium, and when executed, may include the processes in the above method embodiments. The computer-readable storage medium includes: various media that can store program codes, such as a read-only memory (ROM) or a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the application occur in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in, or transmitted over, a computer-readable storage medium. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid-state disk (SSD)), among others.
The steps in the methods of the embodiments of the application can be reordered, combined, or deleted according to actual needs.
The modules in the apparatus can be merged, divided, or deleted according to actual needs.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.
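The unlock flow claimed below (claim 1, with the distance variant of claim 4) can be summarized in a minimal sketch. This is an assumption-laden illustration, not the patent's implementation: the function names, the callback style, and the `FIRST_THRESHOLD_M` value are all hypothetical.

```python
FIRST_THRESHOLD_M = 0.5  # hypothetical value for the claimed "first threshold"


def try_unlock(show, capture_face, stored_templates, distance_m):
    """Display a first interface, acquire the user's face with the first
    camera on an instruction, and display a second interface only when the
    face matches a stored template and the user is within the threshold."""
    show("first_interface")
    face = capture_face()  # first camera, facing the first direction
    if distance_m <= FIRST_THRESHOLD_M and face in stored_templates:
        show("second_interface")  # e.g. the unlocked screen
        return True
    return False


shown = []
ok = try_unlock(shown.append, lambda: "face_A", ["face_A"], distance_m=0.3)
assert ok and shown == ["first_interface", "second_interface"]
```

A non-matching face, or a distance beyond the threshold, leaves the device on the first interface and returns `False`.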

Claims (19)

1. A biometric method applied to an electronic device equipped with a first camera and a first display screen, wherein a shooting direction of the first camera is oriented in a first direction, a display surface of the first display screen is oriented in a second direction, and the first direction and the second direction are different, the method comprising:
the electronic device displays a first interface on the first display screen;
in response to a received first instruction, the electronic device acquires face information of a first user through the first camera; and
when the face information of the first user matches a stored face information template, the electronic device displays a second interface on the first display screen.
2. The method of claim 1, wherein before the electronic device displays the first interface on the first display screen, the method further comprises:
in response to a received second instruction, the electronic device enters the face information template through the first camera.
3. The method according to claim 1 or 2, wherein the electronic device further comprises a second camera, a shooting direction of the second camera faces the second direction, and the first interface comprises a first control and a second control; the first control is used to trigger the electronic device to start the first camera, and the second control is used to trigger the electronic device to start the second camera;
the acquiring, by the electronic device in response to the received first instruction, the face information of the first user through the first camera specifically comprises:
in response to an operation on the first control, the electronic device acquiring the face information of the first user through the first camera.
4. The method of claim 1, further comprising:
the electronic device detects a distance between the electronic device and the first user;
the displaying, by the electronic device when the face information of the first user matches the stored face information template, a second interface on the first display screen specifically comprises:
when the distance between the electronic device and the first user does not exceed a first threshold and the face information of the first user matches the stored face information template, the electronic device displays the second interface on the first display screen.
5. The method according to claim 4, wherein the electronic device further comprises a second camera, a shooting direction of the second camera faces the second direction; the method further comprises:
when the distance between the electronic device and the first user exceeds the first threshold, the electronic device collects and stores face information of a second user through the second camera.
6. The method of claim 1, further comprising:
when the face information of the first user matches the stored face information template, displaying a third interface on the first display screen, wherein the third interface comprises prompt information used to prompt the first user to complete a first action.
7. The method according to claim 2, wherein the electronic device further comprises a second display screen, a display surface of the second display screen faces the first direction; when the electronic device acquires the face information of the first user through the first camera, the method further comprises:
the electronic device displaying, through the second display screen, the picture captured by the first camera.
8. The method of claim 1, wherein the electronic device further comprises a second display screen, a display surface of the second display screen facing the first direction, the method further comprising:
during the process in which the electronic device acquires the face information of the first user through the first camera, when the electronic device detects that the light intensity of the picture captured by the first camera is less than a specified light intensity threshold, the electronic device lights up the second display screen.
9. The method of claim 1, wherein the electronic device further comprises a second display screen, the second display screen having a display surface facing the first direction; before the electronic device displays the first interface on the first display screen, the method further comprises:
the electronic device displays the first interface on the second display screen; and
in response to a received third instruction, the electronic device prompts a second user to use the first display screen.
10. An electronic device, comprising: one or more processors, a memory, one or more display screens, and one or more cameras, the one or more display screens including a first display screen, the one or more cameras including a first camera, a shooting direction of the first camera facing a first direction, a display surface of the first display screen facing a second direction, the first direction and the second direction being different;
the memory, the one or more display screens, and the one or more cameras being coupled to the one or more processors, the memory being configured to store computer program code, the computer program code comprising computer instructions, and the one or more processors being configured to invoke the computer instructions to cause the electronic device to perform:
displaying a first interface on the first display screen;
in response to a received first instruction, acquiring face information of a first user through the first camera; and
when the face information of the first user matches a stored face information template, displaying a second interface on the first display screen.
11. The electronic device according to claim 10, wherein before displaying the first interface on the first display screen, the processor is further configured to:
in response to a received second instruction, enter the face information template through the first camera.
12. The electronic device according to claim 10 or 11, wherein the electronic device further comprises a second camera, a shooting direction of the second camera faces the second direction, and the first interface comprises a first control and a second control; the first control is used to trigger the processor to start the first camera, and the second control is used to trigger the processor to start the second camera;
the acquiring, in response to the received first instruction, the face information of the first user through the first camera specifically comprises:
in response to an operation on the first control, acquiring the face information of the first user through the first camera.
13. The electronic device of claim 10, wherein the processor is further to:
detecting a distance between the electronic device and the first user;
the displaying, when the face information of the first user matches the stored face information template, a second interface on the first display screen specifically comprises:
when the distance between the electronic device and the first user does not exceed a first threshold and the face information of the first user matches the stored face information template, displaying the second interface on the first display screen.
14. The electronic device of claim 13, wherein the electronic device further comprises a second camera, a shooting direction of the second camera facing the second direction; the processor is further configured to:
when the distance between the electronic device and the first user exceeds the first threshold, collect and store face information of a second user through the second camera.
15. The electronic device of claim 10, wherein the processor is further to:
when the face information of the first user matches the stored face information template, display a third interface on the first display screen, wherein the third interface comprises prompt information used to prompt the first user to complete a first action.
16. The electronic device according to claim 11, wherein the electronic device further comprises a second display screen, a display surface of the second display screen faces the first direction; when collecting the face information of the first user through the first camera, the processor is further configured to:
display, through the second display screen, the picture captured by the first camera.
17. The electronic device of claim 10, wherein the electronic device further comprises a second display screen, a display surface of the second display screen facing the first direction, the processor further to:
during the process in which the processor collects the face information of the first user through the first camera, when the processor detects that the light intensity of the picture captured by the first camera is less than a specified light intensity threshold, light up the second display screen.
18. The electronic device of claim 10, wherein the electronic device further comprises a second display screen, the second display screen having a display surface facing the first direction; before the processor displays the first interface on the first display screen, the processor is further to:
display the first interface on the second display screen; and
in response to a received third instruction, prompt a second user to use the first display screen.
19. A computer storage medium comprising computer instructions that, when executed on an electronic device, cause the electronic device to perform the method of any of claims 1-9.
CN201910936454.2A 2019-09-29 2019-09-29 Biological identification method and electronic equipment Pending CN110784592A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910936454.2A CN110784592A (en) 2019-09-29 2019-09-29 Biological identification method and electronic equipment
PCT/CN2020/115532 WO2021057571A1 (en) 2019-09-29 2020-09-16 Biometric recognition method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910936454.2A CN110784592A (en) 2019-09-29 2019-09-29 Biological identification method and electronic equipment

Publications (1)

Publication Number Publication Date
CN110784592A true CN110784592A (en) 2020-02-11

Family

ID=69384821

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910936454.2A Pending CN110784592A (en) 2019-09-29 2019-09-29 Biological identification method and electronic equipment

Country Status (2)

Country Link
CN (1) CN110784592A (en)
WO (1) WO2021057571A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112507863A (en) * 2020-12-04 2021-03-16 西安电子科技大学 Handwritten character and picture classification method based on quantum Grover algorithm
WO2021057571A1 (en) * 2019-09-29 2021-04-01 华为技术有限公司 Biometric recognition method and electronic device
CN112672057A (en) * 2020-12-25 2021-04-16 维沃移动通信有限公司 Shooting method and device
CN114863510A (en) * 2022-03-25 2022-08-05 荣耀终端有限公司 Face recognition method and device

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN115484387B (en) * 2021-06-16 2023-11-07 荣耀终端有限公司 Prompting method and electronic equipment

Citations (2)

Publication number Priority date Publication date Assignee Title
WO2016177154A1 (en) * 2015-05-06 2016-11-10 中兴通讯股份有限公司 Method and device for switching operation mode of mobile terminal
CN109348046A (en) * 2018-09-25 2019-02-15 罗源县凤山镇企业服务中心 A kind of mobile phone unlocking method and terminal

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN108132747A (en) * 2017-01-03 2018-06-08 中兴通讯股份有限公司 A kind of screen content switching method and dual-screen mobile terminal
CN108804180A (en) * 2018-05-25 2018-11-13 Oppo广东移动通信有限公司 Display methods, device, terminal and the storage medium of user interface
CN114666435B (en) * 2019-04-19 2023-03-28 华为技术有限公司 Method for using enhanced function of electronic device, chip and storage medium
CN110784592A (en) * 2019-09-29 2020-02-11 华为技术有限公司 Biological identification method and electronic equipment


Cited By (7)

Publication number Priority date Publication date Assignee Title
WO2021057571A1 (en) * 2019-09-29 2021-04-01 华为技术有限公司 Biometric recognition method and electronic device
CN112507863A (en) * 2020-12-04 2021-03-16 西安电子科技大学 Handwritten character and picture classification method based on quantum Grover algorithm
CN112507863B (en) * 2020-12-04 2023-04-07 西安电子科技大学 Handwritten character and picture classification method based on quantum Grover algorithm
CN112672057A (en) * 2020-12-25 2021-04-16 维沃移动通信有限公司 Shooting method and device
CN112672057B (en) * 2020-12-25 2022-07-15 维沃移动通信有限公司 Shooting method and device
CN114863510A (en) * 2022-03-25 2022-08-05 荣耀终端有限公司 Face recognition method and device
CN114863510B (en) * 2022-03-25 2023-08-01 荣耀终端有限公司 Face recognition method and device

Also Published As

Publication number Publication date
WO2021057571A1 (en) 2021-04-01

Similar Documents

Publication Publication Date Title
CN110058777B (en) Method for starting shortcut function and electronic equipment
CN110784592A (en) Biological identification method and electronic equipment
CN112751954B (en) Operation prompting method and electronic equipment
CN112492193B (en) Method and equipment for processing callback stream
CN112600961A (en) Volume adjusting method and electronic equipment
CN114125130B (en) Method for controlling communication service state, terminal device and readable storage medium
CN112527093A (en) Gesture input method and electronic equipment
CN115129196A (en) Application icon display method and terminal
CN112930533A (en) Control method of electronic equipment and electronic equipment
CN114089932A (en) Multi-screen display method and device, terminal equipment and storage medium
CN110559645A (en) Application operation method and electronic equipment
CN114115770A (en) Display control method and related device
WO2022007707A1 (en) Home device control method, terminal device, and computer-readable storage medium
CN114222020B (en) Position relation identification method and device and readable storage medium
CN113438366B (en) Information notification interaction method, electronic device and storage medium
CN113168257B (en) Method for locking touch operation and electronic equipment
CN114356195A (en) File transmission method and related equipment
CN115032640B (en) Gesture recognition method and terminal equipment
CN111027374A (en) Image identification method and electronic equipment
CN111132047A (en) Network connection method and device
CN115706916A (en) Wi-Fi connection method and device based on position information
CN114860178A (en) Screen projection method and electronic equipment
CN114338642A (en) File transmission method and electronic equipment
WO2022222702A1 (en) Screen unlocking method and electronic device
CN111475363B (en) Card death recognition method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200211