WO2022222702A1 - Screen unlocking method and electronic device

Screen unlocking method and electronic device

Info

Publication number
WO2022222702A1
WO2022222702A1 (PCT/CN2022/083580, CN2022083580W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
unlocking
category
electronic device
user
Prior art date
Application number
PCT/CN2022/083580
Other languages
English (en)
Chinese (zh)
Inventor
吕帅林
孙斐然
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Publication of WO2022222702A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 User authentication
    • G06F 21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/66 Substation equipment, e.g. for use by subscribers with means for preventing unauthorised or fraudulent calling
    • H04M 1/667 Preventing unauthorised calls from a telephone set
    • H04M 1/67 Preventing unauthorised calls from a telephone set by electronic means
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/725 Cordless telephones

Definitions

  • the present application relates to the technical field of smart terminals, and in particular, to a screen unlocking method and an electronic device.
  • the present application provides a screen unlocking method and electronic device, which can improve the screen unlocking efficiency of the electronic device under harsh conditions and improve user experience.
  • In a first aspect, an embodiment of the present application provides a screen unlocking method applied to an electronic device, including: acquiring a first unlocking image; if unlocking according to the first unlocking image fails, determining a first image category to which the first unlocking image belongs, where the first image category describes the scene in which the electronic device is located when the first unlocking image is acquired; counting the number of unlocking failures of the first image category; acquiring a second unlocking image; and unlocking the screen according to the second unlocking image and obtaining the number of unlocking failures of the first image category.
  • When the number of unlocking failures of the first image category exceeds a first threshold, first prompt information is displayed, and the first prompt information is used to prompt the user to update the unlocking image of the first image category.
  • In this way, the number of unlocking failures of the first image category is counted, and if it exceeds the first threshold, the user is prompted to update the unlocking image of that category, which raises the success rate of screen unlocking the next time the user is in the corresponding scene and improves unlocking efficiency and user experience. A minimal sketch of this counting-and-prompting logic follows.
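  • As an illustration of the flow above, the following minimal Python sketch counts unlocking failures per image category and prompts once a category exceeds the threshold. The threshold value and the names (failure_counts, classify, try_unlock, prompt_update) are illustrative assumptions, not taken from the patent.

```python
from collections import defaultdict

FAILURE_THRESHOLD = 3  # illustrative; the patent does not fix the first threshold

failure_counts = defaultdict(int)  # unlocking failures per image category

def on_unlock_attempt(image, classify, try_unlock, prompt_update):
    """Sketch of the first-aspect flow: on failure, classify the scene and
    count it; on a later success, prompt for categories that kept failing."""
    if try_unlock(image):
        for category, count in list(failure_counts.items()):
            if count > FAILURE_THRESHOLD:
                prompt_update(category)   # display the "first prompt information"
                failure_counts[category] = 0
        return True
    category = classify(image)            # first image category (scene descriptor)
    failure_counts[category] += 1
    return False
```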
  • determining the first image category to which the first unlocking image belongs includes: using a preset first model to detect the image category to which the first unlocking image belongs, to obtain the first image category; the preset first model is used to detect which image category an unlocking image belongs to.
  • before acquiring the first unlocking image, the method further includes: determining that there is no unlocking image of a second image category among the unlocking images set by the user, and generating an unlocking image of the second image category according to the unlocking images set by the user.
  • generating the unlocking image of the second image category according to the unlocking images set by the user includes: acquiring an unlocking image of a third image category from the unlocking images set by the user, and generating the unlocking image of the second image category according to the unlocking image of the third image category.
  • In a second aspect, an embodiment of the present application provides a screen unlocking method applied to an electronic device, including: acquiring a first unlocking image; if unlocking according to the first unlocking image fails, determining a first image category to which the first unlocking image belongs, where the first image category describes the scene in which the electronic device is located when the first unlocking image is acquired; counting the number of unlocking failures of the first image category; acquiring a second unlocking image; and unlocking the screen according to the second unlocking image and obtaining the number of unlocking failures of the first image category. When the number of unlocking failures of the first image category exceeds the first threshold, an unlocking image of the first image category is generated according to preset unlocking images.
  • determining the first image category to which the first unlocking image belongs includes: using a preset first model to detect the image category to which the first unlocking image belongs, to obtain the first image category; the preset first model is used to detect which image category an unlocking image belongs to.
  • generating the unlocking image of the first image category according to the preset unlocking images covers three cases: if there is no unlocking image of the first image category among the preset unlocking images, the unlocking image of the first image category is generated according to the preset unlocking images; if there is an unlocking image of the first image category among the preset unlocking images and it is an unlocking image set by the user, the unlocking image of the first image category is generated according to the preset unlocking images; and if there is an unlocking image of the first image category among the preset unlocking images and it was generated according to a first preset unlocking image, the unlocking image of the first image category is generated according to a second preset unlocking image (see the sketch after the next paragraph).
  • generating the unlocking image of the first image category according to the preset unlocking images includes: acquiring an unlocking image of a fourth image category from the preset unlocking images, and generating the unlocking image of the first image category according to the unlocking image of the fourth image category.
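  • The three generation cases above can be summarised in a small dispatch routine. This is a sketch under assumed data structures (a dict of presets keyed by category, each tagged with its origin); the patent does not prescribe this representation, and the function generate is a hypothetical helper.

```python
def ensure_category_image(category, presets, generate):
    """presets maps category -> {'image': ..., 'origin': 'user' or the
    category of the preset image it was generated from}.
    generate(presets, avoid) returns (new_image, used_category); it builds a
    new unlocking image from the preset images, optionally avoiding one.
    Names and structure here are illustrative assumptions."""
    entry = presets.get(category)
    if entry is None or entry['origin'] == 'user':
        # Cases 1 and 2: no image of this category yet, or the existing
        # one was set by the user -> generate from the presets.
        image, used = generate(presets, avoid=None)
    else:
        # Case 3: the existing image was generated from a first preset;
        # regenerate from a second, different preset unlocking image.
        image, used = generate(presets, avoid=entry['origin'])
    presets[category] = {'image': image, 'origin': used}
```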
  • In a third aspect, embodiments of the present application provide an electronic device, including a processor and a memory, where the memory is used to store a computer program, and when the processor executes the computer program, the electronic device is caused to perform any one of the methods of the first aspect.
  • In a fourth aspect, an embodiment of the present application provides an electronic device, including a processor and a memory, where the memory is used to store a computer program, and when the processor executes the computer program, the electronic device is caused to perform any one of the methods of the second aspect.
  • In a fifth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program that, when run on a computer, causes the computer to execute any one of the methods of the first aspect.
  • In a sixth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program that, when run on a computer, causes the computer to execute any one of the methods of the second aspect.
  • In a seventh aspect, the present application provides a computer program for performing the method of the first aspect when the computer program is executed by a computer.
  • The program in the seventh aspect may be stored in whole or in part on a storage medium packaged with the processor, or in whole or in part in a memory not packaged with the processor.
  • FIG. 1A is a schematic structural diagram of an electronic device according to an embodiment of the present application;
  • FIG. 1B is a schematic diagram of a software structure of an electronic device according to an embodiment of the present application;
  • FIG. 2 is a schematic flowchart of an embodiment of a screen unlocking method of the present application;
  • FIG. 3A is a UI interface diagram of the preset stage in the screen unlocking method of the present application;
  • FIG. 3B is a UI interface diagram of the unlocking stage in the screen unlocking method of the present application;
  • FIG. 3C is a schematic flowchart of another embodiment of the screen unlocking method of the present application;
  • FIG. 5 is a schematic flowchart of another embodiment of the screen unlocking method of the present application;
  • FIG. 6 is a schematic flowchart of another embodiment of the screen unlocking method of the present application;
  • FIG. 7 is a schematic structural diagram of another embodiment of the electronic device of the present application.
  • the user generally presets the user's face image in the electronic device as the comparison standard for screen unlocking; when the user triggers screen unlocking, the electronic device collects the user's face image and compares it with the preset face image in the electronic device; if the comparison result is consistent, the screen is unlocked, otherwise the screen is not unlocked.
  • Whether the face unlock is successful or not is related to the scene where the user and the electronic device are located.
  • the above-mentioned scene includes the objective conditions of the environment where the user and the electronic device are located, such as lighting conditions, and the user's facial features.
  • If the lighting conditions when the user unlocks the screen are similar to the lighting conditions when the face image was set, and the user's face has not changed significantly, the face image collected by the electronic device when the screen is unlocked is similar to the face image preset by the user, so the electronic device can unlock the screen and the user unlocks the screen successfully.
  • If the lighting conditions when the user unlocks the screen differ considerably from the lighting conditions when the user preset the face image (for example, the light intensity is relatively low or relatively high, or the face is backlit), or the user's face changes (for example, in appearance), the face image collected by the electronic device when screen unlocking is triggered may differ considerably from the face image preset by the user, so the electronic device cannot unlock the screen and the user fails to unlock it. If the scene in which the electronic device collects the face image does not change, the user may fail to unlock the screen multiple times in a row, which reduces screen unlocking efficiency and the user experience.
  • For fingerprint unlocking, the user generally pre-stores several fingerprint images of the user in the electronic device as the comparison standard for screen unlocking.
  • When the user triggers screen unlocking, the electronic device collects the user's fingerprint image and compares it with the pre-stored fingerprint images; if they match, the screen is unlocked, otherwise the screen is not unlocked.
  • the user can unlock the screen by entering a fingerprint.
  • However, the user's finger may be dry, wet, cracked, or at low temperature, which may cause screen unlocking to fail.
  • A dry fingerprint means that the user's finger is too dry; a wet fingerprint means that the user's finger is too wet; a fingerprint under low temperature means that the temperature of the user's finger is too low.
  • In these cases the collected fingerprint image may not match, so it cannot be successfully compared with the fingerprint image preset by the user, resulting in unlocking failure; a cracked fingerprint refers to the user's fingerprint being broken by cracks, injuries, and the like.
  • In this case too, the fingerprint image collected by the electronic device cannot be matched against the fingerprint image preset by the user, resulting in unlocking failure. If the condition of the fingerprint cannot be alleviated, the user will fail to unlock the screen many times in a row, or may even be unable to unlock the screen at all, which reduces screen unlocking efficiency and the user experience.
  • the embodiments of the present application provide a screen unlocking method and electronic device, which can improve screen unlocking efficiency under severe conditions and improve user experience.
  • FIG. 1A is a schematic structural diagram of an electronic device 100 .
  • Electronic device 100 may be at least one of a cell phone, a foldable electronic device, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, a vehicle-mounted device, a smart home device, or a smart city device.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) connector 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera module 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
  • the structures illustrated in the embodiments of the present application do not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown, or combine some components, or split some components, or arrange the components differently.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units, for example, the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), controller, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or neural-network processing unit (neural-network processing unit, NPU), etc. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
  • the processor can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in the processor 110 may be a cache memory.
  • the memory may store instructions or data that the processor 110 has just used or uses frequently. If the processor 110 needs the instructions or data again, they can be fetched directly from this memory, avoiding repeated accesses, reducing the waiting time of the processor 110, and thereby improving system efficiency.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the processor 110 may be connected to modules such as a touch sensor, an audio module, a wireless communication module, a display, a camera, and the like through at least one of the above interfaces.
  • the interface connection relationship between the modules illustrated in the embodiments of the present application is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the USB connector 130 is an interface conforming to the USB standard specification, which can be used to connect the electronic device 100 and peripheral devices, and specifically can be a Mini USB connector, a Micro USB connector, a USB Type C connector, and the like.
  • the USB connector 130 can be used to connect to a charger, so that the charger can charge the electronic device 100, and can also be used to connect to other electronic devices, so as to transmit data between the electronic device 100 and other electronic devices. It can also be used to connect headphones to output audio stored in electronic devices through the headphones.
  • This connector can also be used to connect other electronic devices, such as VR devices, etc.
  • the standard specifications of the Universal Serial Bus may be USB1.x, USB2.0, USB3.x, and USB4.
  • the charging management module 140 is used for receiving charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 charges the battery 142 , it can also supply power to the electronic device through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera module 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 may provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110 .
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or videos through the display screen 194 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 110, and may be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area network (WLAN) (such as wireless fidelity (Wi-Fi) network), Bluetooth (BT), Bluetooth low energy (BLE), ultra wide band (UWB), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and other solutions.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2 .
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other electronic devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
  • the electronic device 100 may implement a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 194 is used to display images, videos, and the like.
  • Display screen 194 includes a display panel.
  • the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and so on.
  • electronic device 100 may include one or more display screens 194 .
  • the electronic device 100 may implement a camera function through a camera module 193, an ISP, a video codec, a GPU, a display screen 194, an application processor AP, a neural network processor NPU, and the like.
  • the camera module 193 can be used to collect color image data and depth data of the photographed object.
  • the ISP can be used to process the color image data collected by the camera module 193 .
  • When the shutter is opened, light is transmitted through the lens to the camera photosensitive element, which converts the optical signal into an electrical signal and passes it to the ISP for processing, converting it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin tone. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera module 193 .
  • the camera module 193 may be composed of a color camera module and a 3D sensing module.
  • the photosensitive element of the camera of the color camera module may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
  • the 3D sensing module may be a time of flight (TOF) 3D sensing module or a structured light (structured light) 3D sensing module.
  • the structured light 3D sensing is an active depth sensing technology, and the basic components of the structured light 3D sensing module may include an infrared (Infrared) emitter, an IR camera module, and the like.
  • the working principle of the structured light 3D sensing module is to first project a light spot pattern onto the object to be photographed, then receive the light coding of the spot pattern reflected from the object surface, compare the similarities and differences with the originally projected pattern, and use the principle of triangulation to calculate the three-dimensional coordinates of the object, as sketched below.
  • the three-dimensional coordinates include the distance between the electronic device 100 and the object to be photographed.
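  • The triangulation step can be reduced to the classic baseline relation: for a rectified projector-camera pair, depth is the baseline times the focal length divided by the observed pattern shift. The following toy helper assumes such a rectified geometry; the actual module's calibration is not disclosed in this document.

```python
def structured_light_depth(baseline_m, focal_px, disparity_px):
    """Toy triangulation: depth (metres) from the shift, in pixels, between
    the projected spot pattern and the pattern seen by the IR camera.
    Assumes a rectified projector-camera pair; illustrative only."""
    if disparity_px <= 0:
        raise ValueError("pattern shift must be positive")
    return baseline_m * focal_px / disparity_px

# e.g. a 5 cm baseline, 800 px focal length and a 20 px pattern shift -> 2.0 m
print(structured_light_depth(0.05, 800, 20))
```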
  • the TOF 3D sensing can be an active depth sensing technology, and the basic components of the TOF 3D sensing module can include an infrared (Infrared) transmitter, an IR camera module, and the like.
  • the working principle of the TOF 3D sensing module is to calculate the distance (i.e., depth) between the module and the object to be photographed from the round-trip time of the emitted infrared light, so as to obtain a 3D depth map.
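  • The time-of-flight relation itself is one line: the infrared pulse travels to the object and back, so the one-way distance is half the round-trip time multiplied by the speed of light. A minimal sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_depth(round_trip_seconds):
    """Depth from the infrared round-trip time: d = c * t / 2."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

print(tof_depth(10e-9))  # a 10 ns round trip corresponds to about 1.5 m
```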
  • Structured light 3D sensing modules can also be used in face recognition, somatosensory game consoles, industrial machine vision detection and other fields.
  • TOF 3D sensing modules can also be applied to game consoles, augmented reality (AR)/virtual reality (VR) and other fields.
  • the camera module 193 may also be composed of two or more cameras.
  • the two or more cameras may include color cameras, and the color cameras may be used to collect color image data of the photographed object.
  • the two or more cameras may use stereo vision technology to collect depth data of the photographed object.
  • Stereoscopic vision technology is based on the principle of human-eye parallax: under natural light, two or more cameras capture images of the same object from different angles, and operations such as triangulation are then performed to obtain the distance between the electronic device 100 and the object, that is, the depth information.
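  • A minimal sketch of this idea on one rectified scanline: match each patch of the left image in the right image, take the pixel shift (disparity) of the best match, and convert it to depth via depth = focal_px * baseline_m / disparity. Real stereo pipelines are far more elaborate; this only illustrates the parallax principle, with illustrative parameter names.

```python
import numpy as np

def scanline_depths(left_row, right_row, focal_px, baseline_m, patch=5, max_disp=64):
    """1-D block matching along one rectified scanline (illustrative only).
    left_row and right_row are 1-D float arrays of pixel intensities."""
    half = patch // 2
    depths = []
    for x in range(half + 1, len(left_row) - half):
        ref = left_row[x - half:x + half + 1]
        # candidate disparities shift the patch leftwards in the right image
        errs = [np.sum((right_row[x - d - half:x - d + half + 1] - ref) ** 2)
                for d in range(1, min(max_disp, x - half) + 1)]
        d = int(np.argmin(errs)) + 1          # best-matching disparity (pixels)
        depths.append(focal_px * baseline_m / d)
    return depths
```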
  • the electronic device 100 may include one or more camera modules 193 .
  • the electronic device 100 may include a front camera module 193 and a rear camera module 193 .
  • the front camera module 193 can usually be used to collect color image data and depth data of the photographer facing the display screen 194, and the rear camera module can be used to collect color image data and depth data of the objects the photographer faces (such as people, landscapes, and the like).
  • the CPU, GPU or NPU in the processor 110 may process the color image data and depth data collected by the camera module 193 .
  • the NPU can recognize the color image data collected by the camera module 193 (specifically, the color camera module) through a neural network algorithm based on skeleton point recognition technology, such as a convolutional neural network (CNN) algorithm, to determine the skeleton points of the person being photographed.
  • the CPU or GPU can also run the neural network algorithm to realize the determination of the skeletal points of the photographed person according to the color image data.
  • the CPU, GPU or NPU can also be used to determine the figure of the person being photographed (for example, the body proportions and the fullness of the body parts between the skeleton points) according to the depth data collected by the camera module 193 (which may be a 3D sensing module) and the identified skeleton points, further determine body beautification parameters for the person, and finally process the captured image according to the body beautification parameters so that the body shape of the person in the image is beautified. How body beautification is performed on the image based on the color image data and depth data collected by the camera module 193 is introduced in detail in subsequent embodiments and is not described here.
  • The digital signal processor is used to process digital signals; in addition to digital image signals, it can process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy, and so on.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos of various encoding formats, such as: Moving Picture Experts Group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100 .
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example, saving files such as music and videos in the external memory card, or transferring music, videos and other files from the electronic device to the external memory card.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area may store data (such as audio data, phone book, etc.) created during the use of the electronic device 100 and the like.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the processor 110 executes various functional methods or data processing of the electronic device 100 by executing the instructions stored in the internal memory 121 and/or the instructions stored in the memory provided in the processor.
  • the electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playback, recording, etc.
  • the audio module 170 is used for converting digital audio information into analog audio signal output, and also for converting analog audio input into digital audio signal. Audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110 , or some functional modules of the audio module 170 may be provided in the processor 110 .
  • The speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 may listen to music through the speaker 170A, or output an audio signal for a hands-free call.
  • The receiver 170B, also referred to as an "earpiece", is used to convert audio electrical signals into sound signals.
  • The voice can be heard by placing the receiver 170B close to the human ear.
  • The microphone 170C, also called a "mic" or "mouthpiece", is used to convert sound signals into electrical signals.
  • The user can input a sound signal into the microphone 170C by speaking close to it.
  • the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the earphone jack 170D is used to connect wired earphones.
  • the earphone interface 170D may be the USB connector 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
  • the pressure sensor 180A may be provided on the display screen 194 .
  • the capacitive pressure sensor may be comprised of at least two parallel plates of conductive material.
  • the electronic device 100 determines the intensity of the pressure according to the change in capacitance.
  • When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed. A sketch of this dispatch follows.
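  • The intensity-dependent dispatch can be sketched in a few lines. The threshold value and function names are assumptions for illustration; the text above only fixes the behaviour (light press views, firm press composes).

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # illustrative, normalised pressure units

def on_message_icon_touch(pressure):
    """Dispatch on touch intensity over the short message application icon."""
    if pressure < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"      # light press
    return "create_new_short_message"    # firm press
```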
  • the gyro sensor 180B may be used to determine the motion attitude of the electronic device 100 .
  • In some embodiments, the angular velocity of the electronic device 100 about three axes (i.e., the x, y and z axes) may be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization.
  • the gyro sensor 180B detects the shaking angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and controls the reverse movement of the lens to offset the shaking of the electronic device 100 to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenarios.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist in positioning and navigation.
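  • One common way to turn a pressure reading into altitude is the international barometric formula for the troposphere, sketched below. This is a generic approximation, not the device's actual calibration.

```python
def pressure_to_altitude(pressure_hpa, sea_level_hpa=1013.25):
    """Altitude (metres) from measured air pressure via the standard
    barometric formula; illustrative, uncalibrated."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** 0.1903)

print(pressure_to_altitude(900.0))  # roughly 990 m above sea level
```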
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 can detect the opening and closing of the flip holster using the magnetic sensor 180D.
  • the magnetic sensor 180D can be used to detect the folding or unfolding of the electronic device, or the folding angle.
  • When the electronic device 100 is a flip phone, it can detect the opening and closing of the flip cover according to the magnetic sensor 180D, and can then set features such as automatic unlocking upon flipping open according to the detected opening or closing state of the holster or flip cover.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally three axes).
  • the magnitude and direction of gravity can be detected when the electronic device 100 is stationary. It can also be used to identify the posture of electronic devices, and can be used in applications such as horizontal and vertical screen switching, pedometers, etc.
  • The distance sensor 180F is used to measure distance; the electronic device 100 can measure distance by infrared or laser. In some embodiments, such as in a shooting scene, the electronic device 100 can use the distance sensor 180F to measure distance to achieve fast focusing.
  • Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • the electronic device 100 emits infrared light to the outside through the light emitting diode.
  • Electronic device 100 uses photodiodes to detect infrared reflected light from nearby objects. When the intensity of the detected reflected light is greater than the threshold, it may be determined that there is an object near the electronic device 100 . When the intensity of the detected reflected light is less than the threshold, the electronic device 100 may determine that there is no object near the electronic device 100 .
  • the electronic device 100 can use the proximity light sensor 180G to detect that the user holds the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • The proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L may be used to sense ambient light brightness.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is blocked, e.g., when the electronic device is in a pocket. When it is detected that the electronic device is blocked or in a pocket, some functions (such as touch functions) can be disabled to prevent accidental operation.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, accessing application locks, taking pictures with fingerprints, answering incoming calls with fingerprints, and the like.
  • the temperature sensor 180J is used to detect the temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature detected by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of the processor in order to lower power consumption and implement thermal protection.
  • the electronic device 100 heats the battery 142 when the temperature detected by the temperature sensor 180J is below another threshold. In other embodiments, the electronic device 100 may boost the output voltage of the battery 142 when the temperature is below yet another threshold.
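  • The temperature processing strategy amounts to a small threshold policy. The threshold values and method names below are illustrative assumptions; only the three behaviours (throttle, heat, boost) come from the text above.

```python
THROTTLE_ABOVE_C = 45.0       # illustrative thresholds in degrees Celsius;
HEAT_BATTERY_BELOW_C = 0.0    # the patent does not specify values
BOOST_VOLTAGE_BELOW_C = -10.0

def apply_thermal_policy(temp_c, device):
    """Throttle when hot, heat the battery when cold, boost the battery
    output voltage when very cold (hypothetical device interface)."""
    if temp_c > THROTTLE_ABOVE_C:
        device.reduce_processor_performance()   # thermal protection
    elif temp_c < BOOST_VOLTAGE_BELOW_C:
        device.boost_battery_output_voltage()
    elif temp_c < HEAT_BATTERY_BELOW_C:
        device.heat_battery()
```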
  • The touch sensor 180K is also called a "touch device".
  • the touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also referred to as a "touchscreen".
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to touch operations may be provided through display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the location where the display screen 194 is located.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone of the human vocal part.
  • the bone conduction sensor 180M can also contact the human pulse and receive the blood pressure pulse signal.
  • the bone conduction sensor 180M can also be disposed in the earphone, combined with the bone conduction earphone.
  • the audio module 170 can analyze the voice signal based on the vibration signal of the vocal vibration bone block obtained by the bone conduction sensor 180M, and realize the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 180M, and realize the function of heart rate detection.
  • the keys 190 may include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys.
  • the electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100 .
  • Motor 191 can generate vibrating cues.
  • the motor 191 can be used for vibrating alerts for incoming calls, and can also be used for touch vibration feedback.
  • touch operations acting on different applications can correspond to different vibration feedback effects.
  • the motor 191 can also correspond to different vibration feedback effects for touch operations on different areas of the display screen 194 .
  • Touch operations in different application scenarios (for example, time reminders, receiving messages, alarm clocks, and games) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 may be an indicator light, which can be used to indicate the charging state and battery level changes, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the SIM card interface 195 is used to connect a SIM card.
  • a SIM card can be brought into contact with or separated from the electronic device 100 by inserting it into or pulling it out of the SIM card interface 195.
  • the electronic device 100 may support one or more SIM card interfaces.
  • the SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card and so on. Multiple cards can be inserted into the same SIM card interface 195 at the same time. Multiple cards can be of the same type or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 is also compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to implement functions such as call and data communication.
  • the electronic device 100 may employ an eSIM, i.e., an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100 .
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiments of the present application take an Android system with a layered architecture as an example to exemplarily describe the software structure of the electronic device 100 .
  • FIG. 1B is a block diagram of the software structure of the electronic device 100 according to the embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • the Android system is divided into five layers, from top to bottom: the application layer, the application framework layer, the Android runtime (ART) and native C/C++ libraries, the hardware abstraction layer (HAL), and the kernel layer.
  • the application layer can include a series of application packages.
  • the application package may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, etc.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include window managers, content providers, view systems, resource managers, notification managers, activity managers, input managers, and the like.
  • the window manager provides the Window Manager Service (WMS), which can be used for window management, window animation management, surface management, and as a transfer station for the input system.
  • Content providers are used to store and retrieve data and make these data accessible to applications.
  • This data can include video, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications.
  • a display interface can consist of one or more views.
  • the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
  • the resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files and so on.
  • the notification manager enables applications to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear automatically after a brief pause without user interaction. For example, the notification manager is used to notify download completion, message reminders, etc.
  • the notification manager can also display notifications in the status bar at the top of the system in the form of graphs or scroll bar text, such as notifications of applications running in the background, and notifications on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a prompt sound is issued, the electronic device vibrates, and the indicator light flashes.
  • The Activity Manager provides the Activity Manager Service (AMS), which can be used for starting, switching and scheduling system components (such as activities, services, content providers and broadcast receivers) and for managing and scheduling application processes.
  • the input manager can provide an input management service (Input Manager Service, IMS), and the IMS can be used to manage the input of the system, such as touch screen input, key input, sensor input and so on.
  • IMS fetches events from input device nodes, and distributes events to appropriate windows through interaction with WMS.
  • the Android runtime layer includes the core library and ART.
  • the Android runtime is responsible for converting source code to machine code.
  • the Android runtime mainly uses ahead-of-time (AOT) compilation technology and just-in-time (JIT) compilation technology.
  • the core library is mainly used to provide the functions of basic Java class libraries, such as basic data structures, mathematics, IO, tools, databases, networks and other libraries.
  • the core library provides an API for users to develop Android applications.
  • a native C/C++ library can include multiple functional modules. For example: surface manager, Media Framework, libc, OpenGL ES, SQLite, Webkit, etc.
  • the surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
  • the media framework supports playback and recording of many common audio and video formats, as well as still image files.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • OpenGL ES provides the drawing and manipulation of 2D graphics and 3D graphics in applications. SQLite provides a lightweight relational database for applications of the electronic device 100 .
  • the hardware abstraction layer runs in user space, encapsulates the kernel layer driver, and provides a calling interface to the upper layer.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display drivers, camera drivers, audio drivers, and sensor drivers.
  • When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes touch operations into raw input events (including touch coordinates, timestamps of touch operations, etc.). Raw input events are stored at the kernel layer.
  • the application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. For example, if the touch operation is a tap and the corresponding control is the camera application icon, the camera application calls the interface of the application framework layer to start the camera application, and then starts the camera driver by calling the kernel layer.
  • the camera module 193 captures still images or video.
  • The following takes an electronic device having the structures shown in FIG. 1A and FIG. 1B as an example, and specifically describes the methods provided by the embodiments of the present application with reference to the drawings and application scenarios.
  • the screen unlocking methods include face unlocking, fingerprint unlocking, iris unlocking, etc., and different unlocking images are used in different screen unlocking methods.
  • the face image and iris image can be captured by the camera, and the fingerprint image can be captured by the fingerprint sensor.
  • face unlocking and fingerprint unlocking are mainly used as examples.
  • FIG. 2 is a flowchart of an embodiment of the screen unlocking method of the present application. As shown in FIG. 2 , the method may include:
  • Step 201 The electronic device acquires a first unlock image.
  • the user performs a specified operation on the electronic device to trigger the screen unlocking process; accordingly, the electronic device obtains the corresponding unlocking image according to the screen unlocking method triggered by the user. For example, if the screen unlocking method triggered by the user is fingerprint unlocking, the first unlocking image may be a fingerprint image; if it is face unlocking, the first unlocking image may be a face image.
  • Step 202 If the unlocking fails according to the first unlocking image, the electronic device determines the first image category to which the first unlocking image belongs, and the first image category is used to describe the scene where the electronic device is located when the first unlocking image is acquired.
  • the scene when the face is unlocked can be divided into image categories such as normal, dark light, occlusion, strong light, backlight, and others.
  • If the first unlocking image is a face image, the first image category may be one of the above image categories: normal, dark light, occlusion, strong light, backlight, or others.
  • the scene when a fingerprint is unlocked can be divided into image categories such as dry fingerprint, wet fingerprint, cold fingerprint, strong-light fingerprint, dark-light fingerprint, cracked fingerprint, and others.
  • If the first unlocking image is a fingerprint image, the first image category may be one of the above image categories: dry fingerprint, wet fingerprint, cold fingerprint, strong-light fingerprint, dark-light fingerprint, cracked fingerprint, or others. The category sets for both modalities are sketched below.
  • Step 203 The electronic device counts the number of failed unlocking of the first image category.
  • The number of failed unlockings of the first image category records the number of times that unlocking the screen using a face image belonging to the first image category failed within a first time period. The first time period may be a time period of preset duration whose end point is the execution time of step 203; the specific duration of the first time period is not limited in this embodiment of the present application.
  • Provided that the electronic device records the occurrence time of each unlocking failure and the image category to which the face image used in the failed attempt belongs, the electronic device can conveniently count the number of failures to unlock the screen using face images belonging to the first image category within the first time period. A minimal sketch of such counting follows.
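By way of illustration only, the following Python sketch shows one way such per-category failure counting within a sliding time window could be implemented. The class and method names, and the 24-hour window, are assumptions for the sketch, not part of the disclosed method.

```python
import time
from collections import defaultdict, deque
from typing import Optional

WINDOW_SECONDS = 24 * 3600  # assumed duration of the "first time period"

class UnlockFailureCounter:
    """Per-image-category failure counter over a sliding window ending now."""

    def __init__(self, window: int = WINDOW_SECONDS):
        self.window = window
        self._failures = defaultdict(deque)  # category -> failure timestamps

    def record_failure(self, category: str, now: Optional[float] = None) -> None:
        # record the occurrence time of one failed attempt for this category
        self._failures[category].append(time.time() if now is None else now)

    def count(self, category: str, now: Optional[float] = None) -> int:
        # count failures inside the window whose end point is `now`
        now = time.time() if now is None else now
        q = self._failures[category]
        while q and q[0] < now - self.window:
            q.popleft()  # drop entries older than the first time period
        return len(q)
```

For instance, `counter.record_failure("dark light")` would be called after each failed attempt classified as a dark-light scene, and `counter.count("dark light")` gives the number to compare against the first threshold.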
  • Step 204 The electronic device acquires the second unlock image.
  • The user performs a specified operation on the electronic device to trigger the electronic device to perform the screen unlocking process. Accordingly, the electronic device obtains the corresponding unlocking image according to the screen unlocking method triggered by the user. For example, if the screen unlocking method triggered by the user is fingerprint unlocking, the second unlocking image may be a fingerprint image; if it is face unlocking, the second unlocking image may be a face image.
  • The first unlocking image and the second unlocking image are unlocking images obtained when the user triggers the electronic device to perform two screen unlocking processes. Therefore, the first unlocking image and the second unlocking image may belong to the same or different screen unlocking methods; the embodiments of the present application are not limited in this respect.
  • For example, if the user triggers face unlocking the first time and fingerprint unlocking the second time, the first unlocking image obtained by the electronic device is a face image and the second unlocking image is a fingerprint image; if the user triggers face unlocking both times, both the first and second unlocking images are face images.
  • Step 205: The electronic device unlocks the screen according to the second unlocking image and obtains the number of failed unlockings of the first image category. When the number of failed unlockings of the first image category exceeds a first threshold, first prompt information is displayed, where the first prompt information is used to prompt the user to update the unlocking image of the first image category.
  • the specific value of the first threshold is not limited in this embodiment of the present application.
  • In this way, the number of failed unlockings of the first image category is counted, and if it exceeds the first threshold, the user is prompted to update the unlocking image of the first image category. The next time the user is in the scene corresponding to that image category, the success rate of unlocking the screen is therefore higher, which improves the screen unlocking efficiency of the electronic device and the user experience. A sketch of the overall flow follows.
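As a minimal sketch of the overall flow of steps 201 to 205, the following Python fragment wires together a matcher, a scene classifier, and a per-category failure counter. `matcher`, `classifier`, `notify_user`, and the threshold value 3 are illustrative placeholders, since the embodiment does not fix them; iterating over all categories on success is a slight generalization of checking only the first image category.

```python
from collections import Counter

FIRST_THRESHOLD = 3  # assumed value; the embodiment does not fix the threshold

def handle_unlock_attempt(image, matcher, classifier,
                          failures: Counter, notify_user) -> bool:
    """Sketch of steps 201-205 for one unlock attempt."""
    if matcher(image):  # step 205: screen unlocked successfully
        for category, n in failures.items():
            if n > FIRST_THRESHOLD:
                # first prompt information: ask the user to update the
                # unlocking image of the over-threshold category
                notify_user(f"Unlocking often fails in '{category}' scenes; "
                            f"please update your '{category}' unlock image.")
        return True
    category = classifier(image)   # step 202: scene category of the failure
    failures[category] += 1        # step 203: count the failure
    return False
```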
  • The screen unlocking method provided by the embodiments of the present application is applicable, for example, to the face unlocking mode.
  • the screen unlocking method provided by the embodiments of the present application may include two stages: a preset stage and an unlock stage.
  • An image library is preset in the electronic device and is used to store face images of various image categories. Face images of at least two image categories can be set in the image library of the present application, and at least one face image can be set for each image category.
  • The set face image may be, for example, a face image of the owner of the electronic device.
  • the image category of the face image is used to describe the scene in which the electronic device obtains the face image. For example, in FIG. 3A , a total of 6 image categories including normal, dark light, occlusion, strong light, backlight, and others are preset as examples.
  • the above six image categories are related to the scene where the user and the electronic device are located when the user unlocks the screen using the face unlock method.
  • The three image categories "dark light", "strong light", and "backlight" are mainly related to the light intensity and lighting direction in the scene, while the image category "occlusion" is mainly related to whether the user's face is occluded.
  • The image category "normal" corresponds to the scene in which the user unlocks daily; in this scene, the light intensity, the lighting direction, and the user's face all fall within certain standards. The image category "others" corresponds to scenes outside the scenes corresponding to the above image categories. Unless customized by the user, when the electronic device has not yet been used, the face images stored under each image category in the image library may be empty, waiting for the user to set them after starting to use the electronic device.
  • FIG. 3A is an example diagram of a user interface (user interface, UI) for the user to set face images in the image library, wherein:
  • the user can enter the unlock password setting interface 210 of the electronic device, and select the "face" control to enter the face setting interface 220 .
  • In the face setting interface 220, the above-mentioned six image categories preset for face images in the image library are displayed, and the user can choose among them according to his own wishes and the scene conditions (such as the light intensity in the environment at the time of setting, the lighting direction, whether the face is occluded, etc.).
  • The electronic device can also require the user to set the face images of a certain image category (such as the category "normal"). Taking FIG. 3A as an example, the user selects the "Settings" control corresponding to the image category "normal" to enter the face image setting interface 230 of the category "normal".
  • the user can perform editing operations of “normal” category face images in the face image setting interface 230, and the above editing operations may include, but are not limited to, operations such as adding a face image, deleting a face image, and the like. The specific process of adding a face image is not shown in FIG. 3A.
  • When the electronic device detects the operation of adding a face image, it starts the camera to capture an image, detects a face image in the captured image, and stores the detected face image to complete the setting of the face image.
  • The number of face images set for each image category is not limited in this application. If the user wants to activate face unlocking, at least one face image must be set in at least one of the above image categories. For example, only after the user sets the face image "face 1" in the "normal" image category can the electronic device activate face unlocking.
  • The user can not only open the face setting interface 220 to set face images during the above-mentioned process of enabling face unlocking, but can also open the face setting interface 220 at any time after face unlocking is enabled to add, delete, or otherwise edit the face images of each image category.
  • The face setting interface 220 is an optional interface. If the user needs to enable face unlocking, he can also enter the unlocking password setting interface 210 of the electronic device and select the "face" control, in response to which the electronic device displays the face image setting interface 230, in which all face images in the image library can be displayed. In this implementation, the electronic device may not prompt the user with the image category to which each face image belongs in the face image setting interface, but may only list the face images set by the user for editing. A schematic sketch of such an image library follows.
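The image library described above can be pictured as a simple category-keyed store. The following Python sketch is one illustrative layout (the class and field names are assumptions); it also reflects the rule that face unlocking can be enabled once at least one image of at least one category is set.

```python
from dataclasses import dataclass, field
from typing import Dict, List

FACE_CATEGORIES = ("normal", "dark light", "occlusion",
                   "strong light", "backlight", "others")

@dataclass
class ImageLibrary:
    # category -> enrolled face templates; empty until the user sets them
    images: Dict[str, List] = field(
        default_factory=lambda: {c: [] for c in FACE_CATEGORIES})

    def add(self, category: str, face_image) -> None:
        self.images[category].append(face_image)

    def can_enable_face_unlock(self) -> bool:
        # face unlocking can be activated once at least one face image of
        # at least one image category has been set
        return any(self.images.values())
```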
  • a first model is preset in the electronic device, and the first model is used to detect the image category to which the face image belongs. For example, taking the above six image categories as an example, the first model can be used to detect which image category the face image belongs to among the above six image categories.
  • the first model may be a trained n-classifier, where n is the total number of image categories. Continuing the example of the above six image categories, the value of n is 6.
  • Specifically, an n-classifier can be preset, and face images captured in different scenes, labeled with the image categories corresponding to those scenes, can be used as training samples to train the n-classifier, thereby obtaining the above-mentioned first model.
  • The input of the first model is a face image, and the output is the probability that the face image belongs to each of the n image categories; the image category with the highest probability can be taken as the output result of the first model, that is, the image category to which the face image belongs.
  • The algorithm of the n-classifier is not limited in this application; for example, it may be a related deep learning algorithm. A minimal sketch of such a classifier follows.
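For illustration, a minimal PyTorch sketch of such an n-classifier is shown below. The architecture is an arbitrary small CNN chosen for brevity; the embodiment does not specify the network, so every layer choice here is an assumption.

```python
import torch
import torch.nn as nn

N_CATEGORIES = 6  # normal, dark light, occlusion, strong light, backlight, others

class SceneClassifier(nn.Module):
    def __init__(self, n: int = N_CATEGORIES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))  # one logit per category

def predict_category(model: SceneClassifier, face: torch.Tensor) -> int:
    # face: a (3, H, W) image tensor; returns the index of the most
    # probable image category, as described for the first model
    probs = torch.softmax(model(face.unsqueeze(0)), dim=1)
    return int(probs.argmax(dim=1))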
  • A face image set by the user in the face image setting interface 230 can be stored in the image library after its image category has been detected by the above-mentioned first model.
  • the UI interface diagram of the unlocking stage is shown in Figure 3B.
  • If the user has only set at least one face image of the "normal" image category in the electronic device, then when the user and the electronic device are in a dark light scene, that is, a scene with low light brightness, the user's face image obtained by the electronic device is likely to differ from the face image set in advance in the normal scene. The face image captured in the dark light scene therefore cannot be successfully compared with the preset face image; that is, the electronic device fails to verify the collected face image and the screen unlocking fails, so the user may repeatedly be unable to unlock the electronic device using the face unlocking method.
  • In this case, the electronic device can display prompt information to the user, the prompt information being used to prompt the user to update the face image of the "dark light" image category.
  • In FIG. 3B, the electronic device is taken as an example in which no face image of the "dark light" image category has been set in the image library in advance, so the prompt information can prompt the user to add a face image of the "dark light" image category.
  • If face images of the "dark light" image category have already been set in the image library and the situation shown in FIG. 3B, in which the user repeatedly cannot unlock the electronic device using the face unlocking method, still occurs, the prompt information displayed by the electronic device may prompt the user to add a face image of the "dark light" image category or to replace the existing face images of that category.
  • The UI example shown in FIG. 3B can also be extended to the scenes corresponding to other image categories. For example, if the dark light scene in FIG. 3B is replaced with a strong light scene, the prompt information in the electronic device may instead prompt the user to update the face image of the strong light category.
  • FIG. 3C is a schematic flowchart of the unlocking stage. As shown in FIG. 3C, it may include the following steps:
  • Step 301 The electronic device receives the unlock request operation, and obtains a first face image.
  • Electronic devices are generally preset with buttons or designated gestures for users to trigger screen unlocking. Therefore, if the electronic device is in the screen-locked state and detects a pressing operation on a designated key, or detects a designated gesture operation on the screen of the electronic device, the electronic device has received an unlock request operation. For example, assuming that pressing the power button of the electronic device is preset to trigger face recognition for unlocking the screen, then if the user presses the power button while the screen of the electronic device is locked, the electronic device can start the camera to capture a user image and obtain the first face image of the user from the captured image.
  • Step 302: The electronic device compares the first face image with the face images stored in the image library in turn. If the comparison between the first face image and a certain face image in the image library is consistent, the electronic device unlocks the screen successfully; if the comparisons with all face images in the image library are inconsistent, the electronic device fails to unlock the screen, and step 303 is executed.
  • Step 303 The electronic device determines the first image category to which the first face image belongs.
  • the electronic device may use the above-mentioned first model to determine the first image category to which the face image belongs.
  • It should be noted that the possible values of the first image category should be consistent with the possible image categories of the face images in the image library; that is, if the face images in the image library are divided into the six image categories of normal, dark light, occlusion, strong light, backlight, and others, the possible values of the first image category are normal, dark light, occlusion, strong light, backlight, or others.
  • Step 304 The electronic device counts the number of failed unlocking of the first image category.
  • For the implementation of this step, reference may be made to step 203, which is not repeated here.
  • Step 305 The electronic device determines whether the number of times of unlocking failures of the first image category exceeds the first threshold, and if so, executes Step 306, and if not, the branch flow ends.
  • It should be noted that regardless of whether the number of unlocking failures exceeds the first threshold, the electronic device at this point is still in the screen-locked state and has not been successfully unlocked.
  • Step 306 The electronic device receives the unlocking request operation, obtains a second unlocking image, unlocks the screen according to the second unlocking image, and displays the first prompt information.
  • the judging step in step 305 may also be performed after the electronic device unlocks the screen, and when the judging result exceeds the first threshold, the first prompt information is displayed.
  • the second unlocking image may be a fingerprint image, a face image, or an iris image, etc., which is not limited in this embodiment of the present application.
  • the first prompt information is used to prompt the user to update the face image of the first image category.
  • Specifically, the electronic device can determine whether a face image of the first image category exists in the image library. If a face image of the first image category exists, it means that there may be a problem with that face image, and the first prompt information can specifically be used to remind the user that the face image of the first image category is inaccurate and to request the user to reset it; if the user resets the face image of the first image category, the possibility that the user subsequently unlocks the screen successfully in this scene is improved. If no face image of the first image category exists, the first prompt information can specifically be used to prompt the user to add a face image of the first image category; if the user adds one, the next time the user uses face unlocking under the same environmental conditions, the unlocking can succeed, which increases the unlocking success rate and improves the user experience. This choice is sketched below.
  • In this embodiment, the number of times the user fails to unlock the screen using face images of each image category within a certain time period is recorded, and if the number of failures corresponding to a certain image category exceeds the first threshold, the user is prompted to update the face image of that image category, so that the next time the user is in the scene corresponding to that image category, the success rate of screen unlocking is increased, thereby improving screen unlocking efficiency and the user experience.
  • If the electronic device determines that there is no face image of a second image category among the face images set by the user, it can use the preset face images in the image library to generate face images of the second image category.
  • The preset face image may be a face image set by the user, or a face image of a certain image category generated from a face image set by the user. Compared with a new face image regenerated from an already-generated face image, a new face image generated from a face image set by the user is relatively more accurate; it is therefore preferable to use a face image set by the user to generate the face image of the above-mentioned second image category.
  • When the electronic device generates the face image of the second image category according to the face images set by the user, a face image of a pre-specified image category can be used as the basis for generating the new face image. In general, it is easiest for the user to add face images of the "normal" image category to the image library. Based on this, the electronic device generating a face image of the second image category according to the face images set by the user may specifically include:
  • a face image of the third image category is obtained from the face image set by the user, and a face image of the second image category is generated according to the face image of the third image category.
  • the third image category may be the "normal" image category in the above example.
  • the second image category may be an image category for which the user has not set a face image.
  • For example, if the third image category is the "normal" image category, the second image category may be an image category other than "normal", such as dark light, strong light, backlight, or occlusion.
  • An example of a method for generating a face image of the second image category according to the face image of the third image category is as follows:
  • An image generation model may be pre-trained for generating a face image of the second image category according to the face image of the third image category.
  • The training samples of the image generation model can be several face image pairs, where each face image pair includes a face image of the third image category and a face image of the second image category corresponding to the same face.
  • A neural network model can be preset as the initial model, and the above-mentioned image generation model is obtained by training the initial model with the training samples. The input of the image generation model is a face image of the third image category, and the output is a face image of the second image category.
  • The above-mentioned neural network model may specifically be a cycle-consistent generative adversarial network (CycleGAN), a UNIT network, a StarGAN, or the like. A toy training sketch follows.
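A toy sketch of training such an image generation model on paired samples is shown below. For brevity it uses a plain convolutional network with an L1 reconstruction loss rather than the adversarial and cycle-consistency losses of CycleGAN/UNIT/StarGAN, so it illustrates only the input/output contract described above; all names and layer choices are assumptions.

```python
import torch
import torch.nn as nn

class CategoryGenerator(nn.Module):
    """Maps a third-category (e.g. 'normal') face image to a synthetic
    second-category (e.g. 'dark light') face image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # pixels in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def train_step(gen: CategoryGenerator, opt: torch.optim.Optimizer,
               third_batch: torch.Tensor, second_batch: torch.Tensor) -> float:
    # each training pair holds a third-category image and the
    # second-category image of the same face
    opt.zero_grad()
    loss = nn.functional.l1_loss(gen(third_batch), second_batch)
    loss.backward()
    opt.step()
    return loss.item()

gen = CategoryGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
```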
  • In this case, the first prompt information displayed in step 306 and shown in FIG. 3B may be specifically used to prompt the user to update the face image of the first image category.
  • In this case, the face image of the first image category set by the user in the image library of the electronic device, or the face image of the first image category generated from the face images in the library, is not accurate enough; that is, the face image of the first image category may differ considerably from the face image actually obtained by the electronic device in the scene corresponding to the first image category. By having the user reset the face image of the first image category, the face image of the first image category in the image library can be made more accurate, thereby improving the efficiency with which the user unlocks the screen using face images of the first image category and improving the user experience.
  • In the embodiment above, the electronic device generates face images of the second image category from the preset face images in the image library during the preset stage. Alternatively, a new face image may be generated from the preset face images in the image library after step 305. The above-mentioned preset face image can be a face image set by the user, or a face image generated by the electronic device from a face image set by the user. In this case, referring to FIG. 4, with respect to the embodiment shown in FIG. 3C, step 306 can be replaced with the following step 401:
  • Step 401 The electronic device receives the unlock request operation, obtains a second unlock image, unlocks the screen according to the second unlock image, and generates a face image of the first image category according to a preset face image in the image library.
  • the judging step in step 305 may also be performed after the electronic device unlocks the screen, and when the judging result exceeds the first threshold, a face image of the first image category is generated according to the preset face image in the image library.
  • the electronic device generates a face image of the first image category according to the preset face image in the image library, which may include:
  • If the existing face image of the first image category is a face image set by the user, a face image of the first image category is generated according to a preset face image; if the existing face image of the first image category is a face image generated according to a first preset face image, a face image of the first image category is generated according to a second preset face image, where the first preset face image and the second preset face image are different, and each is an existing face image in the image library.
  • In this embodiment, the face images of the first image category can be generated using the preset face images in the image library, so that when the user is subsequently in the scene corresponding to the first image category, the screen unlocking efficiency can be improved.
  • Specifically, the electronic device can generate the face image of the first image category according to a face image of a fourth image category pre-specified in the image library. In this case, generating the unlocking image of the first image category according to the preset unlocking image may include: acquiring a face image of the fourth image category from the preset face images, and generating the face image of the first image category according to the face image of the fourth image category. One possible selection order is sketched after this paragraph.
  • As noted above, a new face image generated from a face image set by the user is relatively more accurate; it is therefore preferable to use a face image set by the user to generate the face image of the first image category.
  • the electronic device may generate a face image of the first image category according to the face image of the fourth image category set by the user.
  • the fourth image category in this step may be the "normal" image category in the above example.
  • If the existing face image of the first image category is a face image set by the user, it may deviate from the face image actually captured by the electronic device in the corresponding scene. In this case, face images of the first image category can be generated from the preset face images in the image library, so that when the user is subsequently in the scene corresponding to the first image category, the screen unlocking efficiency can be improved. Since a new face image generated from a face image set by the user is relatively more accurate than one regenerated from an already-generated face image, it is preferable to use a face image set by the user to generate the face image of the first image category; the face image set by the user may be of the first image category or of another image category. Alternatively, a face image of a specified image category may be preferentially selected, or the two criteria may be combined so that a face image that is both set by the user and of the specified image category is preferred.
  • If the existing face image of the first image category is a face image generated according to a first preset face image in the image library, this indicates that the face image generated from the first preset face image deviates from the face image collected by the electronic device in the scene corresponding to the first image category. Therefore, a second preset face image in the image library, other than the first preset face image, can be used to generate the face image of the first image category, so that when the electronic device is subsequently in the scene corresponding to the first image category, the screen unlocking efficiency can be improved. Since a new face image generated from a face image set by the user is relatively more accurate than one regenerated from a generated face image, the above-mentioned second preset face image is preferably a face image set by the user in the image library.
  • the second preset face image may be a face image of the first image category, or may be a face image of other image categories.
  • In this way, the user only needs to input at least one face image of at least one image category into the electronic device, and face images of the other image categories can be generated automatically; the user does not need to add face images in the physical scenes corresponding to the different image categories. On the basis of improving screen unlocking efficiency, this further reduces the user's setting operations for face images and improves the user experience.
  • When the above embodiments are applied to the fingerprint unlocking mode, the image categories of fingerprint images may include: dry fingerprint, wet fingerprint, cold fingerprint, bright light fingerprint, dark light fingerprint, cracked fingerprint, and others.
  • Correspondingly, the face image in FIG. 3C can be replaced with a fingerprint image; referring to FIG. 5, the method can include:
  • Step 501 The electronic device receives the unlock request operation, and acquires a first fingerprint image.
  • Step 502: The electronic device sequentially compares the first fingerprint image with the fingerprint images in the image library. If the comparison between the first fingerprint image and a certain fingerprint image in the image library is consistent, the electronic device unlocks the screen successfully; if the comparisons with all fingerprint images in the image library are inconsistent, the electronic device fails to unlock the screen, and step 503 is executed.
  • Step 503 The electronic device determines the first image category to which the first fingerprint image belongs.
  • Step 504 The electronic device counts the number of failed unlocking of the first image category.
  • Step 505 The electronic device determines whether the number of unlocking failures corresponding to the first image category exceeds the first threshold, and if so, executes Step 506, and if not, the branch flow ends.
  • Step 506 The electronic device receives the unlocking request operation, obtains a second unlocking image, unlocks the screen according to the second unlocking image, and displays the first prompt information.
  • the judging step in step 505 may also be performed after the electronic device unlocks the screen, and when the judging result exceeds the first threshold, the first prompt information is displayed.
  • Similarly, the face image in FIG. 4 can be replaced with a fingerprint image. The method shown in FIG. 6 differs from the method shown in FIG. 5 only in that step 506 is replaced with the following step 601. Specifically:
  • Step 601 The electronic device receives the unlock request operation, obtains a second unlock image, unlocks the screen according to the second unlock image, and generates a fingerprint image of the first image category according to a preset fingerprint image in an image library.
  • The above embodiments can also be applied to the screen unlocking mode of iris unlocking; the only differences are that the above-mentioned face is replaced with an iris, and the image categories for face images are replaced with image categories for iris images. For example, the image categories of iris images may be similar to those of face images, including: normal, dark light, strong light, backlight, occlusion, and others.
  • FIG. 7 is a structural diagram of an embodiment of an electronic device of the present application.
  • the electronic device 700 may include: a processor 710, a camera 720, a sensor module 730, and a display screen 740, wherein,
  • The camera 720 is used to collect the user's face image or iris image; the sensor module 730 may be a fingerprint sensor, used to collect the user's fingerprint image; the display screen 740 is used to display images or video to the user; and the processor 710 is used to execute the method provided in any one of the embodiments of FIG. 2 and FIG. 3C to FIG. 6.
  • Embodiments of the present application further provide a computer-readable storage medium in which a computer program is stored; when the computer program runs on a computer, the computer is caused to execute the method provided in any one of the embodiments of FIG. 2 and FIG. 3C to FIG. 6 of the present application.
  • Embodiments of the present application further provide a computer program product; the computer program product includes a computer program which, when run on a computer, enables the computer to execute the method provided in any one of the embodiments of FIG. 2 and FIG. 3C to FIG. 6 of the present application.
  • “at least one” refers to one or more, and “multiple” refers to two or more.
  • "And/or" describes the association relationship of associated objects and indicates that three relationships can exist; for example, "A and/or B" can indicate that A exists alone, that A and B exist at the same time, or that B exists alone, where A and B can be singular or plural.
  • the character “/” generally indicates that the associated objects are an “or” relationship.
  • “At least one of the following” and similar expressions refer to any combination of these items, including any combination of single or plural items.
  • "At least one of a, b, and c" may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where each of a, b, and c may be singular or plural.
  • If any of the above functions is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • The aforementioned storage medium includes various media on which program code can be stored, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.


Abstract

The invention relates to a screen unlocking method and an electronic device. The method comprises: obtaining a first unlocking image; if unlocking according to the unlocking image fails, determining a first image category to which the first unlocking image belongs, the first image category being used to describe the scene in which the electronic device is located when the first unlocking image is obtained; counting the number of unlocking failures of the first image category; obtaining a second unlocking image; and unlocking a screen according to the second unlocking image, obtaining the number of unlocking failures of the first image category, and displaying first prompt information when the number of unlocking failures of the first image category exceeds a first threshold, the first prompt information being used to prompt a user to update the unlocking image of the first image category. According to the present application, the screen unlocking efficiency of the electronic device under difficult conditions can be improved, and the user experience is improved.
PCT/CN2022/083580 2021-04-23 2022-03-29 Procédé de déverrouillage d'écran et dispositif électronique WO2022222702A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110444268.4 2021-04-23
CN202110444268.4A CN115329299A (zh) 2021-04-23 2021-04-23 屏幕解锁方法和电子设备

Publications (1)

Publication Number Publication Date
WO2022222702A1 true WO2022222702A1 (fr) 2022-10-27

Family

ID=83723616

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/083580 WO2022222702A1 (fr) 2021-04-23 2022-03-29 Procédé de déverrouillage d'écran et dispositif électronique

Country Status (2)

Country Link
CN (1) CN115329299A (fr)
WO (1) WO2022222702A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160350548A1 (en) * 2015-06-01 2016-12-01 Light Cone Corp. Unlocking a portable electronic device by performing multiple actions on an unlock interface
CN107633169A (zh) * 2017-09-13 2018-01-26 深圳市金立通信设备有限公司 一种终端解锁方法、终端及计算机可读存储介质
CN109829279A (zh) * 2019-01-11 2019-05-31 Oppo广东移动通信有限公司 解锁事件处理方法及相关设备
CN109919866A (zh) * 2019-02-26 2019-06-21 Oppo广东移动通信有限公司 图像处理方法、装置、介质及电子设备
CN110334496A (zh) * 2019-06-27 2019-10-15 Oppo广东移动通信有限公司 一种解锁控制方法、终端及计算机可读存储介质


Also Published As

Publication number Publication date
CN115329299A (zh) 2022-11-11


Legal Events

Date Code Title Description

121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 22790811; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: DE)

122 Ep: PCT application non-entry in European phase (Ref document number: 22790811; Country of ref document: EP; Kind code of ref document: A1)