WO2021037157A1 - Image recognition method and electronic device - Google Patents

Image recognition method and electronic device

Info

Publication number
WO2021037157A1
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
image information
image
camera
facial
Prior art date
Application number
PCT/CN2020/111801
Other languages
English (en)
French (fr)
Inventor
袁江峰
陈国乔
Original Assignee
Huawei Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2021037157A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 - Detection; Localisation; Normalisation
    • G06V 40/166 - Detection; Localisation; Normalisation using acquisition arrangements
    • G06V 40/172 - Classification, e.g. identification

Definitions

  • The embodiments of the present application relate to image processing technology, and in particular, to an image recognition method and an electronic device.
  • face recognition technology has been widely used.
  • face recognition technology is applied in scenarios such as terminal unlocking, payment, access control recognition, and gate entrance.
  • In these scenarios, the user's facial image is usually collected, and the collected facial image is matched against a pre-stored facial image to identify whether the user is the target user.
  • A method of overall brightness compensation or a method of local brightness compensation is usually used to increase the brightness of a user's facial image captured against the light.
  • In the overall brightness compensation method, the brightness of the entire picture is adjusted toward a preset target brightness value in order to raise the brightness of the facial part. Since the entire picture usually includes other objects with higher brightness (such as white walls or light-colored objects), the compensation typically stops once the overall brightness reaches a certain value, so the user's facial image remains too dark to perform face recognition.
  • In the local brightness compensation method, the user's facial contour is first detected in the image, and brightness compensation is then applied to the part where the facial contour is located. However, when the image is too dark and the user's facial contour is blurred, the facial contour usually cannot be detected, so face recognition cannot be performed.
  • With the image recognition method provided in the embodiments of the present application, the user's facial image can still be recognized even when the captured facial image is too dark, which improves the recognition success rate.
  • an embodiment of the present application provides an image recognition method.
  • The method includes: in response to receiving an instruction to perform facial recognition, an electronic device collects image information through a camera, where the image information is used to perform identity verification on a target object; based on the image information, the electronic device determines whether a facial contour is detected; in response to failing to detect a facial contour, the electronic device determines a first brightness value parameter of the part of the image information located in a preset area; based on the first brightness value parameter, the electronic device adjusts the camera parameters of the camera; based on the adjusted camera parameters, the electronic device collects image information through the camera again for identity verification; and then, based on the re-collected image information, the electronic device repeats the preceding steps.
  • In this way, when the generated image is too dark because the pixel brightness values of the captured image are low, so that the user's facial contour cannot be detected, adjusting the brightness of the preset image coordinate area enables the user's facial image to be recognized even though the facial image is too dark, which improves the success rate of facial recognition.
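  • As an illustration of the control flow just described, the following minimal Python sketch mirrors the loop of collecting, detecting, brightening, and re-collecting. Every name in it (`camera.capture`, `detector.face_contour`, the box coordinates, the retry bound) is a hypothetical placeholder rather than an API from the patent; only the branching follows the description above.

```python
# Sketch of the iterative recognition loop of the first aspect.
# All objects and helper names are hypothetical placeholders; only the
# control flow follows the method described above.

PRESET_AREA = (120, 0, 440, 400)  # assumed (left, top, right, bottom) box
MAX_ATTEMPTS = 10                 # assumed bound; the text only says "a preset threshold"

def recognize(camera, detector, verifier, params):
    for _ in range(MAX_ATTEMPTS):
        image = camera.capture(params)            # collect image information
        contour = detector.face_contour(image)
        if contour is None:
            # No facial contour: brighten the preset image coordinate area.
            luma = detector.mean_brightness(image, PRESET_AREA)
            params = camera.retune(params, luma)
            continue
        if not detector.feature_points(image, contour):
            # Contour found but features unclear: brighten the contour part.
            luma = detector.mean_brightness(image, contour)
            params = camera.retune(params, luma)
            continue
        # Feature points detected: comparison plus anti-counterfeiting.
        return verifier.verify(image)
    return False                                  # identity verification fails
```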
  • The preset area is a preset image coordinate area in which a face object appears in the image with a high probability, determined based on big data statistics.
  • the image information of the preset area is part of the image information in the entire image information collected.
  • The collected facial image information is usually located in the preset image coordinate area, and this area is usually separated from the surrounding environment by strong contrast. Therefore, when the pixel brightness value of the area is raised to the target brightness, if a face object is present, the facial contour can usually be detected and the facial features extracted for face recognition and living-body detection. As a result, the probability of successfully recognizing the user's face in a backlit or dark scene is greatly improved.
  • The first brightness value parameter in this application may be a brightness value, or a parameter used to characterize brightness or related to a brightness value (for example, a gray value or a sensitivity value).
  • the first brightness value parameter is an average pixel brightness value.
  • The method further includes: in response to detecting the facial contour, the electronic device determines whether a feature point for facial recognition is detected from the image information; in response to failing to detect a feature point for facial recognition, the electronic device determines a second brightness value parameter of the facial contour part in the image information; based on the second brightness value parameter, the electronic device adjusts the camera parameters of the camera; and based on the adjusted camera parameters, the electronic device performs the step of re-collecting image information through the camera.
  • That is, when the collected image is not bright enough, so that the electronic device can detect the facial contour but cannot extract the facial features, the brightness of the facial contour part needs to be increased further.
  • By determining the brightness value parameter of the facial contour part of the image and brightening only this part, the surrounding environment in the image (such as a white wall with strong contrast) is prevented from participating in the brightness adjustment, which improves the speed of face recognition.
  • The method further includes: in response to detecting the facial contour, the electronic device compares the image information with a pre-stored facial image to determine whether the comparison is successful; in response to a successful comparison, performs anti-counterfeiting authentication based on the image information; and in response to passing the anti-counterfeiting authentication, passes the identity verification.
  • The method further includes: in response to detecting the feature points for facial recognition, the electronic device compares the image information with the pre-stored facial image to determine whether the comparison is successful; in response to a successful comparison, performs anti-counterfeiting authentication based on the image information; and in response to passing the anti-counterfeiting authentication, passes the identity verification.
  • That is, the image formed by the collected image information can be compared with the pre-stored facial image either directly or only after the facial feature points used for face recognition are detected, depending on the needs of the application scenario.
  • The method further includes: in response to an unsuccessful comparison, determining a third brightness value parameter of the facial contour part in the image information; based on the third brightness value parameter, adjusting the camera parameters; and based on the adjusted camera parameters, the electronic device performs the step of re-collecting image information through the camera.
  • The method further includes: based on the re-collected image information, the electronic device determines whether a facial contour is detected; in response to failing to detect a facial contour from the re-collected image information, the electronic device determines a fourth brightness value parameter of the part of the re-collected image information located in the preset area; in response to detecting a facial contour from the re-collected image information, the electronic device determines whether a feature point for facial recognition is detected in the re-collected image information; in response to failing to detect a feature point for facial recognition in the re-collected image information, the electronic device determines a fifth brightness value parameter of the facial contour part in the re-collected image information; and in response to detecting feature points for facial recognition in the re-collected image information, the electronic device compares the re-collected image information with the pre-stored facial image.
  • the preset area is a rectangular area.
  • The rectangular area enclosed by the preset area includes a first side and a second side, and the length of the first side is less than the length of the second side, where the ratio of the first side to the second side is 4:5.
  • The present application provides an electronic device including one or more processors, a memory, and a camera, where the memory and the camera are coupled to the processor, the memory is used to store instructions, and executing the instructions in the memory by the processor enables the electronic device to execute the image recognition method described in the first aspect.
  • The present application provides a computer-readable storage medium that stores instructions which, when run on a computer, cause the computer to execute any one of the image recognition methods in the first aspect.
  • This application provides a computer program or computer program product which, when executed on a computer, enables the computer to implement the image recognition method in any one of the above-mentioned first aspects.
  • FIG. 1a is a schematic diagram of a face image collected under backlight conditions;
  • FIG. 1b is a schematic diagram of the face image shown in FIG. 1a after brightness adjustment in the prior art;
  • FIG. 2a is a schematic structural diagram of an electronic device provided by an embodiment of the present application;
  • FIG. 2b is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present application;
  • FIG. 3 is a schematic diagram of the software structure of an electronic device provided by an embodiment of the present application;
  • FIG. 4 is a flowchart of an image recognition method provided by an embodiment of the present application;
  • FIG. 5a is a schematic diagram of a preset image location area provided by an embodiment of the present application;
  • FIG. 5b is another schematic diagram of a preset image location area provided by an embodiment of the present application;
  • FIG. 6a to FIG. 6c are schematic diagrams of application scenarios of an image recognition method provided by an embodiment of the present application;
  • FIG. 7 is a flowchart of a method for determining a preset image location area provided by an embodiment of the present application;
  • FIG. 8 is a flowchart of another image recognition method provided by an embodiment of the present application;
  • FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • The terms "first" and "second" in the description and claims of the embodiments of the present application are used to distinguish different objects, rather than to describe a specific order of the objects. For example, a first target object and a second target object are used to distinguish different target objects, rather than to describe a specific order of the target objects.
  • Words such as "exemplary" or "for example" are used to indicate examples, illustrations, or explanations. Any embodiment or design solution described as "exemplary" or "for example" in the embodiments of the present application should not be construed as more preferable or advantageous than other embodiments or design solutions. Rather, words such as "exemplary" or "for example" are used to present related concepts in a specific manner.
  • multiple processing units refer to two or more processing units; multiple systems refer to two or more systems.
  • facial recognition technology can be applied to terminal screen unlocking, facial recognition payment, identity verification (such as access control identity verification, gate identity verification) and so on.
  • the image recognition method provided in the embodiments of the present application can be applied to various scenarios of face recognition technology.
  • With this method, when the photographed face object is too dark for the facial features to be detected, the electronic device can locally brighten the image area in which, as pre-determined through big data, the face object appears with high probability.
  • In this way, the features of the face object can be presented clearly, so that the electronic device can perform effective face recognition based on the detected facial features, which improves the accuracy of identity verification for the user in a backlit or dark environment and thereby improves the user experience.
  • FIG. 2a is a schematic diagram of the external structure of an electronic device 200 provided by an embodiment of the present application.
  • the electronic device 200 may include a camera 201.
  • the camera 201 may be a red green blue (RGB) camera.
  • the RGB camera is a visible light camera, which is used to collect the user's face image information.
  • the camera 201 may also be other cameras such as a dual-pass camera, an all-pass camera, and the like.
  • the dual-pass camera means that the camera can collect both visible light images and infrared light images.
  • The all-pass camera refers to a camera that can collect visible light images, infrared light images, and images of other wavelengths of light.
  • an ambient light sensor 202 may also be included.
  • The ambient light sensor 202 is used to sense the brightness of the ambient light and can cooperate with the camera 201 to collect user image information.
  • For example, the ambient light sensor 202 of the electronic device 200 may first sense the light information of the external environment, and the electronic device 200 adjusts the parameters of the camera 201 based on the sensed light information and then collects user image information. When the user's facial contour cannot be detected from the collected user image information, the electronic device 200 iteratively brightens the preset image location area until the user's facial features are detected, and then identifies and authenticates the user based on the user's facial features.
  • The iterative brightening in the embodiments of this application may specifically be: based on the brightness value parameter of the image information of the preset area, adjusting the camera parameters and re-collecting the image, so that the brightness value of the image information of the preset area in the re-collected image is raised; and, if the re-collected image still cannot be recognized, further adjusting the camera parameters based on the preset expected brightness value parameter of the image information in the re-collected image and collecting the image once again, so that the image brightness is raised further.
  • the schematic diagram of the external structure of the electronic device 200 shown in FIG. 2a may be a partial schematic diagram of the front side of the electronic device 200.
  • the aforementioned camera 201 and ambient light sensor 202 are placed on the front of the electronic device 200.
  • the electronic device shown in FIG. 2a may further include a second camera, and the second camera may be disposed on the back of the electronic device 200.
  • the schematic diagram of the external structure of the electronic device 200 shown in FIG. 2a may also be a partial schematic diagram of the back of the electronic device 200.
  • the aforementioned camera 201 and ambient light sensor 202 are placed on the back of the electronic device 200.
  • the electronic device shown in FIG. 2a may further include a second camera, and the second camera may be arranged on the front of the electronic device 200.
  • The front side of the above-mentioned electronic device 200 refers to the side on which the electronic device 200 displays a graphical user interface (such as the main interface, that is, the desktop, of the electronic device 200), in other words the side where the display panel is located; the back side of the electronic device 200 is the side opposite to the front side.
  • the front side of an electronic device refers to the side facing the user under normal use by the user; and the side facing away from the user is called the back side.
  • The electronic device in the embodiments of the present application may be a mobile phone, a notebook computer, a wearable electronic device (such as a smart watch), a tablet computer, an augmented reality (AR) device, a virtual reality (VR) device, or the like that includes the above-mentioned RGB camera and ambient light sensor.
  • the following embodiments do not specifically limit the specific form of the electronic equipment.
  • FIG. 2b shows a schematic structural diagram of an electronic device 200 according to an embodiment of the present application.
  • The electronic device 200 may include a processor 210, an external memory interface 220, an internal memory 221, a USB interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 251, a wireless communication module 252, an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, an earphone interface 270D, a sensor module 280, buttons 290, a motor 291, an indicator 292, a camera 293, a display screen 294, a SIM card interface 295, and so on.
  • The sensor module 280 may include a gyroscope sensor 280A, an acceleration sensor 280B, a proximity light sensor 280G, a fingerprint sensor 280H, a touch sensor 280K, and a hinge sensor 280M. (Of course, the electronic device 200 may also include other sensors, such as a temperature sensor, a pressure sensor, a distance sensor, a magnetic sensor, an ambient light sensor, an air pressure sensor, a bone conduction sensor, and the like, which are not shown in the figure.)
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 200.
  • the electronic device 200 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 210 may include one or more processing units.
  • The processor 210 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), and the like.
  • the different processing units may be independent devices or integrated in one or more processors.
  • The controller may be the nerve center and command center of the electronic device 200. The controller can generate operation control signals according to instruction operation codes and timing signals to complete the control of instruction fetching and instruction execution.
  • a memory may also be provided in the processor 210 for storing instructions and data.
  • the memory in the processor 210 is a cache memory.
  • The memory can store instructions or data that the processor 210 has just used or used cyclically. If the processor 210 needs to use the instructions or data again, they can be called directly from the memory. This avoids repeated accesses, reduces the waiting time of the processor 210, and improves system efficiency.
  • the display screen 294 is used to display images, videos, and the like.
  • the display screen 294 includes a display panel.
  • The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
  • the electronic device 200 may include one or N display screens 294, and N is a positive integer greater than one.
  • The camera 293 (a front camera or a rear camera, or a camera that can serve as both a front camera and a rear camera) is used to capture still images or videos.
  • The camera 293 may include a lens group and a photosensitive element such as an image sensor, where the lens group includes a plurality of lenses (convex or concave) for collecting the light signal reflected by the object to be photographed and transmitting the collected light signal to the image sensor.
  • the image sensor generates an original image of the object to be photographed according to the light signal.
  • the camera 293 may include 1 to N cameras.
  • The 1 to N cameras may include an RGB camera, and may also include an infrared camera and the like.
  • the internal memory 221 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 210 executes various functional applications and signal processing of the electronic device 200 by running instructions stored in the internal memory 221.
  • the internal memory 221 may include a storage program area and a storage data area.
  • The storage program area can store the code of the operating system, application programs (such as a camera application or the WeChat application), and so on.
  • the data storage area can store data created during the use of the electronic device 200 (such as images and videos collected by a camera application) and the like.
  • the internal memory 221 may also store the code of the anti-mistouch algorithm provided in the embodiment of the present application.
  • When the code of the anti-mistouch algorithm stored in the internal memory 221 is executed by the processor 210, the touch operations during the folding or unfolding process can be shielded.
  • the internal memory 221 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • a non-volatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the code used to implement the algorithm for video editing can also be stored in an external memory.
  • the processor 210 may run the algorithm code stored in the external memory through the external memory interface 220 to implement editing of the video.
  • the function of the sensor module 280 is described below.
  • The gyroscope sensor 280A may be used to determine the movement posture of the electronic device 200. In some embodiments, the angular velocities of the electronic device 200 around three axes (i.e., the x, y, and z axes) can be determined through the gyroscope sensor 280A.
  • the gyroscope sensor 280A can be used to detect the current motion state of the electronic device 200, such as shaking or static.
  • The acceleration sensor 280B can detect the magnitude of the acceleration of the electronic device 200 in various directions (generally along three axes).
  • The proximity light sensor 280G may include, for example, a light-emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • Taking a mobile phone as an example, the mobile phone emits infrared light through the light-emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, the mobile phone can determine that there is an object near it; when insufficient reflected light is detected, the mobile phone can determine that there is no object near it.
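  • As a sketch, the decision described above reduces to a threshold test on the measured reflected infrared intensity; the threshold value below is an invented placeholder, since real devices calibrate it per sensor.

```python
REFLECTION_THRESHOLD = 0.5  # assumed normalized intensity; device-specific in practice

def object_nearby(reflected_intensity: float) -> bool:
    """Sufficient infrared reflection implies an object is near the phone."""
    return reflected_intensity >= REFLECTION_THRESHOLD
```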
  • the gyroscope sensor 280A (or the acceleration sensor 280B) may send the detected motion state information (such as angular velocity) to the processor 210.
  • The processor 210 determines, based on the motion state information, whether the device is currently in a handheld state or a tripod state (for example, when the angular velocity is not 0, the electronic device 200 is in the handheld state).
  • the fingerprint sensor 280H is used to collect fingerprints.
  • the electronic device 200 can use the collected fingerprint characteristics to realize fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, and so on.
  • The touch sensor 280K is also called a "touch panel".
  • the touch sensor 280K may be disposed on the display screen 294, and a touch screen composed of the touch sensor 280K and the display screen 294 is also called a “touch screen”.
  • the touch sensor 280K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 294.
  • the touch sensor 280K may also be disposed on the surface of the electronic device 200, which is different from the position of the display screen 294.
  • the display screen 294 of the electronic device 200 displays a main interface, and the main interface includes icons of multiple applications (such as a camera application, a WeChat application, etc.).
  • the display screen 294 displays the interface of the camera application, such as a viewfinder interface.
  • the wireless communication function of the electronic device 200 can be implemented by the antenna 1, the antenna 2, the mobile communication module 251, the wireless communication module 252, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 200 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 251 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 200.
  • the mobile communication module 251 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like.
  • the mobile communication module 251 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 251 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation via the antenna 1.
  • at least part of the functional modules of the mobile communication module 251 may be provided in the processor 210.
  • at least part of the functional modules of the mobile communication module 251 and at least part of the modules of the processor 210 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing. After the low-frequency baseband signal is processed by the baseband processor, it is passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to a speaker 270A, a receiver 270B, etc.), or displays an image or video through the display screen 294.
  • the modem processor may be an independent device.
  • The modem processor may be independent of the processor 210 and provided in the same device as the mobile communication module 251 or other functional modules.
  • The wireless communication module 252 can provide wireless communication solutions applied to the electronic device 200, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module 252 may be one or more devices integrating at least one communication processing module.
  • The wireless communication module 252 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and sends the processed signals to the processor 210.
  • the wireless communication module 252 may also receive the signal to be sent from the processor 210, perform frequency modulation and amplification, and convert it into electromagnetic waves to radiate through the antenna 2.
  • the antenna 1 of the electronic device 200 is coupled with the mobile communication module 251, and the antenna 2 is coupled with the wireless communication module 252, so that the electronic device 200 can communicate with the network and other devices through wireless communication technology.
  • The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, and the like.
  • The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the electronic device 200 can implement audio functions through an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, a headphone interface 270D, and an application processor. For example, music playback, recording, etc.
  • the electronic device 200 may receive key 290 input, and generate key signal input related to user settings and function control of the electronic device 200.
  • the electronic device 200 can use the motor 291 to generate a vibration notification (such as a vibration notification for an incoming call).
  • the indicator 292 in the electronic device 200 may be an indicator light, which may be used to indicate the charging status, power change, or to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 295 in the electronic device 200 is used to connect to the SIM card. The SIM card can be inserted into the SIM card interface 295 or pulled out from the SIM card interface 295 to achieve contact and separation with the electronic device 200.
  • The electronic device 200 can implement a display function through the GPU, the display screen 294, the application processor, and the like.
  • The GPU is a microprocessor for image processing and is connected to the display screen 294 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • The processor 210 may include one or more GPUs that execute program instructions to generate or change display information.
  • The electronic device 200 can implement a shooting function through the ISP, the camera 201, the video codec, the GPU, the display screen 294, and the application processor.
  • The ISP is mainly used to process the data fed back by the camera. For example, when a photo is taken, the shutter opens, light is transmitted through the lens to the photosensitive element of the camera, the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, where it is converted into an image visible to the naked eye.
  • The ISP can also optimize the noise, brightness, and skin color of the image, and can optimize parameters such as the exposure and color temperature of the shooting scene.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 200 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • The electronic device 200 may support one or more video codecs. In this way, the electronic device 200 can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG)1, MPEG2, MPEG3, MPEG4, and so on.
  • the electronic device 200 may include modules that implement audio functions such as an audio module, a speaker, a receiver, a microphone, and a headphone interface.
  • the electronic device 200 may use the module that implements audio functions to perform music playback, video playback, and recording.
  • the electronic device 200 may include more or less components than those shown in FIG. 2b, which is not limited in the embodiment of the present application.
  • FIG. 3 shows a schematic diagram of a software system architecture of an electronic device 200 according to an embodiment of the present application.
  • The software system of the electronic device 200 may include multiple applications 301, a camera module 302, an ISP module 303, and a face recognition module 304.
  • The multiple applications 301 may include a payment application, a screen lock application, a face recognition software development kit (SDK) of a settings application, an application lock, and the like.
  • Each of the multiple applications 301 can trigger the electronic device 200 to perform face recognition in different scenarios.
  • When the electronic device 200 receives a face recognition request initiated by one of the multiple applications 301, it can start the camera module 302 and initialize it to collect user image information.
  • the ISP module can transmit the initialization parameters of the camera module, such as exposure brightness parameters, to the camera module 302.
  • the image information collected by the camera module 302 is a raw image
  • the image information collected by the camera module 302 can be used for face recognition only after image processing.
  • the ISP module 303 may perform image processing (for example, noise reduction processing) on the image information collected by the camera module 302.
  • the face recognition module 304 may include a face detection module, a face comparison and anti-counterfeiting module, and so on.
  • the face recognition module 304 can perform face detection, living body detection (including the above-mentioned deep anti-counterfeiting authentication and infrared anti-counterfeiting authentication), feature extraction, feature comparison, and template management.
  • Face detection detects facial contour information, facial feature information, and the like in an image. If facial contour information, facial feature information, and the like are detected, the face comparison and anti-counterfeiting module can perform operations such as living-body detection, feature extraction, and feature comparison.
  • Feature extraction refers to the extraction of facial features in image information.
  • Feature comparison refers to comparing the pre-stored face template with the face features extracted from the image information, and judging whether the face features extracted from the image information match the face template.
  • The preset image area can be written into the face detection module in advance.
  • the brightness adjustment sub-module can be set in the ISP module.
  • When the face detection module cannot detect a facial contour based on the image information collected by the camera module 302, it can return the preset image area to the brightness adjustment sub-module, so that the brightness adjustment sub-module calculates the average brightness value of the preset image area in the captured image and then, based on the calculation result, resets the shooting parameters (such as the exposure) and sends them to the camera module 302, so that the camera module 302 continues to collect the user's facial image information.
  • The brightness adjustment sub-module is set with a target brightness value of the image; once the above-mentioned preset image area reaches the target brightness value, the brightness is no longer increased. The face recognition module 304 can therefore perform subsequent facial contour detection, facial feature extraction, face comparison, and the like at the target brightness value.
  • the face detection module and the brightness adjustment sub-module may be packaged together, that is, both are set in the ISP module.
  • FIG. 4 shows an image recognition method provided by an embodiment of the present application, and the method can be applied to the electronic device 200.
  • the electronic device 200 includes a camera 201.
  • the image recognition method may include S401-S408:
  • S401: In response to receiving an instruction to perform face recognition, collect image information through the camera.
  • In the embodiments of the present application, "facial recognition" and "face recognition" are used interchangeably; that is, facial recognition can refer to face recognition.
  • The electronic device 200 can receive a user's operation on the electronic device. This operation is used to trigger the electronic device to perform a certain event (such as payment, unlocking, passing through a gate, or opening a door), and this event needs to be completed through face recognition.
  • the electronic device 200 detects one of the foregoing operations, it can be determined that it has received an instruction to perform face recognition.
  • In the scenario of passing through a ticket gate, when the user places an ID card and a ticket in a designated position in a preset placement manner, it can be determined that the user is operating the electronic device. After the electronic device detects the operation, it can trigger the instruction to perform face recognition.
  • In a payment scenario, the above-mentioned user operation on the electronic device may be, for example, the user clicking "pay" in a payment application or scanning a two-dimensional code. When the electronic device detects either of these two operations, it can trigger the instruction to perform face recognition.
  • In the scenario of unlocking a mobile phone, when the user taps the screen to wake it, the electronic device detects the operation and can trigger the instruction for face recognition. In the scenario of unlocking a computer, when the user performs a click operation with an external input/output device (such as a keyboard or a mouse), the electronic device detects the operation and can trigger the instruction to perform face recognition.
  • The above-mentioned scenarios for triggering the instruction to perform face recognition are illustrative and are not specifically limited here; any scenario in which face recognition is triggered based on a certain operation of the user is applicable to this application.
  • the electronic device 200 After the electronic device 200 receives the instruction to perform face recognition, it can collect image information through the camera 201.
  • the image information is used to authenticate the target object.
  • the camera 201 may be an RGB camera.
  • the image information may include the RGB value of each pixel in the image, the gray value of the image, the pixel brightness value of the image, the coordinate position of each pixel in the image, and the like.
  • the image can be formed by the image information.
  • In addition, the camera 201 may be initialized with parameters. Specifically, the electronic device 200 may pre-store a first correspondence table between the brightness of the external environment and camera parameters. After receiving the instruction to perform face recognition, the electronic device 200 may first use the ambient light sensor to obtain the brightness of the external light, then compare the acquired external illumination brightness with the first correspondence table and query the camera parameters corresponding to that illumination brightness.
  • the camera parameters may include, but are not limited to, exposure time, sensitivity, and so on. Then, the camera is initialized based on the queried camera parameters, so that the camera obtains image information based on the parameters.
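  • A sketch of the first correspondence table lookup in S401 is given below. The table entries (lux thresholds, exposure times, sensitivities) are invented for illustration; the patent only states that such a table is pre-stored and queried.

```python
import bisect

# Hypothetical first correspondence table: ambient brightness thresholds (lux)
# mapped to initial camera parameters. All numbers are illustrative only.
AMBIENT_LUX = [10, 100, 1000, 10000]
CAMERA_PARAMS = [
    {"exposure_ms": 66.0, "iso": 1600},  # very dark
    {"exposure_ms": 33.0, "iso": 800},   # dim indoor
    {"exposure_ms": 16.0, "iso": 200},   # bright indoor
    {"exposure_ms": 8.0,  "iso": 100},   # outdoor
    {"exposure_ms": 4.0,  "iso": 50},    # direct sunlight
]

def initial_camera_params(ambient_lux: float) -> dict:
    """Query the camera parameters for the sensed ambient brightness (S401)."""
    return CAMERA_PARAMS[bisect.bisect_right(AMBIENT_LUX, ambient_lux)]
```

  • For example, with these invented entries, a sensed brightness of 50 lux selects the dim-indoor parameters.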
  • S402: Determine, based on the image information, whether a facial contour can be detected.
  • the image information is acquired by the electronic device 200 by controlling the camera based on the brightness of the external environment.
  • the angle of light irradiation is different, and the brightness of the picture presented by the image information is also different.
  • the light directly shines on the user's face, the user's facial contours and facial features are clearer.
  • When the user's face is backlit or the environment is dark, the electronic device 200 may be unable to detect the facial contour information.
  • the electronic device 200 may determine whether the facial contour can be detected from the image information.
  • There are two possible cases in which the electronic device cannot obtain facial contour information: in the first case, the pixel brightness value of the face part is too low to extract the facial contour; in the second case, no face object is present in the image.
  • The electronic device 200 may first determine whether the first case applies. Specifically, the electronic device 200 may calculate the average pixel brightness value of the image information; when this average brightness value is lower than a preset threshold, the first case can be determined. At this time, the average pixel brightness value of the preset image coordinate area in the collected image information is determined first, and then step S403 is performed.
  • When the electronic device 200 calculates that the average brightness value of the image information is higher than the preset threshold, it can further determine the average pixel brightness value of the preset image coordinate area in the image information. When the average pixel brightness value of the preset image coordinate area is lower than the preset threshold, the brightness difference between the face part and the other parts of the image is too large, which makes the overall average pixel brightness value of the image too high; this is also determined as the first case, and step S403 is then executed.
  • When the electronic device 200 calculates that the average pixel brightness value of the preset image coordinate area in the image information is higher than the preset threshold, and the facial contour still cannot be detected from the image information, the second case is determined. In this case, the step of collecting image information in S401 needs to be re-executed; if no face object is detected within a certain period of time, the authentication fails.
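  • The case analysis of S402 can be sketched as follows; the threshold value and the use of a plain mean over a grayscale array are assumptions standing in for the "average pixel brightness value" and "preset threshold" of the text.

```python
import numpy as np

BRIGHTNESS_THRESHOLD = 60  # assumed preset threshold on 0-255 pixel brightness

def mean_brightness(gray: np.ndarray, box: tuple) -> float:
    """Average pixel brightness of a (left, top, right, bottom) region."""
    left, top, right, bottom = box
    return float(gray[top:bottom, left:right].mean())

def classify_no_contour(gray: np.ndarray, preset_area: tuple) -> str:
    """Decide why contour detection failed (case analysis of S402)."""
    full = (0, 0, gray.shape[1], gray.shape[0])
    if mean_brightness(gray, full) < BRIGHTNESS_THRESHOLD:
        return "case1"  # whole image too dark: go to S403
    if mean_brightness(gray, preset_area) < BRIGHTNESS_THRESHOLD:
        return "case1"  # face region dark despite bright surroundings: go to S403
    return "case2"      # region bright enough, likely no face: re-execute S401
```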
  • When the facial contour is detected, S404 may be performed.
  • S403: Adjust the camera parameters based on the average pixel brightness value of the preset image coordinate area in the collected image information, and execute S406.
  • the electronic device 200 may also store a second correspondence table between image brightness values and camera parameters.
  • The electronic device 200 may compare the calculated average pixel brightness value of the preset image area with the image target brightness value. If the difference between the two is less than or equal to a preset threshold, the electronic device 200 may raise the image brightness value of the preset image coordinate area in the next frame directly to the target brightness value, and then query the second correspondence table for the camera parameters corresponding to the raised brightness value. If the difference between the two is greater than the preset threshold, the brightness cannot be raised to the target value in one step; in this case, the electronic device 200 may raise the image brightness value by a preset step size, and then query the second correspondence table for the camera parameters corresponding to the raised brightness value.
  • querying the camera parameters corresponding to the increased brightness value means finding the sensitivity parameter, white balance parameter, aperture size, exposure time, etc. corresponding to the brightness value.
  • the camera is controlled to reacquire image information under the camera parameters.
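  • The S403 adjustment rule (raise to the target in one step when the gap is small, otherwise step toward it) can be sketched as below. The target brightness, gap threshold, step size, and the mock second correspondence table are all invented values for illustration.

```python
TARGET_BRIGHTNESS = 128   # assumed image target brightness value
GAP_THRESHOLD = 40        # assumed preset threshold on the brightness gap
STEP = 20                 # assumed preset step size

def next_brightness(current: float) -> float:
    """Brightness value to request for the next frame (S403)."""
    if TARGET_BRIGHTNESS - current <= GAP_THRESHOLD:
        return TARGET_BRIGHTNESS      # raise to the target in one step
    return current + STEP             # otherwise raise by the preset step size

def camera_params_for(brightness: float) -> dict:
    """Mock of the second correspondence table: brightness -> shooting
    parameters (sensitivity, white balance, aperture, exposure time)."""
    return {"exposure_ms": 8.0 + brightness / 8.0,
            "iso": int(100 + 4 * brightness)}
```

  • For example, with these invented values, a current brightness of 60 steps to 80, while a current brightness of 100 jumps straight to the target of 128.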
  • an image area may be preset in the electronic device 200.
  • the image area may be an area where the face object appears in the image with a high probability based on big data statistics.
  • the area may be a rectangular area or an area of other shapes such as a circle.
  • the area is a rectangular area.
  • the area marked by the rectangle ABCD is the preset image coordinate area.
  • The difference between FIG. 5a and FIG. 5b is that the length of the frame in FIG. 5a along the first direction is smaller than its length along the second direction, while the length of the frame in FIG. 5b along the first direction is greater than its length along the second direction. Therefore, FIG. 5a can be regarded as an image formed by image information collected when the mobile phone is in portrait orientation, and FIG. 5b as an image formed by image information collected when the mobile phone is in landscape orientation.
  • The proportion of the image occupied by the preset image coordinate area can be set according to the needs of the application scenario, and the ratio between the length of the preset image coordinate area along the first direction and its length along the second direction can also be set according to the needs of the application scenario.
  • the ratio between the length of the preset image coordinate area along the first direction and the length along the second direction may be 4:6, for example.
  • the ratio between the length of the preset image coordinate area along the first direction and the length along the second direction is 4:5.
  • the ratio between the length of side AB and the length of side AC is 4:5.
  • In some embodiments, along the first direction shown in FIG. 5a or FIG. 5b, the boundary length of the image coordinate area is 320 mm, and along the second direction shown in FIG. 5a or FIG. 5b, the boundary length of the image coordinate area is 400 mm. That is to say, no matter whether the image frame of the collected image information is enlarged or reduced, the side lengths of the image coordinate area remain unchanged, forming a rectangular area of 320 mm along the first direction and 400 mm along the second direction.
  • In other embodiments, the side lengths of the image coordinate area change with the landscape or portrait orientation of the mobile phone: the longer border of the image coordinate area runs in the same direction as the long side of the image frame, and the shorter border runs in the same direction as the short side of the image frame.
  • The first boundary and the second boundary of the image coordinate area form a first vertex, and the first boundary and the third boundary form a second vertex. The first edge and the second edge of the image formed by the image information form a third vertex, and the first edge and the third edge form a fourth vertex. The first boundary coincides with the first edge, and the distance between the first vertex and the third vertex is equal to the distance between the second vertex and the fourth vertex.
  • For example, in FIG. 5a, the first boundary AB coincides with the first edge EF of the entire image, and the distance between the first vertex A and the third vertex E is equal to the distance between the second vertex B and the fourth vertex F.
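  • Under the geometry just described (the first boundary coinciding with the first edge, equal vertex distances, and a 4:5 side ratio), the preset rectangle can be placed as in the sketch below; treating the first edge as the top edge of the image is an assumption made for illustration.

```python
def preset_area(image_w: int, image_h: int, short_side: int = 320) -> tuple:
    """Place the preset rectangle: first boundary on the image's first edge
    (assumed here to be the top edge), centered so the vertex distances are
    equal, with a 4:5 ratio between the first and second sides."""
    width = short_side             # e.g. 320 units along the first direction
    height = short_side * 5 // 4   # 4:5 ratio gives 400 units along the second
    left = (image_w - width) // 2  # equal distances A-E and B-F
    return (left, 0, left + width, height)  # (left, top, right, bottom)
```

  • For example, for a 1080 by 1920 portrait frame, this sketch yields the box (380, 0, 700, 400).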
  • S404: Based on the image information, determine whether feature points usable for face recognition can be detected.
  • face recognition needs to extract multiple feature points of the face.
  • the electronic device 200 after the electronic device 200 detects the contour of the human face, it can further detect and extract the feature points of the human face in the image information.
  • The result of feature point detection on the face part can include two cases. In the first case, the face part is sufficiently clear, and the feature points for face recognition can be extracted from the image information; in this case, the electronic device 200 may perform step S405. In the second case, the feature points for face recognition cannot be extracted from the image information; in this case, the electronic device 200 may perform step S406.
  • The aforementioned feature points may include features that match those of human organs such as the nose, mouth, and eyes. That is, the electronic device 200 can determine whether feature points for face recognition are detected by determining whether the image information includes the image features of the human nose, mouth, and eyes.
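  • A sketch of the S404 decision is given below; the landmark detector itself is out of scope, so `landmarks` is assumed to be the output of some real facial landmark model mapping organ names to detected points.

```python
REQUIRED_ORGANS = {"left_eye", "right_eye", "nose", "mouth"}

def can_recognize(landmarks: dict) -> bool:
    """S404: feature points usable for face recognition are present only if
    image features of the eyes, nose, and mouth were all detected."""
    return REQUIRED_ORGANS.issubset(landmarks.keys())
```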
  • S405: Perform face recognition and anti-counterfeiting verification, and determine whether the identity verification passes.
  • When the electronic device 200 detects the feature points of the face object, it can authenticate the user. Authenticating the user usually includes face verification and anti-counterfeiting verification.
  • the electronic device 200 may pre-store human face image information used for user identity verification.
  • the facial image information may include feature information of the facial image pre-recorded by the electronic device 200.
  • The electronic device 200 can compare the image information with the feature information of the facial image entered in advance to determine whether they match.
  • the electronic device 200 may compare the facial features extracted from the second image information with pre-entered facial features, and determine whether the extracted facial features match the pre-entered facial features.
  • the electronic device 200 may perform further anti-counterfeiting authentication on the face object in the second image information.
  • The anti-counterfeiting authentication is used to detect whether the subject is a living body, preventing a photo, a model, or the like of the user from being used for identity verification on the user's behalf.
  • When both the comparison and the anti-counterfeiting authentication pass, step S408 of passing the identity verification can be performed; that is, face recognition succeeds, so that the corresponding event (such as payment, unlocking, passing through the gate, or opening the door) can be performed.
  • Otherwise, step S406 may be executed.
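  • S405 chains two checks: feature comparison against the pre-stored template, then anti-counterfeiting (liveness). The sketch below uses cosine similarity as the matching metric; both the metric and the threshold are assumptions, since the patent does not name a specific comparison function.

```python
import numpy as np

MATCH_THRESHOLD = 0.8  # assumed similarity threshold, not from the patent

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def authenticate(features: np.ndarray, template: np.ndarray, is_live) -> bool:
    """S405: pass identity verification only if the comparison succeeds and
    the anti-counterfeiting (liveness) check also passes; `is_live` is a
    hypothetical liveness-detection callable."""
    if cosine_similarity(features, template) < MATCH_THRESHOLD:
        return False           # comparison failed: fall back to S406
    return bool(is_live())     # photos and models are rejected here
```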
  • In S406, when the electronic device 200 can detect the user's facial contour from the image information but multiple iterations are still required before the facial feature points used for face recognition can be detected, then in order to increase the detection speed of the facial features and avoid the interference of the large brightness difference between the surrounding environment and the face in the image, the electronic device can determine the average pixel brightness value of the facial contour part, so that it brightens the subsequently collected images based only on the average pixel brightness value of the facial contour part.
  • S407: Based on the adjusted camera parameters, collect image information through the camera again, and then perform step S402.
  • steps S402 to S408 may be repeatedly executed until the electronic device 200 can clearly detect the content of the preset image area or detect the face object.
  • the electronic device 200 can authenticate the user based on the detected image content.
  • If the number of repetitions of step S402 to step S408 exceeds the preset threshold, it can be determined that the user identity verification has failed. At this time, the electronic device will no longer perform the corresponding event (such as payment, unlocking, passing through the gate, or opening the door).
Taking a mobile phone as the electronic device as an example, the facial recognition method shown in the embodiments of the present application is described below in a concrete scenario with reference to FIG. 6a to FIG. 6c. First, the user taps the phone screen, triggering an event in which the phone unlocks the screen through facial recognition. The component in the phone that executes the facial recognition instruction then controls the phone's camera to acquire image information. In this application scenario, the face part of the image formed by the acquired image information is too dark, and the user's facial contour cannot be detected from the image. FIG. 6a schematically shows the image formed by the image information currently collected by the camera. In FIG. 6a, the brightness contrast between area A and area B is strong; that is, the brightness difference between the two areas is too large. In this case, even if the average pixel brightness value of the entire image shown in FIG. 6a reached the target value, the facial contour still could not be detected from the image. Assume here that area A is the preset image coordinate area.

The phone can then determine the average pixel brightness value of the preset image coordinate area A in FIG. 6a, adjust parameters such as the camera's sensitivity and exposure time based on that value, and re-collect image information to generate the second frame, FIG. 6b, so that the average pixel brightness value of the preset image coordinate area A in the newly acquired frame increases. As can be seen, compared with the image shown in FIG. 6a, the average pixel brightness value of the preset image coordinate area A in FIG. 6b is higher. The phone can then continue to determine whether a facial contour can be detected from FIG. 6b. When the phone detects the facial contour in FIG. 6b but cannot extract facial feature points from it, it can further determine the average pixel brightness value of the facial contour area C in FIG. 6b (area C is assumed here to be the facial contour area). Based on the average pixel brightness value of the facial contour area C, the phone continues to adjust the camera's sensitivity, exposure time, and other parameters, and re-collects image information to generate the third frame, FIG. 6c. Compared with the image shown in FIG. 6b, the average pixel brightness value of the facial contour area in FIG. 6c is further increased. The phone can then clearly extract the facial features from FIG. 6c and perform the subsequent face recognition and face anti-counterfeiting authentication steps.

It should be noted that when the second and third frames are acquired with the adjusted camera parameters, the brightness of the entire image changes relative to the previous frame; as FIG. 6a to FIG. 6c show, the brightness of the background area B also gradually increases. In the present application, however, the pixel brightness of areas other than the preset image area or the facial contour area need not be considered. That is, in some scenarios the preset image coordinate area or the facial contour area allows normal contour detection or feature extraction while the areas outside it are overexposed, leaving the objects in those other areas blurred.
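Putting the whole flow together, the following sketch shows one way the iterative brightening loop of FIG. 4 and FIG. 6a to FIG. 6c could be organized. The toy camera model, the brightness thresholds, and the iteration cap are all assumptions made so that the sketch runs; a real device would supply actual capture, detection, and comparison routines:

```python
import random

class FakeCamera:
    """Toy camera: each exposure bump makes the next frame brighter."""
    def __init__(self):
        self.exposure = 1.0
    def capture(self) -> float:
        return min(255.0, 30.0 * self.exposure + random.uniform(-5, 5))
    def brighten(self, step: float = 1.6):
        self.exposure *= step  # stands in for raising ISO / exposure time

MAX_ITERATIONS = 10  # assumed cap; the patent only requires "a preset threshold"

def recognize_face(cam: FakeCamera) -> bool:
    for _ in range(MAX_ITERATIONS):
        brightness = cam.capture()       # S401 / S407: (re)capture a frame
        if brightness < 90:              # S402: contour not detectable yet
            cam.brighten()               # S403: brighten the preset region
        elif brightness < 130:           # S404: contour found, features unclear
            cam.brighten()               # S406: brighten the contour part
        else:
            return True                  # S405 passed -> S408 (verified)
    return False                         # threshold exceeded: verification fails

print(recognize_face(FakeCamera()))
```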
In one possible implementation, the preset image area can be obtained through the steps shown in FIG. 7, as follows:

S701: Acquire a set of sample images.

Each sample image in the set includes a face object. A sample image may be captured with light shining directly on the user's face, or with the user's face turned away from the light source. Sample images captured under direct light present the facial features of the user object relatively clearly, whereas sample images captured with the face turned away from the light source usually present only the facial contour of the user object.

It is worth noting that, to improve the accuracy of the resulting image coordinate area, the sample images may be acquired with the same type of electronic device as the electronic device 200; alternatively, the size and aspect ratio of the sample images should be the same as the size and aspect ratio of the images captured by the camera of the electronic device 200.
Based on the sample image set acquired in S701, S702 determines the coordinate region information of the face object in each sample image.

The coordinate region information of the face object in a sample image may be obtained by manual annotation, or detected with a contour detection algorithm. Specifically, when a sample image was captured with light shining directly on the user's face, the electronic device 200 can detect the coordinate region information of the face object directly with the contour detection algorithm. When a sample image was captured with the user's face turned away from the light source, the coordinate region information may be annotated manually to improve the accuracy of the labeling.

To simplify the subsequent calculation and brightness processing, the coordinate region of the face object in a sample image is usually a rectangular region; by determining the coordinate positions of the rectangle's four vertices in the image, the coordinate region information of the face object in the sample image can be determined.
S703: Calculate the degree of coincidence of the coordinate region information obtained for the sample images, and select a region whose degree of coincidence is greater than a preset threshold.

If, in more than a preset number of sample images, the position of the face object falls within a certain region of the image, that region is the region whose degree of coincidence is greater than the preset threshold. For example, suppose there are 100 sample images, and in 95 of them the coordinate region of the face object falls within the rectangle enclosed by the four coordinate points A(0,20), B(0,80), C(70,20), and D(70,80) shown in FIG. 5a. The region enclosed by these four coordinate points can then be taken as the region whose degree of coincidence is greater than the preset threshold.
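As one way to picture this selection, the sketch below finds a rectangle that contains roughly a target share of the annotated face boxes by taking coordinate quantiles over the sample set. The 95% figure mirrors the example above, while the quantile approach itself is an illustrative assumption rather than the patent's prescribed procedure:

```python
import numpy as np

def high_coincidence_region(boxes: np.ndarray, coverage: float = 0.95) -> tuple:
    """boxes: (N, 4) array of face rectangles (x0, y0, x1, y1) from the samples.
    Returns a rectangle that roughly the central `coverage` share falls inside."""
    q = (1.0 - coverage) / 2.0
    x0 = np.quantile(boxes[:, 0], q)
    y0 = np.quantile(boxes[:, 1], q)
    x1 = np.quantile(boxes[:, 2], 1.0 - q)
    y1 = np.quantile(boxes[:, 3], 1.0 - q)
    return (float(x0), float(y0), float(x1), float(y1))

rng = np.random.default_rng(0)
centers = rng.normal([35, 50], [5, 8], size=(100, 2))   # synthetic annotations
boxes = np.hstack([centers - [15, 20], centers + [15, 20]])
print(high_coincidence_region(boxes))  # region covering ~95% of the face boxes
```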
S704: Generate the preset image coordinate area based on the coordinate region whose degree of coincidence is greater than the preset threshold.

To make the face object fall within a certain region of the image as often as possible, the region selected in S703 is usually large. However, the maximum brightness value the electronic device 200 can reach when adjusting the image is determined by the average brightness of the image. If the image area is too large, the electronic device 200 may still fail to recognize the face even when the selected image area has been driven to the maximum brightness value allowed by the set target.

Furthermore, the electronic device 200 usually sets the brightness adjustment step size based on the difference between the calculated image brightness value and a predetermined target brightness value: the smaller the difference, the smaller the step, and the larger the difference, the larger the step. If the selected image area is too large, part of the surrounding environment falls inside it. When the brightness gap between the face part and the surrounding environment in the image is too large, the average brightness value the electronic device 200 calculates for the image area is far higher than the average brightness value of the face part. The difference between that average and the predetermined target brightness value is then small, so the electronic device brightens the image area in small steps. After such an adjustment, the facial contour may still go undetected, and many iterations may be needed before the face can be recognized, which severely slows down face recognition.
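The step-size rule described here can be written down directly. The proportional mapping and the clamping constants below are assumptions used for illustration; the patent only fixes the qualitative rule that a larger brightness shortfall yields a larger step:

```python
def brightness_step(current_avg: float, target: float = 128.0,
                    k: float = 0.5, max_step: float = 48.0) -> float:
    """Step size grows with the gap between measured and target brightness."""
    gap = max(0.0, target - current_avg)
    return min(max_step, k * gap)

# A face at brightness 30 inside a well-chosen (face-only) region: big step.
print(brightness_step(30.0))   # -> 48.0 (clamped)
# The same face inside an oversized region whose bright background lifts the
# average to 110: the small gap yields a small step, hence many iterations.
print(brightness_step(110.0))  # -> 9.0
```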
To balance the maximum brightness value the electronic device 200 can handle against its brightness adjustment speed, the size of the coordinate region whose degree of coincidence exceeds the preset threshold must therefore also be tuned, and the final preset image coordinate area is obtained from the result of that tuning.

To this end, a test image that includes a face object under preset environmental conditions can first be obtained. To optimize the preset image area as far as possible, the preset environmental condition may be a dim, backlit environment, with the face object overlapping the background as much as possible in the test image; the test image may, for example, be as shown in FIG. 1a.

Brightness adjustment is then performed on the pre-selected coordinate region (the one whose degree of coincidence is greater than the preset threshold) in the test image. In response to the number of brightness adjustments being greater than a preset threshold, the size of the coordinate region is adjusted. The number of brightness adjustments here is the number of brightness iterations needed before facial features can be recognized in the pre-selected image coordinate region. If that number is greater than the preset count, the selected image coordinate region is too large, and it is reduced by a preset ratio; brightness adjustment then continues on the newly selected image coordinate region in the test image. Once the number of brightness adjustments for the current coordinate region is below the preset threshold, the current coordinate region can be used as the aforementioned preset image coordinate area.
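A compact sketch of this tuning loop is given below. The shrink ratio, the iteration budget, and the stand-in for "iterations until features are recognizable" are all assumptions made to keep the loop concrete and runnable:

```python
def shrink(box: tuple, ratio: float = 0.9) -> tuple:
    """Shrink a rectangle around its center by the given preset ratio."""
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    w, h = (x1 - x0) * ratio / 2, (y1 - y0) * ratio / 2
    return (cx - w, cy - h, cx + w, cy + h)

def tune_preset_region(box: tuple, iters_needed, max_iters: int = 5) -> tuple:
    """Shrink the candidate region until the test image needs few enough
    brightness iterations for facial features to become recognizable."""
    while iters_needed(box) >= max_iters:
        box = shrink(box)
    return box

# `iters_needed` would replay the brightening loop on the FIG. 1a-style test
# image; here a toy model simply makes bigger regions need more iterations.
area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
toy_iters = lambda b: int(area(b) / 800)
print(tune_preset_region((0, 0, 100, 100), toy_iters))
```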
The preset image coordinate area determined through S701 to S704 presents the features of the user object as completely as possible within the area while keeping the number of image brightness iterations as small as possible, thereby improving the efficiency of successful face recognition in backlit or dim scenes.
FIG. 8 shows another image recognition method provided by an embodiment of the present application; the method can be applied to the electronic device 200, which includes the camera 201. The image recognition method may include S801 to S807:

S801: In response to receiving an instruction to perform face recognition, collect image information through the camera.

S802: Based on the image information, determine whether facial contour information can be detected. If no facial contour information can be detected, step S803 is executed; if facial contour information is detected, step S804 is executed.

S803: Determine the average pixel brightness value of the preset image coordinate area in the collected image information, adjust the camera parameters based on the average pixel brightness value, and then perform S806.
S804: Perform face recognition and anti-counterfeiting verification, and determine whether identity verification passes.

In this embodiment, once the electronic device 200 detects facial contour information, it can proceed to face recognition directly. That is, when performing face recognition here, facial key point features can be extracted and compared with the facial features corresponding to the pre-recorded face image. When the brightness of the facial contour part of the image is too low for facial key point features to be extracted from the image information, or when only a few key point features can be extracted, step S805 may be performed.

If facial key point features can be extracted from the image information and are successfully matched against the facial features corresponding to the pre-recorded face image, further anti-counterfeiting verification, that is, liveness detection, can be performed. When the anti-counterfeiting authentication passes, step S807 (identity verification passed) can be performed; that is, face recognition succeeds, so that the corresponding event (such as payment, unlocking, passing through a gate, or opening a door) can be carried out.
S805: Determine the average pixel brightness value of the facial contour part in the collected image information, adjust the camera parameters based on that value, and then perform step S806.

S806: Based on the adjusted parameters, collect image information through the camera again, and then perform step S802.

In this embodiment of the present application, steps S802 to S807 may be executed repeatedly until the electronic device 200 can clearly detect the content of the preset image area or detect the face object. The electronic device 200 can then authenticate the user based on the detected image content.

It should be noted that when the number of iterations of steps S802 to S807 exceeds a preset threshold, it can be determined that user identity verification has failed; at that point, the electronic device will no longer carry out the corresponding event (such as payment, unlocking, passing through a gate, or opening a door).
For the specific implementation of steps S801, S802, S803, S805, S806, and S807 shown in this embodiment, and the beneficial effects they bring, refer to the description of steps S401, S402, S403, S406, S407, and S408 shown in FIG. 4; the details are not repeated here.

As can be seen from FIG. 8, unlike the image recognition method shown in FIG. 4, this embodiment omits the separate facial feature point detection of S404: facial feature extraction and facial feature comparison are performed directly once facial contour information is detected, which improves the flexibility of the image recognition method.

It is worth noting that the method steps shown in FIG. 4 and FIG. 8 are only examples; the embodiments of the present application may also perform other operations or variations of the operations in FIG. 4 and FIG. 8. In addition, the steps in FIG. 4 and FIG. 8 may be performed in an order different from the one shown, and possibly not all of the operations need be performed.
It can be understood that, to implement the above functions, the above-mentioned electronic device includes hardware structures and/or software modules corresponding to each function. Those skilled in the art should readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, the embodiments of the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is executed by hardware or by computer software driving hardware depends on the specific application and the design constraints of the technical solution. Skilled professionals may use different methods to implement the described functions for each particular application, but such implementations should not be considered to go beyond the scope of the embodiments of the present application.

The embodiments of the present application may divide the above-mentioned electronic device into functional modules according to the foregoing method examples. For example, each functional module may be divided in correspondence with each function, or two or more functions may be integrated into one processing module. The integrated module can be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of the present application is illustrative and is only a division by logical function; other division manners are possible in actual implementation.
In the case where integrated units are used, FIG. 9 shows a schematic diagram of a possible structure of the electronic device 900 involved in the foregoing embodiments. The electronic device 900 may include a processing module 901 and an RGB acquisition module 902. Optionally, the electronic device 900 may further include a display module and a communication module, the communication module including a Bluetooth module, a Wi-Fi module, and the like.

The processing module 901 is used to control and manage the actions of the electronic device 900. The RGB acquisition module 902 is used to acquire an image of the target object under visible light. The display module is used to display the images generated by the processing module 901 and the images collected by the RGB acquisition module 902. The communication module is used to support communication between the electronic device 900 and other devices. The processing module 901 is further configured to perform identity verification of the target object according to the images collected by the RGB acquisition module 902.
Specifically, the foregoing processing module 901 may be used to support the electronic device 900 in executing S402 to S408, S701 to S704, and S802 to S807 in the foregoing method embodiments, and/or other processes of the technology described herein. The RGB acquisition module 902 may be used to support the electronic device 900 in collecting image information under visible light, and/or other processes of the technology described herein.

Of course, the unit modules in the above-mentioned electronic device 900 include but are not limited to the above-mentioned processing module 901, the RGB acquisition module 902, and so on. For example, the electronic device 900 may also include a storage module, which is used to store the program code and data of the electronic device 900, and/or to support other processes of the technology described herein.
The processing module 901 can be a processor or a controller, for example a central processing unit (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The processor may include an application processor and a baseband processor, and can implement or execute the various exemplary logical blocks, modules, and circuits described in connection with the disclosure of the present application. The processor may also be a combination that realizes computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The storage module may be a memory.
For example, the processing module 901 is one or more processors (such as the processor 210 shown in FIG. 2b), and the communication module includes a wireless communication module (such as the wireless communication module 252 shown in FIG. 2b, which includes BT (that is, a Bluetooth module) and WLAN (such as a Wi-Fi module)); the wireless communication module may be called a communication interface. The storage module may be a memory (such as the internal memory 221 shown in FIG. 2b). The display module may be a display screen (such as the display screen 294 shown in FIG. 2b). The aforementioned RGB acquisition module 902 may be the 1 to N cameras 293 shown in FIG. 2b. The electronic device 900 provided in the embodiments of the present application may be the electronic device 200 shown in FIG. 2b, in which the above-mentioned one or more processors, the memory, the display screen, the cameras, and so on may be connected together, for example via a bus.
The embodiments of the present application also provide a computer storage medium in which computer program code is stored. When the above-mentioned processor executes the computer program code, the electronic device 900 executes the relevant method steps in any one of FIG. 4, FIG. 7, or FIG. 8 to implement the methods in the foregoing embodiments.

The embodiments of the present application also provide a computer program product. When the computer program product runs on a computer, the computer executes the relevant method steps in any one of FIG. 4, FIG. 7, or FIG. 8 to implement the methods in the foregoing embodiments.

The electronic device 900, the computer storage medium, and the computer program product provided in the embodiments of the present application are all used to execute the corresponding methods provided above; for the beneficial effects they can achieve, refer to the beneficial effects of the corresponding methods provided above, which are not repeated here.
Through the description of the foregoing implementations, those skilled in the art will clearly understand that the disclosed device and method can be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division of the modules or units is only a division by logical function; in actual implementation there may be other division manners: multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.

The functional units in the various embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be implemented in the form of hardware or in the form of a software functional unit.

If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present application. The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a magnetic disk, or an optical disc.

Abstract

The embodiments of the present application provide an image recognition method and an electronic device. The method includes: in response to receiving an instruction to perform face recognition, collecting image information through a camera, the image information being used to perform identity verification on a target object; determining, based on the image information, whether a facial contour can be detected; in response to failing to detect a facial contour, determining an average pixel brightness value of a preset image coordinate area in the image information, and adjusting camera parameters based on the average pixel brightness value; and re-collecting image information through the camera based on the adjusted camera parameters. By using the image recognition method shown in the present application, the user's facial image can still be recognized even when the captured facial image is too dark, improving the success rate of face recognition.


Claims (11)

  1. An image recognition method, wherein the method comprises:
    in response to receiving an instruction to perform facial recognition, collecting, by an electronic device, image information through a camera, wherein the image information is used to perform identity verification on a target object;
    determining, by the electronic device based on the image information, whether a facial contour is detected;
    in response to failing to detect the facial contour, determining, by the electronic device, a first brightness value parameter of the part of the image information located in a preset region;
    adjusting, by the electronic device, a camera parameter of the camera based on the first brightness value parameter; and
    re-collecting, by the electronic device, image information through the camera based on the adjusted camera parameter, for use in identity verification.
  2. The method according to claim 1, wherein the method further comprises:
    in response to detecting the facial contour, determining, by the electronic device, whether feature points for facial recognition are detected in the image information;
    in response to failing to detect the feature points for facial recognition, determining, by the electronic device, a second brightness value parameter of the facial contour part in the image information;
    adjusting, by the electronic device, the camera parameter of the camera based on the second brightness value parameter; and
    performing, by the electronic device, the step of re-collecting image information through the camera based on the adjusted camera parameter.
  3. The method according to claim 1, wherein the method further comprises:
    in response to detecting the facial contour, comparing, by the electronic device, the image information with pre-stored facial image information, and determining whether the comparison succeeds;
    in response to the comparison succeeding, performing anti-counterfeiting authentication based on the image information; and
    in response to the anti-counterfeiting authentication passing, passing the identity verification.
  4. The method according to claim 2, wherein the method further comprises:
    in response to detecting the feature points for facial recognition, comparing, by the electronic device, the image information with pre-stored facial image information, and determining whether the comparison succeeds;
    in response to the comparison succeeding, performing anti-counterfeiting authentication based on the image information; and
    in response to the anti-counterfeiting authentication passing, passing the identity verification.
  5. The method according to claim 3 or 4, wherein the method further comprises:
    in response to the comparison failing, determining a third brightness value parameter of the facial contour part in the image information;
    adjusting the camera parameter based on the third brightness value parameter; and
    performing, by the electronic device, the step of re-collecting image information through the camera based on the adjusted camera parameter.
  6. The method according to any one of claims 1-5, wherein after the re-collecting, by the electronic device, of image information through the camera based on the adjusted camera parameter, the method further comprises:
    determining, by the electronic device based on the re-collected image information, whether a facial contour is detected;
    in response to failing to detect a facial contour in the re-collected image information, determining, by the electronic device, a fourth brightness value parameter of the part of the re-collected image information located in the preset region;
    in response to detecting a facial contour in the re-collected image information, determining, by the electronic device, whether feature points for facial recognition are detected in the re-collected image information; in response to failing to detect the feature points for facial recognition in the re-collected image information, determining, by the electronic device, a fifth brightness value parameter of the facial contour part in the re-collected image information; in response to detecting the feature points for facial recognition in the re-collected image information, comparing, by the electronic device, the re-collected image information with the pre-stored facial image information, and determining whether the comparison succeeds;
    adjusting, by the electronic device, the camera parameter of the camera based on the fourth brightness value parameter or the fifth brightness value parameter; and
    performing, by the electronic device, the step of re-collecting image information through the camera based on the adjusted camera parameter.
  7. The method according to any one of claims 1-6, wherein the preset region is a rectangular region.
  8. The method according to claim 7, wherein the rectangular region enclosed by the preset region comprises a first side and a second side, the length of the first side being smaller than the length of the second side; wherein the ratio of the first side to the second side is 4:5.
  9. An electronic device, wherein the electronic device comprises one or more processors, a memory, and a camera, the memory and the camera being coupled to the processor, the memory being configured to store information, and the processor executing the instructions in the memory to cause the electronic device to perform the image recognition method according to any one of claims 1-8.
  10. A computer storage medium, wherein the computer storage medium comprises computer instructions that, when run on an electronic device, cause the electronic device to perform the image recognition method according to any one of claims 1-8.
  11. A computer program product, wherein, when the computer program product runs on a computer, the computer is caused to perform the image recognition method according to any one of claims 1-8.
PCT/CN2020/111801 2019-08-30 2020-08-27 Image recognition method and electronic device WO2021037157A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910816996.6 2019-08-30
CN201910816996.6A CN112446252A (zh) Image recognition method and electronic device

Publications (1)

Publication Number Publication Date
WO2021037157A1 true WO2021037157A1 (zh) 2021-03-04

Family

ID=74684643

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/111801 WO2021037157A1 (zh) 2019-08-30 2020-08-27 图像识别方法及电子设备

Country Status (2)

Country Link
CN (1) CN112446252A (zh)
WO (1) WO2021037157A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115861743B (zh) * Vehicle-mounted-bench-based face recognition apparatus test method and system, and in-vehicle unit

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107872613A (zh) Method and apparatus for face recognition using dual cameras, and mobile terminal
CN108171032A (zh) Identity authentication method, electronic apparatus, and computer-readable storage medium
CN110163160A (zh) Face recognition method, apparatus, device, and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006197499A (ja) * Imaging apparatus
CN101115140A (zh) * Image capturing system
CN106357987A (zh) * Exposure method and apparatus
CN108288044A (zh) * Electronic device, face recognition method, and related products

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113065487A (zh) * Fingerprint identification method and apparatus, and electronic device
CN113923372A (zh) * Exposure adjustment method and related device
CN113923372B (zh) * Exposure adjustment method and related device
WO2023015958A1 (zh) * Face recognition method and apparatus
CN114694191A (zh) * Image processing method, computer program product, device, and storage medium
CN114863510A (zh) * Face recognition method and apparatus
CN114863510B (zh) * Face recognition method and apparatus

Also Published As

Publication number Publication date
CN112446252A (zh) 2021-03-05

Similar Documents

Publication Publication Date Title
WO2021037157A1 (zh) Image recognition method and electronic device
JP7195422B2 (ja) 顔認識方法および電子デバイス
US9986171B2 (en) Method and apparatus for dual exposure settings using a pixel array
WO2020088290A1 (zh) 一种获取深度信息的方法及电子设备
KR102524498B1 (ko) 듀얼 카메라를 포함하는 전자 장치 및 듀얼 카메라의 제어 방법
WO2018121428A1 (zh) 一种活体检测方法、装置及存储介质
CN114092364B (zh) 图像处理方法及其相关设备
US20190354662A1 (en) Apparatus and method for recognizing an object in electronic device
CN115601244B (zh) 图像处理方法、装置和电子设备
CN112840634B (zh) 用于获得图像的电子装置及方法
CN113741681B (zh) 一种图像校正方法与电子设备
US20200322530A1 (en) Electronic device and method for controlling camera using external electronic device
CN111144365A (zh) 活体检测方法、装置、计算机设备及存储介质
US20240119566A1 (en) Image processing method and apparatus, and electronic device
US20210117708A1 (en) Method for obtaining face data and electronic device therefor
CN112087649B (zh) 一种设备搜寻方法以及电子设备
CN114090102B (zh) 启动应用程序的方法、装置、电子设备和介质
WO2021179186A1 (zh) 一种对焦方法、装置及电子设备
CN113592751B (zh) 图像处理方法、装置和电子设备
US20170111569A1 (en) Face detection method and electronic device for supporting the same
US20220103795A1 (en) Electronic device and method for generating images by performing auto white balance
CN115150542B (zh) 一种视频防抖方法及相关设备
WO2020077544A1 (zh) 一种物体识别方法和终端设备
CN114390195B (zh) 一种自动对焦的方法、装置、设备及存储介质
WO2022179412A1 (zh) 识别方法及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20856546

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20856546

Country of ref document: EP

Kind code of ref document: A1