WO2023016005A1 - Data acquisition method and device - Google Patents

Data acquisition method and device

Info

Publication number
WO2023016005A1
WO2023016005A1 · PCT/CN2022/092485
Authority
WO
WIPO (PCT)
Prior art keywords
tof
data
frame
camera
image
Prior art date
Application number
PCT/CN2022/092485
Other languages
English (en)
French (fr)
Inventor
袁江峰
廖川
周俊伟
Original Assignee
荣耀终端有限公司 (Honor Device Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 荣耀终端有限公司 (Honor Device Co., Ltd.)
Priority to EP22789143.9A priority Critical patent/EP4156674A4/en
Priority to US17/966,142 priority patent/US20230052356A1/en
Publication of WO2023016005A1 publication Critical patent/WO2023016005A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/20Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from infrared radiation only
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/73Circuitry for compensating brightness variation in the scene by influencing the exposure time

Definitions

  • the present application relates to the field of electronic information, in particular to a data acquisition method and device.
  • Facial recognition is widely used in electronic devices to identify authorized users. For example, with the face unlock function, whether to unlock the screen is decided according to whether facial recognition passes.
  • the present application provides a data acquisition method and device, aiming at solving the problem of how to realize facial recognition safely, accurately and quickly.
  • the first aspect of the present application provides a data acquisition method, including: acquiring a first frame of time-of-flight (TOF) data, where the first frame of TOF data includes projection off data and infrared data, and the projection off data is the TOF data collected while the TOF camera has the TOF light source turned off; determining that a data block satisfying a preset condition exists in the infrared data, where the preset condition includes that the number of data points in the data block whose value is greater than a first threshold is greater than a second threshold; and acquiring, according to the difference between the infrared data and the projection off data, the TOF data used to generate the first frame of TOF image.
  • Because the projection off data is the TOF data collected by the TOF camera with the TOF light source turned off, the difference between the infrared data and the projection off data corrects the overexposed infrared data by removing the ambient-light component.
  • The corrected, higher-quality first frame of TOF data is used for facial recognition, which provides not only higher security but also higher accuracy and execution speed.
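  As a rough sketch of this first aspect, the overexposure check and the difference correction might look as follows. All concrete values (the two thresholds, the block size) are assumptions; the patent defines them only abstractly.

```python
import numpy as np

# Assumed values: the patent leaves the concrete thresholds open.
FIRST_THRESHOLD = 1000   # a data point above this counts as overexposed
SECOND_THRESHOLD = 50    # a block with more than this many such points qualifies
BLOCK = 8                # assumed data-block size

def has_overexposed_block(infrared: np.ndarray) -> bool:
    """Check the preset condition: some data block contains more than
    SECOND_THRESHOLD points whose value exceeds FIRST_THRESHOLD."""
    h, w = infrared.shape
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            tile = infrared[y:y + BLOCK, x:x + BLOCK]
            if np.count_nonzero(tile > FIRST_THRESHOLD) > SECOND_THRESHOLD:
                return True
    return False

def first_frame_tof_data(infrared: np.ndarray, projection_off: np.ndarray) -> np.ndarray:
    """If an overexposed block exists, take the difference between the infrared
    data and the projection-off data (removing the ambient-light component);
    otherwise the infrared data is used as collected."""
    if has_overexposed_block(infrared):
        return np.clip(infrared.astype(np.int32) - projection_off, 0, None)
    return infrared
```

  For instance, a 16×16 infrared patch uniformly at 2000 with projection off data at 300 would be corrected to 1700 everywhere, while a dim patch at 100 passes through unchanged.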
  • the acquiring of the first frame of TOF data includes: collecting the projection off data after the TOF camera turns off the TOF light source, and collecting the infrared data after the TOF camera turns on the TOF light source. This is easy to implement: by controlling the turning off and turning on of the TOF camera's TOF light source, the projection off data and the infrared data are collected respectively.
  • before the collecting of the projection off data, the method further includes: collecting depth data for generating a depth image while the TOF camera has the TOF light source turned on.
  • the depth image and the infrared image generated from the infrared data can be used for anti-counterfeiting identification in facial recognition, thus improving the security of facial recognition.
  • the timing of controlling the TOF camera to turn off the TOF light source is determined according to the first exposure time used to collect the depth data, which ensures that the depth data is fully collected while minimizing the delay in collecting the first frame of TOF data.
  • similarly, the timing of controlling the TOF camera to turn on the TOF light source is determined according to the exposure time used to collect the projection off data, which ensures that the projection off data is fully collected while minimizing the delay in collecting the first frame of TOF data.
  • the exposure time for collecting the projection off data is the first exposure time for collecting the depth data, or the second exposure time for collecting the infrared data.
  • the TOF camera turning off the TOF light source includes: controlling the TOF light source to be turned off through the TOF sensor of the TOF camera; the TOF camera turning on the TOF light source includes: controlling the TOF light source to be turned on through the TOF sensor of the TOF camera. Using the TOF sensor to control the turning off and on of the TOF light source, that is, hardware controlling hardware, achieves faster execution.
  • the method further includes: determining that the data block does not exist in the infrared data, and processing the collected first frame of TOF data into a first frame of TOF image. There is no data block that meets the preset conditions in the infrared data, which means that the infrared data does not have overexposure and can be directly processed into a TOF image.
  • the method further includes: using the acquired TOF data to generate the first frame of TOF image; and using the first frame of TOF image to perform facial recognition to obtain a recognition result.
  • using the first frame of TOF image to obtain the face recognition result can improve the execution speed of face recognition.
  • the first frame of TOF data includes: a facial recognition frame.
  • before the acquiring of the first frame of TOF data, the method further includes: determining, according to a safety indication frame collected before the first frame of TOF data, that human eyes are safe when the TOF light source is turned on. The facial recognition frame is then processed to obtain the TOF data used to generate the first frame of TOF image; because the safety indication frame is used to confirm eye safety before any processing, dividing the TOF data into types ensures human eye safety as a precondition for using the TOF data.
  • the method further includes: determining that human eyes are not safe when the TOF light source is turned on according to the safety indication frame, and controlling the TOF camera to be turned off, so as to avoid damage to human eyes by infrared light of the TOF camera.
  • the specific implementation manner of acquiring the first frame of TOF data is: storing the first frame of TOF data collected by the TOF camera in a trusted execution environment TEE through a kernel layer.
  • the specific implementation of determining that a data block satisfying the preset condition exists in the infrared data, and of acquiring the TOF data used to generate the first frame of TOF image according to the difference between the infrared data and the projection off data, is as follows: the TEE determines that such a data block exists in the infrared data, and acquires the TOF data for generating the first frame of TOF image according to the difference between the infrared data and the projection off data.
  • the above specific implementation manner realizes the purpose of processing TOF data in the TEE to improve the security of TOF data.
  • the second aspect of the present application provides a data acquisition method, including: collecting TOF data at a first moment using exposure parameters, and collecting a first frame of TOF data at a second moment using the same exposure parameters when the interval between the second moment and the first moment is within a preset range; the first moment is earlier than the second moment.
  • Because the interval between the first moment and the second moment is within the preset range, the preset range can be set so that the interval is short; the external environment at the second moment is then similar to that at the first moment, so reusing the first moment's exposure parameters at the second moment raises the likelihood that the exposure parameters suit the environment, yielding a higher-quality first frame of TOF data.
  • Using this TOF data for facial recognition helps improve the accuracy and speed of facial recognition.
  • before collecting the first frame of TOF data using the exposure parameters, the method further includes: taking the last moment at which TOF data was collected before the second moment as the first moment, and determining that the interval between the second moment and the first moment is within the preset range. Taking the most recent collection moment before the second moment as the first moment keeps the computation cost minimal, which helps save computing resources.
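  The first-moment/second-moment logic of this second aspect can be sketched as a small parameter cache. The width of the preset range and the parameter fields below are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Optional

PRESET_RANGE_S = 3.0  # assumed width of the preset interval, in seconds

@dataclass
class ExposureParams:
    exposure_time_us: int
    sensor_gain: float

class ExposureCache:
    """Reuse the exposure parameters from the most recent collection (the
    first moment) when collecting a first frame at a second moment, provided
    the interval between the two moments is within the preset range."""

    def __init__(self) -> None:
        self._last_params: Optional[ExposureParams] = None
        self._last_moment: Optional[float] = None

    def record(self, params: ExposureParams, moment: float) -> None:
        # Remember the parameters and the moment of the latest collection.
        self._last_params = params
        self._last_moment = moment

    def params_for(self, second_moment: float, default: ExposureParams) -> ExposureParams:
        # The first moment is the last collection moment before the second moment.
        if (self._last_moment is not None
                and 0 <= second_moment - self._last_moment <= PRESET_RANGE_S):
            return self._last_params
        return default
```

  When the last collection is too far in the past, the cache falls back to a default, matching the idea that a stale environment makes the cached parameters unreliable.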
  • the exposure parameter is an exposure parameter adjusted according to an automatic exposure AE result.
  • the exposure parameters adjusted according to the AE results have a high degree of adaptation to the ambient light, so it is beneficial to obtain higher-quality TOF data.
  • the third aspect of the present application provides a data acquisition method, including: acquiring the light intensity of the environment where the electronic device is located, and collecting the first frame of TOF data using the exposure time corresponding to that light intensity. Because the exposure time for collecting the first frame of TOF data is obtained based on the light intensity of the environment, the first frame of TOF data is more likely to have brightness meeting the requirements of facial recognition, thereby improving the accuracy and speed of facial recognition.
  • the process of obtaining the exposure duration corresponding to the light intensity includes: confirming a target interval in each preset light intensity interval, and the target interval is an interval to which the light intensity of the environment where the electronic device is located belongs; The exposure duration corresponding to the target interval is acquired.
  • each light intensity interval includes an indoor light intensity interval and an outdoor light intensity interval.
  • the division of the indoor light intensity interval and the outdoor light intensity interval lays the foundation for the refinement of the distinction granularity.
  • the larger the values in a light intensity interval, the shorter the corresponding exposure time. This follows the principle that a longer exposure time yields a brighter image, and ensures that image quality is controlled sensibly through the exposure time.
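  The interval lookup of this third aspect can be sketched as a table. The lux boundaries and exposure values below are purely illustrative assumptions; the patent only requires that indoor and outdoor intervals be distinguished and that larger light intensities map to shorter exposure times.

```python
# (lower bound in lux, upper bound in lux, exposure in ms) — values are illustrative only.
LIGHT_INTERVALS = [
    (0,      500,           1.0),  # indoor, dim
    (500,    2_000,         0.7),  # indoor, bright
    (2_000,  10_000,        0.4),  # outdoor, overcast
    (10_000, float("inf"),  0.2),  # outdoor, direct sunlight
]

def exposure_for(lux: float) -> float:
    """Find the target interval the measured light intensity belongs to and
    return that interval's exposure duration (ms)."""
    for low, high, exposure_ms in LIGHT_INTERVALS:
        if low <= lux < high:
            return exposure_ms
    raise ValueError(f"light intensity {lux} out of range")
```

  Note that the table is monotonically decreasing in exposure time as intensity grows, which is exactly the principle the bullet above states.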
  • a fourth aspect of the present application provides an electronic device, including: a TOF camera, a memory, and a processor.
  • the TOF camera is used to collect the first frame of TOF data
  • the first frame of TOF data includes projection off data and infrared data
  • the projection off data is TOF data collected by the TOF camera when the TOF light source is turned off.
  • the memory is used to store program codes.
  • the processor is used to run the program code to realize the data acquisition methods provided by the first aspect, the second aspect and the third aspect.
  • a fifth aspect of the present application provides a chip system, including: at least one processor and an interface, the interface is used to receive code instructions and transmit them to the at least one processor; the at least one processor runs the code Instructions to implement the data acquisition method provided by the first aspect, the second aspect or the third aspect.
  • the sixth aspect of the present application provides a readable storage medium on which a program is stored; when the program is read and run by a computing device, the data acquisition method provided by the first aspect, the second aspect or the third aspect is realized.
  • Figure 1a is an example diagram of a scene using an electronic device for facial recognition
  • Figure 1b is an example diagram of a scene where electronic devices are used for facial recognition under strong light
  • FIG. 2 is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a software framework for realizing facial recognition by an electronic device disclosed in an embodiment of the present application
  • FIG. 4 is a flow chart of a facial recognition method disclosed in an embodiment of the present application.
  • FIG. 5 is a flow chart of another face recognition method disclosed in the embodiment of the present application.
  • FIG. 6 is a flow chart of another face recognition method disclosed in the embodiment of the present application.
  • FIG. 7 is a flow chart of a data processing method disclosed in an embodiment of the present application.
  • FIG. 8 is a flowchart of another data processing method disclosed in the embodiment of the present application.
  • Fig. 9 is a specific implementation flowchart of S819 in Fig. 8;
  • FIG. 10 is a flowchart of a data acquisition method disclosed in the embodiment of the present application.
  • FIG. 11 is a flowchart of another data acquisition method disclosed in the embodiment of the present application.
  • Fig. 12 is a flowchart of another data acquisition method disclosed in the embodiment of the present application.
  • FIG. 13 is a schematic diagram of another software framework for electronic equipment to realize facial recognition.
  • Figure 1a is a scene where facial recognition is applied to electronic devices.
  • a user uses an electronic device such as a mobile phone to implement a function that requires user authorization, the user spontaneously or according to the prompt information sent by the electronic device points the camera of the electronic device at the face.
  • Electronic devices are currently usually equipped with RGB (Red Green Blue) cameras. After the electronic device collects the image through the RGB camera, it compares the image with the stored face template and conducts anti-counterfeiting identification. After obtaining the identification result, it executes the task according to the identification result.
  • the user will turn the screen of the mobile phone towards the face, so that the front camera of the mobile phone can collect the facial image.
  • After the mobile phone collects the image, it performs facial recognition on the image; if recognition passes, the screen is unlocked, and if recognition fails, the screen remains locked.
  • TOF data can generate depth images and infrared images, so it has better anti-counterfeiting performance (i.e., protection against planar attacks and head-mask attacks).
  • TOF data is less affected by light, so the quality of TOF data is higher, so a higher threshold value can be used for face comparison, thereby reducing the possibility of counterfeiting of similar faces. It can be seen that using TOF data for facial recognition can improve security.
  • However, current mainstream hardware platforms, such as the image signal processor (image signal processor, ISP), only support the processing of RGB data and do not support the processing of TOF data (such as camera raw data).
  • the cost of redesigning the hardware platform in order to process the TOF data collected by the TOF camera is too high. Therefore, it is necessary to use a software framework to process the TOF data collected by the TOF camera. Compared with hardware, software is more vulnerable to attacks. In this case, even if TOF data is used for facial recognition, there are still security holes.
  • the following embodiments of the present application provide a facial recognition method, which is applied to electronic devices, with the purpose of improving the security of facial recognition.
  • FIG. 2 is an example diagram of an electronic device, including: a TOF camera 1 , a processor 2 , a memory 3 , and an I/O subsystem 4 .
  • the TOF camera 1 is used to collect TOF data.
  • the TOF camera 1 is set as a front camera of the electronic device, and is used to collect TOF data in front of a display screen (not shown in FIG. 2 ) of the electronic device. For example, TOF data of a human face located in front of a display screen of an electronic device is collected.
  • the TOF camera 1 includes a TOF sensor 11 , a TOF sensor controller 12 , a TOF light source 13 and a TOF light source controller 14 .
  • the TOF light source controller 14 is controlled by the TOF sensor controller 12 to realize the control of the TOF light source 13 .
  • the TOF light source 13 emits infrared (IR) light under the control of the TOF light source controller 14 .
  • the TOF sensor 11 is used for sensing infrared (IR) light reflected by an object such as a human face to collect TOF data.
  • the TOF sensor controller 12 and the TOF light source controller 14 are arranged in the I/O subsystem 4 and communicate with the processor 2 through the I/O subsystem 4 .
  • the memory 3 can be used to store computer-executable program codes.
  • the memory 3 may include a program storage area and a data storage area.
  • the program storage area may store program codes required for implementing an operating system, a software system, at least one function, and the like.
  • the data storage area can store data acquired, generated, and used during the use of the electronic device.
  • all or part of the memory 3 may be integrated in the processor 2 as an internal memory of the processor 2 .
  • the memory 3 is an external memory relative to the processor 2, and communicates with the processor 2 through an external memory interface of the processor 2.
  • the memory 3 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash memory (universal flash storage, UFS) and the like.
  • the processor 2 may include one or more processing units. For example, the processor 2 may include an application processor (application processor, AP), a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), and the like.
  • the processor 2 may include a security area processor 21. Data related to facial recognition may be stored in the security area processor 21, and the security area processor 21 processes this data by calling the program code in the memory 3, to improve the security of facial recognition.
  • Specifically, the security area processor 21 is used to store the TOF data collected by the TOF camera 1, use the TOF data to generate a TOF image, and use the TOF image to perform facial recognition processing to obtain a recognition result.
  • the processor 2 can use the recognition result to perform tasks such as face unlocking or face payment.
  • the electronic device may further include an ambient light sensor (not shown in FIG. 2 ), configured to sense the light intensity of the environment where the electronic device is located. It can be understood that the ambient light sensor communicates with the processor 2 through an ambient light sensor controller (not shown in FIG. 2 ) provided in the I/O subsystem 4 .
  • the structure shown in this embodiment does not constitute a specific limitation on the electronic device.
  • the electronic device may include more or fewer components than shown, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components may be realized in hardware, software, or a combination of software and hardware.
  • the operating system implemented by the processor 2 by running the code stored in the memory 3 can be the iOS operating system, the Android open source operating system, the Windows operating system, the Hongmeng (HarmonyOS) operating system, and the like.
  • the Android open source operating system will be used as an example for illustration.
  • the processor 2 of the electronic device runs the program code stored in the memory 3, which can realize the software framework based on the Android open source operating system shown in Figure 3:
  • the software framework includes a trusted execution environment (Trusted Execution Environment, TEE) and a rich execution environment (Rich Execution Environment, REE) that can execute rich instructions.
  • the TEE can be implemented by the secure area processor 21 in the processor 2 by running program codes.
  • REE includes various layers of the Android open source operating system, including but not limited to: application layer, application framework layer, hardware abstraction layer, and kernel layer.
  • the application layer can consist of a series of application packages. As shown in FIG. 3 , the application package may include a payment application and a lock screen application. Various applications can initiate task requests in different scenarios to trigger facial recognition.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a camera service (Camera service) and a face service (Face service). Camera service is used to implement camera functions. Face service is used to realize the facial recognition function.
  • the hardware abstraction layer is used to abstract hardware. It hides the hardware interface details of a specific platform, provides a virtual hardware platform for the operating system, makes it hardware-independent, and can be transplanted on various platforms.
  • the camera hardware abstraction layer (Camera HAL3) in the hardware abstraction layer is used to control the camera in response to the instructions of the Camera service.
  • the face application client (Face Client Application, Face CA, also known as the face client application) accesses trusted applications in the TEE by calling the application programming interface (API) of the TEE client located in the REE, thereby using the security capabilities provided by the TEE and its trusted applications.
  • the kernel layer is the layer between hardware and software. Hardware driver software is usually set at the kernel layer.
  • one of the functions of the ISP-Lite driver in the kernel layer is to drive the ISP-Lite (part of the ISP) in the processor to convert the TOF data collected by the TOF sensor (such as TOF camera raw data) into a data format commonly used for image processing, such as TOF raw data (Raw Data).
  • the TOF camera driver in the kernel layer is used to drive the TOF camera.
  • the TEE includes various applications, modules and units for facial recognition based on TOF data.
  • the role of the face trusted application (Face Trusted Application, Face TA) is to schedule the modules and units in the TEE and to communicate with the REE.
  • the data processing module is used to process data such as Raw Data.
  • the face recognition module is used to implement functions such as face recognition using face templates and face template management.
  • the electronic device can process the TOF data into a TOF image in the TEE, and use the TOF image in the TEE for facial recognition to obtain the recognition result. That is, on top of the security gained by using TOF data for facial recognition, data processing and recognition are both implemented in the TEE, which further improves security.
  • the facial recognition method performed by the electronic device disclosed in the embodiment of the present application includes the following steps:
  • the application can initiate a task in response to at least one instruction, at least one operation of the user, or at least one sensor signal, and send a task request to the facial service.
  • the user presses the power button to trigger an unlock task, and the unlock application sends an unlock request to the Face service.
  • S403 transmit the image request to the camera service (Camera service) of the application framework layer through the Face CA to respond to the task request.
  • the ISP-Lite at the kernel layer stores the received and processed TOF data in the first secure buffer storage unit (Secure Buffer) of the TEE with the first storage information.
  • the first storage information indicates a storage address.
  • ISP-Lite stores the received and processed TOF data into TEE, which is equivalent to directly storing TOF data from the hardware (the ISP of the processor) into the security area, thus reducing the possibility of being attacked.
  • Because the first storage information is transmitted in the REE, in order to ensure its security, the first storage information may optionally be encrypted information, that is, ciphertext.
  • an example of encrypted information is a file descriptor (File Descriptor, FD), which is used to describe the storage location and the reading method.
  • FD1 denotes the ciphertext of the first storage information.
  • Camera HAL3 obtains the calibration data pre-configured in REE, and transmits them to TEE together with FD1.
  • Camera HAL3 can obtain part of the calibration data from the REE storage unit, and another part of the calibration data from the TOF camera.
  • the storage information of TOF data is transmitted to TEE, which lays the foundation for processing TOF data in TEE.
  • ISP-Lite stores the received and processed TOF data into TEE, which ensures the security of TOF data.
  • the second stored information may not be encrypted.
  • the face recognition module performs face comparison on the TOF image by calling the pre-configured face template to obtain the comparison result, and uses the TOF image for anti-counterfeiting identification to obtain the anti-counterfeiting result.
  • the recognition result can be determined according to the comparison result and the anti-counterfeit result. For example, if the comparison result is the comparison pass, and the anti-counterfeit result is the anti-counterfeit pass, then the recognition result is pass.
  • the TOF image includes a depth image and an infrared image
  • a higher comparison threshold can be used when comparing the infrared image with the face template, so the accuracy is higher, which reduces the possibility of look-alike faces being falsely accepted.
  • Because the depth image and the infrared image can be used for anti-counterfeiting identification at the same time, anti-counterfeiting performance is high (that is, strong protection against planar attacks and head-mask attacks).
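  A minimal sketch of combining the two checks follows. The embedding comparison, the threshold values, and the flat-depth heuristic are all assumptions; the patent does not specify the comparison or anti-counterfeiting algorithms.

```python
import numpy as np

COMPARISON_THRESHOLD = 0.85  # assumed; a higher threshold reduces look-alike spoofing
MIN_DEPTH_STD = 20.0         # assumed; a near-flat depth map suggests a planar attack

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recognize(ir_embedding: np.ndarray,
              template_embedding: np.ndarray,
              depth_face_region: np.ndarray) -> bool:
    """Recognition passes only if both the face comparison (infrared image vs.
    face template) and the anti-counterfeiting check (depth image) pass."""
    comparison_passed = cosine_similarity(ir_embedding, template_embedding) >= COMPARISON_THRESHOLD
    # A printed photo or a screen replay yields almost no depth variation.
    anti_counterfeit_passed = float(np.std(depth_face_region)) >= MIN_DEPTH_STD
    return comparison_passed and anti_counterfeit_passed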
  • Take the lock screen application at the application layer as an example: after the lock screen application initiates an unlock request, it receives the recognition result, and if the result indicates that recognition passed, it executes the unlocking task.
  • the facial recognition method described in this embodiment uses TOF data for facial recognition, so it has high security. Furthermore, the facial recognition based on TOF data is realized in TEE, so the security of TOF data, data processing process and recognition process can be guaranteed, thereby further improving the security of facial recognition.
  • the transmission of data storage information, recognition results, and calibration data between REE and TEE is realized, which has better compatibility.
  • the calibration data belongs to the data of the camera domain, not the data of the face domain, in some implementations, the calibration data can also be transferred from the Camera service to the Face CA via the Face service.
  • The TOF camera collects TOF data in bursts: it exposes continuously multiple times to collect multiple sets of TOF data, then pauses for a first duration, then exposes continuously again, and so on.
  • The multiple sets of TOF data collected in one continuous burst are called one frame of TOF data, and the multiple sets collected again after the first-duration interval are called another frame of TOF data.
  • Accordingly, ISP-Lite sequentially stores each frame of TOF data in the first secure buffer storage unit at the first-duration interval, and sequentially transmits the storage information of each frame of TOF data to the Face CA.
  • Face CA also sequentially transmits the storage information of each frame of TOF data to Face TA at the first time interval. Therefore, Face TA also sequentially receives the storage information of each frame of TOF data at the first time interval.
  • S406 in FIG. 4 can be storing any frame of TOF data
  • S407-S411 can be the FD transmitting any frame of TOF data
  • S413-S419 is the processing flow for this frame of TOF data.
  • S420-S422 may be a processing flow for the recognition result of one frame of TOF data, or a processing flow for recognition results of multiple frames of TOF data.
  • TOF cameras collect TOF data by projecting infrared light, it is necessary to pay attention to the safety of human eyes during the collection process. And because the quality of the TOF image is related to the accuracy of the recognition results, and the exposure parameters of the TOF camera are directly related to the quality of the TOF image, it is necessary to tune the exposure parameters of the TOF camera.
  • the process shown in FIG. 4 is further improved, and the data processing module can generate different parameters for adjusting the TOF camera according to the type of the received TOF data.
  • Figure 5 shows the process of generating and feeding back automatic exposure (Automatic Exposure, AE) results, and controlling the camera based on the AE results.
  • the TOF camera divides the collected TOF data into safety indication frames and facial recognition frames.
  • the safety indication frame carries the human eye safety flag bit, which is used to indicate whether the infrared light emitted by the TOF camera is safe for human eyes.
  • the face recognition frame is a TOF data frame used for face recognition.
  • the type of the TOF data frame is indicated by at least one item of a numerical value, a character, or a character string in the TOF data frame.
  • the TOF data of the first frame to the fourth frame are safety indication frames
  • the TOF data of subsequent frames are facial recognition frames. The human eye safety flag bit is included in the safety indication frames.
  • the AE results are used to adjust the exposure parameters of the TOF camera.
  • the AE result includes but is not limited to the exposure time and the physical gain of the TOF sensor.
  • One way to generate the AE result is: extract the facial area from the TOF raw data (Raw Data), that is, from the original image, calculate the brightness value of the facial area, and compare it with a pre-configured target brightness value to obtain the exposure time and the physical gain of the TOF sensor.
  • One AE result can be generated for each frame of TOF data, or one AE result can be generated for multiple frames of TOF data.
  • to save TEE resources, it is possible to transmit only the latest AE result rather than every AE result.
  • Camera HAL3 can send AE results to the TOF sensor controller through the TOF camera driver, and the TOF sensor controller controls the TOF sensor to collect TOF data according to the AE results.
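As an illustration of the AE computation described above, the following sketch compares the facial-region brightness of a raw image with a target brightness to derive a new exposure time and sensor gain. The update rule, the clamping bound, and all names are assumptions made for illustration; the patent only specifies the brightness comparison itself.

```python
import numpy as np

# Hypothetical sketch of the AE-result computation. The proportional update
# and the exposure upper bound are assumptions, not the patent's algorithm.
def compute_ae_result(raw_image, face_box, target_brightness=128.0,
                      exposure_us=1000.0, sensor_gain=1.0):
    x0, y0, x1, y1 = face_box                       # facial area in the raw image
    face = raw_image[y0:y1, x0:x1].astype(np.float64)
    brightness = face.mean()                        # brightness value of the facial area
    ratio = target_brightness / max(brightness, 1e-6)
    # Adjust exposure time first; fold any remainder into the physical gain.
    new_exposure = min(exposure_us * ratio, 4000.0)  # assumed upper bound
    new_gain = sensor_gain * (exposure_us * ratio) / new_exposure
    return {"exposure_us": new_exposure, "gain": new_gain}

frame = np.full((240, 180), 64, dtype=np.uint8)     # uniformly dark test frame
ae = compute_ae_result(frame, (40, 60, 140, 180))
```

With the dark test frame above (brightness 64 against a target of 128), the sketch doubles the exposure time while the gain stays at 1.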
  • Figure 6 shows the process of generating and feeding back safety signs and controlling the camera according to the safety signs. Here, only the differences between Figure 6 and Figure 4 are explained:
  • the safety mark is used to indicate whether the TOF light source is safe for human eyes.
  • the generation method of the safety identifier is: after receiving a TOF data frame carrying the human eye safety flag bit (that is, a safety indication frame), the human eye safety flag bit is extracted, whether the human eye is safe is judged according to the extracted flag bit, and the safety identifier is generated accordingly.
  • if the human eye safety flag bit indicates safety, a safety identifier indicating safety is obtained. If the human eye safety flag bit indicates unsafe, a safety identifier indicating unsafe is obtained.
  • in some cases, the eye safety flag bits carried by the safety indication frames are the same; for example, in the first to the fourth safety indication frames, the eye safety flag bits all indicate that the human eye is safe, or all indicate that the human eye is unsafe. In this case, the multiple obtained safety identifiers are the same; therefore, only one safety identifier may be transmitted to save TEE resources.
  • in other cases, the eye safety flag bits in the multiple safety indication frames differ. In that case, only the last safety identifier, that is, the safety identifier determined according to the last received safety indication frame, may be transmitted, so as to save TEE resources.
  • adjusting the TOF camera includes but is not limited to: if the safety identifier indicates that the human eye is unsafe, turning off the TOF camera or reducing the emission intensity of the TOF light source; if the safety identifier indicates that the human eye is safe, continuing to collect TOF data frames marked as TOF data frames for face recognition (i.e., face recognition frames).
  • Camera HAL3 can send a shutdown command to the TOF sensor controller through the TOF camera driver to turn off the TOF camera.
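The reduction of safety indication frames to a single safety identifier, as described above, can be sketched as follows. The frame layout (a dict with a "type" field and an "eye_safe" flag bit) is an assumption made for illustration only.

```python
# Hedged sketch: derive one safety identifier from a batch of frames.
def derive_safety_identifier(frames):
    flags = [f["eye_safe"] for f in frames if f["type"] == "safety_indication"]
    if not flags:
        return None
    if all(flags) or not any(flags):
        # All flag bits agree: transmitting one identifier suffices,
        # which saves TEE resources.
        return flags[0]
    # Flag bits differ: transmit only the identifier derived from the
    # last received safety indication frame.
    return flags[-1]

frames = [{"type": "safety_indication", "eye_safe": True} for _ in range(3)]
frames.append({"type": "safety_indication", "eye_safe": False})
identifier = derive_safety_identifier(frames)   # last flag wins when they differ
```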
  • the TOF Raw Data is processed into a TOF image in the TEE, and the TOF image is used for facial recognition, so the facial recognition has higher security.
  • the transmission of TOF data storage information to the TEE is realized, which not only lays the foundation for the processing in the TEE, but also realizes compatibility with the Android system.
  • the above-mentioned framework and process described in this embodiment can also implement facial recognition using TOF data, thereby improving security.
  • the human eye safety protection and the adjustment of the exposure parameters can be realized, so that the electronic equipment can have better performance on the basis of improving the security of facial recognition.
  • the Android operating system is taken as an example for illustration in the above FIGS. 4-6
  • the facial recognition method described in the embodiment of the present application is not limited to the Android operating system.
  • the operation of the TEE is not limited to the Android operating system, so the functions of each application, module, and unit in the TEE can also be implemented in other operating systems.
  • the operation of REE is not limited to the Android operating system. Therefore, the transmission path for transmitting TOF data and calibration data to TEE, the transmission path for task requests, and the feedback path for recognition results can all be adapted to different operating systems. It can realize the transmission of TOF data and calibration data from REE to TEE, and the task request can trigger the collection of TOF data, and the recognition results can be used to perform tasks.
  • Improving image quality and increasing execution speed are a pair of contradictions: to improve image quality, the image must be iterated and denoised until it converges, and the more processing steps applied, the slower the processing speed.
  • TOF imaging has the following characteristics:
  • the TOF data collected by the TOF camera is less affected by ambient light, so in most scenarios, the quality of the first frame of TOF data collected by the TOF camera (such as the first frame of facial recognition frame) can meet the requirements of recognition accuracy. Therefore, in most scenarios, the first frame of TOF data can be used to obtain more accurate facial recognition results.
  • the face that is trying to unlock is the face that has been recorded in the electronic device, that is to say, the face that is trying to unlock can unlock the screen.
  • the first frame of TOF data can achieve unlocking.
  • if the collected TOF data achieves convergence at the third or fourth frame (that is, even if the exposure parameters are adjusted later, the TOF data no longer changes), then the third or fourth frame of TOF data collected can achieve unlocking.
  • Figure 7 is a data processing method disclosed in the embodiment of the present application. Compared with Figure 4, Figure 5 or Figure 6, the improvement lies in the applications and modules in the TEE; the functions and data transmission methods of each module in the REE are the same as those in Figure 4, Figure 5 or Figure 6 and will not be repeated here.
  • the process shown in Figure 7 starts after Face TA receives FD1 and calibration data, including the following steps:
  • After receiving the FD of the first frame of TOF data (abbreviated as FD1), Face TA transmits FD1 to the data processing module.
  • the first frame of TOF data is the first frame collected by the TOF camera after Camera HAL3, in response to the image request, controls the TOF camera driver to drive the TOF camera.
  • the TOF data frames collected by the TOF camera are respectively identified as safety instruction frames and face recognition frames.
  • the safety indication frame carries the human eye safety flag bit, which is used to indicate whether the infrared light emitted by the TOF camera is safe for human eyes.
  • the facial recognition frame is the TOF data frame used for facial recognition. In this case, the first frame of TOF data in this step is the first frame of facial recognition frame.
  • the data processing module uses FD1 to read the first frame of TOF data from the first security buffer storage unit, and reads the calibration data from the calibration data storage unit.
  • the data processing module uses the first frame of TOF data and the calibration data to generate a TOF image, and generate an automatic exposure (Automatic Exposure, AE) result.
  • TOF images include depth images and infrared images.
  • the AE results are used by the TOF camera to adjust the exposure parameters for collecting TOF data.
  • AE results include but not limited to exposure time and physical gain of TOF sensor.
  • One way to generate AE results is to extract the facial area in the TOF original image (Raw Data), calculate the brightness value of the facial area, and compare it with the pre-configured target brightness value to obtain the exposure time and the physical gain of the TOF sensor.
  • the TOF image obtained by processing the first frame of TOF data is referred to as the first frame of TOF image.
  • the AE result obtained from the first frame of TOF data is called the first AE result.
  • the data processing module stores the first frame of TOF image in the second safety buffer storage unit.
  • the data processing module transmits the storage information of the first frame of TOF image and the first AE result to the Face TA.
  • Face TA transmits the first AE result to the TOF camera.
  • combined with the examples shown in Figure 4, Figure 5 or Figure 6, the path by which the first AE result is transmitted to the TOF camera is: Face TA, Face CA, Camera service, Camera HAL3, and then the TOF sensor controller of the TOF camera; the TOF sensor controller then uses the first AE result to adjust parameters of the TOF camera, including but not limited to the exposure time.
  • the earliest TOF data frame to which the first AE result can be applied depends on the duration the processor takes to process TOF data. Taking a 30 ms acquisition interval of the TOF camera as an example, and assuming the total time spent by the processor to generate the first TOF image and the first AE result is also 30 ms, the first AE result can be applied at the earliest to the third frame of TOF data collected by the TOF camera.
  • Generating and transmitting AE results is an optional step.
  • the purpose is to improve the quality of the subsequent TOF data collected, thereby obtaining TOF images with better quality and more accurate recognition results.
  • the first frame of TOF data can be acquired using pre-configured fixed exposure parameters.
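The frame-timing claim above (with 30 ms intervals, the first AE result applies to the third frame at the earliest) can be checked with a small calculation. The model below is a simplification that assumes frame capture and processing run back-to-back in a pipeline.

```python
# Simplified timing model for the earliest frame an AE result can influence.
def earliest_ae_frame(acq_interval_ms, processing_ms):
    # By the time the AE result is ready, this many frames have finished
    # capturing (frame 1 plus those captured during processing)...
    frames_done = 1 + processing_ms // acq_interval_ms
    # ...so the next frame is the earliest one that can use the AE result.
    return frames_done + 1

earliest = earliest_ae_frame(30, 30)   # the scenario described in the text
```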
  • Face TA transmits the storage information of the first frame of TOF image to the face recognition module, so as to trigger the face recognition module to perform face recognition.
  • the facial recognition module uses the storage information of the first frame of TOF image to read the first frame of TOF image from the second security buffer storage unit.
  • the facial recognition module uses the first frame of TOF image to perform facial recognition, and obtains a first recognition result.
  • the face recognition module calls the face template to perform face comparison on the first frame of TOF images.
  • face recognition includes face comparison and anti-counterfeiting recognition. If both the face comparison and the anti-counterfeiting recognition pass, the recognition result of the face recognition indicates pass.
  • the facial recognition module transmits the first recognition result to Face TA.
  • Face TA judges whether the first recognition result indicates that the recognition is passed, if yes, execute S712, if not, execute S713.
  • the Face TA transmits the first recognition result to the Face CA.
  • Face CA transmits the first recognition result to the initiator of the task, such as unlocking the application.
  • Face TA judges whether the end condition is satisfied, if yes, execute S712, if not, execute S714.
  • the end condition can be set in advance.
  • the end condition includes: the duration of executing the task reaches the second duration threshold.
  • An example of the second duration threshold is 5 seconds.
  • the end condition is optional; whether the end condition is satisfied may not be judged at all, and S714 may be executed directly whenever the first recognition result indicates that the recognition fails.
  • the purpose of setting the end condition is to avoid unnecessary time consumption during task execution: if facial recognition still fails after a certain period of time, it can basically be determined that the facial recognition result will be failure, for example because the face attempting to unlock is not pre-stored in the electronic device. In that case there is no need to continue facial recognition; instead, the facial recognition result should be fed back to the task as soon as possible, to reduce the delay of task execution and ensure a better user experience.
  • Face TA receives a frame of TOF data from the TOF camera again.
  • take as an example the case where generating the first frame of TOF image and the first AE result also takes 30 ms.
  • in this example, the frame of TOF data received again from the TOF camera is the seventh frame collected by the TOF camera.
  • the seventh frame of TOF data is a frame of TOF data collected after the exposure parameters were adjusted using the first AE result.
  • the subsequent process executed by Face TA can refer to S701-S714; that is, the first frame of TOF data as the processing object in S701-S714 is replaced with the frame of TOF data received again, such as the seventh frame of TOF data.
  • the processing result of the first frame of TOF data is adaptively replaced with the processing result of the frame of TOF data received again (for example, the first frame of TOF image is replaced with the second frame of TOF image); details are not repeated.
  • the processing flow of image data in the prior art includes a step of iteratively processing the image data to obtain converged image data, the purpose of which is to obtain images of a quality that meets the requirements of recognition accuracy.
  • take RGB data as an example: because RGB data is easily affected by ambient light, the quality of the first frame of RGB data collected usually cannot meet the requirements of recognition accuracy. Therefore, the RGB data must be iteratively processed to obtain converged RGB data, and the converged RGB data is then used for facial recognition to ensure the accuracy of the recognition result.
  • depending on the iterative algorithm, RGB data converges at about 10 frames at the faster end, and at 30-40 frames at the slower end. Therefore, the consensus of those skilled in the art is that the quality of the first frame of RGB data is likely to be poor due to the influence of ambient light, and directly using it for face recognition is meaningless.
  • in most scenarios the first frame of TOF data has already converged, so the first frame of TOF data has a high probability of yielding an accurate recognition result. Therefore, after the electronic device collects the first frame of TOF data, the first frame of TOF image generated from the first frame of TOF data is used to obtain the facial recognition result, making a smaller processing delay possible. That is to say, based on the above principle of TOF imaging, the step of iterating image data until convergence is omitted in exchange for an increase in image data processing speed.
  • the inventor found in research that the interval at which a TOF camera collects TOF data and the duration for the processor to generate a TOF image are each usually 30 ms, and the duration of face recognition is usually 150 ms. Therefore, the duration of obtaining the recognition result using TOF data is: 30 ms + 150 ms = 180 ms. Other conditions being equal, this value shows that the method described in this embodiment lets the user perceive a noticeably faster unlocking speed, so the user experience can be improved.
  • the TOF camera collects TOF data at intervals of 30ms and transmits it to the processor.
  • after the processor receives the first frame of TOF data, according to the process described in this embodiment, it processes the first frame of TOF data into the first frame of depth image and the first frame of infrared image, and uses the first frame of depth image and the first frame of infrared image to obtain the first recognition result.
  • while the processor is generating the first frame of TOF image and calculating the recognition result, the processor is occupied; although the TOF camera is still transmitting TOF data frames to the processor, the processor can no longer receive the subsequent TOF data frames. That is to say, except for the first frame of TOF data received, the other TOF data frames are discarded.
  • this embodiment does not involve a TOF data iteration process, and therefore fits the above single-processor scenario. That is to say, in a scenario where only one processor performs TOF data processing and face recognition, if recognition processing had to wait for the iteration and convergence of TOF data, the processor would have to process the two serially, which would take too long.
  • the method described in this embodiment shortens the time consumption because the TOF data iteration step is not performed; and because the TOF data converges at the first frame in most cases, facial recognition time is shortened while the recognition result is still ensured to be accurate.
  • the data processing method described in this embodiment, based on the high probability that the first frame of TOF data has converged, ensures that facial recognition can be performed accurately and quickly in most cases; and because TOF data is used for facial recognition, the security is high, so the purpose of safe, accurate and fast facial recognition is achieved.
  • the inventor further found that in outdoor scenes, because natural light also contains light with a wavelength close to that of the infrared light emitted by the TOF light source, the first frame of TOF data collected may be overexposed, and the quality of the first frame of TOF data may not be sufficient to obtain an accurate recognition result. As in the previous example, if the first frame cannot achieve unlocking, the required unlocking delay is at least 360 ms, and there is room to further reduce this delay.
  • S801-S806 are the same as S701-S706, see FIG. 8 for details, and will not be repeated here.
  • S807-S812 are the processing flow of the second frame of TOF data after receiving the storage information of the second frame of TOF data, which is the same as the processing flow of the processing of the first frame of TOF data. Please refer to FIG. 8 for details, and details will not be repeated here.
  • the second frame of TOF data may be a frame collected by the TOF camera using exposure parameters adjusted based on the first AE result, or it may use the same exposure parameters as the first frame of TOF data. Which of these applies depends on the interval at which the TOF camera collects TOF data frames (such as the aforementioned first time interval) and the duration from when the processor receives a TOF data frame to when it feeds back the AE result.
  • the first AE result cannot be used for the acquisition of the second frame of TOF data.
  • S813-S818 are the processing flow of the third frame of TOF data after receiving the storage information of the third frame of TOF data, which is the same as the processing flow of the processing of the first frame of TOF data. Please refer to FIG. 8 for details, and details will not be repeated here.
  • Face TA invokes the facial recognition module to perform facial recognition.
  • Face TA transmits the storage information of the third frame of TOF image to the facial recognition module.
  • after the facial recognition module reads the third frame of TOF image using the storage information of the third frame of TOF image, it uses the third frame of TOF image to perform face recognition and obtains the recognition result of the third frame of TOF image.
  • the facial recognition module reads the third frame of TOF image from the second safety buffer storage unit, which will not be described in detail in FIG. 9 .
  • the recognition result of the third frame image is referred to as the third recognition result.
  • the face recognition module transmits the third recognition result to Face TA.
  • Face TA judges whether the third recognition result indicates that the recognition is passed, if yes, execute S905, if not, execute S906.
  • the Face TA transmits the third identification result to the Face CA.
  • because the third frame of TOF data is most likely data collected after the exposure parameters were adjusted using the AE result, its quality is most likely the best. Therefore, using the third frame of TOF image for face recognition first can further reduce time consumption.
  • Face TA transmits the storage information of the first frame of TOF image and the storage information of the second frame of TOF image to the facial recognition module.
  • the facial recognition module obtains the first recognition result and the second recognition result.
  • the first recognition result is the face recognition result of the first frame of TOF image.
  • the second recognition result is the result of face recognition performed on the second frame of TOF images.
  • the order of performing facial recognition using the first frame of TOF images and the second frame of TOF images is not limited.
  • the face recognition module transmits the first recognition result and the second recognition result to Face TA.
  • the Face TA transmits the passed recognition result to the TOF CA.
  • Face TA judges whether the end condition is satisfied, if yes, execute S911, and if no, execute the process shown in FIG. 8 again.
  • the duration threshold in the end condition may be shorter than the duration threshold in the above embodiment, because in this embodiment, multiple frames of TOF data have been collected and processed.
  • an example of the duration threshold is 3 seconds.
  • regarding the ordering of the received TOF data frames: for example, when S801 is executed again, the TOF camera may be collecting the ninth frame of TOF data, but in this execution of the process shown in FIG. 8, it is the first frame of TOF data the processor receives. Therefore, the "first frame of TOF data" mentioned in S801 refers to the first frame of TOF data in this execution of the process shown in Figure 8, rather than the first frame of TOF data actually collected by the TOF camera.
  • the Face TA transmits the identification result that fails to be identified to the TOF CA.
  • S819 can also be implemented in other ways, such as sequentially using the first frame of TOF image, the second frame of TOF image, and the third frame of TOF image for facial recognition; or using only the third frame of TOF image for recognition and, if the recognition result indicates failure, re-executing the process shown in Figure 8, so as to save memory and other resources.
  • Other implementation modes are not listed here one by one.
  • the method described in this embodiment is based on the above-mentioned TOF imaging principle (2), namely that TOF data converges within 2-3 frames: 3 frames of TOF data are processed continuously to obtain the recognition result, thereby reducing the processing delay.
  • the TOF camera collects TOF data at an interval of 30 ms and transmits it to the processor. After receiving the first frame of TOF data, the processor processes it to generate the first frame of TOF image and the first AE result.
  • when the electronic device is in an outdoor environment, there is a high probability that unlocking will not succeed based on the first frame of TOF data, so after the processor generates the first TOF image and transmits the first AE result, it does not perform face recognition, but continues to receive and process TOF data frames. Because the time for generating a TOF image and an AE result is 30 ms, equal to the time for the TOF camera to collect and transmit a frame of TOF data, the processor can continue to receive and process the second frame of TOF data after processing the first frame, and the third frame after processing the second.
  • the first AE result generated by the first frame of TOF data can be applied to the third frame of TOF data at the earliest, so the third frame of TOF data will most likely be successfully unlocked.
  • the shortest time it takes to obtain the recognition result through the third frame of TOF data is: 30 ms × 3 + 150 ms = 240 ms.
  • compared with the previous example, unlocking at the seventh frame is replaced with unlocking at the third frame, so the speed is increased and the delay is reduced.
  • compared with unlocking with the first frame, the duration of two additional rounds of TOF data processing (generating TOF images and AE results) adds 60 ms. Therefore, the data processing method described in this embodiment sacrifices some first-frame unlocking speed in exchange for a higher unlocking speed in the outdoor strong-light scene.
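The latency arithmetic above can be checked in a few lines, with the values taken directly from the text: 30 ms per TOF frame (collection or processing) and 150 ms for facial recognition.

```python
# Latency comparison between first-frame unlocking and third-frame unlocking.
FRAME_MS = 30    # per-frame collection/processing time stated in the text
RECOG_MS = 150   # facial recognition time stated in the text

first_frame_unlock = FRAME_MS + RECOG_MS        # indoor case: unlock on frame 1
third_frame_unlock = FRAME_MS * 3 + RECOG_MS    # outdoor case: unlock on frame 3
extra_cost = third_frame_unlock - first_frame_unlock
```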
  • in some implementations, the second frame of TOF data and the AE result of the second frame of TOF data may not be processed, that is, S807-S812 are not executed, so as to save resources.
  • the second frame of TOF image is no longer involved in obtaining the recognition result.
  • the second frame of TOF data may only be received without processing.
  • the second frame of TOF data may also be discarded without being received.
  • the number of continuously processed TOF data frames is related to the time taken to generate and feed back the AE result, and to the first time interval at which the TOF camera collects data frames.
  • a data processing method includes the following steps:
  • Face CA obtains the intensity value of ambient light.
  • Face CA can also send a light intensity request to obtain the intensity value of the ambient light. It can be understood that the light intensity request can be sent through the application framework layer and the corresponding module of the hardware abstraction layer to the corresponding driver of the kernel layer, and the corresponding driver of the kernel layer drives the light sensor to sense the intensity value of the ambient light and feed it back to Face CA.
  • Face CA transmits the intensity value of ambient light to Face TA.
  • Face TA judges whether the intensity value of the ambient light is greater than the preset first intensity threshold; if yes, executes the data processing flow shown in Figure 8, and if not, executes the data processing flow shown in Figure 7.
  • the environment in which the electronic device is located is judged by the light intensity value, and the facial recognition result is obtained using the process more suitable for that environment, balancing the accuracy and speed of the recognition result.
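The routing decision above can be sketched minimally as follows. The threshold value is a placeholder (the text only speaks of a "preset first intensity threshold"), and the mapping of strong light to the multi-frame flow follows the outdoor-overexposure reasoning in this section.

```python
FIRST_INTENSITY_THRESHOLD = 500   # assumed lux value, illustration only

def choose_flow(ambient_lux):
    # Strong ambient light: the first frame is likely overexposed, so use
    # the multi-frame flow; otherwise the first frame likely converged.
    if ambient_lux > FIRST_INTENSITY_THRESHOLD:
        return "multi_frame_flow"
    return "first_frame_flow"

outdoor_flow = choose_flow(20000)   # e.g. direct sunlight
indoor_flow = choose_flow(150)      # e.g. office lighting
```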
  • the inventor further found that, whether in an outdoor or indoor scene, when the ambient light is extremely strong and the sensor of the TOF camera faces a strong light source, the quality of the first frame of TOF data cannot support an accurate facial recognition result. Therefore, the TOF data still needs to converge, which increases the duration of facial recognition.
  • the user is located outdoors with strong sunlight, and the user faces away from the sun.
  • the user points the front camera of the mobile phone at the face expecting face unlocking (assuming that the user's face has been stored in the mobile phone as a face template), and it takes much more time to unlock than in an indoor environment.
  • the embodiment of the present application provides a data acquisition method, the purpose of which is to obtain a higher-quality first frame of TOF data, so as to improve the accuracy of the first-frame recognition result on the premise of using TOF data for facial recognition to obtain higher security, and thereby achieve the purpose of completing facial recognition quickly.
  • Figure 10 is a data acquisition method disclosed in the embodiment of the present application, which is executed by the above-mentioned electronic device.
  • the process described in Figure 10 starts when Camera HAL3, through the TOF camera driver, drives the TOF camera to collect data, and includes the following steps:
  • Camera HAL3 transmits an image output command to the TOF camera in response to the image request.
  • Camera HAL3 can transmit image output instructions to the TOF camera through the TOF camera driver at the kernel layer.
  • the TOF camera, in response to the image output command, collects the first frame of TOF data, including depth data, projection off data, and infrared data.
  • Depth data is data used to generate a depth image.
  • the projection off data is the TOF data collected by the TOF camera when the TOF light source is turned off.
  • Infrared data is data used to generate an infrared image.
  • the depth data and infrared data are TOF data collected by the TOF camera when the TOF light source is turned on.
  • the TOF sensor controller of the TOF camera transmits the image output command to the TOF sensor; the TOF sensor, in response to the image output command, collects depth data and transmits a turn-off command to the TOF light source controller.
  • the TOF light source controller turns off the TOF light source in response to the shutdown instruction, and the TOF sensor collects projection shutdown data.
  • the TOF sensor transmits an opening instruction to the TOF light source controller, and the TOF light source controller responds to the opening instruction to turn on the TOF light source, and the TOF sensor collects infrared data.
  • an example of the first frame of TOF data is: 4 sets of depth data, a set of projection off data, and a set of infrared data.
  • a set of data can be understood as a two-dimensional array.
  • the TOF sensor first collects each set of depth data sequentially with the first exposure time, collects infrared data with the second exposure time, and collects projection off data with the first exposure time or the second exposure time.
  • the first exposure time and the second exposure time can be carried by Camera HAL3 in the drawing command or transmitted to the TOF camera separately, or can be transmitted to the TOF camera by other modules, or can be obtained by the TOF camera from the storage module in advance.
  • the timing for the TOF sensor to send a turn-off command to the TOF light source controller is related to the first exposure duration
  • the timing for the TOF sensor to send a turn-on command to the TOF light source controller is related to the exposure time for projecting off data
  • the TOF sensor may mark the TOF data collected within the first time range as projection off data.
  • the first time range can be determined according to the time interval between the time when the TOF sensor sends out the closing instruction and the time when it sends out the opening instruction.
  • the TOF sensor may mark TOF data collected before the first time range as depth data, and mark TOF data collected after the first time range as infrared data.
  • the order in which the TOF camera collects the depth data, the projection off data, and the infrared data is not limited.
  • the data may be collected in the order of projection off data, infrared data, and depth data, or may be collected in the order of infrared data, depth data, and projection off data.
  • the time when the TOF sensor transmits the closing command and the opening command to the TOF light source controller is adjusted according to the sequence.
  • having the TOF sensor control turning the TOF light source on or off gives a higher execution speed.
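The first-frame composition described above (four sets of depth data, one set of projection-off data, one set of infrared data, each set a two-dimensional array) can be sketched as follows. Exposure durations and the on/off sequencing are modeled as plain function calls; in the real device these are issued by the TOF sensor and light source controllers in hardware.

```python
import numpy as np

# Illustrative model of first-frame collection order and light-source control.
def collect_first_frame(sensor_read, light_source, t_exp1_us, t_exp2_us):
    frame = []
    for _ in range(4):                                   # depth: light source on
        frame.append(("depth", sensor_read(t_exp1_us)))
    light_source.off()                                   # sensor issues turn-off command
    frame.append(("projection_off", sensor_read(t_exp1_us)))
    light_source.on()                                    # sensor issues turn-on command
    frame.append(("infrared", sensor_read(t_exp2_us)))
    return frame

class FakeLight:
    def __init__(self): self.state = True
    def on(self): self.state = True
    def off(self): self.state = False

light = FakeLight()
frame = collect_first_frame(lambda t: np.zeros((4, 4), np.uint16), light, 500, 800)
kinds = [k for k, _ in frame]
```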
  • the data processing module judges whether the infrared data in the first frame of TOF data contains the target data block. If yes, execute S1004. If not, execute S1005.
  • the specific way for the data processing module to obtain the first frame of TOF data is as shown in FIG. 4 , FIG. 5 or FIG. 6 , and will not be repeated here.
  • the infrared data is a two-dimensional array. It can be understood that the two-dimensional array includes values arranged in rows and columns; each value can be regarded as a data point.
  • the target data block is a data block that meets the following preset condition: the number of data points whose values are greater than the first threshold is greater than the second threshold.
  • the infrared data collected by the TOF sensor is a two-dimensional array, which is processed by ISP-Lite into infrared raw data, that is, an infrared raw image. It can therefore be understood that the target data block is the target area in the infrared raw image, and each value in the target data block is the brightness value of the corresponding pixel in the target area. For the infrared raw image, the target area is therefore: an area where the number of pixels with brightness values greater than the first threshold is greater than the second threshold.
  • when the ambient light is extremely strong and the TOF camera faces a strong light source, the TOF data collected by the TOF camera will usually be overexposed. The brightness of most pixel values in the overexposed area of the infrared raw image generated from that TOF data is therefore too high, which affects recognition; the condition on the target area (data block) in this step is set based on this principle.
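The overexposure test above can be sketched as a scan over the infrared raw image. The block size and both threshold values below are illustrative assumptions; the patent only states the condition itself (number of points above a first threshold exceeding a second threshold), not how the image is partitioned into candidate blocks.

```python
# Minimal sketch of the target-data-block (overexposed area) check.

def contains_target_block(ir, first_threshold=900, second_threshold=50,
                          block=16):
    """ir: 2-D list of brightness values. Scans non-overlapping
    block x block regions; returns True if any region contains more than
    `second_threshold` points brighter than `first_threshold`."""
    rows, cols = len(ir), len(ir[0])
    for r0 in range(0, rows, block):
        for c0 in range(0, cols, block):
            count = sum(1
                        for r in range(r0, min(r0 + block, rows))
                        for c in range(c0, min(c0 + block, cols))
                        if ir[r][c] > first_threshold)
            if count > second_threshold:
                return True
    return False
```

In the method of FIG. 10, a True result would route the frame to the subtraction step (S1004) and a False result to direct image generation (S1005).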
  • S1005. Process the first frame of TOF data into a TOF image.
  • processing the first frame of TOF data into a TOF image refers to processing the depth data in the first frame of TOF data into a depth image, and processing the infrared data in the first frame of TOF data into an infrared image.
  • the influence of ambient light on the infrared data is removed by subtracting the projection off data from the infrared data in the first frame of TOF data.
  • the quality of the first frame of TOF data is improved, and a higher-quality TOF image is further obtained, which is conducive to improving the accuracy and speed of facial recognition.
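The correction step can be sketched point by point: the projection-off data was captured with the TOF light source off, so it measures ambient light alone, and subtracting it from the infrared data leaves the contribution of the light source. Clamping negative differences at zero is an assumption of this sketch, not something stated in the text.

```python
# Sketch of removing ambient light by subtracting projection-off data
# from infrared data, element by element over the two-dimensional arrays.

def remove_ambient(infrared, proj_off):
    return [[max(iv - pv, 0) for iv, pv in zip(irow, prow)]
            for irow, prow in zip(infrared, proj_off)]

corrected = remove_ambient([[500, 1023], [300, 800]],
                           [[100, 1023], [50, 100]])
# each point: infrared value minus ambient value, floored at 0
```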
  • Fig. 11 shows another data acquisition method disclosed in an embodiment of the present application, performed by Camera HAL3 shown in Fig. 3, and comprising the following steps:
  • the interval duration is the duration between the time when TOF data was last collected (referred to as the first moment) and the time when the first frame of TOF data is to be collected (referred to as the second moment).
  • Camera HAL3 drives the TOF camera to collect TOF data through the TOF camera driver at the kernel layer
  • Camera HAL3 can select the second moment according to the time when the image output command is sent to the TOF camera driver.
  • for example, the moment at which the image-output command is sent to the TOF camera driver, plus a certain delay, may be used as the second moment.
  • Camera HAL3 can directly use the current moment of the system as the second moment.
  • the first frame of TOF data refers to, as shown in FIG. 4, FIG. 5, or FIG. 6, the first frame of TOF data that the TOF camera is triggered to collect by a task request sent by an application.
  • the last collection may be the last frame of TOF data collected by the TOF camera, triggered by the previous task request sent by the application.
  • the exposure parameters used for the last collection of TOF data are likely to be adjusted exposure parameters, and because the interval between the first moment and the second moment is within the preset range, the device is likely to be in the same environment as at the last collection. The exposure parameters used for the last collection of TOF data are therefore likely to be applicable to the current environment, which helps obtain higher-quality TOF data and thus higher-quality TOF images.
  • otherwise, the environment has likely changed, so the exposure parameters used for the last collection of TOF data are no longer suitable for the current environment; reusing them is meaningless, so the pre-configured exposure parameters are used.
  • the data acquisition method described in this embodiment makes full use of the exposure parameters adjusted by the AE adjustment mechanism to improve the quality of the first frame of TOF data, and further obtain a higher-quality TOF image, which is conducive to improving the accuracy and speed of face recognition.
  • using the exposure parameters of the last collection of TOF data is only one implementation; because the time of the last collection is the closest to the second moment, comparing against the last collected TOF data saves computing resources.
  • the comparison is not limited to the last collected TOF data: any collection before the second moment can be compared with the second moment. The condition met by the first moment can therefore be summarized as: earlier than the second moment, with an interval from the second moment within a preset range.
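The decision above can be sketched as a simple check. The 5-second window and the parameter dictionaries are illustrative assumptions; the text only specifies the rule: if the first moment is earlier than the second moment and their interval is within a preset range, reuse the last collection's exposure parameters, otherwise fall back to pre-configured ones.

```python
# Sketch of choosing exposure parameters for the first frame of TOF data.

PRESET_RANGE_MS = 5000                       # assumed preset range
DEFAULT_PARAMS = {"exposure_ms": 4, "gain": 1.0}  # assumed pre-configured values

def pick_exposure_params(first_moment_ms, second_moment_ms, last_params):
    """Returns the exposure parameters to use at the second moment."""
    if first_moment_ms < second_moment_ms and \
            second_moment_ms - first_moment_ms <= PRESET_RANGE_MS:
        return last_params      # environment likely unchanged: reuse
    return DEFAULT_PARAMS       # interval too long: use pre-configured values

p = pick_exposure_params(1000, 3000, {"exposure_ms": 2, "gain": 1.5})
```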
  • Fig. 12 shows yet another data acquisition method disclosed in an embodiment of the present application, performed by Camera HAL3 shown in Fig. 3, and comprising the following steps:
  • the ambient light sensor on the electronic device may be used to acquire the light intensity of the environment where the electronic device is located.
  • the corresponding relationship between multiple light intensity intervals and exposure time is pre-configured, and the corresponding relationship satisfies the following principles:
  • the light intensity interval includes the indoor light intensity interval and the outdoor light intensity interval.
  • the pre-configured fixed exposure time may not be suitable for outdoor environments, so the exposure time used in indoor environments differs from that used outdoors, and light intensity intervals are needed to reflect this difference.
  • the stronger the light, the shorter the exposure time must be, to avoid the problem of image brightness being too high and clarity being reduced.
  • in the correspondence, L represents the light intensity value and t represents the exposure time.
  • the granularity of the outdoor intensity intervals in the above example can be adjusted; the smaller the granularity, the finer the control of the exposure time, which is more conducive to improving image quality and thus to improving the speed of image processing.
  • Camera HAL3 can transmit the exposure time corresponding to the light intensity to the TOF sensor controller of the TOF camera through the TOF camera driver.
  • the exposure time for collecting TOF data is obtained based on the ambient light intensity, which helps collect, in the first frame, TOF data whose brightness meets the facial recognition requirements, thereby helping to improve the accuracy and speed of facial recognition.
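The pre-configured correspondence can be sketched as an ordered lookup table. The interval boundaries (in lux) and the exposure times below are illustrative assumptions; the only properties taken from the text are that indoor and outdoor intensities fall into different intervals and that larger intensity values map to shorter exposure times.

```python
# Sketch of a light-intensity-interval -> exposure-time correspondence,
# with made-up boundary and exposure values.

INTERVALS = [          # (upper bound of interval, exposure time t in ms)
    (1000,  4.0),      # indoor interval: longest exposure
    (10000, 2.0),      # outdoor, moderate light
    (50000, 1.0),      # outdoor, strong light
]
OUTDOOR_EXTREME_T = 0.5  # beyond the last bound: shortest exposure

def exposure_for(light_intensity):
    """Return the exposure time for the target interval of `light_intensity`."""
    for upper_bound, t in INTERVALS:
        if light_intensity < upper_bound:
            return t
    return OUTDOOR_EXTREME_T
```

Refining the outdoor boundaries (adding more rows) corresponds to the finer granularity discussed above.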
  • the methods shown in FIGS. 7-12 are not limited to the software framework shown in FIG. 3, and can also be applied in the software framework shown in FIG. 13.
  • in FIG. 13, no TEE is configured; that is, the processing of TOF data and facial recognition are all executed in the REE, specifically in Face CA. Therefore, the TOF data itself, rather than its storage information, can be transmitted directly between the modules. That is, the difference between FIG. 13 and FIG. 3 is: after ISP-Lite receives the TOF data collected by the TOF camera, it transmits the TOF data to Face CA via Camera HAL3 and the Camera service; Face CA processes the TOF data into TOF images and uses the TOF images for facial recognition.
  • each step is performed by Face CA: after Face CA receives the first frame of TOF data, it processes the first frame of TOF data into the first frame of TOF image, obtains the first AE result, transmits the first AE result to Camera HAL3, and uses the first frame of TOF image for face recognition to obtain the first recognition result.
  • if the first recognition result indicates that recognition passed, the first recognition result is transmitted to the Face service.
  • if the first recognition result indicates that recognition failed, the seventh frame of TOF data is received again.
  • the data processing module can be replaced by Face CA, and will not be described again.
  • the methods shown in FIGS. 7-12 are not limited to the Android operating system.
  • a module with the same function as the data processing module can implement the steps performed by the above data processing module.
  • a module with the same function as Camera HAL3 can implement the steps performed by the above Camera HAL3.
  • the embodiment of the present application also discloses a chip system, including at least one processor and an interface; the interface is used to receive code instructions and transmit them to the at least one processor, and the at least one processor runs the code instructions to realize at least one of the above-mentioned facial recognition method, data acquisition method, and data processing method.
  • the embodiment of the present application also discloses a computer-readable storage medium on which program code is stored; when the program code is executed by a computer device, at least one of the facial recognition method, the data acquisition method, and the data processing method described in the above-mentioned embodiments is realized.


Abstract

The present application provides a data acquisition method and apparatus. A first frame of TOF data including projection-off data and infrared data is acquired; after it is determined that the infrared data contains a data block in which the number of data points whose values are greater than a first threshold is greater than a second threshold, the TOF data used to generate the first frame of TOF image is obtained from the difference between the infrared data and the projection-off data. A data block in which the number of data points with values greater than the first threshold exceeds the second threshold indicates overexposure, and the projection-off data is TOF data collected by the TOF camera with the TOF light source turned off, so the difference between the infrared data and the projection-off data can correct the overexposure and improve the quality of the first frame of TOF data. Using a higher-quality first frame of TOF data for facial recognition provides not only higher security but also higher accuracy and execution speed.

Description

Data acquisition method and apparatus
This application claims priority to Chinese Patent Application No. 202110925831.X, entitled "Data Acquisition Method and Apparatus", filed with the China National Intellectual Property Administration on August 12, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of electronic information, and in particular to a data acquisition method and apparatus.
Background
Facial recognition is widely used by electronic devices to identify authorized users. For example, for a face-unlock function, whether to unlock the screen is decided according to whether facial recognition passes.
In the above scenario, security requires consideration of malicious attacks; accuracy requires consideration of inaccurate recognition results caused by low image quality; and user experience requires consideration of latency.
The problem to be solved is therefore how to perform facial recognition securely, accurately, and quickly.
Summary
The present application provides a data acquisition method and apparatus, aiming to solve the problem of how to perform facial recognition securely, accurately, and quickly.
To achieve the above object, the present application provides the following technical solutions:
A first aspect of the present application provides a data acquisition method, including: acquiring a first frame of time-of-flight (TOF) data, the first frame of TOF data including projection-off data and infrared data, the projection-off data being TOF data collected by a TOF camera with the TOF light source turned off; determining that the infrared data contains a data block satisfying a preset condition, the preset condition including that the number of data points in the data block whose values are greater than a first threshold is greater than a second threshold; and obtaining, from the difference between the infrared data and the projection-off data, the TOF data used to generate a first frame of TOF image. Because a data block satisfying the preset condition is an overexposed data block, and the projection-off data is TOF data collected by the TOF camera with the TOF light source turned off, subtracting the projection-off data from the infrared data corrects the overexposure and improves the quality of the first frame of TOF data. Using a higher-quality first frame of TOF data for facial recognition provides not only higher security but also higher accuracy and execution speed.
Optionally, acquiring the first frame of TOF data includes: collecting the projection-off data after the TOF camera turns off the TOF light source; and collecting the infrared data after the TOF camera turns on the TOF light source. Collecting the projection-off data and the infrared data by turning the TOF light source of the TOF camera off and on is easy to implement.
Optionally, before collecting the projection-off data, the method further includes: collecting, with the TOF light source of the TOF camera turned on, depth data used to generate a depth image. The depth image and the infrared image generated from the infrared data can be used for anti-spoofing in facial recognition, thereby improving the security of facial recognition.
Optionally, the timing for controlling the TOF camera to turn off the TOF light source is determined according to a first exposure time used for collecting the depth data, which not only ensures that the depth data is collected but also minimizes the latency of collecting the first frame of TOF data.
Optionally, the timing for controlling the TOF camera to turn on the TOF light source is determined according to the exposure time used for collecting the projection-off data, which not only ensures that the projection-off data is collected but also minimizes the latency of collecting the first frame of TOF data.
Optionally, the exposure time for collecting the projection-off data is the first exposure time used for collecting the depth data, or the second exposure time used for collecting the infrared data.
Optionally, the TOF camera turning off the TOF light source includes: controlling, by the TOF sensor of the TOF camera, the TOF light source to turn off; and the TOF camera turning on the TOF light source includes: controlling, by the TOF sensor of the TOF camera, the TOF light source to turn on. Using the TOF sensor to control the turning off and on of the TOF light source, i.e., hardware controlling hardware, achieves a faster execution speed.
Optionally, the method further includes: determining that the infrared data does not contain the data block, and processing the collected first frame of TOF data into a first frame of TOF image. If the infrared data contains no data block satisfying the preset condition, the infrared data is not overexposed and can be processed directly into a TOF image.
Optionally, the method further includes: generating the first frame of TOF image using the TOF data for generating the first frame of TOF image; and performing facial recognition using the first frame of TOF image to obtain a recognition result. Obtaining the facial recognition result from the first frame of TOF image improves the execution speed of facial recognition.
Optionally, the first frame of TOF data includes: a facial recognition frame.
Optionally, before acquiring the first frame of TOF data, the method further includes: determining, according to a safety-indication frame collected before the first frame of TOF data, that human eyes are safe with the TOF light source turned on. The facial recognition frame is processed to obtain the TOF data for generating the first frame of TOF image, and before the processing, the safety-indication frame is used to confirm eye safety; dividing TOF data into types thus ensures eye safety on the premise of using TOF data.
Optionally, the method further includes: determining, according to the safety-indication frame, that human eyes are not safe with the TOF light source turned on, and controlling the TOF camera to turn off, which avoids harm to human eyes from the infrared light of the TOF camera.
Optionally, acquiring the first frame of TOF data is specifically implemented by: storing, through the kernel layer, the first frame of TOF data collected by the TOF camera into a trusted execution environment (TEE). Determining that the infrared data contains a data block satisfying the preset condition and obtaining, from the difference between the infrared data and the projection-off data, the TOF data for generating the first frame of TOF image are specifically implemented by: in the TEE, determining that the infrared data contains a data block satisfying the preset condition, and obtaining, from the difference between the infrared data and the projection-off data, the TOF data for generating the first frame of TOF image. This implementation processes the TOF data in the TEE, thereby improving the security of the TOF data.
A second aspect of the present application provides a data acquisition method, including: collecting TOF data at a first moment using exposure parameters; and collecting a first frame of TOF data using the exposure parameters at a second moment whose interval from the first moment is within a preset range, the first moment being earlier than the second moment. Because the interval between the first moment and the second moment is within the preset range, the preset range can be set so that the second moment is not long after the first, and the external environment when collecting the first frame of TOF data is therefore similar to that when collecting TOF data at the first moment. Using the exposure parameters of the first moment at the second moment increases the likelihood that the exposure parameters match the environment, yielding a higher-quality first frame of TOF data; on the premise of using TOF data for facial recognition to obtain higher security, this helps improve the accuracy and speed of facial recognition.
Optionally, before collecting the second TOF data using the exposure parameters, the method further includes: determining that the moment of the last TOF data collection before the second moment is the first moment; and determining that the interval between the second moment and the first moment is within the preset range. Taking the last collection before the second moment as the first moment ensures the smallest computational cost and thus helps save computing resources.
Optionally, the exposure parameters are exposure parameters adjusted according to an automatic exposure (AE) result. Exposure parameters adjusted according to an AE result match the ambient light better, which helps obtain higher-quality TOF data.
A third aspect of the present application provides a data acquisition method, including: obtaining the light intensity of the environment in which the electronic device is located, and collecting a first frame of TOF data using the exposure time corresponding to that light intensity. Obtaining the exposure time for collecting the first frame of TOF data based on the ambient light intensity helps collect, in the first frame, TOF data whose brightness meets facial recognition requirements, thereby helping to improve the accuracy and speed of facial recognition.
Optionally, obtaining the exposure time corresponding to the light intensity includes: identifying, among preset light intensity intervals, a target interval to which the ambient light intensity of the electronic device belongs; and obtaining the exposure time corresponding to the target interval.
Optionally, the light intensity intervals include an indoor light intensity interval and an outdoor light intensity interval. Distinguishing indoor from outdoor intervals lays the foundation for refining the granularity of the distinction.
Optionally, the larger the values in a light intensity interval, the shorter the corresponding exposure time, following the principle that a longer exposure time yields a brighter image and ensuring that image quality is controlled reasonably through the exposure time.
A fourth aspect of the present application provides an electronic device, including: a TOF camera, a memory, and a processor. The TOF camera is configured to collect a first frame of TOF data, the first frame of TOF data including projection-off data and infrared data, the projection-off data being TOF data collected by the TOF camera with the TOF light source turned off. The memory is configured to store program code. The processor is configured to run the program code to implement the data acquisition methods provided by the first, second, and third aspects.
A fifth aspect of the present application provides a chip system, including: at least one processor and an interface, the interface being configured to receive code instructions and transmit them to the at least one processor; the at least one processor runs the code instructions to implement the data acquisition method provided by the first, second, or third aspect.
A sixth aspect of the present application provides a readable storage medium having a program stored thereon; when the program is read and run by a computing device, the data acquisition method provided by the first, second, or third aspect is implemented.
Brief Description of the Drawings
FIG. 1a is an example of a scenario in which an electronic device performs facial recognition;
FIG. 1b is an example of a scenario in which an electronic device performs facial recognition under strong light;
FIG. 2 is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application;
FIG. 3 is a schematic diagram of a software framework with which the electronic device disclosed in an embodiment of the present application implements facial recognition;
FIG. 4 is a flowchart of a facial recognition method disclosed in an embodiment of the present application;
FIG. 5 is a flowchart of another facial recognition method disclosed in an embodiment of the present application;
FIG. 6 is a flowchart of yet another facial recognition method disclosed in an embodiment of the present application;
FIG. 7 is a flowchart of a data processing method disclosed in an embodiment of the present application;
FIG. 8 is a flowchart of another data processing method disclosed in an embodiment of the present application;
FIG. 9 is a flowchart of a specific implementation of S819 in FIG. 8;
FIG. 10 is a flowchart of a data acquisition method disclosed in an embodiment of the present application;
FIG. 11 is a flowchart of another data acquisition method disclosed in an embodiment of the present application;
FIG. 12 is a flowchart of yet another data acquisition method disclosed in an embodiment of the present application;
FIG. 13 is a schematic diagram of another software framework with which an electronic device implements facial recognition.
Detailed Description
FIG. 1a shows a scenario in which facial recognition is applied on an electronic device. When a user uses an electronic device, such as a mobile phone, to perform a function that requires user authorization, the user, spontaneously or prompted by the device, points the camera of the device at his or her face. Electronic devices are currently usually equipped with an RGB (Red Green Blue) camera. After capturing an image through the RGB camera, the device compares the image with a stored face template and performs anti-spoofing recognition; after obtaining the recognition result, it executes the task according to that result.
For example, if the user intends to unlock the screen with his or her face, the user points the phone screen at the face so that the front camera of the phone captures a facial image. After capturing the image, the phone performs facial recognition; if recognition passes, the screen is unlocked; if recognition fails, the screen remains locked.
During research, the inventors found that security needs improvement when an electronic device uses recognition results to identify authorized users: current malicious attacks on faces can mainly be divided into planar attacks (e.g., impersonating a real face with a photo), head-model/mask attacks (e.g., impersonating a real face with a 3D model), and similar-face impersonation.
The inventors also found during research that using time-of-flight (TOF) data for facial recognition has the following characteristics: 1. TOF data can generate both depth images and infrared images, so it has good anti-spoofing performance (i.e., it prevents planar attacks and head-model/mask attacks). 2. TOF data is less affected by light, so its quality is higher, and face comparison can therefore use a higher threshold, reducing the possibility of similar-face impersonation. Hence, using TOF data for facial recognition can improve security.
However, current mainstream hardware platforms only support processing RGB data, not TOF data. For example, the image signal processor (ISP) in current mainstream processors only supports processing RGB camera raw data collected by an RGB camera into RGB raw data, and cannot process TOF data (such as camera raw data) collected by a TOF camera. Redesigning a hardware platform to process TOF data collected by a TOF camera would be too costly. A software framework is therefore needed to process the TOF data, and software is more vulnerable to attack than hardware; in this case, even if TOF data is used for facial recognition, security holes remain.
The following embodiments of the present application provide a facial recognition method, applied to an electronic device, aiming to improve the security of facial recognition.
FIG. 2 is an example diagram of an electronic device, including: a TOF camera 1, a processor 2, a memory 3, and an I/O subsystem 4.
The TOF camera 1 is used to collect TOF data. In some implementations, the TOF camera 1 is configured as the front camera of the electronic device, used to collect TOF data in front of the display screen of the electronic device (not shown in FIG. 2), for example, TOF data of a face located in front of the display screen.
The TOF camera 1 includes a TOF sensor 11, a TOF sensor controller 12, a TOF light source 13, and a TOF light source controller 14.
In some embodiments, the TOF light source controller 14 is controlled by the TOF sensor controller 12 to control the TOF light source 13. Under the control of the TOF light source controller 14, the TOF light source 13 emits infrared (IR) light. The TOF sensor 11 senses the IR light reflected by an object, such as a face, to collect TOF data.
The TOF sensor controller 12 and the TOF light source controller 14 are arranged in the I/O subsystem 4 and communicate with the processor 2 through the I/O subsystem 4.
The memory 3 may be used to store computer-executable program code. Specifically, the memory 3 may include a program storage area and a data storage area. The program storage area may store program code for implementing an operating system, a software system, and at least one function. The data storage area may store data acquired, generated, and used during operation of the electronic device.
In some implementations, all or part of the memory 3 may be integrated in the processor 2 as its internal memory. In some implementations, the memory 3 is external to the processor 2 and communicates with it through the external memory interface of the processor 2.
In some implementations, the memory 3 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or universal flash storage (UFS).
The processor 2 may include one or more processing units; for example, the processor 2 may include an application processor (AP), a graphics processing unit (GPU), an image signal processor (ISP), etc.
In some implementations, the processor 2 may include a secure-area processor 21. Data involved in facial recognition may be stored in the secure-area processor 21, and the secure-area processor 21 processes this data by calling program code in the memory 3, thereby improving the security of facial recognition.
In some implementations, the secure-area processor 21 is used to store TOF data collected by the TOF camera 1, generate TOF images from the TOF data, and perform facial recognition using the TOF images to obtain a recognition result. The processor 2 can use the recognition result to execute tasks such as face unlock or face payment.
Optionally, the electronic device may further include an ambient light sensor (not shown in FIG. 2) for sensing the light intensity of the environment in which the device is located. It can be understood that the ambient light sensor communicates with the processor 2 through an ambient light sensor controller (not shown in FIG. 2) arranged in the I/O subsystem 4.
It can be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device. In other embodiments, the electronic device may include more or fewer components than illustrated, combine or split certain components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
It can be understood that the operating system implemented by the processor 2 running the code stored in the memory 3 may be the iOS operating system, the Android open-source operating system, the Windows operating system, HarmonyOS, etc. In the following embodiments, the Android open-source operating system is taken as an example.
By running the program code stored in the memory 3, the processor 2 of the electronic device can implement the software framework based on the Android open-source operating system shown in FIG. 3:
The software framework includes a Trusted Execution Environment (TEE) and a Rich Execution Environment (REE) that can execute rich instructions. The TEE may be implemented by the secure-area processor 21 in the processor 2 by running program code.
The REE includes the layers of the Android open-source operating system, including but not limited to: the application layer, the application framework layer, the hardware abstraction layer, and the kernel layer.
The application layer may include a series of application packages. As shown in FIG. 3, the application packages may include a payment application and a lock-screen application. Each application may initiate task requests in different scenarios to trigger facial recognition.
The application framework layer provides an application programming interface (API) and a programming framework for applications in the application layer, and includes some predefined functions. As shown in FIG. 3, the application framework layer may include a camera service (Camera service) and a face service (Face service). The Camera service is used to implement camera functions; the Face service is used to implement facial recognition functions.
The hardware abstraction layer abstracts the hardware. It hides platform-specific hardware interface details and provides the operating system with a virtual hardware platform, making it hardware-independent and portable across multiple platforms. The camera hardware abstraction layer (Camera HAL3) in the hardware abstraction layer responds to instructions from the Camera service to control the camera. The face client application (Face Client Application, Face CA) accesses the trusted applications of the TEE by calling the API of the TEE client located in the REE, thereby using the security functions provided by the TEE and its trusted applications.
The kernel layer lies between hardware and software; hardware driver software is usually located in the kernel layer. In FIG. 3, one function of ISP-Lite in the kernel layer is to drive the ISP-Lite (part of the ISP) in the processor to convert TOF data collected by the TOF sensor (e.g., TOF camera raw data) into a data format commonly used in image processing, such as TOF raw data. In the following embodiments, the data processed by ISP-Lite and the TOF data collected by the TOF camera are collectively referred to as TOF data. The TOF camera driver in the kernel layer is used to drive the TOF camera. The TEE includes applications, modules, and units for facial recognition based on TOF data. Among them, the face trusted application (Face Trusted Application, Face TA) schedules the modules and units in the TEE and communicates with the REE. The data processing module processes data such as raw data. The facial recognition module implements facial recognition using face templates, face template management, and other functions.
Based on the framework shown in FIG. 3, the electronic device can process TOF data into TOF images in the TEE and perform facial recognition in the TEE using the TOF images to obtain recognition results. That is, on top of using TOF data for facial recognition to improve security, both data processing and recognition are performed in the TEE, which further improves security.
The facial recognition method disclosed in an embodiment of the present application will now be described in detail with reference to FIG. 3. As shown in FIG. 4, the facial recognition method executed by the electronic device includes the following steps:
S401. An application in the application layer transmits a task request to the face service (Face service) in the application framework layer.
It can be understood that an application may initiate a task in response to at least one instruction, at least one user operation, or at least one sensor signal, and send a task request to the face service. Taking the lock-screen application as an example, the user presses the power key, triggering an unlock task, and the unlock application sends an unlock request to the Face service.
S402. The Face service transmits the task request to the face client application (Face Client application, Face CA) in the hardware abstraction layer.
S403. The Face CA transmits an image request to the camera service (Camera service) in the application framework layer, in response to the task request.
S404. The Camera service transmits the image request to the camera hardware abstraction layer (Camera HAL3) in the hardware abstraction layer.
S405. In response to the image request, Camera HAL3 transmits an image-output command to the TOF camera driver in the kernel layer, to drive the TOF camera to collect TOF data.
S406. ISP-Lite in the kernel layer stores the received and processed TOF data, with first storage information, in a first secure buffer storage unit (Secure Buffer) of the TEE. The first storage information indicates the storage address.
Since ISP-Lite stores the received and processed TOF data into the TEE, the TOF data is in effect stored directly from hardware (the ISP of the processor) into the secure area, reducing the possibility of attack.
S407. ISP-Lite in the kernel layer transmits the first storage information to Camera HAL3 in the hardware abstraction layer.
Because the first storage information is transmitted in the REE, to guarantee its security the first storage information may optionally be encrypted information, i.e., ciphertext. In this step, one example of the encrypted information is a file descriptor (FD), describing the storage location and the reading method.
For ease of description, the ciphertext of the first storage information is abbreviated as FD1.
S408. Camera HAL3 transmits FD1 and the calibration data to the Camera service in the application framework layer.
Because the calibration data is needed for subsequent TOF image generation, Camera HAL3 obtains the calibration data pre-configured in the REE and transmits it to the TEE together with FD1.
Optionally, Camera HAL3 may obtain part of the calibration data from a storage unit of the REE and another part from the TOF camera.
S409. The Camera service transmits FD1 and the calibration data to the Face CA in the hardware abstraction layer.
S410. The Face CA transmits FD1 and the calibration data to the face trusted application (Face Trusted Application, Face TA) of the TEE.
It can be understood that the information exchanges between the modules of the layers in the REE may all follow the communication protocols between those layers.
Transmitting the storage information of the TOF data to the TEE based on the framework of the existing operating system lays the foundation for processing the TOF data in the TEE. Moreover, ISP-Lite storing the received and processed TOF data into the TEE guarantees the security of the TOF data.
S411. The Face TA of the TEE transmits FD1 to the data processing module of the TEE, and stores the calibration data in the calibration data storage unit of the TEE.
S412. The data processing module reads the TOF data from the first secure buffer storage unit according to FD1, and reads the calibration data from the calibration data storage unit.
S413. The data processing module generates a depth image and an infrared image using the TOF data and the calibration data of the TOF camera.
S414. The data processing module stores the TOF image, with second storage information, in a second secure buffer storage unit of the TEE.
Because everything is performed in the TEE, the second storage information need not be encrypted.
S415. The data processing module transmits the second storage information to the Face TA.
S416. The Face TA transmits the second storage information to the facial recognition module.
S417. The facial recognition module reads the TOF image from the second secure buffer storage unit according to the second storage information.
S418. The facial recognition module performs facial recognition on the read TOF image to obtain a recognition result.
Specifically, the facial recognition module calls a pre-configured face template to perform face comparison on the TOF image, obtaining a comparison result, and uses the TOF image for anti-spoofing recognition, obtaining an anti-spoofing result. The recognition result may be determined from the comparison result and the anti-spoofing result; for example, if the comparison result is pass and the anti-spoofing result is pass, the recognition result is pass.
As mentioned above, because the TOF image includes a depth image and an infrared image, a higher comparison threshold can be used when comparing the infrared image with the face template, so the accuracy is higher, which reduces the possibility of similar-face impersonation. And because the depth image and the infrared image can be used simultaneously for anti-spoofing recognition, the anti-spoofing performance is high (i.e., the performance of preventing planar attacks and head-model/mask attacks is high).
S419. The facial recognition module transmits the recognition result to the Face TA.
S420. The Face TA of the TEE transmits the recognition result to the Face CA of the REE.
S421. The Face CA transmits the recognition result to the Face service in the application framework layer.
S422. The Face service transmits the recognition result to the application in the application layer that initiated the task request.
Taking the lock-screen application in the application layer as an example: after initiating an unlock request, the lock-screen application receives the recognition result; if the recognition result indicates that recognition passed, it executes the unlock task; if it indicates that recognition failed, it executes the task of keeping the screen locked.
It can be understood that the flows in which other applications in the application layer initiate task requests and obtain facial recognition results are similar and are not repeated here.
The facial recognition method described in this embodiment uses TOF data for facial recognition and therefore has high security. Further, facial recognition based on TOF data is implemented in the TEE, so the security of the TOF data, the data processing process, and the recognition process is guaranteed, further improving the security of facial recognition.
Moreover, the transmission of data storage information, recognition results, calibration data, etc. between the REE and the TEE is implemented based on the layers of the Android operating system, providing good compatibility.
It can be understood that, because the calibration data belongs to the camera domain rather than the face domain, in some implementations the calibration data may also be relayed from the Camera service via the Face service to the Face CA.
It can be understood that once the calibration data has been stored in the calibration data storage unit of the TEE, the calibration data will not be lost as long as the processor is not powered off, so it need not be reloaded. Of course, the calibration data may also be reloaded; this is not limited here. Transmitting the calibration data together with FD1 is only one implementation; they may also be transmitted separately, along the paths described above.
It can be understood that the TOF camera collects one set of TOF data per exposure; it exposes continuously several times to collect several sets of TOF data, then pauses for a first duration, exposes continuously several times again, pauses for the first duration again, and so on. The multiple sets of TOF data collected continuously are called one frame of TOF data, and the multiple sets collected again after the first-duration interval are another frame of TOF data. Ignoring the processing and transmission delays of other modules, ISP-Lite also stores each frame of TOF data into the first secure buffer storage unit at intervals of the first duration, and transmits the storage information of each frame of TOF data to the Face CA at the same intervals. The Face CA likewise transmits the storage information of each frame of TOF data to the Face TA at intervals of the first duration, so the Face TA also receives the storage information of each frame at those intervals. In FIG. 4, S406 may be the storage of any frame of TOF data, S407-S411 may be the transmission of the FD of any frame of TOF data, and S413-S419 are the processing flow for that frame. S420-S422 may be the processing flow for the recognition result of that frame of TOF data, or for the recognition results of multiple frames of TOF data.
Because the TOF camera collects TOF data by projecting infrared light, it is necessary to pay attention to eye safety during collection. And because the quality of the TOF image affects the accuracy of the recognition result, while the exposure parameters of the TOF camera directly affect TOF image quality, the exposure parameters of the TOF camera need to be tuned.
In the embodiments of the present application, the flow shown in FIG. 4 is further improved: the data processing module can generate different parameters for adjusting the TOF camera according to the type of the received TOF data.
FIG. 5 shows the flow of generating and feeding back an automatic exposure (AE) result and controlling the camera according to the AE result; only the differences between FIG. 5 and FIG. 4 are described here:
S413a. The data processing module of the TEE, in response to the TOF data being a facial recognition frame, generates a depth image, an infrared image, and an AE result.
In the embodiments of the present application, the TOF camera divides the collected TOF data into safety-indication frames and facial recognition frames. A safety-indication frame carries an eye-safety flag indicating whether the infrared light emitted by the TOF camera is safe for human eyes. A facial recognition frame is a TOF data frame used for facial recognition. In some implementations, the type of a TOF data frame is indicated by at least one of a value, a character, or a character string in the TOF data frame.
In some implementations, among the collected TOF data, the first to fourth frames of TOF data are safety-indication frames and subsequent frames are facial recognition frames. The safety-indication frames include the eye-safety flag.
The AE result is used to adjust the exposure parameters of the TOF camera. Optionally, the AE result includes but is not limited to the exposure time and the physical gain of the TOF sensor. One way to generate the AE result is: extract the face region in the TOF raw data, i.e., the raw image, calculate the brightness value of the face region, and compare it with a pre-configured target brightness value to obtain the exposure time and the physical gain of the TOF sensor. One AE result may be generated per frame of TOF data, or one AE result per multiple frames of TOF data.
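The AE-result generation described here (comparing the face-region brightness of the TOF raw image with a pre-configured target brightness) can be sketched as follows. The proportional update rule, the target value, and leaving the gain untouched are all assumptions of this sketch; the text only states that the face-region brightness is compared with a target value to obtain the exposure time and sensor gain.

```python
# Minimal sketch of generating an AE result from the face region of a TOF
# raw image. All numeric values are illustrative assumptions.

TARGET_BRIGHTNESS = 600.0  # assumed pre-configured target brightness value

def compute_ae_result(raw, face_box, cur_exposure_ms, cur_gain):
    """raw: 2-D list of brightness values; face_box: (x0, y0, x1, y1).
    Returns a dict with the adjusted exposure time and the sensor gain."""
    x0, y0, x1, y1 = face_box
    region = [raw[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    mean = sum(region) / len(region)
    scale = TARGET_BRIGHTNESS / max(mean, 1.0)   # brighter target -> longer exposure
    return {"exposure_ms": cur_exposure_ms * scale,
            "gain": cur_gain}                    # gain update omitted in this sketch
```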
S415a. The data processing module of the TEE transmits the second storage information and the AE result to the Face TA of the TEE.
It can be understood that, when the AE result of the previous frame of TOF data is the same as the newly obtained AE result, the newly obtained AE result need not be transmitted, saving TEE resources.
S420a. The Face TA of the TEE transmits the recognition result and the AE result to the Face CA in the hardware abstraction layer of the REE.
S423a. The Face CA transmits the AE result to the Camera service in the application framework layer of the REE.
S424a. The Camera service transmits the AE result to Camera HAL3 in the hardware abstraction layer.
S425a. Camera HAL3 adjusts the TOF camera through the TOF camera driver according to the AE result.
Camera HAL3 may send the AE result to the TOF sensor controller through the TOF camera driver, and the TOF sensor controller controls the TOF sensor to collect TOF data according to the AE result.
FIG. 6 shows the flow of generating and feeding back a safety indicator and controlling the camera according to the safety indicator; only the differences between FIG. 6 and FIG. 4 are described here:
S413b. The data processing module of the TEE, in response to the TOF data being a safety-indication frame, generates a safety indicator.
The safety indicator indicates whether the TOF light source is safe for human eyes.
Optionally, the safety indicator is generated as follows: after receiving a TOF data frame carrying the eye-safety flag (i.e., a safety-indication frame), extract the eye-safety flag and judge whether human eyes are safe according to the extracted flag, generating the safety indicator. If the eye-safety flag indicates safe, a safety indicator indicating safe is obtained; if the eye-safety flag indicates unsafe, a safety indicator indicating unsafe is obtained.
S415b. The data processing module of the TEE transmits the second storage information and the safety indicator to the Face TA of the TEE.
In general, from the start to the end of collection by the TOF camera, the eye-safety flags carried by the safety-indication frames are the same; for example, in the first to fourth safety-indication frames, the eye-safety flags all indicate that human eyes are safe, or all indicate that they are unsafe. In this case, the multiple safety indicators obtained are identical, so only one safety indicator may be transmitted, saving TEE resources.
However, it cannot be excluded that in some cases the eye-safety flags in multiple safety-indication frames differ. Therefore, only the last safety indicator may be transmitted, i.e., the safety indicator determined from the last received safety-indication frame, saving TEE resources.
S420b. The Face TA of the TEE transmits the recognition result and the safety indicator to the Face CA in the hardware abstraction layer of the REE.
S423b. The Face CA transmits the safety indicator to the Camera service in the application framework layer of the REE.
S424b. The Camera service transmits the safety indicator to Camera HAL3 in the hardware abstraction layer.
S425b. Camera HAL3 adjusts the TOF camera through the TOF camera driver according to the safety indicator.
Adjusting the TOF camera includes but is not limited to: if the safety indicator indicates that human eyes are unsafe, turn off the TOF camera or reduce the emission intensity of the TOF light source; if the safety indicator indicates that human eyes are safe, collect TOF data frames and mark them as TOF data frames for facial recognition (i.e., facial recognition frames).
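The control logic for the safety indicator can be sketched as below. The action names are illustrative assumptions; the text states only that an unsafe indicator leads to turning off the camera (or reducing the light-source emission intensity) and that a safe indicator leads to collecting frames marked for facial recognition, with the last received safety-indication frame possibly being decisive.

```python
# Sketch of acting on the eye-safety flags carried by safety-indication frames.

def process_safety_frames(frames):
    """frames: list of dicts with an 'eye_safe' flag, in the order the
    safety-indication frames were received. Returns the action to take,
    based on the last received flag, or None if no frames were received."""
    if not frames:
        return None
    if frames[-1]["eye_safe"]:
        return "collect_face_frames"   # mark subsequent frames for recognition
    return "close_camera"              # or: reduce TOF light source intensity
```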
Based on FIG. 2, optionally, Camera HAL3 may send a shutdown command to the TOF sensor controller through the TOF camera driver to turn off the TOF camera.
In summary, after the electronic device obtains the TOF raw data through the processor, it processes the TOF raw data into TOF images in the TEE and performs facial recognition using the TOF images, giving facial recognition higher security.
Moreover, in combination with the layers of the Android system, the transmission of the storage information of the TOF data to the TEE is realized, which not only lays the foundation for processing in the TEE but also achieves compatibility with the Android system.
It can be seen that even if the processor of the electronic device does not support the generation of TOF images, the framework and flow described in this embodiment can still achieve the purpose of performing facial recognition with TOF data, thereby improving security.
Further, eye-safety protection and exposure parameter adjustment can also be realized, so that the electronic device has better performance on top of the improved security of facial recognition.
It can be understood that, although FIGS. 4-6 take the Android operating system as an example, the facial recognition method described in the embodiments of the present application is not limited to the Android operating system. For example, the operation of the TEE is not limited to Android, so the functions of the applications, modules, and units in the TEE can also be implemented on other operating systems. The operation of the REE is likewise not limited to Android, so the paths for transmitting TOF data and calibration data to the TEE, the transmission path of the task request, the feedback path of the recognition result, etc. can all be adapted to different operating systems, as long as TOF data and calibration data can be transmitted from the REE to the TEE, the task request triggers the collection of TOF data, and the recognition result is used to execute the task.
The inventors found that, besides security, the accuracy and execution speed of facial recognition also have room for improvement; that is, facial recognition still has the following problems:
Inaccurate recognition results caused by low image quality: because of the influence of ambient light, the data collected by the camera may have quality defects such as overexposure, blur, or insufficient brightness. Facial recognition is based on the data collected by the camera, so comparison errors may occur; for example, the same person may fail face recognition.
Excessive latency leading to poor user experience: in the scenario shown in FIG. 1a, if facial recognition is too slow and the user has to wait too long, the user's comfort is reduced. Low image quality is usually the cause of excessive latency.
Improving image quality and improving execution speed are contradictory: to improve image quality, the image must be iterated, denoised, and otherwise processed until it converges, and the more processing is done, the slower it is.
The inventors also found during research that TOF imaging has the following characteristics:
(1) TOF data collected by a TOF camera is less affected by ambient light, so in most scenarios the quality of the first frame of TOF data collected by the TOF camera (e.g., the first facial recognition frame) can satisfy the accuracy requirements of recognition. Therefore, in most scenarios, a sufficiently accurate facial recognition result can be obtained using the first frame of TOF data.
Taking the face-unlock service as an example again: suppose the face attempting to unlock is exactly the face enrolled in the electronic device, i.e., the face attempting to unlock can unlock the screen; in this case, the first frame of TOF data can achieve the unlock.
(2) When the feedback adjustment mechanism is effective (the feedback adjustment mechanism means using previously collected data and the camera's calibration data to adjust the exposure parameters of subsequent collection to improve image quality), TOF data converges (converges meaning it no longer changes) quickly, in about 3-4 frames of TOF data.
Continuing the example: suppose the collected first frame of TOF data does not achieve the unlock; then, with the feedback adjustment mechanism adjusting the exposure parameters, the collected 3rd or 4th frame of TOF data converges (i.e., even if the exposure parameters are adjusted further, the TOF data no longer changes), and the collected 3rd or 4th frame of TOF data can achieve the unlock.
Based on the above characteristics of TOF imaging, it becomes possible to balance accuracy and speed on the premise of using TOF data for facial recognition to improve security.
In the following embodiments, based on the aforementioned approach of storing each frame of TOF data into the TEE at intervals of the first duration and transmitting the storage information of each frame of TOF data to the TEE, and combined with the above characteristics of TOF imaging, the Face TA schedules the data processing module and the facial recognition module to achieve the purpose of obtaining recognition results accurately and quickly.
FIG. 7 is a data processing method disclosed in an embodiment of the present application. Compared with FIG. 4, FIG. 5, or FIG. 6, the improvement lies in the applications and modules in the TEE; the functions of the modules in the REE and the data transmission methods are the same as in FIG. 4, FIG. 5, or FIG. 6 and are not repeated here. The flow shown in FIG. 7 starts after the Face TA receives FD1 and the calibration data, and includes the following steps:
S701. After receiving the FD of the first frame of TOF data (abbreviated FD1), the Face TA transmits FD1 to the data processing module.
In some implementations, the first frame of TOF data is the first frame of TOF data collected by the TOF camera, driven by Camera HAL3 controlling the TOF camera driver in response to the image request.
In other implementations, the TOF data frames collected by the TOF camera are marked as safety-indication frames and facial recognition frames, respectively. A safety-indication frame carries an eye-safety flag indicating whether the infrared light emitted by the TOF camera is safe for human eyes; a facial recognition frame is a TOF data frame used for facial recognition. In this case, the first frame of TOF data in this step is the first facial recognition frame.
S702. The data processing module reads the first frame of TOF data from the first secure buffer storage unit using FD1, and reads the calibration data from the calibration data storage unit.
In this embodiment, it is assumed that the calibration data has already been stored in the calibration data storage unit.
S703. The data processing module generates a TOF image using the first frame of TOF data and the calibration data, and generates an automatic exposure (AE) result.
As mentioned above, the TOF image includes a depth image and an infrared image.
The AE result is used by the TOF camera to adjust the exposure parameters for collecting TOF data. The AE result includes but is not limited to the exposure time and the physical gain of the TOF sensor. One way to generate the AE result is: extract the face region in the TOF raw image (raw data), calculate the brightness value of the face region, and compare it with a pre-configured target brightness value to obtain the exposure time and the physical gain of the TOF sensor.
For ease of description, the TOF image obtained by processing the first frame of TOF data is called the first frame of TOF image, and the AE result obtained from the first frame of TOF data is called the first AE result.
S704. The data processing module stores the first frame of TOF image in the second secure buffer storage unit.
S705. The data processing module transmits the storage information of the first frame of TOF image and the first AE result to the Face TA.
It can be understood that, because the storage information of the first frame of TOF image is transmitted only in the TEE, it may be transmitted in plaintext.
S706. The Face TA transmits the first AE result to the TOF camera.
With reference to the examples shown in FIG. 4, FIG. 5, or FIG. 6, the path along which the first AE result is transmitted to the TOF camera is: Face TA, Face CA, Camera service, Camera HAL3, and the TOF sensor controller of the TOF camera; the TOF sensor controller then uses the first AE result to adjust the parameters of the TOF camera, including but not limited to the exposure time.
It can be understood that which TOF data frame the first AE result can act on is related to how long the processor takes to process TOF data. Taking the TOF camera's collection interval of 30 ms as an example, assuming the processor also takes 30 ms in total to generate the first TOF image and the first AE result, the first AE result can act at the earliest on the third frame of TOF data collected by the TOF camera.
Generating and transmitting the AE result is an optional step, aiming to improve the quality of subsequently collected TOF data, thereby obtaining better TOF images and more accurate recognition results.
The first frame of TOF data may be collected using pre-configured fixed exposure parameters.
S707. The Face TA transmits the storage information of the first frame of TOF image to the facial recognition module, to trigger the facial recognition module to perform facial recognition.
It can be understood that the execution order of S706 and S707 is not limited.
S708. The facial recognition module reads the first frame of TOF image from the second secure buffer storage unit using the storage information of the first frame of TOF image.
S709. The facial recognition module performs facial recognition using the first frame of TOF image to obtain a first recognition result.
With reference to FIG. 3, the facial recognition module calls the face template to perform face comparison on the first frame of TOF image. Besides face comparison, the depth image and infrared image can also be used for anti-spoofing recognition. Therefore, facial recognition includes face comparison and anti-spoofing recognition; if both pass, the recognition result of facial recognition indicates pass.
S710. The facial recognition module transmits the first recognition result to the Face TA.
S711. The Face TA judges whether the first recognition result indicates that recognition passed; if yes, S712 is executed; if no, S713 is executed.
S712. The Face TA transmits the first recognition result to the Face CA.
As mentioned above, the Face CA transmits the first recognition result to the initiator of the task, e.g., the unlock application.
S713. The Face TA judges whether an end condition is met; if yes, S712 is executed; if no, S714 is executed.
The end condition may be preset. In this step, the end condition includes: the duration of executing the task reaches a second duration threshold; one example of the second duration threshold is 5 seconds. The end condition is optional; it is also possible not to judge whether the end condition is met, and to execute S714 when the first recognition result indicates that recognition failed.
It can be seen that the purpose of setting an end condition is to avoid unnecessary time consumption during task execution: if facial recognition still fails after a certain duration, it can essentially be concluded that the facial recognition result is fail, e.g., the face attempting to unlock has not been pre-stored in the electronic device, so there is no need to continue facial recognition. Instead, the facial recognition result should be fed back to the task as soon as possible, to reduce task latency and ensure a good user experience.
S714. The Face TA receives another frame of TOF data from the TOF camera.
Again taking the example of a 30 ms collection interval for the TOF camera and 30 ms to generate the first frame of TOF image and the first AE result, in this case the frame of TOF data received again from the TOF camera is the seventh frame of TOF data collected by the TOF camera.
It can be understood that, because the first AE result can act at the earliest on the third frame of TOF data, the seventh frame of TOF data is a TOF data frame collected after the exposure parameters were adjusted using the first AE result.
For the subsequent flow executed by the Face TA after receiving a TOF data frame again, refer to S701-S714: replace the first frame of TOF data processed in S701-S714 with the newly received frame of TOF data, e.g., the seventh frame, and correspondingly replace the processing results of the first frame with those of the newly received frame (e.g., replace the first frame of TOF image with the second frame of TOF image). Details are not repeated here.
Compared with existing facial recognition flows, the flow shown in FIG. 7 differs at least as follows: the prior-art image data processing flow includes the step of iteratively processing the image data to obtain converged image data, in order to obtain images whose quality meets the accuracy requirements of recognition.
Taking RGB data as an example: because RGB data is easily affected by ambient light, the quality of the collected first frame of RGB data usually cannot meet the accuracy requirements of recognition. The RGB data therefore needs to be processed iteratively to obtain converged RGB data, and facial recognition is then performed with the converged RGB data to guarantee the accuracy of the recognition result. Typically, depending on the iterative algorithm, RGB data converges in about 10 frames at the fastest, or 30-40 frames at the slowest. The consensus among those skilled in the art is therefore that the first frame of RGB data is most likely of very poor quality due to ambient light, and using it directly for facial recognition is meaningless.
In this embodiment, based on the above TOF imaging principle (1), namely that in most scenarios the first frame of TOF data has already converged, the first frame of TOF data can most likely yield a fairly accurate recognition result. Therefore, once the electronic device collects the first frame of TOF data, the facial recognition result is obtained from the first frame of TOF image generated from it, achieving a smaller processing latency while guaranteeing high accuracy of the recognition result. That is, based on the above principle of TOF imaging, the step of iterating the image data in the hope of obtaining converged image data is omitted, in exchange for faster image data processing.
Further, the inventors found during research that the interval at which the TOF camera collects TOF data and the time for one processor to generate a TOF image are usually both 30 ms, and face recognition usually takes 150 ms, so the time to obtain a recognition result from TOF data is: 30 ms + 150 ms = 180 ms. Other conditions being equal, this figure shows that the method described in this embodiment lets the user clearly perceive faster unlocking, improving user experience.
Because the newly received frame of TOF data is collected with exposure parameters determined by the first AE result, the probability of convergence is higher. Therefore, even if the first frame of TOF data fails recognition due to quality problems, the newly received frame of TOF data can still be recognized successfully, taking (30 ms + 150 ms) x 2 = 360 ms, which is still advantageous compared with the convergence time of RGB data.
A more specific scenario example for this embodiment is:
Limited by the hardware of the mobile phone, only one processor (e.g., the secure-area processor 21 shown in FIG. 2) can be used to implement facial recognition.
Suppose the face attempting to unlock is exactly the face enrolled in the electronic device. After the user points the face at the phone screen, the TOF camera collects TOF data at 30 ms intervals and transmits it to the processor. After receiving the first frame of TOF data, the processor, following the flow described in this embodiment, processes the first frame of TOF data into the first depth image and the first infrared image and uses them to obtain the first recognition result. While the processor is generating the first frame of TOF image and computing the recognition result, the processor is occupied; although the TOF camera is still transmitting TOF data frames to the processor, the processor cannot receive subsequent frames. That is, except for the received first frame of TOF data, the other TOF data frames are discarded.
It can be seen that the flow of this embodiment, which does not consider an iteration process for the TOF data, exactly fits the above single-processor scenario. That is, when only one processor is used for TOF data processing and facial recognition, if both recognition processing and TOF data iteration awaiting convergence had to be performed, they would have to be processed serially by the processor, taking too long. The method described in this embodiment, by not executing the TOF data iteration step, shortens the time consumed; and thanks to the characteristic that TOF data converges in the first frame in the vast majority of cases, the recognition result remains fairly accurate even as the time of facial recognition is shortened.
In summary, the data processing method described in this embodiment, based on the characteristic that TOF data most likely converges in the first frame, guarantees accurate and fast facial recognition in the vast majority of cases; and because using TOF data for facial recognition provides high security, the purpose of performing facial recognition securely, accurately, and quickly is achieved.
The inventors further found during research that, in outdoor scenarios, natural light also contains light whose wavelength is close to that of the infrared light emitted by the TOF light source, so the collected first frame of TOF data may be overexposed. In this case, the quality of the first frame of TOF data is insufficient to obtain an accurate recognition result. As in the above example, when the first frame cannot unlock, the required unlock latency is at least 360 ms, and there is room to further reduce this latency.
Another data processing method disclosed in an embodiment of the present application is shown in FIG. 8:
S801-S806 are the same as S701-S706; see FIG. 8 for details, which are not repeated here.
S807-S812 are the processing flow for the second frame of TOF data after its storage information is received, which is the same as the processing flow for the first frame of TOF data; see FIG. 8 for details, not repeated here.
It can be understood that the second frame of TOF data may be a TOF data frame collected by the TOF camera using exposure parameters adjusted based on the first AE result, or may have the same exposure parameters as the first frame of TOF data. Which case applies depends on the interval at which the TOF camera collects TOF data frames (e.g., the aforementioned first duration) and on how long the processor takes from receiving a TOF data frame to feeding back the AE result.
Assuming the TOF camera's collection and transmission interval is 30 ms and the processor also takes 30 ms to process TOF data into a TOF image and obtain the AE result, the first AE result cannot act on the collection of the second frame of TOF data.
S813-S818 are the processing flow for the third frame of TOF data after its storage information is received, which is the same as the processing flow for the first frame of TOF data; see FIG. 8 for details, not repeated here.
From the above flow, it can be seen that the difference between this embodiment and the above embodiment is that the first three frames of TOF data are received and processed.
S819. The Face TA calls the facial recognition module to perform facial recognition.
An optional implementation of S819 is shown in FIG. 9:
S901. The Face TA transmits the storage information of the third frame of TOF image to the facial recognition module.
S902. After reading the third frame of TOF image using its storage information, the facial recognition module performs facial recognition with the third frame of TOF image to obtain the recognition result of the third frame of TOF image.
It can be understood that the facial recognition module reads the third frame of TOF image from the second secure buffer storage unit; this is not repeated in FIG. 9.
For ease of distinction, the recognition result of the third frame image is called the third recognition result.
S903. The facial recognition module transmits the third recognition result to the Face TA.
S904. The Face TA judges whether the third recognition result indicates that recognition passed; if yes, S905 is executed; if no, S906 is executed.
S905. The Face TA transmits the third recognition result to the Face CA.
Because the third frame of TOF data is most likely the data collected after the exposure parameters were adjusted using the AE result, its quality is most likely the best; performing facial recognition with the third frame of TOF image first can therefore further reduce time consumption.
S906. The Face TA transmits the storage information of the first frame of TOF image and the storage information of the second frame of TOF image to the facial recognition module.
S907. The facial recognition module obtains the first recognition result and the second recognition result.
The first recognition result is the result of performing facial recognition on the first frame of TOF image; the second recognition result is the result of performing facial recognition on the second frame of TOF image.
It can be understood that the order of performing facial recognition using the first frame of TOF image and the second frame of TOF image is not limited.
S908. The facial recognition module transmits the first recognition result and the second recognition result to the Face TA.
S909. If at least one of the first recognition result and the second recognition result indicates pass, the Face TA transmits the passing recognition result to the Face CA.
S910. If both the first recognition result and the second recognition result indicate that recognition failed, the Face TA judges whether the end condition is met; if yes, S911 is executed; if no, the flow shown in FIG. 8 is executed again.
For the definition of the end condition, see the above embodiment. In this step, the duration threshold in the end condition may be shorter than that in the above embodiment, because in this embodiment multiple frames of TOF data have already been collected and processed. An example of the duration threshold in this step is 3 seconds.
It should be noted that when the flow shown in FIG. 8 is executed again, in "the N-th frame of TOF data", N is no longer the sequence of data frames collected by the TOF camera; N = 1, 2, 3 refers to the order of the TOF data frames received by the processor in this execution of the flow of FIG. 8. For example, when S801 is executed again, the TOF camera may be collecting the 9th frame of TOF data, but in this execution of the flow of FIG. 8 the processor receives the first frame of TOF data. Therefore, "the first frame of TOF data" in S801 refers to the first frame in this execution of the flow of FIG. 8, not the first frame actually collected by the TOF camera.
S911. The Face TA transmits the failing recognition result to the Face CA.
It can be understood that S819 may also be implemented in ways other than FIG. 9, for example, performing facial recognition sequentially with the first, second, and third frames of TOF image; or performing recognition with only the third frame of TOF image and, if the recognition result indicates fail, re-executing the flow shown in FIG. 8 to save memory and other resources. Other implementations are not listed one by one here. The method of this embodiment, based on the above TOF imaging principle (2), namely that TOF data converges within 2-3 frames, processes 3 frames of TOF data consecutively and obtains a recognition result, and for outdoor scenarios can achieve a smaller processing latency than the above embodiment.
An example application scenario of the flow shown in FIG. 8 is:
Suppose the face attempting to unlock is exactly the face enrolled in the electronic device, and the user points the face at the screen in an outdoor environment. The TOF camera collects TOF data at 30 ms intervals and transmits it to the processor; after receiving the first frame of TOF data, the processor processes the received TOF data to generate the first TOF image and the first AE result.
Because the electronic device is in an outdoor environment, unlocking based on the first frame of TOF data will most likely fail, so after generating the first TOF image and transmitting the first AE result, the processor does not perform facial recognition but continues to receive and process TOF data frames. Because generating a TOF image and an AE result takes 30 ms, comparable to the time for the TOF camera to collect and transmit TOF data, after processing the first frame of TOF data the processor can continue to receive and process the second frame, and after processing the second frame, it can continue to receive and process the third frame.
Based on the above durations, the first AE result generated from the first frame of TOF data can act at the earliest on the third frame of TOF data, so unlocking with the third frame of TOF data will most likely succeed.
The shortest time to obtain a recognition result through the third frame of TOF data is: 30 ms x 3 + 150 ms. Compared with the above embodiment, when unlocking with the first frame of TOF data fails, unlocking moves from the seventh frame to the third frame, which improves speed and reduces latency.
It can be seen that if the first frame of TOF data can unlock, then compared with the above embodiment, the processing time of two additional frames of TOF data (generating TOF images and AE results), i.e., 60 ms, is added. The data processing method of this embodiment therefore sacrifices first-frame unlock speed to improve the unlock speed in outdoor strong-light scenarios.
It can be understood that, because the first AE result acts on the third frame of TOF data, optionally, in FIG. 8, the second frame of TOF data and its AE result may not be processed, i.e., S807-S812 are not executed, to save resources. Correspondingly, the second frame of TOF image then no longer participates in obtaining the recognition result.
In summary, the second frame of TOF data may be only received without being processed, or may be discarded without being received.
This embodiment takes the consecutive processing of three frames of TOF data in one flow as an example. In practice, the number of consecutively processed TOF data frames is related to the time taken to generate and feed back the AE result, and to the first-duration interval at which the TOF camera collects data frames.
Based on the scenarios to which the above embodiments respectively apply, it can be understood that the methods described in FIG. 7 or FIG. 8 above can be selected using a judgment condition, choosing the better one. Based on the software framework shown in FIG. 3, yet another data processing method disclosed in an embodiment of the present application includes the following steps:
1. The Face CA obtains the intensity value of the ambient light.
Based on the software framework shown in FIG. 3, after receiving the task request, the Face CA may also send a light intensity request to obtain the intensity value of the ambient light. It can be understood that the light intensity request may be sent to the corresponding driver in the kernel layer through the corresponding modules in the application framework layer and the hardware abstraction layer; the driver in the kernel layer drives the light sensor to sense the intensity value of the ambient light and feeds it back to the Face CA.
2. The Face CA transmits the intensity value of the ambient light to the Face TA.
3. The Face TA judges whether the intensity value of the ambient light is greater than a preset first intensity threshold; if yes, the data processing flow shown in FIG. 7 is executed; if no, the data processing flow shown in FIG. 8 is executed.
In this embodiment, the light intensity value is used to judge the environment in which the electronic device is located, and the flow better suited to that environment is used to obtain the facial recognition result; thus, on top of improved security, the accuracy and speed of obtaining the facial recognition result are maximized.
The inventors further found during research that, whether in outdoor or indoor scenarios, when the ambient light is extremely strong and the sensor of the TOF camera faces a strong light source, the quality of the first frame of TOF data cannot support an accurate facial recognition result, so the TOF data still needs to converge, which increases the duration of facial recognition.
For example, in FIG. 1b, the user is outdoors under extremely strong sunlight with his or her back to the sun. In this case, the user points the phone's front camera at the face expecting face unlock (assuming the user's face has already been stored as a face template in the phone); compared with an indoor environment, unlocking takes more time.
In view of the above problems, an embodiment of the present application provides a data acquisition method aiming to obtain a higher-quality first frame of TOF data, so as to improve the accuracy of the recognition result of the first frame of TOF data on the premise of using TOF data for facial recognition to obtain higher security, thereby further achieving the purpose of completing facial recognition quickly.
FIG. 10 shows a data obtaining method disclosed in an embodiment of this application, executed by the foregoing electronic device. With reference to FIG. 2 to FIG. 6, the flow of FIG. 10 starts from Camera HAL3 driving, through the TOF camera driver, the TOF camera to collect data, and includes the following steps:
S1001: In response to an image request, Camera HAL3 transmits an image output instruction to the TOF camera.
With reference to FIG. 2 and FIG. 3, it can be understood that Camera HAL3 may transmit the image output instruction to the TOF camera through the TOF camera driver of the kernel layer.
S1002: In response to the image output instruction, the TOF camera collects a first frame of TOF data including depth data, projection-off data, and infrared data.
The depth data is data used to generate a depth image. The projection-off data is TOF data collected by the TOF camera with the TOF light source turned off. The infrared data is data used to generate an infrared image.
Both the depth data and the infrared data are TOF data collected by the TOF camera with the TOF light source turned on.
When the TOF camera starts (powers on), the TOF light source is turned on by default. In some implementations, the TOF sensor controller of the TOF camera transmits the image output instruction to the TOF sensor; in response, the TOF sensor collects the depth data and transmits a turn-off instruction to the TOF light source controller. The TOF light source controller turns off the TOF light source in response to the turn-off instruction, and the TOF sensor collects the projection-off data. The TOF sensor then transmits a turn-on instruction to the TOF light source controller; the TOF light source controller turns the TOF light source back on in response, and the TOF sensor collects the infrared data.
In some implementations, an example of the first frame of TOF data is: four groups of depth data, one group of projection-off data, and one group of infrared data. A "group of data" can be understood as a two-dimensional array.
The TOF sensor first collects each group of depth data in turn with a first exposure duration, collects the infrared data with a second exposure duration, and collects the projection-off data with the first exposure duration or the second exposure duration. The first and second exposure durations may be carried by Camera HAL3 in the image output instruction or transmitted to the TOF camera separately, may be transmitted to the TOF camera by another module, or may be obtained in advance by the TOF camera from a storage module.
It can be understood that the timing at which the TOF sensor sends the turn-off instruction to the TOF light source controller is related to the first exposure duration, and the timing at which the TOF sensor sends the turn-on instruction to the TOF light source controller is related to the exposure duration of the projection-off data.
In some implementations, the TOF sensor may mark TOF data collected within a first time range as projection-off data. The first time range may be determined from the interval between the time at which the TOF sensor issues the turn-off instruction and the time at which it issues the turn-on instruction. The TOF sensor may mark TOF data collected before the first time range as depth data, and TOF data collected after the first time range as infrared data.
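The marking scheme just described can be sketched as a small labeling function. The timestamps and range bounds are illustrative; the patent defines the first time range only in terms of the turn-off and turn-on instruction times, and assumes the depth-then-projection-off-then-infrared order.

```python
def label_sample(t_ms: float, off_start_ms: float, off_end_ms: float) -> str:
    """Label one captured sample by where its timestamp falls relative to
    the first time range (light source off between off_start and off_end)."""
    if t_ms < off_start_ms:
        return "depth"           # before the range: light source still on
    if t_ms <= off_end_ms:
        return "projection_off"  # inside the range: light source off
    return "infrared"            # after the range: light source back on
```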
It can be understood that the order in which the TOF camera collects the depth data, projection-off data, and infrared data is not limiting. For example, the data may also be collected in the order of projection-off data, infrared data, depth data, or in the order of infrared data, depth data, projection-off data. The timing at which the TOF sensor transmits the turn-off and turn-on instructions to the TOF light source controller is adjusted according to the order.
In this step, using the TOF sensor to control the turning on or off of the TOF light source achieves higher execution speed.
S1003: After obtaining the first frame of TOF data, the data processing module determines whether the infrared data in the first frame of TOF data contains a target data block; if so, S1004 is executed; if not, S1005 is executed.
The specific manner in which the data processing module obtains the first frame of TOF data is as shown in FIG. 4, FIG. 5, or FIG. 6, and is not repeated here.
The infrared data is a two-dimensional array. It can be understood that the two-dimensional array includes values arranged in rows and columns, where each value can be regarded as a data point. The target data block is a data block satisfying the following preset condition: the number of data points whose values are greater than a first threshold is greater than a second threshold.
As mentioned above, the infrared data collected by the TOF sensor is a two-dimensional array, which is processed by ISP-Lite into infrared raw data, i.e. an infrared raw image. It can therefore be understood that the target data block is a target region in the infrared raw image, and each value in the target data block is the luminance value of the corresponding pixel in the target region. For the infrared raw image, the target region is a region in which the number of pixels with luminance values greater than the first threshold is greater than the second threshold.
When the ambient light is extremely strong and the TOF camera faces a strong light source, the TOF data collected by the TOF camera is usually overexposed by the strong light, so the luminance of most pixel values in the overexposed region of the infrared raw image generated from the TOF data is too high, which affects recognition. The condition for the target region (data block) in this step is set based on this principle.
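The preset condition of S1003 can be sketched directly from its definition. The block contents and both threshold values below are illustrative assumptions.

```python
def is_target_block(block, first_threshold: int, second_threshold: int) -> bool:
    """A data block (2-D array of values) is a target data block when the
    count of data points exceeding first_threshold exceeds second_threshold."""
    bright = sum(1 for row in block for value in row if value > first_threshold)
    return bright > second_threshold
```

Applied to an infrared raw image, `block` would be a candidate region and the values its pixel luminances.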
S1004: Subtract the projection-off data in the first frame of TOF data from the infrared data in the first frame of TOF data to obtain processed first-frame TOF data, and process the processed first-frame TOF data into a TOF image.
S1005: Process the first frame of TOF data into a TOF image.
It can be understood that processing the first frame of TOF data into a TOF image means processing the depth data in the first frame of TOF data into a depth image, and processing the infrared data in the first frame of TOF data into an infrared image.
In the method of this embodiment, when the infrared data in the first frame of TOF data is overexposed, the projection-off data is subtracted from the infrared data of the first frame of TOF data to remove the influence of ambient light on the infrared data, thereby improving the quality of the first frame of TOF data and further obtaining a higher-quality TOF image, which helps improve the accuracy and speed of facial recognition.
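A minimal sketch of the S1004 subtraction, per data point. Clamping negative differences at zero is an assumption of this sketch; the patent specifies only the subtraction itself.

```python
def remove_ambient(ir, proj_off):
    """Subtract projection-off data from infrared data, point by point,
    to remove the ambient-light contribution (clamped at zero)."""
    return [
        [max(i - p, 0) for i, p in zip(ir_row, off_row)]
        for ir_row, off_row in zip(ir, proj_off)
    ]
```

Because the projection-off frame is captured with the TOF light source off, it records only ambient illumination, so the difference approximates the light-source-only infrared signal.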
FIG. 11 shows yet another data obtaining method disclosed in an embodiment of this application, executed by Camera HAL3 shown in FIG. 3, including the following steps:
S1101: In response to an image request, determine whether the interval duration is within a preset range; if so, execute S1102; if not, execute S1103.
The interval duration is the interval between the moment of the last TOF data collection (the first moment for short) and the moment at which the first frame of TOF data is to be collected (the second moment for short).
It can be understood that, because Camera HAL3 drives the TOF camera to collect TOF data through the TOF camera driver of the kernel layer, Camera HAL3 may select the second moment based on the time at which it sends the image output instruction to the TOF camera driver; for example, the moment obtained by adding a certain delay to the moment of sending the image output instruction to the TOF camera driver may be taken as the second moment. As another example, to simplify the flow, Camera HAL3 may directly take the current system time as the second moment.
In this embodiment, the first frame of TOF data refers to, as shown in FIG. 4, FIG. 5, or FIG. 6, the first frame of TOF data that the TOF camera is triggered to collect by one task request issued by an application. The last collection may be the last frame of TOF data that the TOF camera was triggered to collect by the task request previously issued by the application.
S1102: Instruct the TOF camera to collect the first frame of TOF data at the second moment using the exposure parameters used in the last TOF data collection.
As mentioned above, because the AE result may be used to adjust the parameters of the TOF camera, the exposure parameters used in the last TOF data collection are very likely the adjusted exposure parameters. And because the interval between the first moment and the second moment is within the preset range, the device is very likely in the same environment as during the last collection, so the exposure parameters used in the last TOF data collection are very likely also suitable for the current environment. This helps obtain higher-quality TOF data and thus a higher-quality TOF image.
S1103: Instruct the TOF camera to collect the first frame of TOF data using preconfigured exposure parameters.
If the interval between the first moment and the second moment is long, a long time has passed since the last TOF data collection, so the environment of the electronic device has very likely changed. The exposure parameters used in the last TOF data collection are then no longer suitable for the current environment, and there is little point in reusing them, so the preconfigured exposure parameters are used instead.
The data obtaining method of this embodiment makes full use of the exposure parameters adjusted by the AE adjustment mechanism to improve the quality of the first frame of TOF data and further obtain a higher-quality TOF image, which helps improve the accuracy and speed of facial recognition.
It can be understood that, in the above steps, using the exposure parameters of the last TOF data collection is only one implementation. Because the moment of the last TOF data collection is the moment closest to the second moment, the purpose of comparing against the last collection is to save computing resources.
This embodiment, however, is not limited to comparing against the last TOF data collection; any collection before the second moment may be compared against the second moment. The condition satisfied by the first moment can therefore be generalized as: earlier than the second moment, with an interval to the second moment within the preset range.
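The FIG. 11 decision can be sketched as follows. The 2000 ms window is a purely illustrative stand-in for the preset range, which the patent leaves unspecified.

```python
# Hypothetical preset range for reusing AE-adjusted exposure parameters.
MAX_GAP_MS = 2000

def choose_exposure(last_capture_ms, now_ms, last_params, default_params):
    """Reuse the last (AE-adjusted) exposure parameters when the gap between
    the first moment and the second moment is within the preset range;
    otherwise fall back to the preconfigured parameters."""
    if last_capture_ms is not None and 0 <= now_ms - last_capture_ms <= MAX_GAP_MS:
        return last_params
    return default_params
```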
FIG. 12 shows yet another data obtaining method disclosed in an embodiment of this application, executed by Camera HAL3 shown in FIG. 3, including the following steps:
S1201: Obtain the light intensity in response to an image request.
In this step, the ambient light sensor on the electronic device may be used to obtain the light intensity of the environment in which the electronic device is located.
S1202: Instruct the TOF camera to collect the first frame of TOF data using the exposure duration corresponding to the light intensity.
In this embodiment, correspondences between multiple light intensity intervals and exposure durations are preconfigured. The correspondences satisfy the following principles:
1. The light intensity intervals include an indoor light intensity interval and outdoor light intensity intervals.
The preconfigured fixed exposure duration may not be suitable for outdoor environments, so different exposure durations are used in indoor and outdoor environments, and this difference needs to be reflected by the light intensity intervals.
Further, because light intensity varies over a wide range outdoors, it is necessary to further distinguish the exposure parameters used for different light intensity intervals in outdoor environments.
2. The larger the values in a light intensity interval, the shorter the corresponding exposure duration.
Because a longer exposure duration yields a brighter image, the exposure duration must be shortened in outdoor environments to avoid the image being so bright that clarity is reduced.
An example of correspondences configured based on the above principles is:
L <= 500 lux, t = 1 ms (default fixed value);
500 lux < L <= 3000 lux, t = 0.7 ms;
3000 lux < L <= 30000 lux, t = 0.5 ms;
L > 30000 lux, t = 0.3 ms.
In the above example, L denotes the light intensity value and t denotes the exposure duration. L <= 500 lux is the indoor light intensity interval; that is, below 500 lux the electronic device can be considered to be in an indoor environment, in which case the default fixed exposure duration is used.
500 lux < L <= 3000 lux, 3000 lux < L <= 30000 lux, and L > 30000 lux are intervals further divided for outdoor environments. Among these three intervals, the interval with larger values corresponds to a shorter exposure duration.
It can be understood that the granularity of the outdoor intensity intervals in the above example can be adjusted; the finer the granularity, the finer the control over exposure time, which is more conducive to improving image quality and thus to improving image processing speed.
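The example correspondence table above maps directly to code. The interval bounds and durations are the example figures given in this embodiment, not prescribed values.

```python
def exposure_ms(lux: float) -> float:
    """Return the exposure duration for the given ambient light intensity,
    following the example intervals above."""
    if lux <= 500:
        return 1.0   # indoor interval: default fixed value
    if lux <= 3000:
        return 0.7
    if lux <= 30000:
        return 0.5
    return 0.3       # extremely strong outdoor light
```

Finer outdoor intervals would simply add more branches between the 500 lux and 30000 lux bounds.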
It can be understood that Camera HAL3 may transmit the exposure duration corresponding to the light intensity to the TOF sensor controller of the TOF camera through the TOF camera driver.
It can be seen that in this embodiment, the exposure duration for collecting TOF data is obtained based on the light intensity of the environment in which the electronic device is located. This helps collect, in the first frame, TOF data whose brightness meets the requirements of facial recognition, thereby helping improve the accuracy and speed of facial recognition.
It should be noted that the flows shown in FIG. 7 to FIG. 12 are not limited to the software framework shown in FIG. 3; they can also be applied to the software framework shown in FIG. 13.
In FIG. 13, no TEE is configured; that is, both the processing of TOF data and the facial recognition are executed in the REE, specifically, in Face CA. Therefore, the modules may directly transmit TOF data to one another, rather than storage information of the TOF data. That is, the difference between FIG. 13 and FIG. 3 is: after ISP-Lite receives the TOF data collected by the TOF camera, what is transmitted to Face CA via Camera HAL3 and Camera service is the TOF data itself. Face CA processes the TOF data into TOF images and uses the TOF images for facial recognition.
Based on FIG. 13, in the flow shown in FIG. 7, every step is executed by Face CA: after receiving the first frame of TOF data, Face CA processes it into the first frame of TOF image, obtains the first AE result, transmits the first AE result to Camera HAL3, and performs facial recognition using the first frame of TOF image to obtain a first recognition result. If the first recognition result indicates that recognition passes, the first recognition result is transmitted to Face service; if it indicates failure, the seventh frame of TOF data is received next.
Based on FIG. 13, in the flow shown in FIG. 8, every step is likewise executed by Face CA, which is not repeated here.
Based on FIG. 13, in the flow shown in FIG. 10, the data processing module is simply replaced by Face CA, which is not repeated here.
The flows shown in FIG. 11 and FIG. 12 also apply to FIG. 13.
It can be understood that the flows shown in FIG. 7 to FIG. 12 are also not limited to the Android operating system. In other operating systems, a module with the same functions as the data processing module can implement the steps executed by the data processing module above, and a module with the same functions as Camera HAL3 can implement the steps executed by Camera HAL3 above.
An embodiment of this application further discloses a chip system, including at least one processor and an interface. The interface is configured to receive code instructions and transmit them to the at least one processor; the at least one processor runs the code instructions to implement at least one of the facial recognition method, the data obtaining method, and the data processing method described above.
For the specific process by which the processor implements the above functions, reference may be made to the foregoing embodiments, which is not repeated here.
An embodiment of this application further discloses a computer-readable storage medium on which program code is stored. When the program code is executed by a computer device, at least one of the facial recognition method, the data obtaining method, and the data processing method described in the foregoing embodiments is implemented.

Claims (17)

  1. A data obtaining method, comprising:
    obtaining a first frame of time-of-flight (TOF) data, wherein the first frame of TOF data comprises projection-off data and infrared data, and the projection-off data is TOF data collected by a TOF camera with a TOF light source turned off;
    determining that the infrared data contains a data block satisfying a preset condition, wherein the preset condition comprises that the number of data points in the data block whose values are greater than a first threshold is greater than a second threshold; and
    obtaining, based on a difference between the infrared data and the projection-off data, TOF data used to generate a first frame of TOF image.
  2. The method according to claim 1, wherein the obtaining a first frame of TOF data comprises:
    collecting the projection-off data after the TOF camera turns off the TOF light source; and
    collecting the infrared data after the TOF camera turns on the TOF light source.
  3. The method according to claim 2, further comprising, before the collecting the projection-off data:
    collecting, with the TOF light source of the TOF camera turned on, depth data used to generate a depth image.
  4. The method according to claim 3, wherein a timing of controlling the TOF camera to turn off the TOF light source is determined based on a first exposure duration used to collect the depth data.
  5. The method according to claim 3 or 4, wherein a timing of controlling the TOF camera to turn on the TOF light source is determined based on an exposure duration for collecting the projection-off data.
  6. The method according to any one of claims 3 to 5, wherein the exposure duration for collecting the projection-off data is the first exposure duration used to collect the depth data, or a second exposure duration for collecting the infrared data.
  7. The method according to any one of claims 2 to 6, wherein
    the turning off the TOF light source by the TOF camera comprises:
    controlling, by a TOF sensor of the TOF camera, the TOF light source to turn off; and
    the turning on the TOF light source by the TOF camera comprises:
    controlling, by the TOF sensor of the TOF camera, the TOF light source to turn on.
  8. The method according to any one of claims 1 to 6, further comprising:
    determining that the infrared data does not contain the data block, and processing the collected first frame of TOF data into a first frame of TOF image.
  9. The method according to any one of claims 1 to 8, further comprising:
    generating a first frame of TOF image using the TOF data used to generate the first frame of TOF image; and
    performing facial recognition using the first frame of TOF image to obtain a recognition result.
  10. The method according to any one of claims 1 to 9, wherein the first frame of TOF data comprises:
    a facial recognition frame.
  11. The method according to claim 10, further comprising, before the obtaining a first frame of TOF data:
    determining, based on a safety indication frame, that human eyes are safe with the TOF light source turned on, wherein the safety indication frame is TOF data collected before the first frame of TOF data is collected.
  12. The method according to claim 11, further comprising:
    determining, based on the safety indication frame, that human eyes are not safe with the TOF light source turned on, and controlling the TOF camera to turn off.
  13. The method according to any one of claims 1 to 12, wherein the obtaining a first frame of TOF data comprises:
    storing, through a kernel layer, the first frame of TOF data collected by the TOF camera into a trusted execution environment (TEE); and
    the determining that the infrared data contains a data block satisfying a preset condition, and the obtaining, based on the difference between the infrared data and the projection-off data, TOF data used to generate a first frame of TOF image, comprise:
    in the TEE, determining that the infrared data contains the data block satisfying the preset condition, and obtaining, based on the difference between the infrared data and the projection-off data, the TOF data used to generate the first frame of TOF image.
  14. A data obtaining method, comprising:
    obtaining a first frame of time-of-flight (TOF) data, wherein the first frame of TOF data comprises projection-off data and infrared data, the projection-off data is TOF data collected by a TOF camera with a TOF light source turned off, and the first frame of TOF data is the first frame of TOF data that the TOF camera is triggered to collect by a task request issued by an application;
    determining that the infrared data contains a data block satisfying a preset condition, wherein the preset condition comprises that the number of data points in the data block whose values are greater than a first threshold is greater than a second threshold; and
    obtaining, based on a difference between the infrared data and the projection-off data, TOF data used to generate a first frame of TOF image.
  15. An electronic device, comprising:
    a TOF camera, configured to collect a first frame of TOF data, wherein the first frame of TOF data comprises projection-off data and infrared data, and the projection-off data is TOF data collected by the TOF camera with a TOF light source turned off;
    a memory, configured to store program code; and
    a processor, configured to run the program code to implement the data obtaining method according to any one of claims 1 to 14.
  16. A chip system, comprising:
    at least one processor and an interface, wherein the interface is configured to receive code instructions and transmit them to the at least one processor, and the at least one processor runs the code instructions to implement the data obtaining method according to any one of claims 1 to 14.
  17. A readable storage medium on which a program is stored, wherein when the program is read and run by a computing device, the data obtaining method according to any one of claims 1 to 14 is implemented.
PCT/CN2022/092485 2021-08-12 2022-05-12 Data obtaining method and apparatus WO2023016005A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP22789143.9A EP4156674A4 (en) 2021-08-12 2022-05-12 METHOD AND DEVICE FOR DATA COLLECTION
US17/966,142 US20230052356A1 (en) 2021-08-12 2022-10-14 Data obtaining method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110925831.XA 2021-08-12 Data obtaining method and apparatus
CN202110925831.X 2021-08-12

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/966,142 Continuation US20230052356A1 (en) 2021-08-12 2022-10-14 Data obtaining method and apparatus

Publications (1)

Publication Number Publication Date
WO2023016005A1 true WO2023016005A1 (zh) 2023-02-16

Family

ID=78675639

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/092485 WO2023016005A1 (zh) Data obtaining method and apparatus

Country Status (2)

Country Link
CN (2) CN113727033A (zh)
WO (1) WO2023016005A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113727033A (zh) * 2021-08-12 2021-11-30 荣耀终端有限公司 数据获取方法及装置

Citations (8)

Publication number Priority date Publication date Assignee Title
CN107808127A (zh) * 2017-10-11 2018-03-16 广东欧珀移动通信有限公司 Face recognition method and related product
CN109522722A (zh) * 2018-10-17 2019-03-26 联想(北京)有限公司 System security processing method and apparatus
CN111524088A (zh) * 2020-05-06 2020-08-11 北京未动科技有限公司 Method, apparatus, device and computer-readable storage medium for image collection
CN112384822A (zh) * 2018-07-09 2021-02-19 Lg伊诺特有限公司 Method and apparatus for outputting light
CN113219476A (zh) * 2021-07-08 2021-08-06 武汉市聚芯微电子有限责任公司 Ranging method, terminal and storage medium
CN113727033A (zh) * 2021-08-12 2021-11-30 荣耀终端有限公司 Data obtaining method and apparatus
CN113780090A (zh) * 2021-08-12 2021-12-10 荣耀终端有限公司 Data processing method and apparatus
CN113779588A (zh) * 2021-08-12 2021-12-10 荣耀终端有限公司 Facial recognition method and apparatus

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US10275610B2 (en) * 2016-11-28 2019-04-30 Stmicroelectronics, Inc. Time of flight sensing for providing security and power savings in electronic devices
KR102590900B1 (ko) * 2018-08-27 2023-10-19 엘지이노텍 주식회사 Image processing apparatus and image processing method
CN109451228B (zh) * 2018-12-24 2020-11-10 华为技术有限公司 Camera assembly and electronic device
CN109981902B (zh) * 2019-03-26 2022-03-22 Oppo广东移动通信有限公司 Terminal and control method
CN113126067A (zh) * 2019-12-26 2021-07-16 华为技术有限公司 Laser safety circuit and laser safety device


Also Published As

Publication number Publication date
CN113727033A (zh) 2021-11-30
CN117014727A (zh) 2023-11-07

Similar Documents

Publication Publication Date Title
WO2023015996A1 (zh) Facial recognition method and apparatus
WO2023015995A1 (zh) Data processing method and apparatus
US11012626B2 (en) Electronic device for providing quality-customized image based on at least two sets of parameters
US9451173B2 (en) Electronic device and control method of the same
EP3209012A1 (en) Electronic device and operating method thereof
WO2017096857A1 (zh) Camera shooting parameter adjustment method and apparatus
US11281892B2 (en) Technologies for efficient identity recognition based on skin features
KR102263537B1 (ko) 전자 장치와, 그의 제어 방법
KR102317820B1 (ko) 이미지 처리 방법 및 이를 지원하는 전자장치
WO2018054054A1 (zh) Face recognition method, apparatus, mobile terminal, and computer storage medium
WO2021115038A1 (zh) Application data processing method and related apparatus
US20200195905A1 (en) Method and apparatus for obtaining image, storage medium and electronic device
CN110995994A (zh) Image capturing method and related apparatus
US20220262163A1 (en) Method of face anti-spoofing, device, and storage medium
WO2023016005A1 (zh) Data obtaining method and apparatus
TW202139684A (zh) Focus tracking method and related device
US20150243063A1 (en) Method and apparatus for displaying biometric information
US11039080B2 (en) Control method and processing apparatus
US20240095329A1 (en) Cross-Device Authentication Method and Electronic Device
US10769416B2 (en) Image processing method, electronic device and storage medium
CN110958390A (zh) Image processing method and related apparatus
US11483463B2 (en) Adaptive glare removal and/or color correction
EP4156674A1 (en) Data acquisition method and apparatus
CN116055699A (zh) Image processing method and related electronic device
CN116723418B (zh) Photographing method and related apparatus

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2022789143

Country of ref document: EP

Effective date: 20221024

NENP Non-entry into the national phase

Ref country code: DE