US12354403B2 - Data obtaining method and apparatus - Google Patents
Data obtaining method and apparatus
- Publication number
- US12354403B2 (application US17/966,142)
- Authority
- US
- United States
- Prior art keywords
- data
- tof
- frame
- facial recognition
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/73—Circuitry for compensating brightness variation in the scene by influencing the exposure time
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/0007—Image acquisition
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/141—Control of illumination
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
- H04N17/002—Diagnosis, testing or measuring for television systems or their details for television cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/20—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from infrared radiation only
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/70—SSIS architectures; Circuits associated therewith
- H04N25/703—SSIS architectures incorporating pixels for producing signals other than image signals
- H04N25/705—Pixels for depth measurement, e.g. RGBZ
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10141—Special mode during image acquisition
- G06T2207/10144—Varying exposure
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/30—Transforming light or analogous information into electric information
- H04N5/33—Transforming infrared radiation
Definitions
- This application relates to the field of communication technologies, and in particular, to a data obtaining method and an apparatus.
- Facial recognition is widely used in electronic devices for identifying authorized users. For example, for a face unlock function, whether a screen is to be unlocked is determined depending on whether facial recognition is successful.
- This application provides a data obtaining method and an apparatus, with the objective of implementing facial recognition securely, accurately, and quickly.
- a first aspect of this application provides a data obtaining method, including: obtaining the first frame of time of flight (TOF) data, where the first frame of TOF data includes projection off data and infrared data, and the projection off data is TOF data acquired by a TOF camera with a TOF light source being off; determining that a data block satisfying a preset condition is present in the infrared data, where the preset condition includes that the number of data points in the data block with values greater than a first threshold is greater than a second threshold; and obtaining, based on a difference between the infrared data and the projection off data, TOF data used for generating the first frame of TOF image.
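The first-aspect check and correction can be sketched as follows. This is a minimal illustration only: it assumes the frames are NumPy arrays, and the block size and both thresholds are hypothetical placeholders, since the text does not fix their values.

```python
import numpy as np

FIRST_THRESHOLD = 1000   # hypothetical per-point intensity limit
SECOND_THRESHOLD = 50    # hypothetical count of bright points per block
BLOCK = 8                # hypothetical block size in points

def correct_first_frame(infrared: np.ndarray, projection_off: np.ndarray) -> np.ndarray:
    """Return TOF data used for generating the first frame of TOF image.

    If any data block satisfies the preset condition (more than
    SECOND_THRESHOLD points above FIRST_THRESHOLD), the frame is treated
    as overexposed and the projection-off (ambient) data is subtracted.
    """
    h, w = infrared.shape
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            block = infrared[y:y + BLOCK, x:x + BLOCK]
            # preset condition: count of points above the first threshold
            if np.count_nonzero(block > FIRST_THRESHOLD) > SECOND_THRESHOLD:
                # overexposure present: use the difference between the
                # infrared data and the projection off data
                return infrared - projection_off
    # no overexposed block: the frame can be processed directly
    return infrared
```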
- overexposure can be corrected to improve quality of the first frame of TOF data.
- the higher quality first frame of TOF data being used for facial recognition provides not only greater security but also higher accuracy and execution speed.
- the obtaining the first frame of TOF data includes: acquiring the projection off data after the TOF camera turns off the TOF light source; and acquiring the infrared data after the TOF camera turns on the TOF light source.
- the acquisition of the projection off data and the infrared data with the TOF light source of the TOF camera controlled to be off and on respectively is easy to implement.
- before the acquiring of the projection off data, the method further includes: acquiring depth data used for generating a depth image while the TOF light source of the TOF camera is on. Depth images and the infrared images generated from infrared data can be used for anti-counterfeiting recognition in facial recognition, and can therefore improve the security of facial recognition.
- an occasion for controlling the TOF camera to turn off the TOF light source is determined based on a first exposure time for acquiring the depth data, which can not only ensure that the depth data is acquired, but also minimize a latency of acquiring the first frame of TOF data.
- an occasion for controlling the TOF camera to turn on the TOF light source is determined based on an exposure time for acquiring the projection off data, which can not only ensure that the projection off data is acquired, but also minimize the latency of acquiring the first frame of TOF data.
- the exposure time for acquiring the projection off data is the first exposure time for acquiring the depth data or a second exposure time for acquiring the infrared data.
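The light-source scheduling described in the three bullets above can be reduced to simple arithmetic. The sketch below is an assumption-laden illustration: it expresses the toggle moments as millisecond offsets from the start of the depth exposure and ignores sensor readout and command latencies.

```python
def light_toggle_schedule(depth_exposure_ms: float,
                          projection_off_exposure_ms: float) -> tuple:
    """Moments, in ms from the start of the depth exposure, at which to
    toggle the TOF light source.

    Per the scheme above: turn the light off once the depth exposure
    (the first exposure time) completes, and turn it back on once the
    projection off exposure completes, so the first-frame latency is
    kept as small as possible while both acquisitions are guaranteed.
    """
    turn_off_at = depth_exposure_ms                        # depth data secured
    turn_on_at = turn_off_at + projection_off_exposure_ms  # off data secured
    return turn_off_at, turn_on_at
```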
- the TOF camera turning off the TOF light source includes controlling the TOF light source to be turned off via a TOF sensor of the TOF camera; and the TOF camera turning on the TOF light source includes controlling the TOF light source to be turned on via the TOF sensor of the TOF camera.
- the method further includes: determining that no such data block is present in the infrared data and processing the first frame of TOF data acquired into the first frame of TOF image. That no data block satisfying the preset condition is present in the infrared data indicates that there is no overexposed infrared data and the first frame of TOF data can be directly processed into a TOF image.
- the method further includes: generating the first frame of TOF image by using the TOF data for generating the first frame of TOF image; and performing facial recognition by using the first frame of TOF image so as to obtain a recognition result.
- the first frame of TOF image being used for obtaining a facial recognition result can improve the execution speed of facial recognition.
- the first frame of TOF data includes a facial recognition frame.
- before the obtaining of the first frame of TOF data, the method further includes: determining, according to a safety indication frame acquired before the first frame of TOF data is acquired, that human eyes are safe when the TOF light source is on.
- a facial recognition frame is processed so as to obtain TOF data used for generating the first frame of TOF image, and before the processing, a safety indication frame is used to indicate that human eyes are safe.
- the method further includes: determining, according to the safety indication frame, that human eyes are not safe when the TOF light source is on; and controlling the TOF camera to shut down, which can avoid harm to human eyes caused by infrared light of the TOF camera.
- a specific implementation of the obtaining the first frame of TOF data is: storing the first frame of TOF data acquired by the TOF camera into a trusted execution environment (TEE) via a kernel layer.
- a specific implementation of the determining that a data block satisfying a preset condition is present in the infrared data and obtaining, based on a difference between the infrared data and the projection off data, TOF data used for generating the first frame of TOF image is: in the TEE, determining that a data block satisfying the preset condition is present in the infrared data and obtaining, based on the difference between the infrared data and the projection off data, the TOF data used for generating the first frame of TOF image.
- TOF data is processed in a TEE so as to improve security of the TOF data.
- a second aspect of this application provides a data obtaining method, including: acquiring TOF data at a first moment by using an exposure parameter; and acquiring the first frame of TOF data at a second moment by using the exposure parameter, where an interval between the second moment and the first moment is within a preset range and the first moment is earlier than the second moment.
- the preset range may be set so that the interval between the second moment and the first moment is not very long, so that the external environment in which the first frame of TOF data is acquired is similar to the external environment in which the TOF data was acquired at the first moment.
- the first frame of TOF data is acquired at the second moment by using the exposure parameter for the first moment, so as to improve the probability of fit between the exposure parameter and the environment, thereby obtaining a higher quality first frame of TOF data. This helps improve the accuracy and speed of facial recognition when TOF data is used for facial recognition for greater security.
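The second-aspect reuse rule can be sketched as a small cache. The class name, the millisecond bookkeeping, and the 2000 ms preset range are all illustrative assumptions; the text only requires that the interval between the two moments fall within a preset range.

```python
PRESET_RANGE_MS = 2000  # hypothetical upper bound on the reuse interval

class ExposureCache:
    """Reuse the exposure parameter from a first moment for the first
    frame of TOF data acquired at a later second moment, provided the
    interval between the two moments falls within the preset range."""

    def __init__(self):
        self._param = None
        self._moment_ms = None

    def record(self, exposure_param, moment_ms):
        # remember the parameter used to acquire TOF data at the first moment
        self._param = exposure_param
        self._moment_ms = moment_ms

    def parameter_for(self, second_moment_ms, default_param):
        # a short interval means the environment is likely similar, so the
        # cached parameter probably fits; otherwise fall back to a default
        if (self._param is not None
                and 0 <= second_moment_ms - self._moment_ms <= PRESET_RANGE_MS):
            return self._param
        return default_param
```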
- FIG. 12 is a flowchart of still another data obtaining method disclosed in an embodiment of this application.
- the TOF camera 1 includes a TOF sensor 11 , a TOF sensor controller 12 , a TOF light source 13 , and a TOF light source controller 14 .
- the electronic device may further include an ambient light sensor (not shown in FIG. 2 ) configured to sense light intensity of an environment in which the electronic device is located. It can be understood that the ambient light sensor communicates with the processor 2 via an ambient light sensor controller (not shown in FIG. 2 ) disposed in the I/O subsystem 4 .
- the structure illustrated in this embodiment does not constitute any specific limitation on the electronic device.
- the electronic device may include more or fewer components than shown in the figure, or combine some components, or split some components, or have different component arrangements.
- the components shown in the figure may be implemented by using hardware, software, or a combination of software and hardware.
- the application may initiate a task in response to at least one command, at least one operation of a user, at least one sensor signal, or the like and send a task request to the face service.
- a lock screen application is used as an example. A user presses a power button to trigger an unlock task, and the lock screen application sends an unlock request to the face service.
- the face service transmits the task request to the face client application (Face CA) at the hardware abstract layer.
- the Face CA transmits an image request to the camera service at the application framework layer.
- the camera service transmits the image request to the camera hardware abstract layer (Camera HAL3) at the hardware abstract layer.
- the Camera HAL3 transmits an image output command to the TOF camera driver at the kernel layer to drive the TOF camera to acquire TOF data.
- the ISP-Lite at the kernel layer stores the received and processed TOF data into a first secure buffer in the TEE, with first storage information recording where it is stored.
- the first storage information indicates a storage address.
- the ISP-Lite storing the received and processed TOF data into the TEE means that the TOF data is stored directly into a secure zone from hardware (the ISP of the processor), reducing the possibility of attack.
- the ISP-Lite at the kernel layer transmits the first storage information to the Camera HAL3 at the hardware abstract layer.
- the first storage information may be encrypted information, that is, ciphertext.
- an example of the encrypted information is a file descriptor (FD), used to describe a storage location and a read manner.
- ciphertext of the first storage information is referred to as FD1 for short.
- the Camera HAL3 transmits FD1 and calibration data to the camera service at the application framework layer.
- the Camera HAL3 obtains calibration data pre-configured in the REE and transmits it along with FD1 to the TEE.
- the Camera HAL3 may obtain one part of calibration data from a storage unit in the REE and obtain the other part of calibration data from the TOF camera.
- the camera service transmits FD1 and the calibration data to the Face CA at the hardware abstract layer.
- the Face CA transmits FD1 and the calibration data to the face trusted application (Face TA) in the TEE.
- the Face TA in the TEE transmits FD1 to the data processing module in the TEE and stores the calibration data into a calibration data storage unit in the TEE.
- the data processing module reads the TOF data from the first secure buffer according to FD1 and reads the calibration data from the calibration data storage unit.
- the data processing module stores the TOF image into a second secure buffer in the TEE, with second storage information recording where it is stored.
- the second storage information can be unencrypted.
- the face service transmits the recognition result to the application that has initiated the task request at the application layer.
- the calibration data is camera domain data rather than face domain data
- the calibration data may alternatively be transferred from the camera service to the Face CA via the face service.
- the calibration data in the calibration data storage unit in the TEE will not be lost, and therefore there is no need to reload the calibration data.
- the calibration data can be reloaded, which is not limited herein.
- the calibration data and FD1 being transmitted together is only one implementation, and they may alternatively be transmitted separately along the transmission path described above.
- the TOF camera acquires a set of TOF data for one exposure; after acquiring multiple sets of TOF data over continual exposures, it stops for a first duration, performs continual exposures again, stops for the first duration again, and so on.
- the multiple sets of TOF data acquired continually are referred to as one frame of TOF data, and the next multiple sets of TOF data acquired after an interval of the first duration are another frame of TOF data.
- the ISP-Lite stores frames of TOF data in turn to the first secure buffer also at intervals of the first duration and transmits storage information of the frames of TOF data in turn to the Face CA also at intervals of the first duration.
- the Face CA transmits the storage information of the frames of TOF data in turn to the Face TA also at intervals of the first duration. Therefore, the Face TA receives the frames of TOF data in turn also at intervals of the first duration.
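The cadence above (continual exposures, then a stop of the first duration) can be illustrated by grouping exposure-set timestamps into frames. The 30 ms first duration and the timestamp representation are assumptions made only for this sketch.

```python
FIRST_DURATION_MS = 30  # hypothetical inter-frame stop duration

def group_sets_into_frames(set_timestamps_ms, gap_ms=FIRST_DURATION_MS):
    """Group exposure-set timestamps into frames of TOF data: a set that
    arrives at least the first duration after the previous one starts a
    new frame, per the cadence described above."""
    frames, current = [], []
    last = None
    for t in set_timestamps_ms:
        if last is not None and t - last >= gap_ms:
            frames.append(current)   # gap of the first duration: close frame
            current = []
        current.append(t)
        last = t
    if current:
        frames.append(current)
    return frames
```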
- S 406 may be storing any frame of TOF data;
- S 407 to S 411 may be transmitting FD of any frame of TOF data;
- S 413 to S 419 is a process of processing in relation to this frame of TOF data;
- S 420 to S 422 may be a process of processing in relation to a recognition result of this frame of TOF data or a process of processing in relation to recognition results of multiple frames of TOF data.
- because the TOF camera acquires TOF data by projecting infrared light, it is necessary to pay attention to the safety of human eyes during the acquisition process. Also, because the quality of a TOF image influences the accuracy of a recognition result, and the exposure parameter of the TOF camera has an immediate effect on TOF image quality, it is necessary to adjust and optimize the exposure parameter of the TOF camera.
- the process shown in FIG. 4 A and FIG. 4 B is further improved so that the data processing module can generate different parameters for adjusting the TOF camera based on the type of TOF data received.
- FIG. 5 A and FIG. 5 B show a process of an automatic exposure (AE) result being generated and fed back and a camera being controlled based on the AE result, for which only differences of FIG. 5 A and FIG. 5 B from FIG. 4 A and FIG. 4 B are described.
- in response to the TOF data being a facial recognition frame, the data processing module in the TEE generates a depth image, an infrared image, and an AE result.
- the TOF camera divides TOF data acquired into safety indication frames and facial recognition frames.
- the safety indication frame carries a human eyes safety flag bit, which is used to indicate whether infrared light emitted by the TOF camera is safe for human eyes.
- the facial recognition frame is a TOF data frame used for facial recognition.
- the type of a TOF data frame is indicated by at least one of a value, a character, or a character string in that TOF data frame.
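A minimal sketch of such type indication, assuming the tag is a single leading byte with hypothetical values (the text allows a value, a character, or a character string anywhere in the frame):

```python
# Hypothetical tag values; the text does not specify the encoding.
SAFETY_INDICATION_TAG = 0x01
FACIAL_RECOGNITION_TAG = 0x02

def classify_tof_frame(frame: bytes) -> str:
    """Read the frame-type tag (assumed here to be the first byte) and
    classify the frame as a safety indication or facial recognition frame."""
    tag = frame[0]
    if tag == SAFETY_INDICATION_TAG:
        return "safety_indication"
    if tag == FACIAL_RECOGNITION_TAG:
        return "facial_recognition"
    raise ValueError(f"unknown TOF frame tag: {tag:#04x}")
```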
- the Camera HAL3 adjusts the TOF camera based on the AE result via the TOF camera driver.
- FIG. 6 A and FIG. 6 B show a process of a safety mark being generated and fed back and a camera being controlled according to the safety mark, for which only differences of FIG. 6 A and FIG. 6 B from FIG. 4 A and FIG. 4 B are described.
- in response to the TOF data being a safety indication frame, the data processing module in the TEE generates a safety mark.
- the safety mark is used to indicate whether the TOF light source is safe for human eyes.
- a generation method of the safety mark is: after a TOF data frame carrying a human eyes safety flag bit (that is, a safety indication frame) is received, extracting the human eyes safety flag bit, and according to the human eyes safety flag bit, determining whether human eyes are safe, so as to generate the safety mark. If the human eyes safety flag bit indicates safety, a safety mark indicating safety is obtained. If the human eyes safety flag bit indicates non-safety, a safety mark indicating non-safety is obtained.
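The flag-bit extraction above might look like the following sketch; the byte offset and bit position are hypothetical, since the text does not specify the frame layout.

```python
EYE_SAFETY_FLAG = 0x80  # hypothetical bit position of the human eyes safety flag
FLAG_BYTE_OFFSET = 1    # hypothetical location of the flag byte in the frame

def safety_mark(safety_indication_frame: bytes) -> bool:
    """Extract the human eyes safety flag bit from a safety indication
    frame and turn it into a safety mark.

    True: the flag indicates safety. False: the flag indicates non-safety.
    """
    flag_byte = safety_indication_frame[FLAG_BYTE_OFFSET]
    return bool(flag_byte & EYE_SAFETY_FLAG)
```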
- the data processing module in the TEE transmits the second storage information and the safety mark to the Face TA in the TEE.
- the Face TA in the TEE transmits the recognition result and the safety mark to the Face CA at the hardware abstract layer in the REE.
- the Face CA transmits the safety mark to the camera service at the application framework layer in the REE.
- Adjustment of the TOF camera includes but is not limited to that: if the safety mark indicates that human eyes are unsafe, the TOF camera is turned off, or emission intensity of the TOF light source is reduced; or if the safety mark indicates that human eyes are safe, TOF data is acquired and identified as a TOF data frame for facial recognition (that is, a facial recognition frame).
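The adjustment logic can be sketched as below. The `camera` methods are illustrative stand-ins, not an API from the text, and only the shutdown branch of the unsafe case is shown (reducing the light-source emission intensity is the alternative the text mentions).

```python
def adjust_camera_for_safety(camera, mark_safe: bool):
    """Apply the safety mark: shut the camera down when unsafe; otherwise
    acquire TOF data and identify it as a facial recognition frame.

    `camera` is any object exposing shut_down() and acquire(); these
    method names are assumptions for this sketch.
    """
    if not mark_safe:
        camera.shut_down()  # the text also allows reducing emission intensity
        return None
    frame = camera.acquire()
    # identify the acquired data as a TOF data frame for facial recognition
    return ("facial_recognition", frame)
```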
- the electronic device processes the TOF raw data into a TOF image in the TEE and uses the TOF image for facial recognition, so as to improve security of facial recognition.
- transmission of the data storage information of the TOF data to the TEE is implemented under cooperation between the layers of the Android operating system. This not only lays a foundation for processing in the TEE but also implements compatibility with the Android operating system.
- safety guarantee of human eyes and adjustment of the exposure parameter can also be implemented, allowing the electronic device to have better performance on the basis of improved security of facial recognition.
- FIG. 4 A and FIG. 4 B to FIG. 6 A and FIG. 6 B use an Android operating system as an example
- the facial recognition method described in the embodiments of this application is not limited to the Android operating system.
- operations of the TEE are not limited to the Android operating system, and therefore functions of the applications, modules, and units in the TEE can also be implemented on other operating systems.
- Operations of the REE are not limited to the Android operating system either, and therefore transmission paths of TOF data and calibration data to the TEE, transmission paths of task requests, feedback paths of recognition results, and the like can all be adapted to different operating systems, provided that TOF data and calibration data can be transmitted from the REE to the TEE, that a task request triggers acquisition of TOF data, and that a recognition result is used for task execution.
- the inventor has found that, in addition to security, accuracy and execution speed of facial recognition also have room for improvement. That is, facial recognition still has the following problems.
- the Face TA transmits the storage information of the first frame of TOF image to the facial recognition module to trigger the facial recognition module to perform facial recognition.
- the Face TA transmits the first recognition result to the Face CA.
- the Face CA transmits the first recognition result to the initiator of the task, for example, the lock screen application.
- the Face TA determines whether a stop condition is met, and if yes, performs S 712 , or if no, performs S 714 .
- a purpose of setting the stop condition is to avoid unnecessary time consumption during task execution. If facial recognition is still not successful after a given period of time, it can be basically determined that the facial recognition result is failure. For example, a face attempting to unlock is not stored in the electronic device. Therefore, there is no need to continue with the facial recognition. Instead, the facial recognition result should be fed back to the task as soon as possible to reduce a latency of task execution and ensure better user experience.
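The stop condition can be sketched as a bounded retry loop. The frame budget is a hypothetical stand-in for whatever stop condition an implementation uses (elapsed time, frame count, or both).

```python
MAX_FRAMES = 10  # hypothetical stop condition: give up after this many frames

def recognize_until_stop(next_image, recognize, max_frames=MAX_FRAMES):
    """Retry facial recognition on successive TOF images until it succeeds
    or the stop condition is met, then feed the result back immediately
    so the task is not kept waiting."""
    for _ in range(max_frames):
        if recognize(next_image()):
            return True   # recognition succeeded; the task can proceed
    return False          # stop condition met: report failure without delay
```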
- the Face TA receives a new frame of TOF data from the TOF camera.
- the example is still used where an acquisition interval of the TOF camera is 30 ms and a time for the processor to generate the first frame of TOF image and the first AE result is also 30 ms.
- the new frame of TOF data received from the TOF camera is the seventh frame of TOF data acquired by the TOF camera.
- the seventh frame of TOF data is a TOF data frame acquired after the exposure parameter is adjusted by using the first AE result.
- the first frame of TOF data as the object of processing in S 701 to S 714 is replaced with the new frame of TOF data received, for example, the seventh frame of TOF data, and the processing result of the first frame of TOF data is adaptively replaced with a processing result of the new frame of TOF data received (for example, the first frame of TOF image is replaced with the seventh frame of TOF image), without further description herein.
- An image data processing process in the prior art includes a step that image data is processed iteratively to obtain convergent image data, with a purpose of obtaining an image whose quality meets a required recognition accuracy.
- RGB data is used as an example. Because RGB data is easily affected by ambient light, quality of the first frame of RGB data acquired generally cannot meet the required recognition accuracy. Therefore, RGB data needs to be iteratively processed to obtain convergent RGB data, and the convergent RGB data is then used for facial recognition so as to ensure accuracy of a recognition result. Generally, depending on the iterative algorithm, RGB data converges in about 10 frames at the fastest, or requires 30 to 40 frames at the slowest. Therefore, the consensus of persons skilled in the art is that the first frame of RGB data is highly likely to have poor quality due to the influence of ambient light, making it meaningless to use directly for facial recognition.
- only one processor (for example, the secure zone processor 21 shown in FIG. 2) can be used for facial recognition.
- a face attempting to unlock is a face that is already recorded in the electronic device.
- the TOF camera acquires TOF data and transmits the TOF data to the processor, at intervals of 30 ms.
- after receiving the first frame of TOF data, the processor processes the first frame of TOF data into the first frame of depth image and the first frame of infrared image according to the process described in this embodiment and obtains a first recognition result by using the first frame of depth image and the first frame of infrared image.
- while generating the first frame of TOF image and computing the recognition result, the processor is occupied; therefore, although the TOF camera is still transmitting TOF data frames to the processor, the processor cannot receive the subsequent frames. In short, all TOF data frames except the first frame of TOF data received are discarded.
- this embodiment does not take into consideration a TOF data iteration process and therefore fits right into the foregoing scenario with one processor.
- in the scenario where only one processor is used for TOF data processing and facial recognition, if both recognition processing and iteration of TOF data for convergence need to be performed, the processor must handle them serially, which consumes a long time.
- time consumption can be reduced.
- because TOF data converges in the first frame in most cases, a more accurate recognition result can be ensured while the time consumed by facial recognition is reduced.
- the inventor has further found in research that, in an outdoor scenario, because natural light contains light close in wavelength to the infrared light emitted by a TOF light source, the first frame of TOF data acquired is likely to be overexposed. In this case, the quality of the first frame of TOF data is not good enough for obtaining an accurate recognition result. As in the foregoing example, if the first frame is unable to unlock the screen, the required unlock latency is at least 360 ms, which leaves room for further reduction.
- another data processing method disclosed in an embodiment of this application is shown in FIG. 8.
- S 801 to S 806 are the same as S 701 to S 706. For details, refer to FIG. 8; they are not described herein.
- S 807 to S 812 are a processing process of the second frame of TOF data after its storage information is received, which is the same as the processing process of the first frame of TOF data. For details, refer to FIG. 8; they are not described herein.
- the second frame of TOF data may be a TOF data frame acquired by the TOF camera by using an exposure parameter adjusted based on the first AE result, or the second frame of TOF data may have the same exposure parameter as the first frame of TOF data.
- Which of the foregoing cases specifically prevails depends on an interval at which the TOF camera acquires TOF data frames (for example, the foregoing interval of a first duration) and a time from reception of a TOF data frame to feedback of an AE result by the processor.
- the TOF camera acquires and transmits data at intervals of 30 ms, and the processor processes TOF data into TOF images and obtains AE results also at intervals of 30 ms. Therefore, the first AE result cannot be applied to the acquisition of the second frame of TOF data.
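The timing claim can be checked with back-of-envelope arithmetic, using the 30 ms figures from the example. The model below assumes frame n is acquired over [(n-1)*interval, n*interval) and ignores the feedback path latency; under it, the first AE result is ready at 60 ms, too late for the second frame but in time for the third.

```python
ACQUIRE_INTERVAL_MS = 30  # from the example in the text
AE_LATENCY_MS = 30        # image generation plus AE computation, also 30 ms

def first_frame_using_ae(source_frame: int,
                         interval_ms: int = ACQUIRE_INTERVAL_MS,
                         latency_ms: int = AE_LATENCY_MS) -> int:
    """1-based index of the first frame whose acquisition can start with
    the exposure parameter adjusted by the AE result of `source_frame`.

    The AE result of frame k is ready at k*interval + latency; the first
    frame whose acquisition starts at or after that moment can use it.
    """
    ready_ms = source_frame * interval_ms + latency_ms
    return ready_ms // interval_ms + 1
```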
- S 813 to S 818 are a processing process of the third frame of TOF data after storage information of the third frame of TOF data is received, which is the same as the processing process of the first frame of TOF data.
- for details, refer to FIG. 8; they are not described herein.
- this embodiment differs from the foregoing embodiment in that the first three frames of TOF data are received and processed.
- the Face TA invokes the facial recognition module to perform facial recognition.
- an optional implementation of S 819 is shown in FIG. 9.
- the Face TA transmits the storage information of the third frame of TOF image to the facial recognition module.
- the facial recognition module reads the third frame of TOF image by using the storage information of the third frame of TOF image, and then performs facial recognition by using the third frame of TOF image so as to obtain a recognition result.
- the facial recognition module reads the third frame of TOF image from the second secure buffer, which is not detailed in the description of FIG. 9 .
- the recognition result of the third frame of image is referred to as a third recognition result.
- the facial recognition module transmits the third recognition result to the Face TA.
- the Face TA determines whether the third recognition result indicates successful recognition, and if yes, performs S 905 , or if no, performs S 906 .
- the Face TA transmits the third recognition result to the Face CA.
- the third frame of TOF data is most likely to be data acquired after the exposure parameter is adjusted based on the AE result, so its quality is most likely to be optimal. Therefore, using the third frame of TOF image first for facial recognition can further reduce time consumption.
- the Face TA transmits the storage information of the first frame of TOF image and the storage information of the second frame of TOF image to the facial recognition module.
- the facial recognition module obtains the first recognition result and the second recognition result.
- the first recognition result is a result of facial recognition on the first frame of TOF image.
- the second recognition result is a result of facial recognition on the second frame of TOF image.
- the order in which the first frame of TOF image and the second frame of TOF image are used for facial recognition is not limited.
- the facial recognition module transmits the first recognition result and the second recognition result to the Face TA.
- the Face TA transmits a recognition result indicating successful recognition to the Face CA.
- the first frame of TOF image, the second frame of TOF image, and the third frame of TOF image are used for facial recognition in turn; or only the third frame of TOF image is used for facial recognition, and if a recognition result indicates failure, the process shown in FIG. 8 is executed again, so as to save resources such as memory.
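- The frame-order strategy described above (S 901 to S 908) can be sketched as follows; `recognize` and the list of images are hypothetical stand-ins for the facial recognition module and the three TOF images, and only the "third frame first, then fall back" order comes from the text:

```python
def recognize_with_fallback(recognize, frames):
    """Try the third TOF image first (most likely acquired with the
    AE-adjusted exposure parameter), then fall back to the first and
    second images; return True as soon as any recognition succeeds."""
    first, second, third = frames
    for image in (third, first, second):
        if recognize(image):
            return True
    return False
```

- A recognition result indicating success on any of the three images is transmitted onward; only if all three fail is a failure reported.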
- Other implementations are not listed herein one by one.
- a face attempting to unlock is a face that is already recorded in the electronic device.
- the TOF camera acquires TOF data at intervals of 30 ms and transmits the TOF data to the processor.
- after receiving the first frame of TOF data, the processor processes the received TOF data and generates the first TOF image and a first AE result.
- the processor continues to receive and process another TOF data frame rather than performing facial recognition. Because a time for generation of a TOF image and an AE result is 30 ms, which is similar to a time of acquisition and transmission of TOF data by the TOF camera, the second frame of TOF data can continue to be received and processed after the first frame of TOF data is processed, and the third frame of TOF data can continue to be received and processed after the second frame of TOF data is processed.
- the shortest time consumed for obtaining a recognition result by using the third frame of TOF data is: 30 ms × 3 + 150 ms.
- unlocking using the seventh frame is changed to unlocking using the third frame, thereby increasing the speed and reducing the latency.
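- The latency arithmetic above can be made explicit. The 30 ms frame period comes from the text, and the 150 ms recognition time is inferred from the stated total; packaging them as a function is only illustrative:

```python
FRAME_INTERVAL_MS = 30    # per-frame acquisition and transmission (from the text)
RECOGNITION_MS = 150      # one facial recognition pass (inferred from the text's total)

def shortest_unlock_latency_ms(first_frame_used: int) -> int:
    """Shortest time to a recognition result when recognition starts only
    after `first_frame_used` frames have been acquired and processed."""
    return first_frame_used * FRAME_INTERVAL_MS + RECOGNITION_MS
```

- Under this model, unlocking with the third frame takes 30 × 3 + 150 = 240 ms, whereas waiting for a seventh frame would take 360 ms, which is the speed gain the text describes.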
- the data processing method described in this embodiment increases the unlocking speed in an outdoor scenario with strong light by compromising the unlocking speed achieved using the first frame of TOF data.
- the second frame of TOF data and the AE result of the second frame of TOF data may not be processed, that is, S 807 to S 812 may not be performed so as to save resources.
- the recognition result is obtained without participation of the second frame of TOF image.
- the Face CA obtains an intensity of ambient light.
- the Face CA may further send a light intensity request to obtain the intensity of ambient light.
- the light intensity request may be sent to a corresponding driver at the kernel layer via the application framework layer and a corresponding module at the hardware abstraction layer.
- the corresponding driver at the kernel layer drives a light sensor to sense the intensity of ambient light and feed it back to the Face CA.
- the Face CA transmits the intensity of ambient light to the Face TA.
- the light intensity is used to determine the environment in which the electronic device is located, and a process better suited to that environment is used to obtain a facial recognition result. Therefore, the accuracy and speed of obtaining the facial recognition result can be improved to the greatest extent while security is improved.
- the inventor has further found in research that, whether in an outdoor or indoor scenario, when ambient light is extremely strong and the sensor of the TOF camera faces a strong light source, the quality of the first frame of TOF data cannot support obtaining an accurate facial recognition result. Therefore, the TOF data needs to converge, leading to increased time for facial recognition.
- the embodiments of this application provide a data obtaining method in an attempt to address the foregoing problem. The purpose is to obtain a higher-quality first frame of TOF data, so as to improve the accuracy of a recognition result obtained using the first frame of TOF data, on the premise that TOF data is used for facial recognition to ensure higher security, which in turn realizes fast facial recognition.
- FIG. 10 shows a data obtaining method disclosed in an embodiment of this application, which is performed by the foregoing electronic device.
- a process shown in FIG. 10 starts from the Camera HAL3 driving the TOF camera via the TOF camera driver to acquire data and includes the following steps.
- an occasion for the TOF sensor to transmit the off command to the TOF light source controller is related to the first exposure time
- an occasion for the TOF sensor to transmit the on command to the TOF light source controller is related to the exposure time of the projection off data
- the order in which the TOF camera acquires the depth data, projection off data, and infrared data is not limited.
- the TOF camera may acquire them in the order of projection off data, infrared data, and depth data, or acquire them in the order of infrared data, depth data, and projection off data.
- the occasions for the TOF sensor to transmit the off command and the on command are adjusted according to this order.
- the TOF sensor is used to control the TOF light source to be on or off, which offers a higher execution speed.
- the infrared data acquired by the TOF sensor is a two-dimensional array, which is processed by the ISP-Lite into infrared raw data, that is, an infrared raw image. Therefore, it can be understood that the target data block is a target region in the infrared raw image, and the values in the target data block are the brightness values of corresponding pixels in the target region. Therefore, for the infrared raw image, the target region is a region in which the number of pixels with brightness values greater than the first threshold is greater than the second threshold.
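- A minimal sketch of that target-region test, assuming the infrared raw image is available as a NumPy array and both thresholds are configurable parameters (their values are not specified in the text):

```python
import numpy as np

def is_target_region(region: np.ndarray, first_threshold: int, second_threshold: int) -> bool:
    """A region of the infrared raw image qualifies as a target region when
    the number of pixels whose brightness value is greater than the first
    threshold is greater than the second threshold."""
    bright_pixels = int((region > first_threshold).sum())
    return bright_pixels > second_threshold
```

- For example, a 2 × 2 region with three pixels brighter than the first threshold is a target region when the second threshold is 2, but not when it is 3.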
- the ambient light sensor of the electronic device may be used to obtain light intensity of an environment in which the electronic device is located.
- a correspondence between a plurality of light intensity ranges and exposure times is pre-configured, where the correspondence meets the following principles.
- the light intensity range includes an indoor light intensity range and an outdoor light intensity range.
- a larger value in the light intensity ranges corresponds to a shorter exposure time.
- the exposure time for acquiring the first frame of TOF data is obtained based on the light intensity of the environment in which the electronic device is located, which helps acquire, in the first frame, TOF data whose brightness meets the requirement of facial recognition, thereby helping improve the accuracy and speed of facial recognition.
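- The pre-configured correspondence might look like the following. All lux ranges and exposure values here are invented for illustration; only the principle that a larger light intensity maps to a shorter exposure time comes from the text:

```python
# (lower lux bound inclusive, upper lux bound exclusive, exposure time in microseconds)
EXPOSURE_TABLE = [
    (0, 500, 1000),               # indoor, dim
    (500, 3_000, 700),            # indoor, bright
    (3_000, 30_000, 400),         # outdoor
    (30_000, float("inf"), 250),  # outdoor, strong light
]

def first_frame_exposure_us(light_intensity_lux: float) -> int:
    """Look up the exposure time for acquiring the first frame of TOF data
    from the sensed ambient light intensity."""
    for low, high, exposure in EXPOSURE_TABLE:
        if low <= light_intensity_lux < high:
            return exposure
    raise ValueError("light intensity out of configured range")
```

- Any table honoring the two principles in the text (indoor/outdoor ranges, and larger intensity yielding shorter exposure) would serve; the concrete values would be tuned per TOF module.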
- FIG. 13 differs from FIG. 3 in that after the ISP-Lite receives TOF data acquired by the TOF camera, what is transmitted to the Face CA via the Camera HAL3 and the camera service is the TOF data.
- the Face CA processes the TOF data into a TOF image and uses the TOF image for facial recognition.
- the Face CA processes the first frame of TOF data into the first frame of TOF image and obtains the first AE result, transmits the first AE result to the Camera HAL3, and obtains the first recognition result by using the first frame of TOF image for facial recognition. If the first recognition result indicates that the recognition is successful, the Face CA transmits the first recognition result to the face service; or if the first recognition result indicates that the recognition is unsuccessful, the Face CA continues to receive the seventh frame of TOF data.
- the data processing module is replaced with the Face CA, for which further description is omitted.
- An embodiment of this application further discloses a system on chip, including: at least one processor and an interface, where the interface is configured to receive code instructions and transmit the code instructions to the at least one processor; and the at least one processor runs the code instructions to implement at least one of the foregoing facial recognition method, data obtaining method, and data processing method.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Databases & Information Systems (AREA)
- Quality & Reliability (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Biomedical Technology (AREA)
- Studio Devices (AREA)
- Collating Specific Patterns (AREA)
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110925831.X | 2021-08-12 | ||
| CN202110925831.XA CN113727033A (zh) | 2021-08-12 | 2021-08-12 | 数据获取方法及装置 |
| PCT/CN2022/092485 WO2023016005A1 (zh) | 2021-08-12 | 2022-05-12 | 数据获取方法及装置 |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2022/092485 Continuation WO2023016005A1 (zh) | 2021-08-12 | 2022-05-12 | 数据获取方法及装置 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20230052356A1 US20230052356A1 (en) | 2023-02-16 |
| US12354403B2 true US12354403B2 (en) | 2025-07-08 |
Family
ID=85176920
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/966,142 Active 2043-06-15 US12354403B2 (en) | 2021-08-12 | 2022-10-14 | Data obtaining method and apparatus |
Country Status (2)
| Country | Link |
|---|---|
| US (1) | US12354403B2 (de) |
| EP (1) | EP4156674B1 (de) |
Citations (18)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107808127A (zh) | 2017-10-11 | 2018-03-16 | 广东欧珀移动通信有限公司 | 人脸识别方法及相关产品 |
| US20180149748A1 (en) * | 2016-11-28 | 2018-05-31 | Stmicroelectronics, Inc. | Time of flight sensing for providing security and power savings in electronic devices |
| US20180211398A1 (en) * | 2017-01-25 | 2018-07-26 | Google Inc. | System for 3d image filtering |
| CN108965721A (zh) * | 2018-08-22 | 2018-12-07 | Oppo广东移动通信有限公司 | 摄像头模组的控制方法和装置、电子设备 |
| CN109522722A (zh) | 2018-10-17 | 2019-03-26 | 联想(北京)有限公司 | 系统安全处理方法和装置 |
| CN109981902A (zh) | 2019-03-26 | 2019-07-05 | Oppo广东移动通信有限公司 | 终端及控制方法 |
| WO2020045946A1 (ko) | 2018-08-27 | 2020-03-05 | 엘지이노텍 주식회사 | 영상 처리 장치 및 영상 처리 방법 |
| US20200125832A1 (en) | 2018-05-29 | 2020-04-23 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Verification System, Electronic Device, and Verification Method |
| WO2020134879A1 (zh) | 2018-12-24 | 2020-07-02 | 华为技术有限公司 | 摄像组件及电子设备 |
| CN111524088A (zh) | 2020-05-06 | 2020-08-11 | 北京未动科技有限公司 | 用于图像采集的方法、装置、设备及计算机可读存储介质 |
| CN112384822A (zh) | 2018-07-09 | 2021-02-19 | Lg伊诺特有限公司 | 输出光的方法和装置 |
| WO2021059735A1 (ja) | 2019-09-26 | 2021-04-01 | ソニーセミコンダクタソリューションズ株式会社 | 画像処理装置、電子機器、画像処理方法及びプログラム |
| CN113126067A (zh) | 2019-12-26 | 2021-07-16 | 华为技术有限公司 | 激光安全电路及激光安全设备 |
| CN113219476A (zh) | 2021-07-08 | 2021-08-06 | 武汉市聚芯微电子有限责任公司 | 测距方法、终端及存储介质 |
| CN113727033A (zh) | 2021-08-12 | 2021-11-30 | 荣耀终端有限公司 | 数据获取方法及装置 |
| CN113780090A (zh) | 2021-08-12 | 2021-12-10 | 荣耀终端有限公司 | 数据处理方法及装置 |
| CN113779588A (zh) | 2021-08-12 | 2021-12-10 | 荣耀终端有限公司 | 面部识别方法及装置 |
| US20220358785A1 (en) * | 2019-06-19 | 2022-11-10 | Oledcomm | Face detection and optical wireless communication module |
2022
- 2022-05-12 EP EP22789143.9A patent/EP4156674B1/de active Active
- 2022-10-14 US US17/966,142 patent/US12354403B2/en active Active
Patent Citations (23)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20180149748A1 (en) * | 2016-11-28 | 2018-05-31 | Stmicroelectronics, Inc. | Time of flight sensing for providing security and power savings in electronic devices |
| US20180211398A1 (en) * | 2017-01-25 | 2018-07-26 | Google Inc. | System for 3d image filtering |
| US20190108409A1 (en) | 2017-10-11 | 2019-04-11 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Face recognition method and related product |
| CN107808127A (zh) | 2017-10-11 | 2018-03-16 | 广东欧珀移动通信有限公司 | 人脸识别方法及相关产品 |
| US20200125832A1 (en) | 2018-05-29 | 2020-04-23 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Verification System, Electronic Device, and Verification Method |
| US20210263136A1 (en) | 2018-07-09 | 2021-08-26 | Lg Innotek Co., Ltd. | Method and device for outputting light |
| CN112384822A (zh) | 2018-07-09 | 2021-02-19 | Lg伊诺特有限公司 | 输出光的方法和装置 |
| CN108965721A (zh) * | 2018-08-22 | 2018-12-07 | Oppo广东移动通信有限公司 | 摄像头模组的控制方法和装置、电子设备 |
| US20210248719A1 (en) * | 2018-08-27 | 2021-08-12 | Lg Innotek Co., Ltd. | Image processing device and image processing method |
| WO2020045946A1 (ko) | 2018-08-27 | 2020-03-05 | 엘지이노텍 주식회사 | 영상 처리 장치 및 영상 처리 방법 |
| CN109522722A (zh) | 2018-10-17 | 2019-03-26 | 联想(北京)有限公司 | 系统安全处理方法和装置 |
| WO2020134879A1 (zh) | 2018-12-24 | 2020-07-02 | 华为技术有限公司 | 摄像组件及电子设备 |
| US20210352198A1 (en) | 2018-12-24 | 2021-11-11 | Huawei Technologies Co., Ltd. | Camera Assembly and Electronic Device |
| CN109981902A (zh) | 2019-03-26 | 2019-07-05 | Oppo广东移动通信有限公司 | 终端及控制方法 |
| US20220358785A1 (en) * | 2019-06-19 | 2022-11-10 | Oledcomm | Face detection and optical wireless communication module |
| WO2021059735A1 (ja) | 2019-09-26 | 2021-04-01 | ソニーセミコンダクタソリューションズ株式会社 | 画像処理装置、電子機器、画像処理方法及びプログラム |
| US20220360702A1 (en) * | 2019-09-26 | 2022-11-10 | Sony Semiconductor Solutions Corporation | Image processing device, electronic equipment, image processing method, and program |
| CN113126067A (zh) | 2019-12-26 | 2021-07-16 | 华为技术有限公司 | 激光安全电路及激光安全设备 |
| CN111524088A (zh) | 2020-05-06 | 2020-08-11 | 北京未动科技有限公司 | 用于图像采集的方法、装置、设备及计算机可读存储介质 |
| CN113219476A (zh) | 2021-07-08 | 2021-08-06 | 武汉市聚芯微电子有限责任公司 | 测距方法、终端及存储介质 |
| CN113727033A (zh) | 2021-08-12 | 2021-11-30 | 荣耀终端有限公司 | 数据获取方法及装置 |
| CN113780090A (zh) | 2021-08-12 | 2021-12-10 | 荣耀终端有限公司 | 数据处理方法及装置 |
| CN113779588A (zh) | 2021-08-12 | 2021-12-10 | 荣耀终端有限公司 | 面部识别方法及装置 |
Non-Patent Citations (1)
| Title |
|---|
| Andrew Dean Payne, "Development of a Full-Field Time-of-Flight Range Imaging System," The University of Waikato, XP055486879, URL: https://researchcommons.waikato.ac.nz/bitstream/handle/10289/3521/thesis.pdf?sequence=1&isAllowed=y (2008). |
Also Published As
| Publication number | Publication date |
|---|---|
| EP4156674A4 (de) | 2023-10-11 |
| EP4156674A1 (de) | 2023-03-29 |
| EP4156674B1 (de) | 2025-01-29 |
| US20230052356A1 (en) | 2023-02-16 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US12248576B2 (en) | Face recognition method and apparatus | |
| US12283128B2 (en) | Data processing method and apparatus | |
| CN117014727B (zh) | 数据获取方法及装置 | |
| US11687635B2 (en) | Automatic exposure and gain control for face authentication | |
| KR102263537B1 (ko) | 전자 장치와, 그의 제어 방법 | |
| WO2019148978A1 (zh) | 图像处理方法、装置、存储介质及电子设备 | |
| US20140232843A1 (en) | Gain Value of Image Capture Component | |
| CN108769509A (zh) | 控制摄像头的方法、装置、电子设备及存储介质 | |
| CN108804895A (zh) | 图像处理方法、装置、计算机可读存储介质和电子设备 | |
| US10878548B2 (en) | Specular reflection reduction using polarized light sources | |
| US20210084280A1 (en) | Image-Acquisition Method and Image-Capturing Device | |
| TWI739096B (zh) | 資料處理方法和電子設備 | |
| WO2021175014A1 (zh) | 追焦方法及相关设备 | |
| CN114120431B (zh) | 一种人脸识别的方法、介质和电子设备 | |
| US11532182B2 (en) | Authentication of RGB video based on infrared and depth sensing | |
| CN108650472A (zh) | 控制拍摄的方法、装置、电子设备及计算机可读存储介质 | |
| CN108830141A (zh) | 图像处理方法、装置、计算机可读存储介质和电子设备 | |
| US12354403B2 (en) | Data obtaining method and apparatus | |
| CN108156387A (zh) | 通过检测眼睛视线自动结束摄像的装置及方法 | |
| CN108769526A (zh) | 一种图像调整方法、装置、设备及存储介质 | |
| KR102649220B1 (ko) | 이미지의 떨림을 보정하는 전자 장치 및 전자 장치의 제어 방법 | |
| US20250247610A1 (en) | Method to prevent user from invalidating image processing | |
| CN120677708A (zh) | 人脸识别方法及其相关设备 | |
| CN116386095A (zh) | 一种掌静脉图像采集控制方法、系统、设备及介质 | |
| CN120318874A (zh) | 一种人脸识别方法及电子设备 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| AS | Assignment |
Owner name: HONOR DEVICE CO., LTD., CHINA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YUAN, JIANGFENG;LIAO, CHUAN;ZHOU, JUNWEI;SIGNING DATES FROM 20230423 TO 20230504;REEL/FRAME:063914/0236 |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
| CC | Certificate of correction |