WO2020024619A1 - Data processing method, apparatus, computer-readable storage medium, and electronic device - Google Patents

Data processing method, apparatus, computer-readable storage medium, and electronic device

Info

Publication number
WO2020024619A1
WO2020024619A1 (application PCT/CN2019/082696, CN2019082696W)
Authority
WO
WIPO (PCT)
Prior art keywords
face recognition
operating environment
recognition model
model
image
Application number
PCT/CN2019/082696
Other languages
English (en)
French (fr)
Inventor
郭子青
周海涛
欧锦荣
谭筱
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Priority claimed from CN201810864802.5A external-priority patent/CN109145772B/zh
Priority claimed from CN201810866139.2A external-priority patent/CN108985255B/zh
Priority claimed from CN201810864804.4A external-priority patent/CN109213610B/zh
Application filed by Oppo广东移动通信有限公司
Priority to EP19843800.4A (patent EP3671551A4)
Priority to US16/740,374 (patent US11373445B2)
Publication of WO2020024619A1

Classifications

    • G06F21/32: User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G06F21/602: Providing cryptographic facilities or services
    • G06F9/544: Buffers; Shared memory; Pipes
    • G06F9/545: Interprogram communication where tasks reside in different layers, e.g. user- and kernel-space
    • G06V10/60: Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • G06V10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V20/64: Three-dimensional objects (Scenes; Scene-specific elements)
    • G06V40/172: Classification, e.g. identification (human faces)

Definitions

  • the present application relates to the field of computer technology, and in particular, to a data processing method, apparatus, computer-readable storage medium, and electronic device.
  • Face recognition technology is gradually being applied to people's work and life, such as collecting face images for payment authentication, unlocking authentication, and beautifying the captured face images.
  • With face recognition technology, the face in an image can be detected and the person in the image can be identified, so as to verify the user's identity.
  • Embodiments of the present application provide a data processing method, apparatus, computer-readable storage medium, and electronic device.
  • a data processing method includes: obtaining a face recognition model stored in a first running environment; initializing the face recognition model in the first running environment; and transferring the initialized face recognition model to a second running environment for storage; wherein the storage space in the first running environment is larger than the storage space in the second running environment.
  • a data processing device includes a model acquisition module and a model processing module; the model acquisition module is configured to acquire a face recognition model stored in a first operating environment, and the model processing module is configured to initialize the face recognition model in the first operating environment and transfer the initialized face recognition model to a second operating environment for storage; wherein the storage space in the first operating environment is larger than the storage space in the second operating environment.
  • a computer-readable storage medium stores a computer program thereon; when the computer program is executed by a processor, the data processing method described above is implemented.
  • An electronic device includes a memory and a processor; the memory stores computer-readable instructions which, when executed by the processor, cause the processor to execute the foregoing data processing method.
  • FIG. 1 is a flowchart of a data processing method according to an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an internal structure of an electronic device according to an embodiment of the present application.
  • FIG. 4 is a flowchart of a data processing method according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a system for implementing a data processing method in an embodiment of the present application.
  • FIG. 6 and FIG. 7 are flowcharts of a data processing method according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of calculating depth information in an embodiment of the present application.
  • FIG. 9 is a flowchart of a data processing method in an embodiment of the present application.
  • FIG. 10 is a hardware structure diagram of a data processing method according to an embodiment of the present application.
  • FIG. 11 and FIG. 12 are schematic structural diagrams of a data processing apparatus according to an embodiment of the present application.
  • FIG. 13 and FIG. 14 are flowcharts of a data processing method in an embodiment of the present application.
  • FIG. 15 is a schematic diagram of a system for implementing a data processing method in an embodiment of the present application.
  • FIG. 16 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application.
  • FIG. 17 and FIG. 18 are flowcharts of a data processing method in an embodiment of the present application.
  • FIG. 19 is a schematic diagram of a segmented face recognition model in an embodiment of the present application.
  • FIG. 20 and FIG. 21 are schematic structural diagrams of a data processing apparatus according to an embodiment of the present application.
  • FIG. 22 is a schematic diagram of a connection state between an electronic device and a computer-readable storage medium according to an embodiment of the present application.
  • FIG. 23 is a schematic diagram of a module of an electronic device in an embodiment of the present application.
  • The terms "first", "second", and the like used in this application may be used herein to describe various elements, but these elements are not limited by these terms; the terms are only used to distinguish one element from another.
  • For example, a first client may be referred to as a second client, and similarly, a second client may be referred to as a first client; both are clients, but they are not the same client.
  • the data processing method includes:
  • Step 201 Obtain a face recognition model stored in a first operating environment
  • Step 203: Initialize the face recognition model in the first running environment, and transfer the initialized face recognition model to the second running environment for storage; wherein the storage space in the first running environment is larger than the storage space in the second running environment.
  • the data processing apparatus 910 includes a model acquisition module 912 and a model processing module 914.
  • the model acquisition module 912 is configured to acquire a face recognition model stored in a first operating environment.
  • the model processing module 914 is configured to initialize the face recognition model in the first operating environment, and transfer the initialized face recognition model to a second operating environment for storage.
  • the storage space in the first operating environment is larger than the storage space in the second operating environment.
  • FIG. 3 is a schematic diagram of an internal structure of an electronic device in an embodiment.
  • the electronic device 100 includes a processor 110, a memory 120, and a network interface 130 connected through a system bus 140.
  • the processor 110 is used to provide computing and control capabilities to support the operation of the entire electronic device 100.
  • the memory 120 is configured to store data, programs, and the like.
  • the memory 120 stores at least one computer program 1224 that can be executed by the processor 110 to implement the data processing method applicable to the electronic device 100 provided in the embodiment of the present application.
  • the memory 120 may include a non-volatile storage medium such as a magnetic disk, an optical disc, or a read-only memory (ROM), or may include a random access memory (RAM).
  • the memory 120 includes a non-volatile storage medium 122 and an internal memory 124.
  • the non-volatile storage medium 122 stores an operating system 1222 and a computer program 1224.
  • the computer program 1224 can be executed by the processor 110 to implement a data processing method provided by each of the following embodiments.
  • the internal memory 124 provides a cached operating environment for the operating system 1222 and the computer program 1224 in the non-volatile storage medium 122.
  • the network interface 130 may be an Ethernet card, a wireless network card, or the like, and is configured to communicate with external electronic devices.
  • the electronic device 100 may be a mobile phone, a tablet computer, a personal digital assistant, or a wearable device.
  • FIG. 4 is a flowchart of a data processing method in an embodiment. As shown in FIG. 4, the data processing method includes steps 1202 to 1206.
  • Step 1202 Obtain a face recognition model stored in the first operating environment.
  • the electronic device may include a processor, and the processor may perform processing such as storage, calculation, and transmission of data.
  • the processor in an electronic device can run in different environments.
  • the processor can run in a TEE (Trusted Execution Environment) or a REE (Rich Execution Environment).
  • TEE: Trusted Execution Environment.
  • REE: Rich Execution Environment.
  • the electronic device can allocate the processor's resources, dividing different resources among different operating environments. In general, an electronic device runs fewer processes with high security requirements and more common processes, so the electronic device can assign a small part of the processor's resources to the higher-security operating environment and most of the resources to the less secure operating environment.
  • the face recognition model is an algorithm model for recognizing and processing faces in an image, and is generally stored in the form of a file. It can be understood that, because algorithms for recognizing faces in an image are relatively complicated, the storage space occupied by the face recognition model is also relatively large. After the electronic device divides the processor into different operating environments, the storage space allocated to the first operating environment is more abundant than the storage space allocated to the second operating environment. Therefore, the electronic device stores the face recognition model in the first operating environment, ensuring that there is enough space in the second operating environment to process data.
  • Step 1204: The face recognition model is initialized in the first operating environment, and the initialized face recognition model is passed into the shared buffer.
  • the shared buffer is a channel through which the first and second operating environments transmit data. Both the first and second operating environments can access the shared buffer.
  • the electronic device stores the face recognition model in the first running environment, initializes it there, places the initialized face recognition model in the shared buffer, and then transfers it from the shared buffer to the second running environment.
  • the electronic device can configure the shared buffer, and the size of the shared buffer can be set as required; for example, the electronic device can set the storage space of the shared buffer to 5M or 10M. Before the face recognition model is initialized in the first running environment, the remaining storage space in the second running environment can be obtained; if the remaining storage space is less than a space threshold, the face recognition model is initialized in the first running environment, and the initialized face recognition model is passed to the shared buffer.
  • the space threshold can be set as required, and is generally the sum of the storage space occupied by the face recognition model itself and the storage space occupied while the face recognition model is being initialized.
  • alternatively, the face recognition model may be sent directly to the second operating environment and initialized there; after initialization completes, the pre-initialization face recognition model is deleted to ensure the security of the data.
  • the above data processing method may further include: if the remaining storage space is greater than or equal to the space threshold, passing the face recognition model into the shared buffer and from the shared buffer into the second operating environment; initializing the face recognition model in the second operating environment; and deleting the pre-initialization face recognition model while retaining the initialized one.
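  • The storage-space check described in the preceding paragraphs can be sketched as follows. This is an illustrative sketch only; the function and constant names are hypothetical, and the megabyte figures follow the 20M/10M example given later in this description.

```python
MODEL_SIZE_MB = 20        # storage occupied by the face recognition model
INIT_OVERHEAD_MB = 10     # extra memory occupied during initialization
# Per the description, the space threshold is generally the sum of both.
SPACE_THRESHOLD_MB = MODEL_SIZE_MB + INIT_OVERHEAD_MB

def choose_init_environment(remaining_space_mb: int) -> str:
    """Decide where the face recognition model should be initialized."""
    if remaining_space_mb < SPACE_THRESHOLD_MB:
        # Not enough room in the second environment: initialize in the
        # first environment and ship the initialized model via the buffer.
        return "first"
    # Enough room: ship the raw model, initialize in the second
    # environment, then delete the pre-initialization copy.
    return "second"
```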
  • Step 1206: The initialized face recognition model is transferred from the shared buffer to the second operating environment for storage; wherein the storage space in the first operating environment is larger than the storage space in the second operating environment, and the face recognition model is used for face recognition processing on images. That is, initializing the face recognition model in the first running environment and transferring the initialized face recognition model to the second running environment for storage includes: initializing the face recognition model in the first running environment, passing the initialized model into the shared buffer, and transferring it from the shared buffer to the second running environment for storage.
  • the electronic device performs face recognition processing on the image by using a face recognition model in a second operating environment.
  • before use, a face recognition model needs to be initialized. If the face recognition model is stored in the second operating environment, both storing it and initializing it occupy storage space in the second operating environment, which causes excessive resource consumption in the second operating environment and affects the efficiency of data processing.
  • For example, suppose the face recognition model occupies 20M of memory and initializing it requires an additional 10M. If both storage and initialization are performed in the second operating environment, a total of 30M of memory in the second operating environment is required. If, instead, the face recognition model is stored and initialized in the first running environment and the initialized model is then sent to the second running environment, only about 10M of memory in the second running environment is occupied, greatly reducing the resource occupation rate in the second operating environment.
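  • The memory accounting in this example can be made explicit with a small helper; it merely restates the 20M/30M/10M arithmetic above, with hypothetical names, and is not part of the claimed method.

```python
def second_env_memory_mb(strategy: str,
                         model_mb: int = 20,
                         init_overhead_mb: int = 10) -> int:
    """Memory the second operating environment must supply under each strategy."""
    if strategy == "init_in_second":
        # Store the model AND initialize it in the second environment.
        return model_mb + init_overhead_mb
    if strategy == "init_in_first":
        # Initialize in the first environment and transfer the initialized
        # model; per the example above only about 10M is then occupied.
        return init_overhead_mb
    raise ValueError("unknown strategy")
```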
  • When it is detected that an initialization condition is met, step 1202 is started.
  • the face recognition model is stored in the first operating environment.
  • the electronic device can initialize the face recognition model when it is turned on, or it can initialize the face recognition model when it detects that an application requiring face recognition processing has been opened.
  • the face recognition model can also be initialized when a face recognition instruction is detected; the initialized face recognition model is then compressed and transferred to the second operating environment.
  • FIG. 5 is a schematic diagram of a system for implementing a data processing method in an embodiment.
  • the system includes a first operating environment 302, a shared buffer 304, and a second operating environment 306.
  • the first operating environment 302 and the second operating environment 306 can perform data transmission through the shared buffer 304.
  • the face recognition model is stored in the first running environment 302.
  • the system can obtain the face recognition model stored in the first running environment 302, initialize it, transfer the initialized face recognition model to the shared buffer 304, and pass it through the shared buffer 304 into the second running environment 306.
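  • The flow through FIG. 5 (first operating environment 302, shared buffer 304, second operating environment 306) might be modeled as follows. The classes are toy stand-ins invented for illustration; a real implementation would use platform shared-memory primitives rather than Python objects.

```python
class SharedBuffer:
    """Toy stand-in for the shared buffer 304 that both environments access."""

    def __init__(self, capacity_mb: int = 10):   # e.g. 5M or 10M, per the text
        self.capacity_mb = capacity_mb
        self.slot = None

    def put(self, item: dict) -> None:
        if item["size_mb"] > self.capacity_mb:
            raise ValueError("model does not fit in the shared buffer")
        self.slot = item

    def take(self) -> dict:
        item, self.slot = self.slot, None
        return item

def transfer_initialized_model(model: dict, buffer: SharedBuffer) -> dict:
    """Initialize in the first environment, then move the model via the buffer."""
    initialized = dict(model, initialized=True)  # stand-in for initialization
    buffer.put(initialized)                      # first environment writes
    return buffer.take()                         # second environment reads
```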
  • a face recognition model may generally include multiple processing modules, each processing module performs different processing, and these multiple processing modules may be independent of each other.
  • it may include a face detection module, a face matching module, and a living body detection module.
  • some modules may have lower requirements for security, and some modules may have higher requirements for security. Therefore, a processing module with a relatively low security requirement can be initialized in a first operating environment, and a processing module with a relatively high security requirement can be initialized in a second operating environment.
  • step 1204 may include: performing a first initialization, in the first operating environment, of the first module in the face recognition model, and passing the partially initialized face recognition model to the shared buffer.
  • step 1206 may include: transferring the partially initialized face recognition model from the shared buffer to the second running environment for storage.
  • the method may further include: performing a second initialization, in the second running environment, of the second module of the face recognition model, where the second module comprises the modules other than the first module, and the security requirement of the first module is lower than that of the second module.
  • the first module may be a face detection module
  • the second module may be a face matching module and a living body detection module.
  • the first module has a lower security requirement, so it is initialized in the first operating environment; the second module has a higher security requirement, so it is initialized in the second operating environment.
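  • The split initialization of the two modules can be expressed as a small planning step. The module names and security labels below mirror the examples in this description; everything else is a hypothetical sketch, not the claimed implementation.

```python
# Security requirement of each processing module, per the examples above.
MODULE_SECURITY = {
    "face_detection": "low",       # first module: lower security requirement
    "face_matching": "high",       # second module: higher security requirement
    "liveness_detection": "high",  # second module: higher security requirement
}

def plan_initialization(module_security: dict = MODULE_SECURITY) -> dict:
    """Assign each module to the environment that should initialize it."""
    plan = {"first_env": [], "second_env": []}
    for name, security in module_security.items():
        target = "first_env" if security == "low" else "second_env"
        plan[target].append(name)
    return plan
```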
  • the data processing method provided in the foregoing embodiment stores a face recognition model in a first running environment, initializes it there, and then transmits it to a second running environment through a shared buffer. Since the storage space in the second operating environment is smaller than that in the first operating environment, initializing the face recognition model in the first operating environment can improve initialization efficiency and reduce resource usage in the second operating environment.
  • FIG. 6 is a flowchart of a data processing method in another embodiment. As shown in FIG. 6, the data processing method includes steps 1402 to 1414.
  • Step 1402 The terminal receives the face recognition model sent by the server, and stores the face recognition model in the first operating environment of the terminal.
  • the face recognition model is trained in order to raise its recognition accuracy.
  • specifically, a training image set is obtained, the images in the training image set are used as the input of the model, and the parameters of the model are continuously adjusted according to the results obtained during training, so as to obtain the best parameters.
  • the electronic device may be a terminal that interacts with the user, and because the terminal has limited resources, the face recognition model may be trained on the server. After the server has trained the face recognition model, it sends the trained face recognition model to the terminal. After receiving the trained face recognition model, the terminal stores the trained face recognition model in the first running environment.
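  • The server-side training loop described above is conventional supervised parameter fitting. As a schematic stand-in only (a one-parameter least-squares fit, not an actual face recognition model), the idea can be sketched as:

```python
def fit_parameter(samples, labels, lr=0.1, epochs=200):
    """Repeatedly adjust a single weight to reduce squared error, standing in
    for 'continuously adjusting the training parameters of the model
    according to the training results'."""
    w = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            error = w * x - y
            w -= lr * error * x   # gradient step on the squared error
    return w
```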
  • Step 1404 When it is detected that the terminal is restarted, obtain a face recognition model stored in the first operating environment.
  • the terminal may include a first operating environment and a second operating environment.
  • the terminal may perform face recognition processing on images in the second operating environment; however, since the storage space allocated to the first operating environment is larger than that allocated to the second operating environment, the terminal stores the received face recognition model in the storage space of the first operating environment.
  • the face recognition model stored in the first operating environment is loaded into the second operating environment, so that when face recognition processing is required, the face recognition model loaded in the second operating environment can be called directly to process the image.
  • the face recognition model can be updated.
  • the server will send the updated face recognition model to the terminal.
  • After the terminal receives the updated face recognition model, it stores the updated face recognition model in the first operating environment, overwriting the original face recognition model, and the terminal is then controlled to restart. After restarting, the terminal obtains the updated face recognition model and initializes it.
  • Step 1406: The face recognition model is initialized in the first operating environment, the initialized face recognition model is encrypted, and the encrypted face recognition model is passed to the shared buffer.
  • Before face recognition processing is performed by the face recognition model, the face recognition model needs to be initialized; during initialization, parameters, modules, and the like in the face recognition model can be set to a default state. Because initializing the model also requires memory, the terminal can initialize the face recognition model in the first running environment and then send the initialized model to the second running environment, so that face recognition processing can be performed directly in the second running environment without occupying additional memory to initialize the model.
  • the first operating environment may be an ordinary operating environment, and the second operating environment a secure operating environment; the security of the second operating environment is higher than that of the first operating environment.
  • the first operating environment is generally used to process application operations with lower security
  • the second operating environment is generally used to process application operations with higher security. For example, operations with low security requirements such as shooting and gaming can be performed in the first operating environment, and operations with high security requirements such as payment and unlocking can be performed in the second operating environment.
  • the second operating environment is generally used for application operations with high security requirements. Therefore, when sending a face recognition model to the second operating environment, it is also necessary to ensure the security of the face recognition model.
  • the initialized face recognition model may be encrypted, and the encrypted face recognition model is then sent to the second running environment through the shared buffer.
  • Step 1408: The encrypted face recognition model is transferred from the shared buffer to the second running environment for storage, and is decrypted in the second running environment.
  • After the encrypted face recognition model is transferred from the first running environment to the shared buffer, it is transferred from the shared buffer to the second running environment.
  • the second operating environment performs decryption processing on the received encrypted face recognition model.
  • the algorithm for encrypting the face recognition model is not limited in this embodiment.
  • for example, encryption processing may be performed according to algorithms such as DES (Data Encryption Standard), MD5 (Message-Digest Algorithm 5), or HAVAL.
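  • The seal-and-verify step might look like the following sketch. The XOR stream used here is a deliberately trivial placeholder for a real cipher such as DES, kept only so the example runs self-contained; it is not secure and is not what the application prescribes. The MD5 digest illustrates an integrity check on the transferred model.

```python
import hashlib
from itertools import cycle

def toy_cipher(data: bytes, key: bytes) -> bytes:
    """Placeholder XOR stream; applying it twice restores the input."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def seal_model(model_bytes: bytes, key: bytes):
    """Encrypt the serialized model and attach a digest of the plaintext."""
    return toy_cipher(model_bytes, key), hashlib.md5(model_bytes).hexdigest()

def open_model(sealed: bytes, digest: str, key: bytes) -> bytes:
    """Decrypt in the second environment and verify the model is intact."""
    plain = toy_cipher(sealed, key)
    if hashlib.md5(plain).hexdigest() != digest:
        raise ValueError("model integrity check failed")
    return plain
```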
  • Step 1410: When a face recognition instruction is detected, the security level of the face recognition instruction is determined.
  • a face recognition model is stored in both the first operating environment and the second operating environment.
  • the terminal may perform face recognition processing in the first operating environment, or may perform face recognition processing in the second operating environment. Specifically, the terminal may determine whether to perform face recognition processing in the first operating environment or face recognition processing in the second operating environment according to the face recognition instruction that triggers the face recognition processing.
  • the face recognition instruction is initiated by an upper-layer application of the terminal.
  • when the upper-layer application initiates the face recognition instruction, information such as the time at which the instruction was initiated, the application identifier, and the operation identifier may be written into the face recognition instruction.
  • the application identifier may be used to indicate an application that initiates a face recognition instruction
  • the operation identifier may be used to indicate an application operation that requires a face recognition result.
  • application operations such as payment, unlocking, and beautification can be performed based on the result of face recognition, and the operation identifier in the face recognition instruction is used to indicate such application operations.
  • the security level is used to indicate the security level of the application operation.
  • the higher the security level, the higher the security requirement of the application operation.
  • for example, the payment operation requires higher security and the beautification operation requires lower security, so the security level of the payment operation is higher than that of the beautification operation.
  • the security level can be written directly into the face recognition instruction; after the terminal detects the face recognition instruction, it directly reads the security level in the instruction.
  • a correspondence between operation identifiers and security levels may also be established in advance; after detecting the face recognition instruction, the corresponding security level is obtained from the operation identifier in the face recognition instruction.
  • Step 1412: If the security level is lower than a level threshold, perform face recognition processing according to the face recognition model in the first operating environment.
  • the face recognition processing may include, but is not limited to, one or more of face detection, face matching, and living body detection.
  • face detection refers to the process of detecting whether a face exists in an image; face matching refers to the process of matching the detected face with a preset face; and living body detection refers to the process of detecting whether the face in an image belongs to a living body.
  • Step 1414: If the security level is higher than the level threshold, perform face recognition processing according to the face recognition model in the second operating environment; wherein the security of the second operating environment is higher than that of the first operating environment.
  • the face recognition processing may be performed according to the face recognition model in the second operating environment.
  • the terminal may send a face recognition instruction to the second operating environment, and control the camera module to collect images through the second operating environment.
  • in another embodiment, the captured image is first sent to the second operating environment, and the security level of the application operation is judged there. If the security level is lower than the level threshold, the captured image is sent to the first operating environment for face recognition processing; if the security level is higher than the level threshold, face recognition processing is performed on the captured image in the second operating environment.
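  • Steps 1410 to 1414 amount to a threshold dispatch on the operation's security level. A sketch, with a hypothetical operation-identifier table standing in for the pre-established correspondence (the numeric levels and threshold are invented for illustration):

```python
LEVEL_THRESHOLD = 3

# Hypothetical correspondence between operation identifiers and security
# levels, established in advance as the description suggests.
OPERATION_SECURITY_LEVEL = {
    "payment": 5,
    "unlock": 4,
    "beautification": 1,
}

def route_face_recognition(operation_id: str) -> str:
    """Choose the operating environment for face recognition processing."""
    level = OPERATION_SECURITY_LEVEL[operation_id]
    if level < LEVEL_THRESHOLD:
        return "first_env"    # lower security: recognize in the first environment
    return "second_env"       # higher security: recognize in the second environment
```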
  • the method when performing face recognition processing in the first operating environment, the method includes:
  • Step 502 Control the camera module to collect the first target image and the speckle image, and send the first target image to the first operating environment, and send the speckle image to the second operating environment.
  • the application installed in the terminal may initiate a face recognition instruction and send the face recognition instruction to a second operating environment.
  • the camera module can be controlled to collect the first target image and the speckle image.
  • the first target image collected by the camera module can be directly sent to the first operating environment, and the collected speckle image is sent to the second operating environment.
  • the first target image may be a visible light image, or may be another type of image, which is not limited herein.
  • the camera module may include an RGB (Red Green Blue) camera, and the first target image is collected by the RGB camera.
  • the camera module may further include a laser light and a laser camera. The terminal can control the laser light to be turned on, and then collect a speckle image formed by the laser speckle emitted by the laser light and irradiating the object through the laser camera.
  • the wavelets scattered by randomly distributed surface elements on these surfaces are superimposed on each other, so that the reflected light field has a random spatial light intensity distribution and appears as a granular structure; this is the laser speckle.
  • the formed laser speckles are highly random, so the laser speckles generated by different laser emitters are different.
  • the generated speckle images are different.
  • Laser speckles formed by different laser lights are unique, and the speckle images obtained are also unique.
  • Step 504 Calculate a depth image according to the speckle image in the second operating environment, and send the depth image to the first operating environment.
  • the terminal will ensure that the speckle image is always processed in a secure environment, so the terminal will transmit the speckle image to a second operating environment for processing.
  • the depth image is an image used to represent the depth information of the photographed object.
  • the depth image can be obtained from the speckle image calculation.
  • the terminal can control the camera module to collect the first target image and the speckle image at the same time, and the depth image calculated from the speckle image can represent the depth information of the object in the first target image.
  • a depth image may be calculated from the speckle image and the reference image in the second operating environment.
  • the reference image is an image acquired when the laser speckle is irradiated onto a reference plane, so the reference image carries reference depth information.
  • the relative depth can be calculated according to the positional offset of the speckles in the speckle image relative to the speckles in the reference image.
  • the relative depth can represent the depth information from the actual shooting object to the reference plane. The actual depth information of the object is then calculated based on the obtained relative depth and the reference depth.
  • the reference image is compared with the speckle image to obtain offset information, where the offset information represents the horizontal offset of a speckle in the speckle image relative to the corresponding speckle in the reference image; the depth image is calculated according to the offset information and the reference depth information.
  • FIG. 8 is a schematic diagram of calculating depth information in one embodiment.
  • the laser light 602 can generate a laser speckle.
  • the formed image is acquired by a laser camera 604.
  • the laser speckle emitted by the laser light 602 is reflected by the reference plane 608, and then the reflected light is collected by the laser camera 604, and the reference image is obtained by imaging through the imaging plane 610.
  • the reference depth from the reference plane 608 to the laser light 602 is L, and the reference depth is known.
  • the laser speckle emitted by the laser light 602 is reflected by the object 606, and the reflected light is collected by the laser camera 604, and the actual speckle image is obtained through the imaging plane 610.
  • the actual depth information Dis can be obtained by the following formula: Dis = (L × f × CD) / (f × CD + L × AB), where:
  • L is the distance between the laser light 602 and the reference plane 608
  • f is the focal length of the lens in the laser camera 604
  • CD is the distance between the laser light 602 and the laser camera 604
  • AB is the offset distance between the imaging of the object 606 and the imaging of the reference plane 608.
  • AB may be a product of the pixel offset n and the actual distance p of the pixel point.
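The triangulation step above can be written out as a small function. The formula Dis = (L × f × CD) / (f × CD + L × AB), with AB = n × p, follows from similar-triangle geometry using the variable definitions in the text; the concrete numbers in the usage below are purely illustrative.

```python
def depth_from_offset(L: float, f: float, CD: float, n: float, p: float) -> float:
    """Actual depth Dis of a point, given:
    L  - reference depth (distance from laser light 602 to reference plane 608)
    f  - focal length of the lens in the laser camera
    CD - baseline distance between the laser light and the laser camera
    n  - pixel offset of the speckle relative to the reference image (signed)
    p  - actual distance (pitch) of one pixel on the imaging plane
    """
    AB = n * p                          # offset distance on the imaging plane
    return (L * f * CD) / (f * CD + L * AB)
```

A useful sanity check: with zero offset (n = 0) the object lies exactly on the reference plane, so the returned depth equals L; a positive offset yields a depth smaller than L.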
  • Step 506 Perform face recognition processing on the first target image and the depth image through the face recognition model in the first running environment.
  • the calculated depth image may be sent to the first operating environment, and then face recognition processing is performed according to the first target image and the depth image in the first operating environment.
  • the first operating environment then sends the face recognition result to the upper-layer application, and the upper-layer application can perform corresponding application operations according to the face recognition result.
  • the position and area where the human face is located can be detected through the first target image. Since the first target image and the depth image correspond to each other, the depth information of the face can be obtained through the corresponding area of the depth image, and the three-dimensional features of the face can be constructed based on this depth information, so that the face can be processed according to its three-dimensional features.
  • the method when performing face recognition processing in the second operating environment, the method specifically includes:
  • Step 702 Control the camera module to collect a second target image and a speckle image, and send the second target image and the speckle image to a second operating environment.
  • the second target image may be an infrared image.
  • the camera module may include a flood light, a laser light, and a laser camera.
  • the terminal may control the flood light to be turned on, and then collect, through the laser camera, the image formed by the flood light illuminating the object; the infrared image thus formed is used as the second target image.
  • the terminal can also control the laser light to turn on, and then use a laser camera to collect the speckle image formed by the laser light illuminating the object.
  • the time interval between the acquisition of the second target image and the speckle image should be relatively short to ensure the consistency of the two images; avoiding a large discrepancy between the second target image and the speckle image reduces error and improves the accuracy of image processing.
  • the camera module is controlled to collect the second target image, and the camera module is controlled to collect the speckle image, wherein the time interval between the first moment when the second target image is collected and the second moment when the speckle image is collected is less than a first threshold.
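The consistency requirement above amounts to a simple timestamp check: the two frames are only used together when the interval between their capture times is below the first threshold. The threshold value here is an assumption for illustration.

```python
FIRST_THRESHOLD_MS = 5.0  # assumed value, in milliseconds

def frames_consistent(t_target_ms: float, t_speckle_ms: float) -> bool:
    """True when the second target image and the speckle image were captured
    close enough in time to be treated as corresponding frames."""
    return abs(t_target_ms - t_speckle_ms) < FIRST_THRESHOLD_MS
```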
  • the flood light and the laser light may be controlled to turn on respectively through two channels of PWM (Pulse Width Modulation) signals.
  • the second target image may be an infrared image or other types of images, which is not limited herein.
  • the second target image may be a visible light image.
  • Step 704 Calculate a depth image according to the speckle image in the second operating environment.
  • if the security level of the face recognition instruction is higher than the level threshold, the application operation that initiates the face recognition instruction is considered to have higher security requirements, and face recognition processing needs to be performed in a high-security environment to ensure the security of data processing.
  • the second target image and the speckle image collected by the camera module are directly sent to the second operating environment, and then a depth image is calculated based on the speckle image in the second operating environment.
  • Step 706 Perform face recognition processing on the second target image and the depth image through the face recognition model in the second running environment.
  • when performing face recognition processing in the second operating environment, face detection may first be performed according to the second target image to detect whether the second target image includes a target face. If the second target image includes a target face, the detected target face is matched with a preset face. If the detected target face matches the preset face, target depth information of the target face is obtained from the depth image, and whether the target face is a living body is detected based on the target depth information.
  • the face attribute features of the target face can be extracted, and the extracted face attribute features are then matched with preset face attribute features. If the matching value exceeds the matching threshold, the face matching is considered successful. For example, features such as the face deflection angle, brightness information, and facial features can be extracted as face attribute features. If the degree of matching between the face attribute features of the target face and those of the preset face exceeds 90%, the face matching is considered successful.
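The matching rule above can be sketched as follows. The similarity measure (fraction of attribute features that agree exactly) is a stand-in assumption; real systems would compare feature vectors with a distance metric. The 90% threshold comes from the text.

```python
MATCH_THRESHOLD = 0.90  # matching degree must exceed 90%, per the text

def matching_degree(extracted: dict, preset: dict) -> float:
    """Fraction of preset attribute features that agree (hypothetical measure)."""
    if not preset:
        return 0.0
    hits = sum(1 for k in preset if extracted.get(k) == preset[k])
    return hits / len(preset)

def face_matches(extracted: dict, preset: dict) -> bool:
    """Face matching succeeds when the matching degree exceeds the threshold."""
    return matching_degree(extracted, preset) > MATCH_THRESHOLD
```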
  • authentication may also be performed based on the extracted face attribute features.
  • the living body detection processing can be performed according to the collected depth image.
  • the collected second target image can represent detailed information of a human face
  • the collected depth image can represent corresponding depth information
  • the living body detection can be performed according to the depth image. For example, if the captured face is a face in a photo, it can be determined that the collected face is not three-dimensional based on the depth image, and the collected face can be considered as a non-living face.
  • performing the living body detection according to the depth image includes: searching the depth image for face depth information corresponding to the target face; if face depth information corresponding to the target face exists in the depth image, and the face depth information conforms to the stereoscopic rule of a face, the target face is determined to be a living face.
  • the aforementioned stereoscopic rule of a face is a rule carrying three-dimensional depth information of the face.
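A minimal sketch of the depth-based check, under a loud assumption: a flat photo yields (near-)constant depth across the face region, whereas a real face shows depth variation. Both the "stereoscopic rule" used here (minimum depth variation) and its threshold are hypothetical simplifications of the rule described above.

```python
DEPTH_VARIATION_THRESHOLD = 5.0  # assumed, in the depth image's units

def is_living_face(face_depths: list) -> bool:
    """face_depths: depth values sampled inside the detected face region.
    No depth information for the target face -> not a living face."""
    if not face_depths:
        return False
    variation = max(face_depths) - min(face_depths)
    # A flat surface (e.g. a photo of a face) has almost no depth variation.
    return variation > DEPTH_VARIATION_THRESHOLD
```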
  • an artificial intelligence model may also be used to perform artificial intelligence recognition on the second target image and the depth image, obtain the living-body attribute features corresponding to the target face, and determine whether the target face is a living face based on the obtained features. The living-body attribute features may include the skin characteristics corresponding to the target face, as well as the direction, density, and width of the skin texture; if these features conform to those of a living face, the target face is determined to be a living face.
  • the processing order can be switched as needed.
  • the face may be authenticated first, and then whether the face is a living body is detected; alternatively, whether the face is a living body may be detected first, and then the face is authenticated.
  • the data processing method provided in the foregoing embodiment may store a face recognition model in a first running environment, initialize the face recognition model in the first running environment, and then transmit the face recognition model to a second running environment through a shared buffer. Since the storage space in the second operating environment is smaller than that in the first operating environment, initializing the face recognition model in the first operating environment can improve the initialization efficiency of the model and reduce the resource usage in the second operating environment. According to the security level of the face recognition instruction, processing is performed in either the first operating environment or the second operating environment, which avoids processing all applications in the second operating environment and can reduce its resource occupation rate.
  • although the steps in the flowcharts of FIGS. 4, 6, 7, and 9 are displayed sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least part of the steps in FIGS. 4, 6, 7, and 9 may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; the order of execution of these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with at least part of other steps, or of the sub-steps or stages of other steps.
  • FIG. 10 is a hardware structure diagram of a data processing method according to an embodiment.
  • the electronic device may include a camera module 810, a Central Processing Unit (CPU) 820, and a Microcontroller Unit (MCU) 830.
  • the camera module 810 includes a laser camera 812 , Flood light 814, RGB camera 816, and laser light 818.
  • the micro control unit 830 includes a PWM (Pulse Width Modulation) module 832, an SPI/I2C (Serial Peripheral Interface / Inter-Integrated Circuit) module 834, a RAM (Random Access Memory) module 836, and a Depth Engine module 838.
  • the central processing unit 820 may be in a multi-core operating mode, and a CPU core in the central processing unit 820 may run under TEE or REE. Both TEE and REE are operating modes of an ARM (Advanced RISC Machines) module.
  • the natural operating environment 822 in the central processing unit 820 may serve as the first operating environment, with relatively low security.
  • the trusted operating environment 824 in the central processing unit 820 serves as the second operating environment, with relatively high security.
  • since the micro control unit 830 is a processing module independent of the central processing unit 820, and its inputs and outputs are controlled by the central processing unit 820 under the trusted operating environment 824, the micro control unit 830 is also a processing module with higher security; it may be considered that the micro control unit 830 is likewise in a secure operating environment, that is, in the second operating environment.
  • the central processing unit 820 can control the SECURE SPI / I2C through the trusted operating environment 824 to send a face recognition instruction to the SPI / I2C module 834 in the micro control unit 830.
  • after the micro control unit 830 receives the face recognition instruction, if it determines that the security level of the face recognition instruction is higher than the level threshold, the PWM module 832 transmits pulse waves to control the flood light 814 in the camera module 810 to turn on to acquire an infrared image, and to control the laser light 818 in the camera module 810 to turn on to collect a speckle image.
  • the camera module 810 can transmit the acquired infrared image and speckle image to the Depth Engine module 838 in the micro control unit 830.
  • the Depth Engine module 838 can calculate the depth image based on the speckle image and send the infrared image and the depth image to the trusted operating environment 824 of the central processing unit 820. The trusted operating environment 824 then performs face recognition processing according to the received infrared image and depth image.
  • if the security level of the face recognition instruction is lower than the level threshold, the PWM module 832 transmits pulse waves to control the laser light 818 in the camera module 810 to turn on to collect a speckle image, and the RGB camera 816 collects a visible light image.
  • the camera module 810 directly sends the collected visible light image to the natural operating environment 822 of the central processing unit 820, and transmits the speckle image to the Depth Engine module 838 in the micro control unit 830.
  • the Depth Engine module 838 can calculate the depth image based on the speckle image and send the depth image to the trusted operating environment 824 of the central processing unit 820. The trusted operating environment 824 then sends the depth image to the natural operating environment 822, and face recognition processing is performed in the natural operating environment 822 according to the visible light image and the depth image.
  • FIG. 11 is a schematic structural diagram of a data processing apparatus in an embodiment.
  • the data processing apparatus 900 includes a model acquisition module 902, a model transmission module 904, and a model storage module 906, wherein:
  • a model acquisition module 902 is configured to acquire a face recognition model stored in a first operating environment.
  • a model transmission module 904 is configured to initialize the face recognition model in the first operating environment, and transfer the initialized face recognition model to a shared buffer.
  • a model storage module 906, configured to transfer the initialized face recognition model from the shared buffer to a second running environment for storage, wherein the storage space in the first running environment is larger than the storage space in the second running environment.
  • the face recognition model is used to perform face recognition processing on the image. That is, the model general processing module includes a model transmission module and a model storage module.
  • the data processing apparatus may store a face recognition model in a first running environment, initialize the face recognition model in the first running environment, and then transmit the face recognition model to a second running environment through a shared buffer. Since the storage space in the second operating environment is smaller than that in the first operating environment, initializing the face recognition model in the first operating environment can improve the initialization efficiency of the model and reduce the resource usage in the second operating environment.
  • FIG. 12 is a schematic structural diagram of a data processing apparatus in another embodiment.
  • the data processing apparatus 1000 includes a model receiving module 1002, a model acquiring module 1004, a model transmitting module 1006, a model storing module 1008, and a face recognition module 1010, wherein:
  • the model receiving module 1002 is configured to receive, by a terminal, a face recognition model sent by a server, and store the face recognition model in a first operating environment of the terminal.
  • a model acquisition module 1004 is configured to acquire a face recognition model stored in the first operating environment when the terminal is restarted.
  • a model transmission module 1006 is configured to initialize the face recognition model in the first operating environment, and transfer the initialized face recognition model to a shared buffer.
  • a model storage module 1008 is configured to transfer the initialized face recognition model from the shared buffer to a second running environment for storage, wherein the storage space in the first running environment is larger than the storage space in the second running environment.
  • the face recognition model is used to perform face recognition processing on the image.
  • a face recognition module 1010 is configured to determine the security level of a face recognition instruction when the face recognition instruction is detected; if the security level is lower than a level threshold, perform face recognition processing according to the face recognition model in the first operating environment; if the security level is higher than the level threshold, perform face recognition processing according to the face recognition model in the second operating environment, wherein the security of the second operating environment is higher than that of the first operating environment.
  • the data processing apparatus may store a face recognition model in a first running environment, initialize the face recognition model in the first running environment, and then transmit the face recognition model to a second running environment through a shared buffer. Since the storage space in the second operating environment is smaller than that in the first operating environment, initializing the face recognition model in the first operating environment can improve the initialization efficiency of the model and reduce the resource usage in the second operating environment. According to the security level of the face recognition instruction, processing is performed in either the first operating environment or the second operating environment, which avoids processing all applications in the second operating environment and can reduce its resource occupation rate.
  • the model transmission module 1006 is further configured to perform encryption processing on the initialized face recognition model, and transfer the encrypted face recognition model to the shared buffer.
  • the model transmission module 1006 is further configured to obtain the remaining storage space in the second operating environment; if the remaining storage space is less than a space threshold, initialize the face recognition model in the first operating environment, and pass the initialized face recognition model to the shared buffer.
  • the model transmission module 1006 is further configured to: if the remaining storage space is greater than or equal to a space threshold, transfer the face recognition model to the shared buffer, and transfer the face recognition model from the shared buffer to the second operating environment.
  • the model storage module 1008 is further configured to initialize the face recognition model in the second operating environment, delete the face recognition model before the initialization, and retain the face recognition model after the initialization.
  • the model storage module 1008 is further configured to transfer the encrypted face recognition model from the shared buffer to the second running environment for storage, and to perform decryption processing on the encrypted face recognition model in the second running environment.
  • the face recognition module 1010 is further configured to control the camera module to collect a first target image and a speckle image, send the first target image to the first operating environment, and send the speckle image to the second operating environment; obtain a depth image according to the speckle image in the second operating environment, and send the depth image to the first operating environment; and perform face recognition processing on the first target image and the depth image through the face recognition model in the first operating environment.
  • the face recognition module 1010 is further configured to control the camera module to collect a second target image and a speckle image, and send the second target image and the speckle image to the second operating environment; A depth image is calculated according to the speckle image in the second operating environment; and a face recognition process is performed on the second target image and the depth image through a face recognition model in the second operating environment.
  • each module in the above data processing device is only for illustration. In other embodiments, the data processing device may be divided into different modules according to requirements to complete all or part of the functions of the above data processing device.
  • FIG. 3 is a schematic diagram of an internal structure of an electronic device in an embodiment.
  • the electronic device 100 includes a processor 110, a memory 120, and a network interface 130 connected through a system bus 140.
  • the processor 110 is used to provide computing and control capabilities to support the operation of the entire electronic device 100.
  • the memory 120 is configured to store data, programs, and the like.
  • the memory 120 stores at least one computer program 1224 that can be executed by the processor 110 to implement the data processing method applicable to the electronic device 100 provided in the embodiment of the present application.
  • the memory 120 may include a non-volatile storage medium such as a magnetic disk, an optical disc, or a read-only memory (ROM), or may include a random access memory (RAM).
  • the memory 120 includes a non-volatile storage medium 122 and an internal memory 124.
  • the non-volatile storage medium 122 stores an operating system 1222 and a computer program 1224.
  • the computer program 1224 can be executed by the processor 110 to implement a data processing method provided by each of the following embodiments.
  • the internal memory 124 provides a cached operating environment for the operating system 1222 and the computer program 1224 in the non-volatile storage medium 122.
  • the network interface 130 may be an Ethernet card, a wireless network card, or the like, and is configured to communicate with an external electronic device.
  • the electronic device 100 may be a mobile phone, a tablet computer, a personal digital assistant, or a wearable device.
  • FIG. 13 is a flowchart of a data processing method in an embodiment. As shown in FIG. 13, the data processing method includes steps 2202 to 2206, wherein:
  • Step 2202 Obtain a face recognition model stored in the first operating environment.
  • the electronic device may include a processor, and the processor may perform processing such as storage, calculation, and transmission of data.
  • the processor in an electronic device can run in different environments.
  • the processor can run in a TEE (Trusted Execution Environment) or a REE (Rich Execution Environment).
  • the electronic device can allocate the resources of the processor and divide different resources among different operating environments. In general, an electronic device has fewer processes with high security requirements and more common processes, so the electronic device can allocate a small part of the processor's resources to the higher-security operating environment and most of the resources to the lower-security operating environment.
  • the face recognition model is an algorithm model for identifying and processing a face in an image, and is generally stored in the form of a file. It can be understood that, because the algorithms for recognizing faces in an image are relatively complicated, the storage space occupied by the face recognition model is also relatively large. After the electronic device divides the processor into different operating environments, the storage space allocated to the first operating environment is larger than that allocated to the second operating environment. Therefore, the electronic device stores the face recognition model in the first operating environment to ensure that there is enough space in the second operating environment to process data.
  • Step 2204: the face recognition model is initialized in the first operating environment, and the initialized face recognition model is compressed.
  • a face recognition model needs to be initialized. If the face recognition model is stored in the second operating environment, storing the face recognition model needs to occupy the storage space in the second operating environment, and the initialization of the face recognition model also needs to occupy the storage space in the second operating environment. This will cause excessive resource consumption in the second operating environment and affect the efficiency of data processing.
  • the face recognition model occupies 20M of memory.
  • the initialization of the face recognition model requires an additional 10M of memory. If both storage and initialization are performed in the second operating environment, a total of 30M of memory in the second operating environment is required. If the face recognition model is stored and initialized in the first running environment, and the initialized face recognition model is then sent to the second running environment, only 10M of memory in the second running environment needs to be occupied, greatly reducing the resource occupation rate in the second operating environment.
  • the electronic device stores the face recognition model in the first running environment, and then initializes the face recognition model in the first running environment, and then transmits the initialized face recognition model to the second running environment, which can reduce the Occupation of storage space in the second operating environment.
  • the initialized face recognition model can be further compressed, and the compressed face recognition model is then sent to the second operating environment for storage, further reducing the resource occupation in the second operating environment and increasing the speed of data processing.
  • Step 2206: the compressed face recognition model is transferred from the first running environment to the second running environment for storage, wherein the storage space of the first running environment is larger than the storage space of the second running environment, and the face recognition model is used to perform face recognition processing on the image. That is, initializing the face recognition model in the first running environment and transferring the initialized model to the second running environment for storage includes: initializing the face recognition model in the first running environment and compressing the initialized face recognition model; and transferring the compressed face recognition model from the first running environment to the second running environment for storage, wherein the face recognition model is used to perform face recognition processing on the image.
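Steps 2202 to 2206 can be sketched end to end as follows. This is a toy model, not the patented implementation: the model file is stood in for by a byte string, the two operating environments are modeled as plain dictionaries, initialization is a placeholder, and zlib stands in for whatever compression the device actually uses (an assumption).

```python
import zlib

first_env, second_env = {}, {}

def initialize(model_bytes: bytes) -> bytes:
    # Placeholder for real initialization (parsing, building lookup tables, ...).
    return model_bytes

# Step 2202: the model is stored in the first operating environment.
first_env["model"] = b"face-recognition-model-weights" * 100

# Step 2204: initialize and compress in the first operating environment.
initialized = initialize(first_env["model"])
compressed = zlib.compress(initialized)

# Step 2206: transfer the compressed model to the second operating
# environment, which decompresses it before use.
second_env["model"] = zlib.decompress(compressed)
```

The transferred bytes are smaller than the stored model, which is the point of compressing before the transfer: less data crosses into, and occupies, the space-constrained second environment.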
  • when it is detected that an initialization condition is met, step 2202 may be started.
  • the face recognition model is stored in the first operating environment.
  • the electronic device can initialize the face recognition model when it is turned on, or initialize the face recognition model when it detects that an application requiring face recognition processing is opened.
  • the face recognition model can also be initialized when a face recognition instruction is detected; the initialized face recognition model is then compressed and transferred to the second operating environment.
  • the remaining storage space in the second operating environment may be obtained; if the remaining storage space is less than the space threshold, the face recognition model is initialized in the first operating environment, and the initialized face recognition model is compressed.
  • the space threshold can be set as required, and is generally the sum of the storage space occupied by the face recognition model and the storage space occupied when the face recognition model is initialized.
  • the face recognition model can also be directly sent to the second operating environment and initialized in the second operating environment; after the initialization is completed, the original face recognition model is deleted, which ensures the security of the data.
  • the above data processing method may further include: if the remaining storage space is greater than or equal to the space threshold, compressing the face recognition model in the first operating environment and transmitting the compressed face recognition model to the second operating environment; then initializing the compressed face recognition model in the second operating environment, deleting the face recognition model from before the initialization, and retaining the initialized face recognition model.
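The branch above can be sketched as follows; the function name and the simple size arithmetic are illustrative assumptions for this sketch, not the patent's implementation:

```python
def pick_init_environment(remaining_space, model_size, init_overhead):
    """Decide where to initialize the face recognition model (illustrative).

    The space threshold is generally the storage occupied by the model
    plus the extra storage occupied while it is being initialized.
    """
    space_threshold = model_size + init_overhead
    if remaining_space < space_threshold:
        # Too little room in the second environment: initialize and
        # compress in the first environment, then transfer the result.
        return "first"
    # Enough room: compress in the first environment, transfer, and
    # initialize in the second environment, deleting the copy from
    # before initialization.
    return "second"
```

For example, with a 30M model needing 10M of initialization overhead, 25M of remaining space selects the first environment.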
  • a face recognition model may generally include multiple processing modules, each processing module performs different processing, and these multiple processing modules may be independent of each other.
  • it may include a face detection module, a face matching module, and a living body detection module.
  • some modules may have lower requirements for security, and some modules may have higher requirements for security. Therefore, a processing module with a relatively low security requirement may be initialized in a first operating environment, and a processing module with a relatively high security requirement may be initialized in a second operating environment.
  • step 2204 may include: firstly initializing the first module in the face recognition model in the first operating environment, and performing compression processing on the first initialized face recognition model.
  • the method may further include: initializing, in the second operating environment, a second module in the compressed face recognition model, where the second module is a module in the face recognition model other than the first module, and the first module has lower security requirements than the second module.
  • the first module may be a face detection module
  • the second module may be a face matching module and a living body detection module.
  • the first module has lower security requirements, so it is initialized in the first operating environment.
  • the second module has higher security requirements, so it is initialized in the second operating environment.
  • the data processing method provided in the foregoing embodiment may store a face recognition model in a first operating environment, initialize the face recognition model in the first operating environment, and then compress the initialized face recognition model before transmitting it to the second operating environment.
  • initializing the face recognition model in the first operating environment can improve the initialization efficiency of the face recognition model, reduce the resource occupation in the second operating environment, and improve the data processing speed.
  • the face recognition model is compressed and then transmitted to the second operating environment, which further improves the data processing speed.
  • FIG. 14 is a flowchart of a data processing method in another embodiment. As shown in FIG. 14, the data processing method includes steps 2302 to 2316.
  • Step 2302 Obtain a face recognition model stored in the first operating environment.
  • the face recognition model is trained to make the recognition accuracy of the face recognition model higher.
  • a training image set is obtained, the images in the training image set are used as the input of the model, and the training parameters of the model are continuously adjusted according to the results obtained during the training process, so as to obtain the best parameters.
  • the electronic device may be a terminal that interacts with the user, and because the terminal has limited resources, the face recognition model may be trained on the server. After the server has trained the face recognition model, it sends the trained face recognition model to the terminal. After receiving the trained face recognition model, the terminal stores the trained face recognition model in the first running environment. Then, before step 2302, the method may further include: receiving, by the terminal, the face recognition model sent by the server, and storing the face recognition model in the first operating environment of the terminal.
  • the terminal may include a first operating environment and a second operating environment.
  • the terminal may perform face recognition processing on the image in the second operating environment, but since the storage space allocated to the first operating environment is larger than that allocated to the second operating environment, the terminal can store the received face recognition model in the storage space of the first operating environment.
  • the face recognition model stored in the first running environment may be loaded into the second running environment each time a terminal restart is detected. In this way, when face recognition processing is required on an image, The face recognition model loaded in the second running environment can be directly called for processing.
  • step 2302 may specifically include: when it is detected that the terminal is restarted, obtaining a face recognition model stored in the first operating environment.
  • the face recognition model can be updated.
  • the server will send the updated face recognition model to the terminal.
  • after the terminal receives the updated face recognition model, it stores the updated face recognition model in the first operating environment, overwriting the original face recognition model. The terminal is then controlled to restart; after restarting, the terminal obtains the updated face recognition model and initializes it.
  • step 2304 the face recognition model is initialized in the first running environment, and the target space capacity for storing the face recognition model in the second running environment and the data amount of the initialized face recognition model are obtained.
  • before face recognition processing is performed by the face recognition model, the face recognition model needs to be initialized. During initialization, parameters, modules, and the like in the face recognition model can be set to the default state. Because initializing the model also requires memory, the terminal can initialize the face recognition model in the first operating environment and then send the initialized face recognition model to the second operating environment, so that face recognition processing can be performed directly in the second operating environment without occupying additional memory to initialize the model.
  • the initialized face recognition model can be further compressed.
  • the target space capacity for storing the face recognition model in the second operating environment and the data amount of the initialized face recognition model may be obtained, and compression processing may be performed according to the target space capacity and data amount.
  • a storage space dedicated to storing a face recognition model may be designated in the second operating environment, so that other data cannot occupy this storage space.
  • the target space capacity is just the capacity of the storage space dedicated to storing the face recognition model.
  • the data amount of the face recognition model refers to the data size of the face recognition model.
  • Step 2306 Calculate the compression coefficient according to the target space capacity and data amount.
  • the compression coefficient can be calculated according to the target space capacity and data amount, and then the face recognition model is compressed according to the calculated compression coefficient.
  • if the target space capacity is less than the data amount, it means that there is not enough storage space in the second operating environment to store the face recognition model.
  • the face recognition model can be correspondingly compressed according to the target space capacity and data amount.
  • the compressed face recognition model is stored in a second operating environment.
  • the face recognition model may be compressed according to a preset compression coefficient, or the face recognition model may not be compressed, which is not limited here.
  • Step 2308 Perform the compression processing corresponding to the compression coefficient on the initialized face recognition model.
  • the initialized face recognition model can be compressed according to the compression coefficient, and the compressed face recognition model can be stored in the second operating environment. It can be understood that once the face recognition model is compressed, the accuracy of the corresponding face recognition processing will be reduced, so the accuracy of the recognition cannot be guaranteed. Therefore, in order to ensure the accuracy of face recognition, a maximum compression limit can be set, and the compression of the face recognition model cannot exceed the maximum compression limit.
  • a compression threshold may be set, and when the compression coefficient is larger than the compression threshold, it is considered that the accuracy of the face recognition processing performed by the compressed face recognition model is low.
  • step 2308 may include: if the compression coefficient is less than the compression threshold, performing the compression processing corresponding to the compression coefficient on the initialized face recognition model; if the compression coefficient is greater than or equal to the compression threshold, the initialized face The recognition model performs compression processing corresponding to the compression threshold.
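The clamping step above can be sketched as follows; the ratio used to derive the coefficient is an assumption of this sketch, since the text only states that the coefficient is calculated from the target space capacity and the data amount:

```python
def compression_coefficient(data_amount, target_capacity):
    # Illustrative: how many times the model must shrink to fit the
    # dedicated storage space in the second operating environment.
    return data_amount / target_capacity

def effective_coefficient(coefficient, compression_threshold):
    # Cap the compression so that recognition accuracy does not drop
    # below an acceptable level (the maximum compression limit).
    if coefficient < compression_threshold:
        return coefficient
    return compression_threshold
```

For instance, a 30M model and a 20M target capacity would give a coefficient of 1.5, which a threshold of 2.0 leaves uncapped.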
  • the electronic device may reallocate the storage space for storing the compressed face recognition model in the second operating environment according to the data size of the compressed face recognition model.
  • FIG. 15 is a schematic diagram of compression processing of a face recognition model in one embodiment.
  • the face recognition model 2402 is stored in a file form, for a total of 30M.
  • a compressed face recognition model 2404 is formed.
  • the compressed face recognition model 2404 is also stored in the form of a file, for a total of 20M.
  • Step 2310 The compressed face recognition model is transferred from the first running environment to the shared buffer, and the compressed face recognition model is transferred from the shared buffer to the second running environment for storage.
  • the shared buffer is a channel through which the first and second operating environments transmit data. Both the first and second operating environments can access the shared buffer. It should be noted that the electronic device can configure the shared buffer, and the size of the shared buffer can be set according to requirements. For example, the electronic device can set the storage space of the shared buffer to 5M or 10M.
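Because the shared buffer (for example 5M or 10M) can be smaller than the compressed model, the transfer can move the model through the buffer one chunk at a time. A minimal sketch, with both operating environments modeled as plain byte buffers rather than real isolated environments:

```python
def transfer_via_shared_buffer(model_bytes, buffer_size):
    """Move a model through a fixed-size shared buffer in chunks."""
    shared_buffer = bytearray(buffer_size)
    received = bytearray()  # stand-in for second-environment storage
    for start in range(0, len(model_bytes), buffer_size):
        chunk = model_bytes[start:start + buffer_size]
        shared_buffer[:len(chunk)] = chunk       # first environment writes
        received += shared_buffer[:len(chunk)]   # second environment reads
    return bytes(received)
```

The model arrives intact even when it is several times larger than the buffer, which is the property the shared-buffer channel relies on.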
  • FIG. 5 is a schematic diagram of a system for implementing a data processing method in an embodiment.
  • the system includes a first operating environment 302, a shared buffer 304, and a second operating environment 306.
  • the first operating environment 302 and the second operating environment 306 can perform data transmission through the shared buffer 304.
  • the face recognition model is stored in the first running environment 302.
  • the system can obtain the face recognition model stored in the first running environment 302, initialize the obtained face recognition model, and compress the initialized face recognition model. The compressed face recognition model is transferred into the shared buffer 304 and then transferred into the second running environment 306 through the shared buffer 304.
  • step 2312 when a face recognition instruction is detected, the security level of the face recognition instruction is determined.
  • a face recognition model is stored in both the first operating environment and the second operating environment.
  • the terminal may perform face recognition processing in the first operating environment, or may perform face recognition processing in the second operating environment. Specifically, the terminal may determine whether to perform face recognition processing in the first operating environment or face recognition processing in the second operating environment according to the face recognition instruction that triggers the face recognition processing.
  • the face recognition instruction is initiated by the upper-layer application of the terminal.
  • when the upper-layer application initiates the face recognition instruction, information such as the time when the face recognition instruction was initiated, the application identifier, and the operation identifier may be written into the face recognition instruction.
  • the application identifier may be used to indicate an application that initiates a face recognition instruction
  • the operation identifier may be used to indicate an application operation that requires a face recognition result.
  • application operations such as payment, unlocking, and beauty can be performed through the result of face recognition, and the operation identifier in the face recognition instruction is used to indicate these application operations.
  • the security level is used to indicate the security level of the application operation.
  • the higher the security level, the higher the security requirements of the application operation.
  • the payment operation requires higher security and the beauty operation requires lower security, so the security level of the payment operation is higher than that of the beauty operation.
  • the security level can be directly written into the face recognition instruction; after the terminal detects the face recognition instruction, it directly reads the security level in the face recognition instruction.
  • a correspondence between operation identifiers and security levels may also be established in advance; after detecting the face recognition instruction, the corresponding security level is obtained through the operation identifier in the face recognition instruction.
  • Step 2314 if the security level is lower than the level threshold, perform face recognition processing according to the face recognition model in the first operating environment.
  • the face recognition processing may include, but is not limited to, one or more of face detection, face matching, and living body detection.
  • Face detection refers to a process of detecting whether a face exists in an image
  • face matching refers to the process of matching the detected face with a preset face
  • Living body detection refers to the process of detecting whether a human face in an image is a living body.
  • Step 2316 if the security level is higher than the level threshold, perform face recognition processing according to the face recognition model in the second operating environment; wherein the security of the second operating environment is higher than that of the first operating environment.
  • the face recognition processing may be performed according to the face recognition model in the second operating environment.
  • the terminal may send a face recognition instruction to the second operating environment, and control the camera module to collect images through the second operating environment.
  • the captured image will first be sent to the second operating environment, and the security level of the application operation will be judged in the second operating environment. If the security level is lower than the level threshold, the captured image is sent to the first operating environment for face recognition processing; if the security level is higher than the level threshold, face recognition processing is performed on the captured image in the second operating environment.
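The level-based routing described above can be sketched as follows; the operation identifiers, numeric levels, and threshold are hypothetical values chosen for illustration:

```python
# Hypothetical correspondence between operation identifiers and
# security levels, established in advance.
SECURITY_LEVELS = {"payment": 9, "unlock": 8, "beauty": 2}
LEVEL_THRESHOLD = 5

def pick_recognition_environment(operation_id):
    level = SECURITY_LEVELS[operation_id]
    # Operations with higher security requirements are processed in
    # the more secure second operating environment.
    return "second" if level > LEVEL_THRESHOLD else "first"
```

With these sample values, payment and unlocking are routed to the second environment while beauty is handled in the first.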
  • when performing face recognition processing in the first operating environment, the method includes:
  • Step 502 Control the camera module to collect the first target image and the speckle image, and send the first target image to the first operating environment, and send the speckle image to the second operating environment.
  • the application installed in the terminal may initiate a face recognition instruction and send the face recognition instruction to a second operating environment.
  • the camera module can be controlled to collect the first target image and the speckle image.
  • the first target image collected by the camera module can be directly sent to the first operating environment, and the collected speckle image is sent to the second operating environment.
  • the first target image may be a visible light image, or may be another type of image, which is not limited herein.
  • the camera module may include an RGB (Red Green Blue) camera, and the first target image is collected by the RGB camera.
  • the camera module may further include a laser light and a laser camera. The terminal can control the laser light to be turned on, and then collect a speckle image formed by the laser speckle emitted by the laser light and irradiating the object through the laser camera.
  • when laser light illuminates an optically rough surface, the wavelets scattered by randomly distributed surface elements are superimposed on each other, so that the reflected light field has a random spatial light intensity distribution and presents a granular structure; this is the laser speckle.
  • the formed laser speckles are highly random, so the laser speckles generated by different laser emitters are different.
  • the generated speckle images are different.
  • Laser speckles formed by different laser lights are unique, and the speckle images obtained are also unique.
  • Step 504 Calculate a depth image according to the speckle image in the second operating environment and send the depth image to the first operating environment.
  • the terminal will ensure that the speckle image is always processed in a secure environment, so the terminal will transmit the speckle image to a second operating environment for processing.
  • the depth image is an image used to represent the depth information of the photographed object.
  • the depth image can be obtained from the speckle image calculation.
  • the terminal can control the camera module to collect the first target image and the speckle image at the same time, and the depth image calculated from the speckle image can represent the depth information of the object in the first target image.
  • a depth image may be calculated from the speckle image and the reference image in the second operating environment.
  • the reference image is an image acquired when the laser speckle is irradiated onto the reference plane, so the reference image carries reference depth information.
  • the relative depth can be calculated according to the positional offset of the speckles in the speckle image relative to the speckles in the reference image.
  • the relative depth can represent the depth information from the actual shooting object to the reference plane; the actual depth information of the object is then calculated based on the obtained relative depth and the reference depth.
  • the reference image is compared with the speckle image to obtain offset information, and the offset information is used to represent a horizontal offset of the speckle in the speckle image relative to the corresponding speckle in the reference image; according to the offset information and the reference depth The information is calculated to obtain the depth image.
  • FIG. 8 is a schematic diagram of calculating depth information in one embodiment.
  • the laser light 602 can generate a laser speckle.
  • the formed image is acquired by a laser camera 604.
  • the laser speckle emitted by the laser light 602 is reflected by the reference plane 608, and then the reflected light is collected by the laser camera 604, and the reference image is obtained by imaging through the imaging plane 610.
  • the reference depth from the reference plane 608 to the laser light 602 is L, and the reference depth is known.
  • the laser speckle emitted by the laser light 602 is reflected by the object 606, and the reflected light is collected by the laser camera 604, and the actual speckle image is obtained through the imaging plane 610.
  • the calculation formula for the actual depth information is Dis = (CD × L × f) / (CD × f + L × AB), where:
  • L is the distance between the laser light 602 and the reference plane 608;
  • f is the focal length of the lens in the laser camera 604;
  • CD is the distance between the laser light 602 and the laser camera 604;
  • AB is the offset distance between the imaging of the object 606 and the imaging of the reference plane 608;
  • AB may be the product of the pixel offset n and the actual distance p of a pixel point.
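The triangulation above can be sketched per speckle point as follows; the numeric values in the assertions are illustrative only, and the sign convention follows directly from the formula (zero offset places the object on the reference plane):

```python
def actual_depth(n, p, L, f, CD):
    """Depth of an object point from its speckle offset (illustrative).

    n  -- pixel offset of the speckle relative to the reference image
    p  -- actual distance represented by one pixel on the imaging plane
    L  -- reference depth from the laser light to the reference plane
    f  -- focal length of the lens in the laser camera
    CD -- distance between the laser light and the laser camera
    """
    AB = n * p  # offset distance on the imaging plane
    return (CD * L * f) / (CD * f + L * AB)
```

With zero offset the computed depth equals the reference depth L; a positive offset yields a depth smaller than L.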
  • Step 506 Perform face recognition processing on the first target image and the depth image through the face recognition model in the first running environment.
  • the calculated depth image may be sent to the first operating environment, and then face recognition processing is performed according to the first target image and the depth image in the first operating environment.
  • the first operating environment then sends the face recognition result to the upper-layer application, and the upper-layer application can perform corresponding application operations according to the face recognition result.
  • the position and area where the human face is located can be detected through the first target image. Since the first target image and the depth image correspond to each other, the depth information of the face can be obtained through the corresponding area of the depth image, the three-dimensional features of the face can be constructed based on this depth information, and the face can then be beautified according to these three-dimensional features.
  • when performing face recognition processing in the second operating environment, the method specifically includes:
  • Step 702 Control the camera module to collect a second target image and a speckle image, and send the second target image and the speckle image to a second operating environment.
  • the second target image may be an infrared image.
  • the camera module may include a flood light, a laser light, and a laser camera.
  • the terminal may control the flood light to be turned on and then collect, through the laser camera, the infrared image formed by the flood light illuminating the object; this infrared image is used as the second target image.
  • the terminal can also control the laser light to turn on, and then use a laser camera to collect the speckle image formed by the laser light illuminating the object.
  • the time interval between the acquisition of the second target image and the speckle image must be relatively short to ensure the consistency of the two images, avoiding large errors between the second target image and the speckle image and improving the accuracy of image processing.
  • the camera module is controlled to collect the second target image and the speckle image, where the time interval between the first moment when the second target image is collected and the second moment when the speckle image is collected is less than a first threshold.
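The timing constraint can be sketched as a simple check; the function name and the unit of time (seconds) are assumptions of this sketch:

```python
def frames_consistent(first_moment, second_moment, first_threshold):
    """True when the second target image (captured at first_moment) and
    the speckle image (captured at second_moment) are close enough in
    time to describe the same scene."""
    return abs(first_moment - second_moment) < first_threshold
```

For example, a 3 ms gap passes a 5 ms threshold while a 10 ms gap fails it.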
  • two-way PWM (Pulse Width Modulation) signals may be used to control the flood light and the laser light to be turned on, respectively, so that the second target image and the speckle image are collected separately.
  • the second target image may be an infrared image or other types of images, which is not limited herein.
  • the second target image may be a visible light image.
  • Step 704 Calculate a depth image according to the speckle image in the second operating environment.
  • if the security level of the face recognition instruction is higher than the level threshold, it is considered that the application operation that initiated the face recognition instruction has higher security requirements, and face recognition processing needs to be performed in a high-security environment to ensure the security of data processing.
  • the second target image and speckle image collected by the camera module are directly sent to the second operating environment, and then a depth image is calculated based on the speckle image in the second operating environment.
  • Step 706 Perform face recognition processing on the second target image and the depth image through the face recognition model in the second running environment.
  • when performing face recognition processing in the second operating environment, face detection may be performed according to the second target image to detect whether the second target image includes a target face. If the second target image includes a target face, the detected target face is matched with a preset face. If the detected target face matches the preset face, target depth information of the target face is obtained from the depth image, and whether the target face is a living body is detected based on the target depth information.
  • during face matching, the face attribute features of the target face can be extracted, and the extracted face attribute features are matched with the preset face attribute features; if the matching value exceeds the matching threshold, face matching is considered successful. For example, features such as the face deflection angle, brightness information, and facial features can be extracted as face attribute features; if the degree of matching between the face attribute features of the target face and those of the preset face exceeds 90%, face matching is considered successful.
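A minimal sketch of matching against the 90% threshold mentioned above; representing the attribute features as a list of directly comparable values is an assumption of this sketch, not the patent's feature representation:

```python
def face_matches(target_features, preset_features, match_threshold=0.9):
    """Face matching succeeds when the degree of matching between the
    target face's attribute features and the preset face's attribute
    features exceeds the threshold (illustrative)."""
    agree = sum(1 for t, q in zip(target_features, preset_features) if t == q)
    return agree / len(preset_features) > match_threshold
```

Ten matching features out of ten exceed the 90% threshold; nine out of ten do not, since the text requires the degree of matching to exceed the threshold.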
  • if the extracted face attribute features match the preset face attribute features, the target face may also be considered successfully authenticated.
  • the living body detection processing can be performed according to the collected depth image.
  • the collected second target image can represent detailed information of a human face
  • the collected depth image can represent corresponding depth information
  • the living body detection can be performed according to the depth image. For example, if the captured face is a face in a photo, it can be determined that the collected face is not three-dimensional based on the depth image, and the collected face can be considered as a non-living face.
  • performing the living body detection according to the corrected depth image includes: finding the depth information of the face corresponding to the target face in the depth image; if depth information corresponding to the target face exists in the depth image and the face depth information conforms to the stereo rule of a face, the target face is a living face.
  • the aforementioned stereo rule of a face is a rule bearing the three-dimensional depth information of a face.
  • an artificial intelligence model may also be used to perform recognition on the second target image and the depth image, obtain the living body attribute features corresponding to the target face, and determine, based on the obtained living body attribute features, whether the target face is a living face. The living body attribute features can include the skin characteristics corresponding to the target face, as well as the direction, density, and width of the texture; if these features conform to those of a living body, the target face is considered a living human face.
  • the processing order can be switched as needed.
  • the face may be authenticated first and then detected for liveness, or the face may be detected for liveness first and then authenticated.
  • the compressed face recognition model may be encrypted, and the encrypted face recognition model may be transferred from the first operating environment to the second operating environment; in the second operating environment, the encrypted face recognition model is decrypted, and the decrypted face recognition model is stored.
  • the first operating environment may be an ordinary operating environment
  • the second operating environment is a safe operating environment
  • the security of the second operating environment is higher than that of the first operating environment.
  • the first operating environment is generally used to process application operations with lower security
  • the second operating environment is generally used to process application operations with higher security. For example, operations with low security requirements such as shooting and gaming can be performed in the first operating environment, and operations with high security requirements such as payment and unlocking can be performed in the second operating environment.
  • the second operating environment is generally used for application operations with high security requirements. Therefore, when sending a face recognition model to the second operating environment, it is also necessary to ensure the security of the face recognition model.
  • the compressed face recognition model may be encrypted, and the encrypted face recognition model is then sent to the second running environment through a shared buffer.
  • after the encrypted face recognition model is transferred from the first running environment to the shared buffer, it is then transferred from the shared buffer to the second running environment.
  • the second operating environment performs decryption processing on the received encrypted face recognition model.
  • the algorithm for encrypting the face recognition model is not limited in this embodiment.
  • encryption processing may be performed according to algorithms such as DES (Data Encryption Standard), MD5 (Message-Digest Algorithm 5), and HAVAL.
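As a small illustration of protecting the model in transit, the sketch below attaches an MD5 digest (via Python's standard hashlib module) so the second operating environment can verify integrity on receipt; a real scheme would also encrypt the bytes, for example with DES as mentioned above, which is omitted here:

```python
import hashlib

def protect_model(model_bytes):
    # Computed in the first operating environment before the transfer.
    return hashlib.md5(model_bytes).hexdigest(), model_bytes

def verify_model(digest, model_bytes):
    # Recomputed in the second operating environment after the transfer;
    # a mismatch means the model was altered in transit.
    return hashlib.md5(model_bytes).hexdigest() == digest
```

Verification succeeds only for the exact bytes that were digested, so any tampering during the transfer is detected.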
  • the method may further include: when it is detected that the duration for which the face recognition model has not been invoked exceeds the duration threshold, or when the terminal is turned off, deleting the face recognition model in the second operating environment. In this way, the storage space in the second operating environment can be released to save space on the electronic device.
  • the operating condition can be detected during the operation of the electronic device, and the storage space occupied by the face recognition model can be released according to the operating condition of the electronic device. Specifically, when it is detected that the electronic device is in a stuck state and the duration for which the face recognition model has not been called exceeds a duration threshold, the face recognition model in the second operating environment is deleted.
  • when the electronic device returns to the normal operating state or a face recognition instruction is detected, the face recognition model stored in the first operating environment can be obtained; the face recognition model is initialized in the first operating environment, the initialized face recognition model is compressed, and the compressed face recognition model is transferred from the first operating environment to the second operating environment for storage.
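The deletion condition above can be sketched with a simple predicate; the names and the unit of time are assumptions of this sketch:

```python
def should_delete_model(idle_seconds, duration_threshold, terminal_off=False):
    """Delete the face recognition model in the second operating
    environment when it has not been invoked for longer than the
    duration threshold, or when the terminal is turned off."""
    return terminal_off or idle_seconds > duration_threshold
```

Deleting on either condition releases the second environment's storage, and the model can later be restored from the first environment as described above.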
  • the data processing method provided in the foregoing embodiment may store a face recognition model in a first operating environment, initialize the face recognition model in the first operating environment, and then compress the initialized face recognition model before transmitting it to the second operating environment.
  • since the storage space in the second operating environment is smaller than that in the first operating environment, initializing the face recognition model in the first operating environment can improve the initialization efficiency of the face recognition model, reduce the resource occupation in the second operating environment, and improve the data processing speed.
  • the face recognition model is compressed and then transmitted to the second operating environment, which further improves the data processing speed.
  • processing is performed selectively in the first operating environment or the second operating environment, so that not all applications are processed in the second operating environment, which can reduce the resource occupation rate of the second operating environment.
  • although the steps in the flowcharts of FIGS. 13, 14, 7, and 9 are displayed sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in FIG. 13, FIG. 14, FIG. 7, and FIG. 9 may include multiple sub-steps or stages; these sub-steps or stages are not necessarily performed at the same time and may be performed at different times, and their execution order is not necessarily sequential: they may be performed in turn or alternately with other steps, or with at least some of the sub-steps or stages of other steps.
  • FIG. 10 is a hardware structure diagram of a data processing method according to an embodiment.
  • the electronic device may include a camera module 810, a Central Processing Unit (CPU) 820, and a Microcontroller Unit (MCU) 830.
  • the camera module 810 includes a laser camera 812, a flood light 814, an RGB camera 816, and a laser light 818.
  • the micro control unit 830 includes a PWM (Pulse Width Modulation) module 832, a SPI/I2C (Serial Peripheral Interface / Inter-Integrated Circuit) module 834, a RAM (Random Access Memory) module 836, and a Depth Engine module 838.
  • the central processing unit 820 may be in a multi-core operating mode, and the CPU cores in the central processing unit 820 may run under TEE or REE. Both TEE and REE are operating modes of an ARM (Advanced RISC Machines) module.
  • the natural operating environment 822 in the central processing unit 820 may serve as the first operating environment, and its security is relatively low.
  • the trusted operating environment 824 in the central processing unit 820 is the second operating environment and has high security.
  • since the micro control unit 830 is a processing module independent of the central processing unit 820, and its input and output are controlled by the central processing unit 820 under the trusted operating environment 824, the micro control unit 830 is also a processing module with relatively high security. It may therefore be considered that the micro control unit 830 is also in a secure operating environment, that is, the micro control unit 830 is also in the second operating environment.
  • the central processing unit 820 can control the SECURE SPI / I2C through the trusted operating environment 824 to send a face recognition instruction to the SPI / I2C module 834 in the micro control unit 830.
  • when the micro control unit 830 receives the face recognition instruction, if it determines that the security level of the face recognition instruction is higher than the level threshold, the PWM module 832 transmits pulse waves to control the flood light 814 in the camera module 810 to turn on to collect an infrared image, and to control the laser light 818 in the camera module 810 to turn on to collect a speckle image.
  • the camera module 810 can transmit the acquired infrared image and speckle image to the Depth Engine module 838 in the micro control unit 830.
  • the Depth Engine module 838 can calculate a depth image based on the speckle image and send the infrared image and the depth image to the trusted operating environment 824 of the central processing unit 820. The trusted operating environment 824 of the central processing unit 820 then performs face recognition processing according to the received infrared image and depth image.
  • when the security level of the face recognition instruction is lower than the level threshold, the PWM module 832 transmits pulse waves to control the laser light 818 in the camera module 810 to turn on to collect a speckle image, and the RGB camera 816 collects a visible light image.
  • the camera module 810 directly sends the collected visible light image to the natural operating environment 822 of the central processing unit 820, and transmits the speckle image to the Depth Engine module 838 in the micro control unit 830.
  • the Depth Engine module 838 can calculate a depth image based on the speckle image and send the depth image to the trusted operating environment 824 of the central processing unit 820. The trusted operating environment 824 then sends the depth image to the natural operating environment 822, and in the natural operating environment 822, face recognition processing is performed according to the visible light image and the depth image.
  • FIG. 11 is a schematic structural diagram of a data processing apparatus in an embodiment.
  • the data processing apparatus 900 includes a model acquisition module 902, a model transmission module 904, and a model storage module 906, wherein:
  • a model acquisition module 902 is configured to acquire a face recognition model stored in a first operating environment.
  • a model transmission module 904 is configured to initialize the face recognition model in the first operating environment, and perform compression processing on the initialized face recognition model.
  • a model storage module 906 configured to transfer the compressed face recognition model from the first operating environment to a second operating environment for storage; wherein the storage space of the first operating environment is larger than the storage space of the second operating environment, and the face recognition model is used to perform face recognition processing on an image. That is, the model general processing module includes the model transmission module and the model storage module.
  • the data processing apparatus may store a face recognition model in the first operating environment, initialize the face recognition model in the first operating environment, and then compress the initialized face recognition model before transmitting it.
  • since the storage space in the second operating environment is smaller than that in the first operating environment, initializing the face recognition model in the first operating environment can improve the initialization efficiency of the face recognition model, reduce the resource occupation in the second operating environment, and improve the data processing speed.
  • the face recognition model is compressed and then transmitted to the second operating environment, which further improves the data processing speed.
  • FIG. 16 is a schematic structural diagram of a data processing apparatus in another embodiment.
  • the data processing device 1030 includes a model acquisition module 1032, a model transmission module 1034, a model storage module 1036, and a face recognition module 1038, wherein:
  • a model acquisition module 1032 is configured to acquire a face recognition model stored in a first operating environment.
  • a model transmission module 1034 is configured to initialize the face recognition model in the first operating environment, and perform compression processing on the initialized face recognition model.
  • a model storage module 1036 configured to transfer the compressed face recognition model from the first operating environment to a second operating environment for storage; wherein the storage space of the first operating environment is larger than the storage space of the second operating environment, and the face recognition model is used to perform face recognition processing on an image.
  • a face recognition module 1038 configured to determine a security level of a face recognition instruction when the face recognition instruction is detected; if the security level is lower than a level threshold, perform face recognition processing according to the face recognition model in the first operating environment; if the security level is higher than the level threshold, perform face recognition processing according to the face recognition model in the second operating environment; wherein the security of the second operating environment is higher than the security of the first operating environment.
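  • The dispatch rule performed by the face recognition module can be sketched as a single comparison against the level threshold; the environment labels and function name are illustrative assumptions, not terms from the embodiment.

```python
def route_face_recognition(security_level, level_threshold):
    """Pick the operating environment for a face recognition instruction.

    Instructions above the level threshold run in the second (trusted,
    higher-security) environment; the rest run in the first environment.
    """
    if security_level > level_threshold:
        return "second_environment"  # trusted, higher security
    return "first_environment"       # natural, lower security
```

For example, an unlock-payment instruction with a high security level would be routed to the second environment, while a low-stakes beautification request would stay in the first.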
  • the data processing apparatus provided in the foregoing embodiment may store a face recognition model in the first operating environment, initialize the face recognition model in the first operating environment, and then compress the initialized face recognition model before transmitting it.
  • since the storage space in the second operating environment is smaller than that in the first operating environment, initializing the face recognition model in the first operating environment can improve the initialization efficiency of the face recognition model, reduce the resource occupation in the second operating environment, and improve the data processing speed.
  • the face recognition model is compressed and then transmitted to the second operating environment, which further improves the data processing speed.
  • processing is performed selectively in the first operating environment or the second operating environment, so that not all applications are processed in the second operating environment, which can reduce the resource occupation rate of the second operating environment.
  • the model transmission module 1034 is further configured to obtain a target space capacity for storing the face recognition model in the second operating environment and a data amount of the initialized face recognition model, and to perform, on the initialized face recognition model, compression processing corresponding to a compression coefficient.
  • the model transmission module 1034 is further configured to: if the target space capacity is smaller than the data amount, use the ratio of the target space capacity to the data amount as the compression coefficient.
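  • The compression step above can be sketched as follows: the compression coefficient is the ratio of the target space capacity to the model's data amount when the capacity is too small, and no compression is applied otherwise. The mapping from coefficient to zlib compression level is purely a stand-in assumption; a real implementation would choose a codec that actually guarantees the required ratio.

```python
import zlib


def compression_coefficient(target_capacity, data_amount):
    """Ratio of target space capacity to data amount, per the text."""
    if target_capacity < data_amount:
        return target_capacity / data_amount
    return 1.0  # model already fits; no compression needed


def compress_model(model_bytes, coefficient):
    if coefficient >= 1.0:
        return model_bytes
    # Illustrative assumption: a stronger required ratio maps to a
    # higher zlib level (1..9).
    level = min(9, max(1, int((1.0 - coefficient) * 10)))
    return zlib.compress(model_bytes, level)
```

The second operating environment would then decompress with `zlib.decompress` before storing the model.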
  • the model storage module 1036 is further configured to transfer the compressed face recognition model from the first operating environment to a shared buffer, and to transfer the compressed face recognition model from the shared buffer to the second operating environment for storage.
  • the model storage module 1036 is further configured to perform encryption processing on the compressed face recognition model, transfer the encrypted face recognition model from the first operating environment to the second operating environment, perform decryption processing on the encrypted face recognition model in the second operating environment, and store the decrypted face recognition model.
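  • The encrypt-then-transfer flow handled by the model storage module can be sketched as below. A repeating-key XOR stands in for a real cipher such as DES purely to keep the example self-contained and runnable; it is not secure and is not the cipher the embodiment prescribes.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Placeholder symmetric cipher: XOR with a repeating key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


def transfer_encrypted(compressed_model: bytes, key: bytes) -> bytes:
    encrypted = xor_cipher(compressed_model, key)  # first environment: encrypt
    received = encrypted                           # transfer (e.g. via shared buffer)
    return xor_cipher(received, key)               # second environment: decrypt, then store
```

Because XOR is its own inverse, decrypting with the same key recovers the original bytes, mirroring the encrypt/decrypt symmetry the module relies on.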
  • the face recognition module 1038 is further configured to control a camera module to collect a first target image and a speckle image, send the first target image to the first operating environment, and send the speckle image to the second operating environment; a depth image is calculated according to the speckle image in the second operating environment and sent to the first operating environment; and the face recognition model in the first operating environment performs face recognition processing on the first target image and the depth image.
  • the face recognition module 1038 is further configured to control the camera module to collect a second target image and a speckle image, and send the second target image and the speckle image to the second operating environment; a depth image is calculated according to the speckle image in the second operating environment; and face recognition processing is performed on the second target image and the depth image through the face recognition model in the second operating environment.
  • the division of modules in the above data processing apparatus is only for illustration. In other embodiments, the data processing apparatus may be divided into different modules as required to complete all or part of the functions of the above data processing apparatus.
  • FIG. 3 is a schematic diagram of an internal structure of an electronic device in an embodiment.
  • the electronic device 100 includes a processor 110, a memory 120, and a network interface 130 connected through a system bus 140.
  • the processor 110 is used to provide computing and control capabilities to support the operation of the entire electronic device 100.
  • the memory 120 is configured to store data, programs, and the like.
  • the memory 120 stores at least one computer program 1224 that can be executed by the processor 110 to implement the data processing method applicable to the electronic device 100 provided in the embodiment of the present application.
  • the memory 120 may include a non-volatile storage medium such as a magnetic disk, an optical disc, a read-only memory (ROM), or a random-access memory (RAM).
  • the memory 120 includes a non-volatile storage medium 122 and an internal memory 124.
  • the non-volatile storage medium 122 stores an operating system 1222 and a computer program 1224.
  • the computer program 1224 can be executed by the processor 110 to implement a data processing method provided by each of the following embodiments.
  • the internal memory 124 provides a cached operating environment for the operating system 1222 and the computer program 1224 in the non-volatile storage medium 122.
  • the network interface 130 may be an Ethernet card, a wireless network card, or the like, and is configured to communicate with an external electronic device.
  • the electronic device 100 may be a mobile phone, a tablet computer, a personal digital assistant, or a wearable device.
  • FIG. 17 is a flowchart of a data processing method in an embodiment. As shown in FIG. 17, the data processing method includes steps 3202 to 3206, wherein:
  • Step 3202 Obtain a face recognition model stored in a first operating environment.
  • the electronic device may include a processor, and the processor may perform processing such as storage, calculation, and transmission of data.
  • the processor in an electronic device can run in different environments.
  • the processor can run in a TEE (Trusted Execution Environment) or a REE (Rich Execution Environment).
  • the electronic device can allocate resources of the processor, dividing different resources among different operating environments. In general, an electronic device has fewer processes with high security requirements and more common processes, so the electronic device can allocate a small part of the processor's resources to the higher-security operating environment and most of the resources to the less secure operating environment.
  • the face recognition model is an algorithm model for recognizing and processing a face in an image, and is generally stored in the form of a file. It can be understood that, because algorithms for recognizing faces in an image are relatively complicated, the storage space occupied by the face recognition model is also relatively large. After the electronic device divides the processor into different operating environments, the storage space allocated to the first operating environment is more abundant than the storage space allocated to the second operating environment. Therefore, the electronic device stores the face recognition model in the first operating environment to ensure that there is enough space in the second operating environment to process data.
  • Step 3204 Initialize the face recognition model in the first operating environment, and divide the initialized face recognition model into at least two model data packets.
  • a face recognition model needs to be initialized. If the face recognition model is stored in the second operating environment, storing the face recognition model needs to occupy the storage space in the second operating environment, and the initialization of the face recognition model also needs to occupy the storage space in the second operating environment. This will cause excessive resource consumption in the second operating environment and affect the efficiency of data processing.
  • the face recognition model occupies 20M of memory.
  • the initialization of the face recognition model requires an additional 10M of memory. If both storage and initialization are performed in the second operating environment, a total of 30M of memory in the second operating environment is required. If the face recognition model is stored and initialized in the first operating environment, and the initialized face recognition model is then sent to the second operating environment, only 10M of memory in the second operating environment needs to be occupied, greatly reducing the resource occupation rate in the second operating environment.
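  • The memory accounting above can be checked with a tiny helper that mirrors the text's own figures (a 20M model plus 10M of initialization overhead); the function name and the scheme flag are illustrative assumptions, not part of the embodiment.

```python
def peak_second_env_memory(model_mb, init_mb, init_in_first_env):
    """Peak memory needed in the second operating environment, in MB."""
    if init_in_first_env:
        # Per the text's figures, only the initialization-sized footprint
        # is needed when storage and initialization happen in the first
        # environment before transfer.
        return init_mb
    # Otherwise the second environment must hold the model AND pay the
    # initialization overhead there.
    return model_mb + init_mb
```

With the text's numbers this reproduces the 30M-versus-10M comparison that motivates initializing in the first environment.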
  • the electronic device stores the face recognition model in the first running environment, and then initializes the face recognition model in the first running environment, and then transmits the initialized face recognition model to the second running environment, which can reduce the Occupation of storage space in the second operating environment. Further, after the face recognition model is initialized, the initialized face recognition model may be divided into at least two model data packets, so that the initialized face recognition model is transmitted in segments.
  • Step 3206 Sequentially transfer the model data packets from the first operating environment to the second operating environment, and generate the target face recognition model from the model data packets in the second operating environment; wherein the storage space of the first operating environment is larger than the storage space of the second operating environment.
  • the target face recognition model is used to perform face recognition processing on the image. That is, initializing the face recognition model in the first operating environment and transferring the initialized face recognition model to the second operating environment for storage includes: initializing the face recognition model in the first operating environment.
  • the face recognition model is stored in the form of a file.
  • the first operating environment divides the initialized face recognition model into model data packets.
  • the obtained model data packets are sequentially sent to the second running environment.
  • after the model data packets are transmitted to the second operating environment, they are stitched together to generate the target face recognition model.
  • the face recognition model may be segmented according to different functional modules and transmitted to the second operating environment, and then the model data packages corresponding to each functional module are stitched to generate the final target face recognition model.
  • when it is detected that an initialization condition is met, step 3202 is started.
  • the face recognition model is stored in the first operating environment.
  • the electronic device can initialize the face recognition model when it is turned on, or initialize the face recognition model when it detects that an application requiring face recognition processing is opened.
  • the face recognition model can also be initialized when a face recognition instruction is detected, and then the initialized face recognition model is compressed and then transferred to the second operating environment.
  • the remaining storage space in the second operating environment may be obtained; if the remaining storage space is less than a space threshold, the face recognition model is initialized in the first operating environment, and the initialized face recognition model is divided into at least two model data packets.
  • the space threshold can be set as required, and is generally the sum of the storage space occupied by the face recognition model and the storage space occupied when the face recognition model is initialized.
  • the face recognition model can also be directly sent to the second operating environment and initialized in the second operating environment. After the initialization is completed, the original face recognition model is deleted, which can ensure the security of the data.
  • the above data processing method may further specifically include: if the remaining storage space is greater than or equal to the space threshold, segmenting the face recognition model into at least two model data packets in the first operating environment, transferring the model data packets to the second operating environment, generating a target face recognition model from the model data packets in the second operating environment, and initializing the target face recognition model; the target face recognition model before initialization is then deleted, and the initialized target face recognition model is retained. After the target face recognition model is generated in the second operating environment, face recognition processing can be performed directly according to the target face recognition model.
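  • The two branches above reduce to one comparison against the space threshold, which the text defines as the model's storage footprint plus its initialization overhead. The sketch below uses illustrative names for that decision; it is not prescribed by the embodiment.

```python
def plan_initialization(remaining_space, model_size, init_overhead):
    """Decide where to initialize, per the space-threshold rule above.

    The space threshold is the storage occupied by the model plus the
    storage needed to initialize it.
    """
    space_threshold = model_size + init_overhead
    if remaining_space < space_threshold:
        # Not enough room to both hold and initialize the model there.
        return "initialize_in_first_environment"
    return "initialize_in_second_environment"
```

With the earlier 20M/10M figures, a second environment with 25M free would initialize in the first environment, while one with 30M free could initialize in place.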
  • a face recognition model may generally include multiple processing modules, each processing module performs different processing, and these multiple processing modules may be independent of each other.
  • it may include a face detection module, a face matching module, and a living body detection module.
  • some modules may have lower requirements for security, and some modules may have higher requirements for security. Therefore, a processing module with a relatively low security requirement may be initialized in a first operating environment, and a processing module with a relatively high security requirement may be initialized in a second operating environment.
  • step 3204 may include: performing first initialization on the face recognition model in the first operating environment, and dividing the first initialized face recognition model into at least two model data packets.
  • the method may further include: performing second initialization on a second module in the target face recognition model, where the second module is a module in the face recognition model other than the first module, and the security of the first module is lower than the security of the second module. For example, the first module may be a face detection module, and the second module may be a face matching module and a living body detection module.
  • the first module has lower requirements for safety, so it is initialized in the first operating environment.
  • the second module has higher safety requirements, so it is initialized in the second operating environment.
  • the data processing method provided in the foregoing embodiment may store a face recognition model in the first operating environment, initialize the face recognition model in the first operating environment, divide the initialized face recognition model into at least two model data packets, and then transmit the data packets to the second operating environment. Since the storage space in the second operating environment is smaller than the storage space in the first operating environment, initializing the face recognition model in the first operating environment can improve the initialization efficiency of the face recognition model, reduce the resource occupation in the second operating environment, and improve the data processing speed. At the same time, dividing the face recognition model into multiple data packets for transmission improves the efficiency of data transmission.
  • FIG. 18 is a flowchart of a data processing method in another embodiment. As shown in FIG. 18, the data processing method includes steps 3302 to 3314, wherein:
  • Step 3302 Obtain a face recognition model stored in a first operating environment.
  • the face recognition model is trained to make the recognition accuracy of the face recognition model higher.
  • a training image set is obtained, the images in the training image set are used as the input of the model, and the training parameters of the model are continuously adjusted according to the training results obtained during the training process, so as to obtain the best parameters.
  • the electronic device may be a terminal that interacts with the user, and because the terminal has limited resources, the face recognition model may be trained on the server. After the server has trained the face recognition model, it sends the trained face recognition model to the terminal. After receiving the trained face recognition model, the terminal stores the trained face recognition model in the first running environment. Then, before step 3302, the method may further include: receiving, by the terminal, the face recognition model sent by the server, and storing the face recognition model in the first operating environment of the terminal.
  • the terminal may include a first operating environment and a second operating environment.
  • the terminal may perform face recognition processing on the image in the second operating environment; however, since the storage space allocated to the first operating environment is larger than the storage space allocated to the second operating environment, the terminal can store the received face recognition model in the storage space of the first operating environment.
  • the face recognition model stored in the first running environment may be loaded into the second running environment each time a terminal restart is detected. In this way, when face recognition processing is required on an image, The face recognition model loaded in the second running environment can be directly called for processing.
  • step 3302 may specifically include: when it is detected that the terminal is restarted, obtaining a face recognition model stored in the first operating environment.
  • the face recognition model can be updated.
  • the server will send the updated face recognition model to the terminal.
  • after the terminal receives the updated face recognition model, it stores the updated face recognition model in the first operating environment, overwriting the original face recognition model. The terminal is then controlled to restart; after the restart, the terminal obtains the updated face recognition model and initializes the updated face recognition model.
  • Step 3304 Initialize the face recognition model in the first operating environment, obtain the space capacity of the shared buffer, and divide the face recognition model into at least two model data packets according to the space capacity.
  • before face recognition processing is performed by the face recognition model, the face recognition model needs to be initialized. During initialization, parameters, modules, etc. in the face recognition model can be set to a default state. Because initializing the model also requires memory, the terminal can initialize the face recognition model in the first operating environment and then send the initialized face recognition model to the second operating environment, so that face recognition processing can be performed directly in the second operating environment without occupying additional memory to initialize the model.
  • the face recognition model is stored in the form of a file, and may also be stored in another form, which is not limited herein.
  • the face recognition model may generally include multiple functional models, for example, it may include a face detection module, a face matching module, a living body detection module, and the like. Then, when cutting the face recognition model, it can be divided into at least two model data packets according to each functional module, which is convenient for subsequent recombination to generate the target face recognition model. In other embodiments, segmentation may be performed in other manners without limitation.
  • Step 3306 Assign a corresponding data number to each model data packet, and sequentially transfer the model data packet from the first running environment to the second running environment according to the data number.
  • when data is stored, it is generally written to consecutive storage addresses in order according to the storage time.
  • after segmenting the face recognition model, the segmented model data packets can be numbered, and the model data packets can then be sequentially transferred to the second operating environment for storage according to the data numbers. After the model data packet transmission is completed, the model data packets are stitched in order to generate the target face recognition model.
  • the data transmission between the first operating environment and the second operating environment can be implemented by using a shared buffer.
  • the face recognition model can be segmented according to the capacity of the shared buffer. Specifically, the space capacity of the shared buffer is obtained, and the face recognition model is divided into at least two model data packets according to the space capacity; wherein the data amount of each model data packet is less than or equal to the space capacity.
  • the shared buffer is a channel through which the first and second operating environments transmit data, and both the first and second operating environments can access the shared buffer.
  • the electronic device can configure the shared buffer, and the space of the shared buffer can be set according to requirements. For example, the electronic device can set the storage space of the shared buffer to 5M or 10M.
  • since the face recognition model is segmented according to the capacity of the shared buffer before transmission, there is no need to configure a larger shared buffer to transmit the data, which reduces the resource occupation of the electronic device.
  • the method specifically includes: sequentially transferring model data packets from the first running environment to the shared buffer, and passing model data packets from the shared buffer to the second running environment.
  • Step 3306 may specifically include: assigning a corresponding data number to each model data packet, sequentially transferring the model data packets from the first operating environment to the shared buffer according to the data numbers, and then transferring the model data packets from the shared buffer to the second operating environment.
  • FIG. 5 is a schematic diagram of a system for implementing a data processing method in an embodiment.
  • the system includes a first operating environment 302, a shared buffer 304, and a second operating environment 306.
  • the first operating environment 302 and the second operating environment 306 can perform data transmission through the shared buffer 304.
  • the face recognition model is stored in the first running environment 302.
  • the system can obtain the face recognition model stored in the first operating environment 302, initialize the obtained face recognition model, and segment the initialized face recognition model; the model data packets formed after segmentation are transferred to the shared buffer 304, and then transferred to the second operating environment 306 through the shared buffer 304. Finally, the model data packets are stitched into the target face recognition model in the second operating environment 306.
  • step 3308 the model data packets are stitched according to the data number in the second operating environment to generate a target face recognition model.
  • the data number can be used to indicate the arrangement order of the model data packets. After the model data packets are transmitted to the second operating environment, they are sequentially arranged according to the data numbers, and then spliced according to that order to generate the target face recognition model.
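The split / number / transfer / splice flow described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the packet size, the single-packet shared buffer, and the function names are all assumptions for demonstration.

```python
def split_model(model_bytes: bytes, buffer_capacity: int):
    """Divide the model into numbered packets no larger than the buffer capacity."""
    return [
        (seq, model_bytes[i:i + buffer_capacity])
        for seq, i in enumerate(range(0, len(model_bytes), buffer_capacity))
    ]

def transfer_and_splice(packets):
    """Simulate sequential transfer through a shared buffer that holds one
    packet at a time, then splice the packets by data number on arrival."""
    received = []
    for seq, chunk in packets:
        shared_buffer = chunk            # the buffer only ever holds one packet
        received.append((seq, shared_buffer))
    received.sort(key=lambda p: p[0])    # arrange packets by data number
    return b"".join(chunk for _, chunk in received)
```

Because each packet carries its data number, the target model can be reassembled correctly even if packets arrive out of order.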
  • FIG. 19 is a schematic diagram of a segmented face recognition model in one embodiment.
  • the face recognition model 3502 is stored in a file form, and the face recognition model 3502 is divided into three model data packages 3504.
  • the model data package 3504 may also be in a file format.
  • the data amount of the segmented model data packet 3504 is smaller than that of the face recognition model 3502, and the data amount of each model data packet 3504 may be the same or different. For example, if the face recognition model 3502 has a total of 30M, it can be divided evenly according to the amount of data, and each model data packet is 10M.
  • Step 3310 When a face recognition instruction is detected, determine the security level of the face recognition instruction.
  • a face recognition model is stored in both the first operating environment and the second operating environment.
  • the terminal may perform face recognition processing in the first operating environment, or may perform face recognition processing in the second operating environment. Specifically, the terminal may determine whether to perform face recognition processing in the first operating environment or face recognition processing in the second operating environment according to the face recognition instruction that triggers the face recognition processing.
  • the face recognition instruction is initiated by the upper-layer application of the terminal.
  • the upper-layer application initiates the face recognition instruction
  • information such as the time when the face recognition instruction was initiated, the application identifier, and the operation identifier may be written into the face recognition instruction.
  • the application identifier may be used to indicate an application that initiates a face recognition instruction
  • the operation identifier may be used to indicate an application operation that requires a face recognition result.
  • application operations such as payment, unlocking, and beauty can be performed through the result of face recognition
  • the operation identifier in the face recognition instruction is used to indicate application operations such as payment, unlocking, and beauty.
  • the security level is used to indicate the security level of the application operation.
  • the higher the security level the higher the security requirements of the application operation.
  • the payment operation requires higher security
  • the beauty operation requires lower security, so the security level of the payment operation is higher than that of the beauty operation.
  • the security level can be directly written into the face recognition instruction; after the terminal detects the face recognition instruction, it directly reads the security level from the instruction.
  • a correspondence between operation identifiers and security levels may also be established in advance; after the face recognition instruction is detected, the corresponding security level is obtained through the operation identifier in the instruction.
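A minimal sketch of the identifier-to-level correspondence and the threshold routing that follows. The operation identifiers, level values, and threshold below are illustrative assumptions, not values from the patent.

```python
# Hypothetical pre-established correspondence: operation identifier -> security level.
SECURITY_LEVELS = {"payment": 3, "unlock": 3, "beauty": 1}
LEVEL_THRESHOLD = 2

def route_face_recognition(operation_id: str) -> str:
    """Look up the security level behind a face recognition instruction and
    choose the operating environment that should perform the recognition."""
    level = SECURITY_LEVELS[operation_id]
    if level < LEVEL_THRESHOLD:
        return "first_operating_environment"   # e.g. the ordinary environment
    return "second_operating_environment"      # e.g. the trusted environment
```

High-security operations such as payment are thereby routed to the second operating environment, while low-security operations such as beauty stay in the first.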
  • Step 3312 if the security level is lower than the level threshold, perform face recognition processing according to the face recognition model in the first operating environment.
  • the face recognition processing may include, but is not limited to, one or more of face detection, face matching, and living body detection.
  • Face detection refers to a process of detecting whether a face exists in an image
  • face matching refers to the process of matching the detected human face with a preset human face
  • Living body detection refers to the process of detecting whether a human face in an image is a living body.
  • Step 3314 if the security level is higher than the level threshold, perform face recognition processing according to the face recognition model in the second operating environment; wherein the security of the second operating environment is higher than that of the first operating environment.
  • the face recognition processing may be performed according to the face recognition model in the second operating environment.
  • the terminal may send a face recognition instruction to the second operating environment, and control the camera module to collect images through the second operating environment.
  • the captured image will be sent to the second operating environment first, and the security level of the application operation will be judged in the second operating environment. If the security level is lower than the level threshold, the captured image will be sent to the first operating environment for face recognition processing; if the security level is higher than the level threshold, face recognition processing is performed on the acquired image in the second operating environment.
  • the method when performing face recognition processing in the first operating environment, the method includes:
  • Step 502 Control the camera module to collect the first target image and the speckle image, and send the first target image to the first operating environment, and send the speckle image to the second operating environment.
  • the application installed in the terminal may initiate a face recognition instruction and send the face recognition instruction to a second operating environment.
  • the camera module can be controlled to collect the first target image and the speckle image.
  • the first target image collected by the camera module can be directly sent to the first operating environment, and the collected speckle image is sent to the second operating environment.
  • the first target image may be a visible light image, or may be another type of image, which is not limited herein.
  • the camera module may include an RGB (Red Green Blue) camera, and the first target image is collected by the RGB camera.
  • the camera module may further include a laser light and a laser camera. The terminal can control the laser light to be turned on, and then collect a speckle image formed by the laser speckle emitted by the laser light and irradiating the object through the laser camera.
  • when a laser illuminates an optically rough surface, the wavelets scattered by randomly distributed surface elements are superimposed on each other, so that the reflected light field has a random spatial light intensity distribution and appears as a granular structure; this is laser speckle.
  • the formed laser speckles are highly random, so the laser speckles generated by different laser emitters are different.
  • the generated speckle images are different.
  • Laser speckles formed by different laser lights are unique, and the speckle images obtained are also unique.
  • Step 504 Calculate a depth image according to the speckle image in the second operating environment, and send the depth image to the first operating environment.
  • the terminal will ensure that the speckle image is always processed in a secure environment, so the terminal will transmit the speckle image to a second operating environment for processing.
  • the depth image is an image used to represent the depth information of the photographed object.
  • the depth image can be obtained from the speckle image calculation.
  • the terminal can control the camera module to collect the first target image and the speckle image at the same time, and the depth image calculated from the speckle image can represent the depth information of the object in the first target image.
  • a depth image may be calculated from the speckle image and the reference image in the second operating environment.
  • the reference image is an image acquired when the laser speckle is irradiated onto a reference plane, so the reference image carries reference depth information.
  • the relative depth can be calculated according to the positional offset of the speckles in the speckle image relative to the speckles in the reference image.
  • the relative depth can represent the depth information of the actual shooting object to the reference plane. Then calculate the actual depth information of the object based on the obtained relative depth and reference depth.
  • the reference image is compared with the speckle image to obtain offset information, and the offset information is used to represent a horizontal offset of the speckle in the speckle image relative to the corresponding speckle in the reference image; according to the offset information and the reference depth The information is calculated to obtain the depth image.
  • FIG. 8 is a schematic diagram of calculating depth information in one embodiment.
  • the laser light 602 can generate a laser speckle.
  • the formed image is acquired by a laser camera 604.
  • the laser speckle emitted by the laser light 602 is reflected by the reference plane 608, and then the reflected light is collected by the laser camera 604, and the reference image is obtained by imaging through the imaging plane 610.
  • the reference depth from the reference plane 608 to the laser light 602 is L, and the reference depth is known.
  • the laser speckle emitted by the laser light 602 is reflected by the object 606, and the reflected light is collected by the laser camera 604, and the actual speckle image is obtained through the imaging plane 610.
  • the actual depth information Dis of the object 606 can then be obtained by the formula Dis = CD × L × f / (CD × f + AB × L), where:
  • L is the distance between the laser light 602 and the reference plane 608
  • f is the focal length of the lens in the laser camera 604
  • CD is the distance between the laser light 602 and the laser camera 604
  • AB is the offset distance between the imaging of the object 606 and the imaging of the reference plane 608
  • AB may be taken as the product of the pixel offset n and the actual pixel pitch p.
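The depth calculation can be sketched numerically. This sketch assumes the disparity-relative-to-reference formula Dis = CD·L·f / (CD·f + AB·L) with a signed offset AB = n·p (positive when the object is closer than the reference plane); the specific numbers in the example are illustrative, not from the patent.

```python
def depth_from_offset(L: float, f: float, CD: float, n: float, p: float) -> float:
    """Actual depth Dis of the object, given the reference depth L, lens focal
    length f, baseline CD, pixel offset n, and pixel pitch p."""
    AB = n * p                            # offset distance on the imaging plane
    return (CD * L * f) / (CD * f + AB * L)
```

As a sanity check, a zero speckle offset means the object lies on the reference plane, so the returned depth equals the reference depth L; a positive offset yields a smaller depth.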
  • Step 506 Perform face recognition processing on the first target image and the depth image through the face recognition model in the first running environment.
  • the calculated depth image may be sent to the first operating environment, and then face recognition processing is performed according to the first target image and the depth image in the first operating environment.
  • the running environment then sends the face recognition result to the upper-layer application, and the upper-layer application can perform corresponding application operations according to the face recognition result.
  • the position and area where the human face is located can be detected through the first target image. Since the first target image and the depth image correspond to each other, the depth information of the face can be obtained from the corresponding area of the depth image, and the three-dimensional features of the face can be constructed based on that depth information, so that the face can be beautified according to its three-dimensional features.
  • the method when performing face recognition processing in the second operating environment, the method specifically includes:
  • Step 702 Control the camera module to collect a second target image and a speckle image, and send the second target image and the speckle image to a second operating environment.
  • the second target image may be an infrared image.
  • the camera module may include a flood light, a laser light, and a laser camera.
  • the terminal may control the flood light to be turned on, and then use the laser camera to collect the infrared image formed by the flood light illuminating the object; this infrared image is used as the second target image.
  • the terminal can also control the laser light to turn on, and then use a laser camera to collect the speckle image formed by the laser light illuminating the object.
  • the time interval between the acquisition of the second target image and the speckle image must be relatively short to ensure the consistency of the two images and avoid a large error between them, which improves the accuracy of image processing.
  • the camera module is controlled to collect the second target image, and the camera module is controlled to collect the speckle image, wherein the time interval between the first moment when the second target image is collected and the second moment when the speckle image is collected is less than a first threshold.
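The consistency constraint above amounts to a simple guard on the two capture timestamps. The threshold value here is an assumption for illustration; the patent only requires it to be "relatively short".

```python
FIRST_THRESHOLD = 0.005  # seconds; illustrative value, not from the patent

def images_consistent(t1: float, t2: float,
                      threshold: float = FIRST_THRESHOLD) -> bool:
    """True when the interval between the capture time of the second target
    image (t1) and that of the speckle image (t2) is below the first threshold,
    so the image pair can be treated as depicting the same scene state."""
    return abs(t1 - t2) < threshold
```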
  • specifically, two-way PWM (Pulse Width Modulation) may be used to control the flood light and the laser light to turn on respectively.
  • the second target image may be an infrared image or other types of images, which is not limited herein.
  • the second target image may be a visible light image.
  • Step 704 Calculate a depth image according to the speckle image in the second operating environment.
  • if the security level of the face recognition instruction is higher than the level threshold, it is considered that the application operation that initiated the instruction has higher security requirements, and face recognition processing needs to be performed in a high-security environment to ensure the security of data processing.
  • the second target image and the speckle image collected by the camera module are directly sent to the second operating environment, and then a depth image is calculated based on the speckle image in the second operating environment.
  • Step 706 Perform face recognition processing on the second target image and the depth image through the face recognition model in the second running environment.
  • when performing face recognition processing in the second operating environment, face detection may be performed according to the second target image to detect whether it includes a target face. If the second target image includes a target face, the detected target face is matched with a preset face. If the detected target face matches the preset face, target depth information of the target face is obtained from the depth image, and whether the target face is a living body is detected based on the target depth information.
  • specifically, the face attribute features of the target face can be extracted and matched against preset face attribute features; if the matching value exceeds the matching threshold, face matching is considered successful. For example, features such as the face deflection angle, brightness information, and facial features can be extracted as face attribute features; if the degree of matching between the target face's attribute features and the preset face's attribute features exceeds 90%, face matching is considered successful.
  • if the extracted face attribute features match the preset face attribute features, the face may also be considered successfully authenticated.
  • the living body detection processing can be performed according to the collected depth image.
  • the collected second target image can represent detailed information of a human face
  • the collected depth image can represent corresponding depth information
  • the living body detection can be performed according to the depth image. For example, if the captured face is a face in a photo, it can be determined that the collected face is not three-dimensional based on the depth image, and the collected face can be considered as a non-living face.
  • performing the living body detection according to the corrected depth image includes: finding the face depth information corresponding to the target face in the depth image; if depth information corresponding to the target face exists in the depth image, and that face depth information conforms to the stereo rule of a face, the target face is a living face.
  • the aforementioned stereo rule of a face is a rule carrying three-dimensional depth information of the face.
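One simple reading of the stereo rule can be sketched as follows: a live face must show enough depth variation across the face region, whereas a flat photo yields nearly constant depth. The variation threshold and the scalar depth-sample representation are assumptions for illustration, not the patent's rule.

```python
def is_live_face(face_depths, min_depth_range: float = 0.01) -> bool:
    """face_depths: depth values (in meters) sampled inside the detected face
    region of the depth image. An empty input means no face depth was found,
    which fails the living body check."""
    if not face_depths:
        return False
    # A flat surface (e.g. a printed photo) has almost no depth variation.
    return (max(face_depths) - min(face_depths)) >= min_depth_range
```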
  • an artificial intelligence model may also be used to perform artificial intelligence recognition on the second target image and the depth image, obtain the living body attribute features corresponding to the target face, and determine whether the target face is a living face image based on the obtained living body attribute features.
  • the living body attribute features can include the skin characteristics corresponding to the target face, and the direction, density, and width of skin texture; if the living body attribute features conform to the rules for a living face, the target face is considered biologically active, that is, a living human face.
  • the processing order can be switched as needed.
  • the face may be authenticated first, and then whether the face is alive is detected. You can also detect whether the face is alive, and then authenticate the face.
  • the compressed face recognition model may be encrypted, and the encrypted face recognition model may be transferred from the first operating environment to the second operating environment; in the second operating environment, the encrypted face recognition model is decrypted, and the decrypted face recognition model is stored.
  • the first operating environment may be an ordinary operating environment
  • the second operating environment is a safe operating environment
  • the safety of the second operating environment is higher than the first operating environment.
  • the first operating environment is generally used to process application operations with lower security
  • the second operating environment is generally used to process application operations with higher security. For example, operations with low security requirements such as shooting and gaming can be performed in the first operating environment, and operations with high security requirements such as payment and unlocking can be performed in the second operating environment.
  • the second operating environment is generally used for application operations with high security requirements. Therefore, when sending a face recognition model to the second operating environment, it is also necessary to ensure the security of the face recognition model.
  • the compressed face recognition model may be encrypted, and then the encrypted face recognition model is sent to the second running environment through the shared buffer.
  • after the encrypted face recognition model is transferred from the first running environment to the shared buffer, it is then transferred from the shared buffer to the second running environment.
  • the second operating environment performs decryption processing on the received encrypted face recognition model.
  • the algorithm for encrypting the face recognition model is not limited in this embodiment.
  • encryption processing may be performed according to algorithms such as DES (Data Encryption Standard), MD5 (Message-Digest Algorithm 5), HAVAL, or Diffie-Hellman key exchange.
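The encrypt-before-transfer / decrypt-on-arrival flow can be sketched in a self-contained way. Since the ciphers named in the text need external libraries, this sketch substitutes a SHA-256-based XOR keystream as a stand-in; it illustrates the flow only and is not the algorithm the patent specifies.

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    # Expand the key into a pseudo-random byte stream (SHA-256 in counter mode).
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_model(model: bytes, key: bytes) -> bytes:
    """Performed in the first operating environment before the model data
    crosses the shared buffer."""
    return bytes(m ^ k for m, k in zip(model, _keystream(key, len(model))))

# XOR with the same keystream undoes the encryption, so decryption
# (performed in the second operating environment) is the same operation.
decrypt_model = encrypt_model
```

In a real design the key itself would have to be provisioned to the trusted environment through a secure channel, which this sketch does not model.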
  • the method may further include: when it is detected that the duration that the target face recognition model has not been called exceeds a duration threshold, or when it is detected that the terminal is turned off, The target face recognition model in the second running environment is deleted. In this way, the storage space in the second operating environment can be released to save space in the electronic device.
  • the operating condition can be detected during the operation of the electronic device, and the storage space occupied by the target face recognition model can be released according to that condition. Specifically, when it is detected that the electronic device is in a lagging state and the target face recognition model has not been called for longer than the duration threshold, the target face recognition model in the second operating environment is deleted.
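The eviction policy described above can be sketched as a small cache wrapper. The class name, clock handling, and threshold value are assumptions for illustration.

```python
import time

class ModelCache:
    """Holds the target face recognition model in the second environment and
    releases it on shutdown or after an idle timeout."""

    def __init__(self, duration_threshold: float):
        self.duration_threshold = duration_threshold
        self.model = None
        self.last_called = None

    def load(self, model):
        self.model, self.last_called = model, time.monotonic()

    def call(self):
        # Every invocation of the model refreshes the idle timer.
        self.last_called = time.monotonic()
        return self.model

    def maybe_evict(self, shutting_down: bool = False):
        """Delete the model when the terminal is turned off, or when the model
        has not been called for longer than the duration threshold."""
        if self.model is None:
            return
        idle = time.monotonic() - self.last_called
        if shutting_down or idle > self.duration_threshold:
            self.model, self.last_called = None, None
```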
  • the face recognition model stored in the first operating environment can be obtained when it is detected that the electronic device is restored to a normal operating state or a face recognition instruction is detected;
  • the face recognition model is initialized, and the initialized face recognition model is divided into at least two model data packets; the model data packets are sequentially transferred from the first running environment to the second running environment, and in the second running environment Generate a target face recognition model based on the model data package.
  • the data processing method provided in the foregoing embodiment may store a face recognition model in a first running environment, initialize the face recognition model in the first running environment, divide the initialized face recognition model into at least two model data packets, and then transmit the data packets to the second operating environment. Since the storage space in the second operating environment is smaller than that in the first operating environment, initializing the face recognition model in the first operating environment can improve the initialization efficiency of the model, reduce resource utilization in the second operating environment, and improve data processing speed. At the same time, dividing the face recognition model into multiple data packets for transmission improves the efficiency of data transmission. In addition, processing is selected in the first or second operating environment according to the security level of the face recognition instruction, which avoids all applications being processed in the second operating environment and can reduce its resource occupation rate.
  • although the steps in the flowcharts of FIG. 17, FIG. 18, FIG. 7, and FIG. 9 are displayed sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated in this document, the execution order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in FIG. 17, FIG. 18, FIG. 7, and FIG. 9 may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; their execution order is not necessarily sequential, and they may be performed in turn or alternately with at least part of the other steps, or of the sub-steps or stages of other steps.
  • FIG. 10 is a hardware structure diagram of a data processing method according to an embodiment.
  • the electronic device may include a camera module 810, a Central Processing Unit (CPU) 820, and a Microcontroller Unit (MCU) 830.
  • the camera module 810 includes a laser camera 812 , Flood light 814, RGB camera 816, and laser light 818.
  • the micro control unit 830 includes a PWM (Pulse Width Modulation) module 832, an SPI/I2C (Serial Peripheral Interface/Inter-Integrated Circuit) module 834, a RAM (Random Access Memory) module 836, and a Depth Engine module 838.
  • the central processing unit 820 may be in a multi-core operating mode, and the CPU core in the central processing unit 820 may run under TEE or REE. Both TEE and REE are operating modes of the ARM module (Advanced RISC Machines, Advanced Reduced Instruction Set Processor).
  • the natural operating environment 822 in the central processing unit 820 may be the first operating environment, and the security is low.
  • the trusted operating environment 824 in the central processing unit 820 is the second operating environment and has high security.
  • since the micro control unit 830 is a processing module independent of the central processing unit 820, and its input and output are controlled by the central processing unit 820 under the trusted operating environment 824, the micro control unit 830 is also a processing module with higher security; it may be considered that the micro control unit 830 is also in a safe operating environment, that is, the micro control unit 830 is also in the second operating environment.
  • the central processing unit 820 can control the SECURE SPI / I2C through the trusted operating environment 824 to send a face recognition instruction to the SPI / I2C module 834 in the micro control unit 830.
  • when the micro control unit 830 receives the face recognition instruction, if it determines that the security level of the instruction is higher than the level threshold, the PWM module 832 transmits pulse waves to control the flood light 814 in the camera module 810 to turn on to collect an infrared image, and to control the laser light 818 in the camera module 810 to turn on to collect speckle images.
  • the camera module 810 can transmit the acquired infrared image and speckle image to the Depth Engine module 838 in the micro control unit 830.
  • the Depth Engine module 838 can calculate the depth image based on the speckle image and send the infrared image and the depth image to the trusted operating environment 824 of the central processing unit 820.
  • the trusted operating environment 824 of the central processing unit 820 performs face recognition processing according to the received infrared image and depth image.
  • the PWM module 832 transmits a pulse wave to control the laser light 818 in the camera module 810 to turn on to collect speckle images, and the RGB camera 816 to collect visible light images.
  • the camera module 810 directly sends the collected visible light image to the natural operating environment 822 of the central processing unit 820, and transmits the speckle image to the Depth Engine module 838 in the micro control unit 830.
  • the Depth Engine module 838 can calculate the depth image based on the speckle image and send the depth image to the trusted operating environment 824 of the central processing unit 820. The trusted operating environment 824 then sends the depth image to the natural operating environment 822, and in the natural operating environment 822, face recognition processing is performed according to the visible light image and the depth image.
  • FIG. 20 is a schematic structural diagram of a data processing apparatus according to an embodiment.
  • the data processing device 1020 includes a model acquisition module 1022, a model segmentation module 1024, and a model transmission module 1026, wherein:
  • a model acquisition module 1022 is configured to acquire a face recognition model stored in a first operating environment.
  • a model segmentation module 1024 is configured to initialize the face recognition model in the first operating environment, and divide the initialized face recognition model into at least two model data packets.
  • a model transmission module 1026 is configured to sequentially transfer the model data package from the first running environment to the second running environment, and generate a target face recognition model according to the model data package in the second running environment. Where the storage space of the first operating environment is larger than the storage space of the second operating environment, and the target face recognition model is used to perform face recognition processing on the image. That is: the model overall processing module includes a model segmentation module and a model transmission module.
  • the data processing device may store a face recognition model in a first running environment, initialize the face recognition model in the first running environment, divide the initialized face recognition model into at least two model data packets, and then transmit the data packets to the second operating environment. Since the storage space in the second operating environment is smaller than that in the first operating environment, initializing the face recognition model in the first operating environment can improve the initialization efficiency of the model, reduce resource utilization in the second operating environment, and improve data processing speed. At the same time, dividing the face recognition model into multiple data packets for transmission improves the efficiency of data transmission.
  • FIG. 21 is a schematic structural diagram of a data processing apparatus in another embodiment.
  • the data processing device 1040 includes a model acquisition module 1042, a model segmentation module 1044, a model transmission module 1046, and a face recognition module 1048, wherein:
  • a model acquisition module 1042 is configured to acquire a face recognition model stored in a first operating environment.
  • a model segmentation module 1044 is configured to initialize the face recognition model in the first operating environment, and divide the initialized face recognition model into at least two model data packets.
  • a model transmission module 1046 is configured to sequentially transfer the model data packet from the first running environment to a second running environment, and generate a target face recognition model according to the model data packet in the second running environment. Where the storage space of the first operating environment is larger than the storage space of the second operating environment, and the target face recognition model is used to perform face recognition processing on the image.
  • a face recognition module 1048 is configured to determine the security level of a face recognition instruction when the instruction is detected; if the security level is lower than a level threshold, face recognition processing is performed according to the face recognition model in the first operating environment; if the security level is higher than the level threshold, face recognition processing is performed according to the face recognition model in the second operating environment, wherein the security of the second operating environment is higher than that of the first operating environment.
  • the data processing method provided in the foregoing embodiment may store a face recognition model in a first running environment, initialize the face recognition model in the first running environment, divide the initialized face recognition model into at least two model data packets, and then transmit the data packets to the second operating environment. Since the storage space in the second operating environment is smaller than that in the first operating environment, initializing the face recognition model in the first operating environment can improve the initialization efficiency of the model, reduce resource utilization in the second operating environment, and improve data processing speed. At the same time, dividing the face recognition model into multiple data packets for transmission improves the efficiency of data transmission. In addition, processing is selected in the first or second operating environment according to the security level of the face recognition instruction, which avoids all applications being processed in the second operating environment and can reduce its resource occupation rate.
  • the model segmentation module 1044 is further configured to obtain the space capacity of the shared buffer and segment the face recognition model into at least two model data packets according to the space capacity, wherein the data amount of each model data packet is less than or equal to the space capacity.
  • the model transmission module 1046 is further configured to sequentially transfer the model data packets from the first operating environment to the shared buffer, and from the shared buffer to the second operating environment.
  • the model transmission module 1046 is further configured to assign a corresponding data number to each model data packet, sequentially transfer the model data packets from the first operating environment to the second operating environment according to the data numbers, and stitch the model data packets together in the second operating environment according to the data numbers to generate a target face recognition model.
  • the model transmission module 1046 is further configured to encrypt the model data packets, transfer the encrypted model data packets from the first operating environment to the second operating environment, and decrypt the encrypted model data packets in the second operating environment.
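The segmentation, numbering, encrypted transfer, and reassembly described by modules 1044 and 1046 can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: the XOR "cipher" stands in for a real algorithm such as the DES mentioned later in the description, and the 4 KB buffer capacity is an assumed value.

```python
def split_model(model: bytes, buffer_capacity: int):
    """Segment a serialized model into (number, payload) packets that fit the shared buffer."""
    return [(i, model[off:off + buffer_capacity])
            for i, off in enumerate(range(0, len(model), buffer_capacity))]

def xor_crypt(data: bytes, key: int = 0x5A) -> bytes:
    # Placeholder cipher for illustration only; a real system would use e.g. DES or AES.
    return bytes(b ^ key for b in data)

def transfer(packets, buffer_capacity: int) -> bytes:
    """Simulate moving packets one at a time through the shared buffer, then stitching them."""
    received = {}
    for number, payload in packets:
        assert len(payload) <= buffer_capacity        # each packet must fit the shared buffer
        received[number] = xor_crypt(xor_crypt(payload))  # encrypt in env 1, decrypt in env 2
    # Reassemble by data number to rebuild the target face recognition model.
    return b"".join(received[i] for i in sorted(received))

model = bytes(range(256)) * 40          # 10240-byte stand-in for a model file
packets = split_model(model, 4096)      # assumed 4 KB shared buffer
assert transfer(packets, 4096) == model
```

Out-of-order arrival is harmless here because reassembly sorts on the data numbers, which is the purpose of numbering the packets.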
  • the face recognition module 1048 is further configured to control a camera module to capture a first target image and a speckle image, send the first target image to the first operating environment, and send the speckle image to the second operating environment; obtain a depth image from the speckle image in the second operating environment and send the depth image to the first operating environment; and perform face recognition processing on the first target image and the depth image with the face recognition model in the first operating environment.
  • the face recognition module 1048 is further configured to control the camera module to capture a second target image and a speckle image, send the second target image and the speckle image to the second operating environment, calculate a depth image from the speckle image in the second operating environment, and perform face recognition processing on the second target image and the depth image with the face recognition model in the second operating environment.
  • each module in the above data processing device is only for illustration. In other embodiments, the data processing device may be divided into different modules according to requirements to complete all or part of the functions of the above data processing device.
  • An embodiment of the present application further provides a computer-readable storage medium.
  • One or more non-transitory computer-readable storage media containing computer-executable instructions which, when executed by one or more processors, cause the processors to execute the data processing methods provided in the foregoing first, second, and third embodiments.
  • An embodiment of the present application further provides a computer program product containing instructions which, when run on a computer, cause the computer to execute the data processing methods provided in the foregoing first, second, and third embodiments.
  • an embodiment of the present application further provides a computer-readable storage medium 300 on which a computer program is stored.
  • When the computer program is executed by the processor 210, the data processing methods of the foregoing first, second, and third embodiments are implemented.
  • The data processing methods of the foregoing embodiments are not limited to being executed by the processor 210.
  • an embodiment of the present application further provides an electronic device 400.
  • the electronic device 400 includes a memory 420 and a processor 410.
  • the memory 420 stores computer-readable instructions.
  • the processor 410 executes the data processing methods of the first, second, and third embodiments described above.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM), which is used as external cache memory.
  • RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).


Abstract

A data processing method and apparatus, a computer-readable storage medium, and an electronic device. The data processing method includes: obtaining a face recognition model stored in a first operating environment (201); initializing the face recognition model in the first operating environment, and transferring the initialized face recognition model to a second operating environment for storage; wherein the storage space of the first operating environment is larger than the storage space of the second operating environment (203).

Description

Data processing method and apparatus, computer-readable storage medium, and electronic device
Priority Information
This application claims priority to and the benefit of Chinese patent applications Nos. 201810864804.4, 201810866139.2 and 201810864802.5, filed with the China National Intellectual Property Administration on August 1, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computer technology, and in particular to a data processing method and apparatus, a computer-readable storage medium, and an electronic device.
Background
Face recognition technology is increasingly applied in people's work and daily life: face images can be captured for payment authentication and unlock authentication, and captured face images can be beautified. Face recognition technology can detect the face in an image and identify whose face it is, thereby identifying the user.
Summary
Embodiments of the present application provide a data processing method and apparatus, a computer-readable storage medium, and an electronic device.
A data processing method includes: obtaining a face recognition model stored in a first operating environment; initializing the face recognition model in the first operating environment, and transferring the initialized face recognition model to a second operating environment for storage; wherein the storage space of the first operating environment is larger than the storage space of the second operating environment.
A data processing apparatus includes a model obtaining module and an overall model processing module. The model obtaining module is configured to obtain a face recognition model stored in a first operating environment. The overall model processing module is configured to initialize the face recognition model in the first operating environment and transfer the initialized face recognition model to a second operating environment for storage; wherein the storage space of the first operating environment is larger than the storage space of the second operating environment.
A computer-readable storage medium stores a computer program which, when executed by a processor, implements the data processing method described above.
An electronic device includes a memory and a processor; the memory stores computer-readable instructions which, when executed by the processor, cause the processor to perform the data processing method described above.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; those of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flowchart of a data processing method according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the internal structure of an electronic device according to an embodiment of the present application;
FIG. 4 is a flowchart of a data processing method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a system implementing a data processing method according to an embodiment of the present application;
FIGS. 6 and 7 are flowcharts of data processing methods according to embodiments of the present application;
FIG. 8 is a schematic diagram of the principle of calculating depth information according to an embodiment of the present application;
FIG. 9 is a flowchart of a data processing method according to an embodiment of the present application;
FIG. 10 is a hardware structural diagram implementing a data processing method according to an embodiment of the present application;
FIGS. 11 and 12 are schematic structural diagrams of data processing apparatuses according to embodiments of the present application;
FIGS. 13 and 14 are flowcharts of data processing methods according to embodiments of the present application;
FIG. 15 is a schematic diagram of a system implementing a data processing method according to an embodiment of the present application;
FIG. 16 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application;
FIGS. 17 and 18 are flowcharts of data processing methods according to embodiments of the present application;
FIG. 19 is a schematic diagram of segmenting a face recognition model according to an embodiment of the present application;
FIGS. 20 and 21 are schematic structural diagrams of data processing apparatuses according to embodiments of the present application;
FIG. 22 is a schematic diagram of the connection between an electronic device and a computer-readable storage medium according to an embodiment of the present application;
FIG. 23 is a module diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present application and are not intended to limit it.
It can be understood that the terms "first", "second", and the like used in this application may be used herein to describe various elements, but these elements are not limited by these terms; the terms are only used to distinguish one element from another. For example, without departing from the scope of the present application, a first client may be referred to as a second client, and similarly a second client may be referred to as a first client. The first client and the second client are both clients, but they are not the same client.
Referring to FIG. 1, an embodiment of the present application provides a data processing method, including:
Step 201: obtain a face recognition model stored in a first operating environment;
Step 203: initialize the face recognition model in the first operating environment, and transfer the initialized face recognition model to a second operating environment for storage; wherein the storage space of the first operating environment is larger than the storage space of the second operating environment.
Referring to FIG. 2, an embodiment of the present application provides a data processing apparatus 910, which includes a model obtaining module 912 and an overall model processing module 914. The model obtaining module 912 is configured to obtain a face recognition model stored in a first operating environment. The overall model processing module 914 is configured to initialize the face recognition model in the first operating environment and transfer the initialized face recognition model to a second operating environment for storage, wherein the storage space of the first operating environment is larger than the storage space of the second operating environment.
First Embodiment:
FIG. 3 is a schematic diagram of the internal structure of an electronic device according to an embodiment. As shown in FIG. 3, the electronic device 100 includes a processor 110, a memory 120, and a network interface 130 connected through a system bus 140. The processor 110 provides computing and control capabilities and supports the operation of the entire electronic device 100. The memory 120 stores data, programs, and the like; it stores at least one computer program 1224 that can be executed by the processor 110 to implement the data processing methods applicable to the electronic device 100 provided in the embodiments of the present application. The memory 120 may include a non-volatile storage medium such as a magnetic disk, an optical disc, or a read-only memory (ROM), or a random-access memory (RAM). For example, in one embodiment, the memory 120 includes a non-volatile storage medium 122 and an internal memory 124. The non-volatile storage medium 122 stores an operating system 1222 and the computer program 1224, which can be executed by the processor 110 to implement the data processing methods provided in the following embodiments. The internal memory 124 provides a cached runtime environment for the operating system 1222 and the computer program 1224 in the non-volatile storage medium 122. The network interface 130 may be an Ethernet card, a wireless network card, or the like, and is used to communicate with external electronic devices. The electronic device 100 may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
FIG. 4 is a flowchart of a data processing method according to an embodiment. As shown in FIG. 4, the data processing method includes steps 1202 to 1206.
Step 1202: obtain a face recognition model stored in the first operating environment.
Specifically, the electronic device may include a processor, which can store, compute, and transmit data. The processor may run in different environments: for example, in a TEE (Trusted Execution Environment) or in a REE (Rich Execution Environment). When running in the TEE, data security is higher; when running in the REE, data security is lower.
The electronic device can allocate processor resources, assigning different resources to different operating environments. For example, since an electronic device usually has fewer high-security processes than ordinary processes, it can allocate a small portion of processor resources to the higher-security operating environment and most of the resources to the lower-security operating environment.
The face recognition model is an algorithm model used to recognize faces in images, and is generally stored in the form of a file. It can be understood that, because the algorithm for recognizing faces in images is complex, the model occupies considerable storage space. After the processor is divided into different operating environments, the storage space allocated to the first operating environment is larger than that allocated to the second operating environment, so the electronic device stores the face recognition model in the first operating environment to ensure that the second operating environment has enough space to process data.
Step 1204: initialize the face recognition model in the first operating environment, and transfer the initialized face recognition model to a shared buffer.
The shared buffer (Share Buffer) is a channel through which the first and second operating environments transmit data; both environments can access it. The electronic device stores the face recognition model in the first operating environment, initializes it there, places the initialized model into the shared buffer, and then transfers it from the shared buffer to the second operating environment.
It should be noted that the electronic device can configure the shared buffer and set its size as needed; for example, the shared buffer's storage space may be set to 5 MB or 10 MB. Before initializing the face recognition model in the first operating environment, the remaining storage space of the second operating environment may be obtained; if the remaining storage space is smaller than a space threshold, the model is initialized in the first operating environment and the initialized model is transferred to the shared buffer. The space threshold can be set as needed, and is generally the sum of the storage space occupied by the face recognition model and the storage space required to initialize it.
In the embodiments provided in the present application, if the remaining storage space of the second operating environment is large enough, the face recognition model can be sent directly to the second operating environment and initialized there; after initialization, the original model is deleted, which ensures data security. The data processing method may thus further include: if the remaining storage space is greater than or equal to the space threshold, transferring the face recognition model to the shared buffer, and from the shared buffer to the second operating environment; initializing the model in the second operating environment, deleting the pre-initialization model, and retaining the initialized model.
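The two branches above can be sketched as a small decision function. This is a minimal illustration of the remaining-space check; the numeric values in the usage lines are assumptions, not values fixed by the text.

```python
def choose_init_environment(remaining_space_mb: float, space_threshold_mb: float) -> str:
    """Decide where to initialize the model, per the remaining-space check described above."""
    if remaining_space_mb < space_threshold_mb:
        # Not enough room in the second environment: initialize in the first
        # environment, then pass only the initialized model through the shared buffer.
        return "initialize in first environment, transfer initialized model"
    # Enough room: transfer the raw model, initialize it inside the second
    # environment, and delete the pre-initialization copy afterwards.
    return "transfer raw model, initialize in second environment"

assert choose_init_environment(12, 30).startswith("initialize in first")
assert choose_init_environment(64, 30).startswith("transfer raw model")
```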
Step 1206: transfer the initialized face recognition model from the shared buffer to the second operating environment for storage; wherein the storage space of the first operating environment is larger than that of the second operating environment, and the face recognition model is used to perform face recognition processing on images. That is, initializing the face recognition model in the first operating environment and transferring the initialized model to the second operating environment for storage includes: initializing the face recognition model in the first operating environment and transferring the initialized model to the shared buffer; then transferring the initialized model from the shared buffer to the second operating environment for storage; wherein the face recognition model is used to perform face recognition processing on images.
In the embodiments provided in the present application, the electronic device performs face recognition processing on images in the second operating environment through the face recognition model. It should be noted that, before face recognition processing is performed on images, the model must be initialized. If the model were stored in the second operating environment, both storing it and initializing it would occupy storage space there, causing excessive resource consumption in the second operating environment and affecting data processing efficiency.
For example, suppose the face recognition model occupies 20 MB of memory and initializing it requires another 10 MB. If both storage and initialization take place in the second operating environment, a total of 30 MB of the second operating environment's memory is occupied. If instead the model is stored and initialized in the first operating environment and the initialized model is then sent to the second operating environment, only 10 MB of the second operating environment's memory is needed, greatly reducing its resource occupancy.
In one embodiment, step 1202 may start when an initialization condition is detected. For example, with the model stored in the first operating environment, the electronic device may initialize it at power-on, when an application requiring face recognition processing is opened, or when a face recognition instruction is detected, and then compress the initialized model before transferring it to the second operating environment.
FIG. 5 is a schematic diagram of a system implementing the data processing method according to an embodiment. As shown in FIG. 5, the system includes a first operating environment 302, a shared buffer 304, and a second operating environment 306. The first operating environment 302 and the second operating environment 306 can transmit data through the shared buffer 304. The face recognition model is stored in the first operating environment 302; the system obtains the model stored there, initializes it, transfers the initialized model into the shared buffer 304, and through the shared buffer 304 transfers the initialized model into the second operating environment 306.
It can be understood that the face recognition model may generally include multiple processing modules, each performing different processing, and these modules may be independent of one another: for example, a face detection module, a face matching module, and a liveness detection module. Some modules may have lower security requirements and others higher. Therefore, modules with lower security requirements can be initialized in the first operating environment, and modules with higher security requirements in the second operating environment.
Specifically, step 1204 may include: performing a first initialization on a first module of the face recognition model in the first operating environment, and transferring the first-initialized model to the shared buffer. Step 1206 may include: transferring the first-initialized model from the shared buffer to the second operating environment for storage. After step 1206, the method may include: performing a second initialization on a second module of the first-initialized model, wherein the second module is the part of the model other than the first module, and the security requirement of the first module is lower than that of the second module. For example, the first module may be the face detection module, and the second module may be the face matching module and the liveness detection module; the first module has lower security requirements and is therefore initialized in the first operating environment, while the second module has higher security requirements and is therefore initialized in the second operating environment.
With the data processing method provided in the above embodiment, the face recognition model can be stored in the first operating environment, initialized there, and then transferred through the shared buffer to the second operating environment. Since the storage space of the second operating environment is smaller than that of the first, initializing the face recognition model in the first operating environment improves the model's initialization efficiency and reduces resource occupancy in the second operating environment.
FIG. 6 is a flowchart of a data processing method according to another embodiment. As shown in FIG. 6, the data processing method includes steps 1402 to 1414.
Step 1402: the terminal receives a face recognition model sent by a server and stores the face recognition model in the terminal's first operating environment.
Generally, before face recognition processing, the face recognition model is trained to increase its recognition accuracy. During training, a training image set is obtained; its images are used as the model's inputs, and the training parameters are continuously adjusted according to the training results to obtain the model's optimal parameters. The more images the training set contains, the more accurate the resulting model, but the longer training takes.
In one embodiment, the electronic device may be a terminal that interacts with the user. Since terminal resources are limited, the face recognition model can be trained on a server. After training, the server sends the trained model to the terminal, which stores the trained model in the first operating environment.
Step 1404: when a restart of the terminal is detected, obtain the face recognition model stored in the first operating environment.
The terminal may include the first and second operating environments, and may perform face recognition processing on images in the second operating environment. Because the storage space allocated by the terminal to the first operating environment is larger than that allocated to the second, the terminal can store the received model in the storage space of the first operating environment. Each time a restart of the terminal is detected, the model stored in the first operating environment is loaded into the second, so that when face recognition processing is needed, the loaded model in the second operating environment can be invoked directly.
It can be understood that the face recognition model can be updated. When it is updated, the server sends the updated model to the terminal; after receiving it, the terminal stores the updated model in the first operating environment, overwriting the original model, and then the terminal is controlled to restart. After the restart, the terminal obtains the updated model and initializes it.
Step 1406: initialize the face recognition model in the first operating environment, encrypt the initialized model, and transfer the encrypted model to the shared buffer.
Before face recognition processing is performed through the model, it must be initialized. During initialization, the parameters, modules, and the like in the model can be set to default states. Since initializing the model also consumes memory, the terminal can initialize the model in the first operating environment and then send the initialized model to the second operating environment, so that face recognition processing can be performed there directly without occupying extra memory for initialization.
In the embodiments provided in the present application, the first operating environment may be an ordinary operating environment and the second a secure operating environment, the security of the second being higher than that of the first. The first operating environment is generally used to handle application operations with lower security requirements, and the second those with higher security requirements. For example, operations with low security requirements, such as photographing and gaming, can run in the first operating environment, while operations with high security requirements, such as payment and unlocking, can run in the second.
Since the second operating environment is generally used for application operations with high security requirements, the security of the face recognition model must also be guaranteed when it is sent there. After the model is initialized in the first operating environment, the initialized model can be encrypted and the encrypted model sent through the shared buffer to the second operating environment.
Step 1408: transfer the encrypted face recognition model from the shared buffer to the second operating environment for storage, and decrypt the encrypted model in the second operating environment.
The encrypted face recognition model is transferred from the first operating environment into the shared buffer, and then from the shared buffer into the second operating environment, which decrypts the received encrypted model. The algorithm used to encrypt the model is not limited in this embodiment; for example, encryption may be performed according to algorithms such as DES (Data Encryption Standard), MD5 (Message-Digest Algorithm 5), or HAVAL (Diffie-Hellman, key exchange algorithm).
Step 1410: when a face recognition instruction is detected, determine the security level of the face recognition instruction.
The face recognition model is stored in both the first and second operating environments, and the terminal can perform face recognition processing in either. Specifically, the terminal can determine, according to the face recognition instruction that triggers the processing, whether to perform face recognition processing in the first operating environment or in the second.
The face recognition instruction is initiated by an upper-layer application of the terminal. When initiating the instruction, the upper-layer application can write information such as the initiation time, an application identifier, and an operation identifier into it. The application identifier can mark the application that initiated the instruction, and the operation identifier can mark the application operation that requires the face recognition result. For example, if the face recognition result is used for application operations such as payment, unlocking, or beautification, the operation identifier in the instruction marks the payment, unlocking, or beautification operation.
The security level indicates how high the security of an application operation is; the higher the level, the higher the operation's security requirement. For example, payment has a high security requirement while beautification has a low one, so the security level of the payment operation is higher than that of the beautification operation. The security level may be written directly into the face recognition instruction, in which case the terminal reads it from the instruction upon detection; alternatively, a correspondence for operation identifiers may be established in advance, and after the instruction is detected, the corresponding security level is obtained from the operation identifier in the instruction.
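The lookup-and-dispatch logic above can be sketched as follows. The concrete operation identifiers, level values, and threshold are hypothetical; the text only fixes the ordering (payment above the threshold, beautification below).

```python
# Hypothetical operation-identifier table; the description leaves concrete levels unspecified.
OPERATION_LEVELS = {"payment": 3, "unlock": 3, "beautify": 1}
LEVEL_THRESHOLD = 2

def dispatch(instruction: dict) -> str:
    """Read the level from the instruction if present, else look it up by operation identifier."""
    level = instruction.get("security_level")
    if level is None:
        level = OPERATION_LEVELS[instruction["operation_id"]]
    if level < LEVEL_THRESHOLD:
        return "first operating environment"   # low security: REE-side recognition
    return "second operating environment"      # high security: TEE-side recognition

assert dispatch({"operation_id": "beautify"}) == "first operating environment"
assert dispatch({"operation_id": "payment"}) == "second operating environment"
```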
Step 1412: if the security level is lower than a level threshold, perform face recognition processing in the first operating environment according to the face recognition model.
When the security level is detected to be lower than the level threshold, the application operation that initiated the face recognition processing is considered to have low security requirements, so face recognition processing can be performed directly in the first operating environment according to the model. Specifically, face recognition processing may include, but is not limited to, one or more of face detection, face matching, and liveness detection: face detection is the process of detecting whether a face exists in an image; face matching is the process of matching a detected face against a preset face; liveness detection is the process of detecting whether the face in an image belongs to a living body.
Step 1414: if the security level is higher than the level threshold, perform face recognition processing in the second operating environment according to the face recognition model; wherein the security of the second operating environment is higher than that of the first.
When the security level is detected to be higher than the level threshold, the application operation that initiated the face recognition processing is considered to have high security requirements, so face recognition processing can be performed in the second operating environment according to the model. Specifically, the terminal can send the face recognition instruction to the second operating environment, which controls the camera module to capture images. The captured images are first sent to the second operating environment, where the security level of the application operation is determined: if the level is lower than the threshold, the captured images are sent to the first operating environment for face recognition processing; if higher, face recognition processing is performed on the captured images in the second operating environment.
Specifically, as shown in FIG. 7, face recognition processing in the first operating environment includes:
Step 502: control the camera module to capture a first target image and a speckle image, send the first target image to the first operating environment, and send the speckle image to the second operating environment.
An application installed in the terminal can initiate a face recognition instruction and send it to the second operating environment. When the second operating environment detects that the instruction's security level is lower than the level threshold, it can control the camera module to capture the first target image and the speckle image. The first target image captured by the camera module can be sent directly to the first operating environment, while the captured speckle image is sent to the second operating environment.
In one embodiment, the first target image may be a visible-light image or another type of image, which is not limited here. When the first target image is a visible-light image, the camera module may include an RGB (Red Green Blue) camera that captures it. The camera module may further include a laser emitter and a laser camera; the terminal can switch on the laser emitter and then use the laser camera to capture the speckle image formed when the laser speckle emitted by the laser emitter illuminates an object.
Specifically, when a laser illuminates an optically rough surface whose average relief is on the order of the wavelength or greater, the wavelets scattered by the randomly distributed surface elements superpose, giving the reflected light field a random spatial intensity distribution with a granular structure; this is laser speckle. The laser speckle thus formed is highly random, so the speckle generated by lasers from different emitters differs, and the speckle images generated when the formed speckle illuminates objects of different depths and shapes also differ. The laser speckle formed by a given laser emitter is unique, and the resulting speckle image is therefore also unique.
Step 504: in the second operating environment, calculate a depth image from the speckle image, and send the depth image to the first operating environment.
To protect data security, the terminal ensures that the speckle image is always processed in a secure environment, so it transfers the speckle image to the second operating environment for processing. The depth image is an image representing the depth information of the photographed object and can be calculated from the speckle image. The terminal can control the camera module to capture the first target image and the speckle image simultaneously, so the depth image calculated from the speckle image can represent the depth information of the objects in the first target image.
The depth image can be calculated in the second operating environment from the speckle image and a reference image. The reference image is the image captured when the laser speckle illuminates a reference plane, so it carries reference depth information. First, a relative depth can be calculated from the positional offset of the speckle points in the speckle image relative to those in the reference image; the relative depth can represent the depth from the actual photographed object to the reference plane. The actual depth information of the object is then calculated from the obtained relative depth and the reference depth. Specifically, the reference image is compared with the speckle image to obtain offset information, which represents the horizontal offset of each speckle point in the speckle image relative to the corresponding point in the reference image; the depth image is then calculated from the offset information and the reference depth information.
FIG. 8 is a schematic diagram of the principle of calculating depth information according to an embodiment. As shown in FIG. 8, the laser emitter 602 can generate laser speckle, which is reflected by an object and captured by the laser camera 604 to form an image. During camera calibration, the laser speckle emitted by the laser emitter 602 is reflected by the reference plane 608; the reflected light is then collected by the laser camera 604 and imaged on the imaging plane 610 to obtain the reference image. The reference depth from the reference plane 608 to the laser emitter 602 is L, which is known. In the actual calculation of depth information, the laser speckle emitted by the laser emitter 602 is reflected by the object 606, the reflected light is collected by the laser camera 604, and the actual speckle image is obtained on the imaging plane 610. The formula for the actual depth information is then:
$$Dis = \frac{CD \times L \times f}{CD \times f + AB \times L}$$
where L is the distance from the laser emitter 602 to the reference plane 608, f is the focal length of the lens in the laser camera 604, CD is the distance from the laser emitter 602 to the laser camera 604, and AB is the offset distance between the image of the object 606 and the image of the reference plane 608. AB can be the product of the pixel offset n and the actual distance p per pixel. When the distance Dis from the object 606 to the laser emitter 602 is greater than the distance L from the reference plane 608 to the laser emitter 602, AB is negative; when the distance Dis from the object 606 to the laser emitter 602 is less than the distance L from the reference plane 608 to the laser emitter 602, AB is positive.
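Under the geometry of FIG. 8, the per-pixel depth can be computed from its speckle offset. A minimal sketch, assuming the similar-triangles relation implied by the sign conventions above; the calibration values in the usage lines (L, f, CD, pixel pitch p) are illustrative assumptions, not values given in the text.

```python
def depth_from_offset(n: int, p: float, L: float, f: float, CD: float) -> float:
    """Dis = CD*L*f / (CD*f + AB*L), with AB = n*p the imaged offset from the reference plane."""
    AB = n * p
    return (CD * L * f) / (CD * f + AB * L)

# Illustrative calibration: L=500 mm reference depth, f=4 mm, CD=30 mm, p=0.01 mm/pixel.
assert depth_from_offset(0, 0.01, 500.0, 4.0, 30.0) == 500.0   # zero offset: object on reference plane
assert depth_from_offset(10, 0.01, 500.0, 4.0, 30.0) < 500.0   # positive AB: closer than reference
assert depth_from_offset(-10, 0.01, 500.0, 4.0, 30.0) > 500.0  # negative AB: farther than reference
```

Note how the three assertions exercise exactly the sign conventions stated above: AB = 0 recovers the reference depth L, positive AB yields Dis < L, and negative AB yields Dis > L.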
Step 506: perform face recognition processing on the first target image and the depth image through the face recognition model in the first operating environment.
After the depth image is calculated in the second operating environment, it can be sent to the first operating environment, where face recognition processing is performed according to the first target image and the depth image. The first operating environment then sends the face recognition result to the upper-layer application, which can perform the corresponding application operation based on the result.
For example, when beautifying an image, the position and region of the face can be detected from the first target image. Since the first target image and the depth image correspond, the depth information of the face can be obtained from the corresponding region of the depth image; three-dimensional facial features can be constructed from the face's depth information, and the face can then be beautified according to the three-dimensional facial features.
In other embodiments provided in the present application, face recognition processing in the second operating environment specifically includes:
Step 702: control the camera module to capture a second target image and a speckle image, and send the second target image and the speckle image to the second operating environment.
In one embodiment, the second target image may be an infrared image. The camera module may include a floodlight, a laser emitter, and a laser camera; the terminal can switch on the floodlight and then use the laser camera to capture the infrared image formed when the floodlight illuminates an object as the second target image. The terminal can also switch on the laser emitter and then use the laser camera to capture the speckle image formed when the laser emitter illuminates the object.
The time interval between capturing the second target image and the speckle image must be short to ensure the consistency of the captured second target image and speckle image, avoiding large errors between them and improving the accuracy of image processing. Specifically, the camera module is controlled to capture the second target image and to capture the speckle image, wherein the time interval between the first moment at which the second target image is captured and the second moment at which the speckle image is captured is less than a first threshold.
A floodlight controller and a laser emitter controller can be provided separately and connected through two channels of PWM (Pulse Width Modulation). When the floodlight or the laser emitter needs to be switched on, a pulse wave can be sent via PWM to the floodlight controller to switch on the floodlight, or to the laser emitter controller to switch on the laser emitter; by sending pulse waves to the two controllers via PWM, the time interval between capturing the second target image and the speckle image is controlled. It can be understood that the second target image may be an infrared image or another type of image, which is not limited here; for example, the second target image may also be a visible-light image.
Step 704: calculate a depth image from the speckle image in the second operating environment.
It should be noted that when the security level of the face recognition instruction is higher than the level threshold, the application operation initiating the instruction is considered to have high security requirements, and face recognition processing must be performed in a high-security environment to ensure data processing security. The second target image and the speckle image captured by the camera module are sent directly to the second operating environment, where the depth image is then calculated from the speckle image.
Step 706: perform face recognition processing on the second target image and the depth image through the face recognition model in the second operating environment.
In one embodiment, during face recognition processing in the second operating environment, face detection can be performed on the second target image to detect whether it contains a target face. If the second target image contains a target face, the detected target face is matched against a preset face. If the detected target face matches the preset face, the target depth information of the target face is then obtained from the depth image, and whether the target face is a living body is detected according to the target depth information.
When matching the target face, facial attribute features of the target face can be extracted and matched against the facial attribute features of the preset face; if the matching value exceeds a matching threshold, the face match is considered successful. For example, features such as the face's deflection angle, brightness information, and facial-feature characteristics can be extracted as facial attribute features; if the degree of match between the target face's facial attribute features and the preset face's facial attribute features exceeds 90%, the face match is considered successful.
Generally, during face authentication, if the face photographed is a face in a photograph or a sculpture, the extracted facial attribute features might still pass authentication. To improve accuracy, liveness detection can therefore be performed based on the captured depth image, so that authentication succeeds only if the captured face is a living face. It can be understood that the captured second target image can represent the details of the face, while the captured depth image represents the corresponding depth information, and liveness detection can be performed based on the depth image. For example, if the photographed face is a face in a photograph, the depth image shows that the captured face is not three-dimensional, so the captured face can be considered a non-living face.
Specifically, performing liveness detection according to the depth image includes: searching the depth image for face depth information corresponding to the target face; if the depth image contains face depth information corresponding to the target face, and that face depth information conforms to a three-dimensional face rule, the target face is a living face. The three-dimensional face rule is a rule carrying three-dimensional depth information of a face.
In one embodiment, an artificial intelligence model can also be used to perform artificial-intelligence recognition on the second target image and the depth image, obtain liveness attribute features corresponding to the target face, and determine from the obtained liveness attribute features whether the target face is a living face image. The liveness attribute features may include skin texture features, texture direction, texture density, texture width, and the like, corresponding to the target face; if the liveness attribute features conform to a face-liveness rule, the target face is considered biologically active, that is, a living face.
It can be understood that when performing processing such as face detection, face matching, and liveness detection, the processing order can be changed as needed. For example, the face may be authenticated first and then tested for liveness, or tested for liveness first and then authenticated.
With the data processing method provided in the above embodiment, the face recognition model can be stored in the first operating environment, initialized there, and then transferred through the shared buffer to the second operating environment. Since the storage space of the second operating environment is smaller than that of the first, initializing the model in the first operating environment improves initialization efficiency and reduces resource occupancy in the second operating environment. Selecting the first or second operating environment for processing according to the security level of the face recognition instruction avoids handling all applications in the second operating environment, which reduces its resource occupancy.
It should be understood that although the steps in the flowcharts of FIGS. 4, 6, 7, and 9 are displayed sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in FIGS. 4, 6, 7, and 9 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments; their execution order is not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
FIG. 10 is a hardware structural diagram implementing the data processing method according to an embodiment. As shown in FIG. 10, the electronic device may include a camera module 810, a central processing unit (CPU) 820, and a microcontroller unit (MCU) 830. The camera module 810 includes a laser camera 812, a floodlight 814, an RGB camera 816, and a laser emitter 818. The MCU 830 includes a PWM (Pulse Width Modulation) module 832, an SPI/I2C (Serial Peripheral Interface/Inter-Integrated Circuit) module 834, a RAM (Random Access Memory) module 836, and a Depth Engine module 838. The CPU 820 may run in multi-core mode, and the CPU cores in it may run under TEE or REE; both TEE and REE are operating modes of the ARM (Advanced RISC Machines) architecture. The natural operating environment 822 in the CPU 820 may be the first operating environment, with lower security; the trusted operating environment 824 in the CPU 820 is the second operating environment, with higher security. It can be understood that, since the MCU 830 is a processing module independent of the CPU 820 and its input and output are both controlled by the CPU 820 under the trusted operating environment 824, the MCU 830 is also a high-security processing module and can be regarded as being in the secure operating environment, that is, in the second operating environment.
Normally, operations with higher security requirements need to be executed in the second operating environment, while other operations can be executed in the first. In the embodiments of the present application, the CPU 820 can control the SECURE SPI/I2C through the trusted operating environment 824 to send a face recognition instruction to the SPI/I2C module 834 in the MCU 830. After receiving the instruction, if the MCU 830 determines that its security level is higher than the level threshold, it sends pulse waves through the PWM module 832 to switch on the floodlight 814 in the camera module 810 to capture an infrared image, and to switch on the laser emitter 818 in the camera module 810 to capture a speckle image. The camera module 810 can transmit the captured infrared and speckle images to the Depth Engine module 838 in the MCU 830, which can calculate a depth image from the speckle image and send the infrared image and the depth image to the trusted operating environment 824 of the CPU 820. The trusted operating environment 824 of the CPU 820 performs face recognition processing according to the received infrared and depth images.
If the security level of the face recognition instruction is determined to be lower than the level threshold, a pulse wave is sent through the PWM module 832 to switch on the laser emitter 818 in the camera module 810 to capture a speckle image, and a visible-light image is captured through the RGB camera 816. The camera module 810 sends the captured visible-light image directly to the natural operating environment 822 of the CPU 820 and transmits the speckle image to the Depth Engine module 838 in the MCU 830, which can calculate a depth image from the speckle image and send it to the trusted operating environment 824 of the CPU 820. The trusted operating environment 824 then sends the depth image to the natural operating environment 822, where face recognition processing is performed according to the visible-light image and the depth image.
FIG. 11 is a schematic structural diagram of a data processing apparatus according to an embodiment. As shown in FIG. 11, the data processing apparatus 900 includes a model obtaining module 902, a model transmission module 904, and a model storage module 906. Among them:
The model obtaining module 902 is configured to obtain a face recognition model stored in a first operating environment.
The model transmission module 904 is configured to initialize the face recognition model in the first operating environment and transfer the initialized model to a shared buffer.
The model storage module 906 is configured to transfer the initialized face recognition model from the shared buffer to a second operating environment for storage; wherein the storage space of the first operating environment is larger than that of the second operating environment, and the face recognition model is used to perform face recognition processing on images. That is, the overall model processing module includes the model transmission module and the model storage module.
With the data processing apparatus provided in the above embodiment, the face recognition model can be stored in the first operating environment, initialized there, and then transferred through the shared buffer to the second operating environment. Since the storage space of the second operating environment is smaller than that of the first, initializing the model in the first operating environment improves initialization efficiency and reduces resource occupancy in the second operating environment.
FIG. 12 is a schematic structural diagram of a data processing apparatus according to another embodiment. As shown in FIG. 12, the data processing apparatus 1000 includes a model receiving module 1002, a model obtaining module 1004, a model transmission module 1006, a model storage module 1008, and a face recognition module 1010. Among them:
The model receiving module 1002 is configured for the terminal to receive a face recognition model sent by a server and store the face recognition model in the terminal's first operating environment.
The model obtaining module 1004 is configured to obtain the face recognition model stored in the first operating environment when a restart of the terminal is detected.
The model transmission module 1006 is configured to initialize the face recognition model in the first operating environment and transfer the initialized model to a shared buffer.
The model storage module 1008 is configured to transfer the initialized face recognition model from the shared buffer to a second operating environment for storage; wherein the storage space of the first operating environment is larger than that of the second operating environment, and the face recognition model is used to perform face recognition processing on images.
The face recognition module 1010 is configured to determine the security level of a face recognition instruction when the instruction is detected; to perform face recognition processing in the first operating environment according to the model if the security level is lower than a level threshold; and to perform face recognition processing in the second operating environment according to the model if the security level is higher than the level threshold; wherein the security of the second operating environment is higher than that of the first.
With the data processing apparatus provided in the above embodiment, the face recognition model can be stored in the first operating environment, initialized there, and then transferred through the shared buffer to the second operating environment. Since the storage space of the second operating environment is smaller than that of the first, initializing the model in the first operating environment improves initialization efficiency and reduces resource occupancy in the second operating environment. Selecting the first or second operating environment for processing according to the security level of the face recognition instruction avoids handling all applications in the second operating environment, which reduces its resource occupancy.
In one embodiment, the model transmission module 1006 is further configured to encrypt the initialized face recognition model and transfer the encrypted model to the shared buffer.
In one embodiment, the model transmission module 1006 is further configured to obtain the remaining storage space of the second operating environment; if the remaining storage space is smaller than a space threshold, the face recognition model is initialized in the first operating environment and the initialized model is transferred to the shared buffer.
In one embodiment, the model transmission module 1006 is further configured to transfer the face recognition model to the shared buffer, and from the shared buffer to the second operating environment, if the remaining storage space is greater than or equal to the space threshold.
In one embodiment, the model storage module 1008 is further configured to initialize the face recognition model in the second operating environment, delete the pre-initialization model, and retain the initialized model.
In one embodiment, the model storage module 1008 is further configured to transfer the encrypted face recognition model from the shared buffer to the second operating environment for storage and to decrypt the encrypted model in the second operating environment.
In one embodiment, the face recognition module 1010 is further configured to control a camera module to capture a first target image and a speckle image, send the first target image to the first operating environment, and send the speckle image to the second operating environment; calculate a depth image from the speckle image in the second operating environment and send the depth image to the first operating environment; and perform face recognition processing on the first target image and the depth image through the face recognition model in the first operating environment.
In one embodiment, the face recognition module 1010 is further configured to control the camera module to capture a second target image and a speckle image and send both to the second operating environment; calculate a depth image from the speckle image in the second operating environment; and perform face recognition processing on the second target image and the depth image through the face recognition model in the second operating environment.
The division of the modules in the above data processing apparatus is only for illustration; in other embodiments, the data processing apparatus may be divided into different modules as needed to complete all or part of its functions.
Second Embodiment:
FIG. 3 is a schematic diagram of the internal structure of an electronic device according to an embodiment. As shown in FIG. 3, the electronic device 100 includes a processor 110, a memory 120, and a network interface 130 connected through a system bus 140. The processor 110 provides computing and control capabilities and supports the operation of the entire electronic device 100. The memory 120 stores data, programs, and the like; it stores at least one computer program 1224 that can be executed by the processor 110 to implement the data processing methods applicable to the electronic device 100 provided in the embodiments of the present application. The memory 120 may include a non-volatile storage medium such as a magnetic disk, an optical disc, or a read-only memory (ROM), or a random-access memory (RAM). For example, in one embodiment, the memory 120 includes a non-volatile storage medium 122 and an internal memory 124. The non-volatile storage medium 122 stores an operating system 1222 and the computer program 1224, which can be executed by the processor 110 to implement the data processing methods provided in the following embodiments. The internal memory 124 provides a cached runtime environment for the operating system 1222 and the computer program 1224 in the non-volatile storage medium 122. The network interface 130 may be an Ethernet card, a wireless network card, or the like, and is used to communicate with external electronic devices. The electronic device 100 may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
FIG. 13 is a flowchart of a data processing method according to an embodiment. As shown in FIG. 13, the data processing method includes steps 2202 to 2206.
Step 2202: obtain a face recognition model stored in the first operating environment.
Specifically, the electronic device may include a processor, which can store, compute, and transmit data. The processor may run in different environments: for example, in a TEE (Trusted Execution Environment) or in a REE (Rich Execution Environment). When running in the TEE, data security is higher; when running in the REE, data security is lower.
The electronic device can allocate processor resources, assigning different resources to different operating environments. For example, since an electronic device usually has fewer high-security processes than ordinary processes, it can allocate a small portion of processor resources to the higher-security operating environment and most of the resources to the lower-security operating environment.
The face recognition model is an algorithm model used to recognize faces in images, and is generally stored in the form of a file. It can be understood that, because the algorithm for recognizing faces in images is complex, the model occupies considerable storage space. After the processor is divided into different operating environments, the storage space allocated to the first operating environment is larger than that allocated to the second operating environment, so the electronic device stores the face recognition model in the first operating environment to ensure that the second operating environment has enough space to process data.
Step 2204: initialize the face recognition model in the first operating environment, and compress the initialized model.
It should be noted that, before face recognition processing is performed on images, the face recognition model must be initialized. If the model were stored in the second operating environment, both storing it and initializing it would occupy storage space there, causing excessive resource consumption in the second operating environment and affecting data processing efficiency.
For example, suppose the face recognition model occupies 20 MB of memory and initializing it requires another 10 MB. If both storage and initialization take place in the second operating environment, a total of 30 MB of the second operating environment's memory is occupied. If instead the model is stored and initialized in the first operating environment and the initialized model is then sent to the second operating environment, only 10 MB of the second operating environment's memory is needed, greatly reducing its resource occupancy.
The electronic device stores the face recognition model in the first operating environment, initializes it there, and then transfers the initialized model to the second operating environment, which reduces occupation of the second operating environment's storage space. After initializing the model, the initialized model can be further compressed, and the compressed model sent to the second operating environment for storage, further reducing resource occupancy in the second operating environment and increasing data processing speed.
Step 2206: transfer the compressed face recognition model from the first operating environment to the second operating environment for storage; wherein the storage space of the first operating environment is larger than that of the second operating environment, and the face recognition model is used to perform face recognition processing on images. That is, initializing the face recognition model in the first operating environment and transferring the initialized model to the second operating environment for storage includes: initializing the model in the first operating environment and compressing the initialized model; transferring the compressed model from the first operating environment to the second operating environment for storage; wherein the face recognition model is used to perform face recognition processing on images.
In one embodiment, step 2202 may start when an initialization condition is detected. For example, with the model stored in the first operating environment, the electronic device may initialize it at power-on, when an application requiring face recognition processing is opened, or when a face recognition instruction is detected, and then compress the initialized model before transferring it to the second operating environment.
In other embodiments provided in the present application, before initializing the face recognition model in the first operating environment, the remaining storage space of the second operating environment may be obtained; if the remaining storage space is smaller than a space threshold, the model is initialized in the first operating environment and the initialized model is compressed. The space threshold can be set as needed, and is generally the sum of the storage space occupied by the face recognition model and the storage space required to initialize it.
If the remaining storage space of the second operating environment is large enough, the face recognition model can be sent directly to the second operating environment and initialized there; after initialization, the original model is deleted, which ensures data security. The data processing method may thus further include: if the remaining storage space is greater than or equal to the space threshold, compressing the face recognition model in the first operating environment and transferring the compressed model to the second operating environment; initializing the compressed model in the second operating environment, deleting the pre-initialization model, and retaining the initialized model.
It can be understood that the face recognition model may generally include multiple processing modules, each performing different processing, and these modules may be independent of one another: for example, a face detection module, a face matching module, and a liveness detection module. Some modules may have lower security requirements and others higher. Therefore, modules with lower security requirements can be initialized in the first operating environment, and modules with higher security requirements in the second operating environment.
Specifically, step 2204 may include: performing a first initialization on a first module of the face recognition model in the first operating environment, and compressing the first-initialized model. After step 2206, the method may further include: performing a second initialization on a second module of the compressed model, wherein the second module is the part of the model other than the first module, and the security requirement of the first module is lower than that of the second module. For example, the first module may be the face detection module, and the second module may be the face matching module and the liveness detection module; the first module has lower security requirements and is therefore initialized in the first operating environment, while the second module has higher security requirements and is therefore initialized in the second operating environment.
With the data processing method provided in the above embodiment, the face recognition model can be stored in the first operating environment, initialized there, and then compressed and transferred to the second operating environment. Since the storage space of the second operating environment is smaller than that of the first, initializing the model in the first operating environment improves initialization efficiency, reduces resource occupancy in the second operating environment, and increases data processing speed; compressing the model before transferring it to the second operating environment further increases data processing speed.
FIG. 14 is a flowchart of a data processing method according to another embodiment. As shown in FIG. 14, the data processing method includes steps 2302 to 2316.
Step 2302: obtain a face recognition model stored in the first operating environment.
Generally, before face recognition processing, the face recognition model is trained to increase its recognition accuracy. During training, a training image set is obtained; its images are used as the model's inputs, and the training parameters are continuously adjusted according to the training results to obtain the model's optimal parameters. The more images the training set contains, the more accurate the resulting model, but the longer training takes.
In one embodiment, the electronic device may be a terminal that interacts with the user. Since terminal resources are limited, the face recognition model can be trained on a server. After training, the server sends the trained model to the terminal, which stores it in the first operating environment. Before step 2302, the method may thus further include: the terminal receives the face recognition model sent by the server and stores the face recognition model in the terminal's first operating environment.
The terminal may include the first and second operating environments, and may perform face recognition processing on images in the second operating environment. Because the storage space allocated by the terminal to the first operating environment is larger than that allocated to the second, the terminal can store the received model in the storage space of the first operating environment. In one embodiment, each time a restart of the terminal is detected, the model stored in the first operating environment can be loaded into the second, so that when face recognition processing is needed, the loaded model in the second operating environment can be invoked directly. Step 2302 may thus specifically include: when a restart of the terminal is detected, obtaining the face recognition model stored in the first operating environment.
It can be understood that the face recognition model can be updated. When it is updated, the server sends the updated model to the terminal; after receiving it, the terminal stores the updated model in the first operating environment, overwriting the original model, and then the terminal is controlled to restart. After the restart, the terminal obtains the updated model and initializes it.
Step 2304: initialize the face recognition model in the first operating environment, and obtain the target space capacity used to store the face recognition model in the second operating environment together with the data amount of the initialized model.
Before face recognition processing is performed through the model, it must be initialized. During initialization, the parameters, modules, and the like in the model can be set to default states. Since initializing the model also consumes memory, the terminal can initialize the model in the first operating environment and then send the initialized model to the second operating environment, so that face recognition processing can be performed there directly without occupying extra memory for initialization.
After the model is initialized, the initialized model can be further compressed. Specifically, the target space capacity used to store the face recognition model in the second operating environment and the data amount of the initialized model can be obtained, and compression performed according to the target space capacity and the data amount. It should be noted that a block of storage space dedicated to storing the face recognition model can be delimited in the second operating environment, so that other data cannot occupy it. The target space capacity is the capacity of this dedicated storage space, and the data amount of the face recognition model refers to the model's data size.
Step 2306: calculate a compression coefficient according to the target space capacity and the data amount.
The compression coefficient can be calculated from the target space capacity and the data amount, and the model then compressed according to the calculated coefficient. When the target space capacity is smaller than the data amount, the second operating environment has insufficient storage space for the model, so the model is compressed according to the target space capacity and the data amount, and the compressed model is stored in the second operating environment.
In one embodiment, calculating the compression coefficient may specifically include: if the target space capacity is smaller than the data amount, taking the ratio of the data amount to the target space capacity as the compression coefficient. For example, if the target space capacity is 20 MB and the data amount of the model is 31.5 MB, the compression coefficient is 31.5/20 = 1.575, that is, the model is compressed by a factor of 1.575. When the target space capacity is greater than or equal to the data amount, the model may be compressed according to a preset compression coefficient to increase data transmission speed, or not compressed at all; this is not limited here.
Step 2308: compress the initialized face recognition model according to the compression coefficient.
After the compression coefficient is obtained, the initialized model can be compressed accordingly, and the compressed model can be stored in the second operating environment. It can be understood that once the model is compressed, the precision of the corresponding face recognition processing decreases, and recognition accuracy can no longer be guaranteed. To guarantee face recognition accuracy, a maximum compression limit can therefore be set, and compression of the model must not exceed this limit.
In one embodiment, a compression threshold can be set; when the compression coefficient is greater than this threshold, the precision of face recognition processing with the compressed model is considered too low. Specifically, step 2308 may include: if the compression coefficient is smaller than the compression threshold, compressing the initialized model according to the compression coefficient; if the compression coefficient is greater than or equal to the compression threshold, compressing the initialized model according to the compression threshold. After compression according to the compression threshold, the electronic device can reallocate the storage space in the second operating environment used to store the compressed model according to the compressed model's data size.
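Steps 2306 and 2308 can be sketched as a single function. The 31.5/20 example comes from the text; the compression threshold of 2.0 and the preset factor of 1.0 (no compression) are assumed values for illustration.

```python
def compression_factor(data_mb: float, capacity_mb: float,
                       threshold: float = 2.0, preset: float = 1.0) -> float:
    """Compression coefficient per steps 2306-2308; threshold and preset are assumptions."""
    if capacity_mb >= data_mb:
        return preset                   # model already fits: optional preset compression
    factor = data_mb / capacity_mb      # ratio of data amount to target space capacity
    return min(factor, threshold)       # cap at the threshold to preserve recognition accuracy

assert compression_factor(31.5, 20.0) == 1.575   # the example from the text
assert compression_factor(60.0, 20.0) == 2.0     # capped at the assumed threshold
assert compression_factor(10.0, 20.0) == 1.0     # fits already: preset factor
```

Capping at the threshold means the compressed model may still exceed the dedicated space, which is why the text has the device reallocate that storage afterwards.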
FIG. 15 is a schematic diagram of compressing the face recognition model according to an embodiment. As shown in FIG. 15, the face recognition model 2402 is stored in the form of a file, totaling 30 MB. After the face recognition model 2402 is compressed, the compressed face recognition model 2404 is formed, which is also stored in the form of a file, totaling 20 MB.
Step 2310: transfer the compressed face recognition model from the first operating environment to the shared buffer, and from the shared buffer to the second operating environment for storage.
The shared buffer (Share Buffer) is a channel through which the first and second operating environments transmit data; both environments can access it. It should be noted that the electronic device can configure the shared buffer and set its size as needed; for example, the shared buffer's storage space may be set to 5 MB or 10 MB.
FIG. 5 is a schematic diagram of a system implementing the data processing method according to an embodiment. As shown in FIG. 5, the system includes a first operating environment 302, a shared buffer 304, and a second operating environment 306. The first operating environment 302 and the second operating environment 306 can transmit data through the shared buffer 304. The face recognition model is stored in the first operating environment 302; the system obtains the model stored there, initializes it, compresses the initialized model, transfers the compressed model into the shared buffer 304, and through the shared buffer 304 transfers it into the second operating environment 306.
Step 2312: when a face recognition instruction is detected, determine the security level of the face recognition instruction.
The face recognition model is stored in both the first and second operating environments, and the terminal can perform face recognition processing in either. Specifically, the terminal can determine, according to the face recognition instruction that triggers the processing, whether to perform face recognition processing in the first operating environment or in the second.
The face recognition instruction is initiated by an upper-layer application of the terminal. When initiating the instruction, the upper-layer application can write information such as the initiation time, an application identifier, and an operation identifier into it. The application identifier can mark the application that initiated the instruction, and the operation identifier can mark the application operation that requires the face recognition result. For example, if the face recognition result is used for application operations such as payment, unlocking, or beautification, the operation identifier in the instruction marks the payment, unlocking, or beautification operation.
The security level indicates how high the security of an application operation is; the higher the level, the higher the operation's security requirement. For example, payment has a high security requirement while beautification has a low one, so the security level of the payment operation is higher than that of the beautification operation. The security level may be written directly into the face recognition instruction, in which case the terminal reads it from the instruction upon detection; alternatively, a correspondence for operation identifiers may be established in advance, and after the instruction is detected, the corresponding security level is obtained from the operation identifier in the instruction.
Step 2314: if the security level is lower than a level threshold, perform face recognition processing in the first operating environment according to the face recognition model.
When the security level is detected to be lower than the level threshold, the application operation that initiated the face recognition processing is considered to have low security requirements, so face recognition processing can be performed directly in the first operating environment according to the model. Specifically, face recognition processing may include, but is not limited to, one or more of face detection, face matching, and liveness detection: face detection is the process of detecting whether a face exists in an image; face matching is the process of matching a detected face against a preset face; liveness detection is the process of detecting whether the face in an image belongs to a living body.
Step 2316: if the security level is higher than the level threshold, perform face recognition processing in the second operating environment according to the face recognition model; wherein the security of the second operating environment is higher than that of the first.
When the security level is detected to be higher than the level threshold, the application operation that initiated the face recognition processing is considered to have high security requirements, so face recognition processing can be performed in the second operating environment according to the model. Specifically, the terminal can send the face recognition instruction to the second operating environment, which controls the camera module to capture images. The captured images are first sent to the second operating environment, where the security level of the application operation is determined: if the level is lower than the threshold, the captured images are sent to the first operating environment for face recognition processing; if higher, face recognition processing is performed on the captured images in the second operating environment.
Specifically, as shown in FIG. 7, face recognition processing in the first operating environment includes:
Step 502: control the camera module to capture a first target image and a speckle image, send the first target image to the first operating environment, and send the speckle image to the second operating environment.
An application installed in the terminal can initiate a face recognition instruction and send it to the second operating environment. When the second operating environment detects that the instruction's security level is lower than the level threshold, it can control the camera module to capture the first target image and the speckle image. The first target image captured by the camera module can be sent directly to the first operating environment, while the captured speckle image is sent to the second operating environment.
In one embodiment, the first target image may be a visible-light image or another type of image, which is not limited here. When the first target image is a visible-light image, the camera module may include an RGB (Red Green Blue) camera that captures it. The camera module may further include a laser emitter and a laser camera; the terminal can switch on the laser emitter and then use the laser camera to capture the speckle image formed when the laser speckle emitted by the laser emitter illuminates an object.
Specifically, when a laser illuminates an optically rough surface whose average relief is on the order of the wavelength or greater, the wavelets scattered by the randomly distributed surface elements superpose, giving the reflected light field a random spatial intensity distribution with a granular structure; this is laser speckle. The laser speckle thus formed is highly random, so the speckle generated by lasers from different emitters differs, and the speckle images generated when the formed speckle illuminates objects of different depths and shapes also differ. The laser speckle formed by a given laser emitter is unique, and the resulting speckle image is therefore also unique.
Step 504: in the second operating environment, calculate a depth image from the speckle image, and send the depth image to the first operating environment.
To protect data security, the terminal ensures that the speckle image is always processed in a secure environment, so it transfers the speckle image to the second operating environment for processing. The depth image is an image representing the depth information of the photographed object and can be calculated from the speckle image. The terminal can control the camera module to capture the first target image and the speckle image simultaneously, so the depth image calculated from the speckle image can represent the depth information of the objects in the first target image.
The depth image can be calculated in the second operating environment from the speckle image and a reference image. The reference image is the image captured when the laser speckle illuminates a reference plane, so it carries reference depth information. First, a relative depth can be calculated from the positional offset of the speckle points in the speckle image relative to those in the reference image; the relative depth can represent the depth from the actual photographed object to the reference plane. The actual depth information of the object is then calculated from the obtained relative depth and the reference depth. Specifically, the reference image is compared with the speckle image to obtain offset information, which represents the horizontal offset of each speckle point in the speckle image relative to the corresponding point in the reference image; the depth image is then calculated from the offset information and the reference depth information.
FIG. 8 is a schematic diagram of the principle of calculating depth information according to an embodiment. As shown in FIG. 8, the laser emitter 602 can generate laser speckle, which is reflected by an object and captured by the laser camera 604 to form an image. During camera calibration, the laser speckle emitted by the laser emitter 602 is reflected by the reference plane 608; the reflected light is then collected by the laser camera 604 and imaged on the imaging plane 610 to obtain the reference image. The reference depth from the reference plane 608 to the laser emitter 602 is L, which is known. In the actual calculation of depth information, the laser speckle emitted by the laser emitter 602 is reflected by the object 606, the reflected light is collected by the laser camera 604, and the actual speckle image is obtained on the imaging plane 610. The formula for the actual depth information is then:
$$Dis = \frac{CD \times L \times f}{CD \times f + AB \times L}$$
where L is the distance from the laser emitter 602 to the reference plane 608, f is the focal length of the lens in the laser camera 604, CD is the distance from the laser emitter 602 to the laser camera 604, and AB is the offset distance between the image of the object 606 and the image of the reference plane 608. AB can be the product of the pixel offset n and the actual distance p per pixel. When the distance Dis from the object 606 to the laser emitter 602 is greater than the distance L from the reference plane 608 to the laser emitter 602, AB is negative; when the distance Dis from the object 606 to the laser emitter 602 is less than the distance L from the reference plane 608 to the laser emitter 602, AB is positive.
Step 506: perform face recognition processing on the first target image and the depth image through the face recognition model in the first operating environment.
After the depth image is calculated in the second operating environment, it can be sent to the first operating environment, where face recognition processing is performed according to the first target image and the depth image. The first operating environment then sends the face recognition result to the upper-layer application, which can perform the corresponding application operation based on the result.
For example, when beautifying an image, the position and region of the face can be detected from the first target image. Since the first target image and the depth image correspond, the depth information of the face can be obtained from the corresponding region of the depth image; three-dimensional facial features can be constructed from the face's depth information, and the face can then be beautified according to the three-dimensional facial features.
In other embodiments provided in the present application, as shown in FIG. 9, face recognition processing in the second operating environment specifically includes:
Step 702: control the camera module to capture a second target image and a speckle image, and send the second target image and the speckle image to the second operating environment.
In one embodiment, the second target image may be an infrared image. The camera module may include a floodlight, a laser emitter, and a laser camera; the terminal can switch on the floodlight and then use the laser camera to capture the infrared image formed when the floodlight illuminates an object as the second target image. The terminal can also switch on the laser emitter and then use the laser camera to capture the speckle image formed when the laser emitter illuminates the object.
The time interval between capturing the second target image and the speckle image must be short to ensure the consistency of the captured second target image and speckle image, avoiding large errors between them and improving the accuracy of image processing. Specifically, the camera module is controlled to capture the second target image and to capture the speckle image, wherein the time interval between the first moment at which the second target image is captured and the second moment at which the speckle image is captured is less than a first threshold.
A floodlight controller and a laser emitter controller can be provided separately and connected through two channels of PWM (Pulse Width Modulation). When the floodlight or the laser emitter needs to be switched on, a pulse wave can be sent via PWM to the floodlight controller to switch on the floodlight, or to the laser emitter controller to switch on the laser emitter; by sending pulse waves to the two controllers via PWM, the time interval between capturing the second target image and the speckle image is controlled. It can be understood that the second target image may be an infrared image or another type of image, which is not limited here; for example, the second target image may also be a visible-light image.
Step 704: calculate a depth image from the speckle image in the second operating environment.
It should be noted that when the security level of the face recognition instruction is higher than the level threshold, the application operation initiating the instruction is considered to have high security requirements, and face recognition processing must be performed in a high-security environment to ensure data processing security. The second target image and the speckle image captured by the camera module are sent directly to the second operating environment, where the depth image is then calculated from the speckle image.
Step 706: perform face recognition processing on the second target image and the depth image through the face recognition model in the second operating environment.
In one embodiment, during face recognition processing in the second operating environment, face detection can be performed on the second target image to detect whether it contains a target face. If the second target image contains a target face, the detected target face is matched against a preset face. If the detected target face matches the preset face, the target depth information of the target face is then obtained from the depth image, and whether the target face is a living body is detected according to the target depth information.
When matching the target face, facial attribute features of the target face can be extracted and matched against the facial attribute features of the preset face; if the matching value exceeds a matching threshold, the face match is considered successful. For example, features such as the face's deflection angle, brightness information, and facial-feature characteristics can be extracted as facial attribute features; if the degree of match between the target face's facial attribute features and the preset face's facial attribute features exceeds 90%, the face match is considered successful.
Generally, during face authentication, if the face photographed is a face in a photograph or a sculpture, the extracted facial attribute features might still pass authentication. To improve accuracy, liveness detection can therefore be performed based on the captured depth image, so that authentication succeeds only if the captured face is a living face. It can be understood that the captured second target image can represent the details of the face, while the captured depth image represents the corresponding depth information, and liveness detection can be performed based on the depth image. For example, if the photographed face is a face in a photograph, the depth image shows that the captured face is not three-dimensional, so the captured face can be considered a non-living face.
Specifically, performing liveness detection according to the depth image includes: searching the depth image for face depth information corresponding to the target face; if the depth image contains face depth information corresponding to the target face, and that face depth information conforms to a three-dimensional face rule, the target face is a living face. The three-dimensional face rule is a rule carrying three-dimensional depth information of a face.
In one embodiment, an artificial intelligence model can also be used to perform artificial-intelligence recognition on the second target image and the depth image, obtain liveness attribute features corresponding to the target face, and determine from the obtained liveness attribute features whether the target face is a living face image. The liveness attribute features may include skin texture features, texture direction, texture density, texture width, and the like, corresponding to the target face; if the liveness attribute features conform to a face-liveness rule, the target face is considered biologically active, that is, a living face.
It can be understood that when performing processing such as face detection, face matching, and liveness detection, the processing order can be changed as needed. For example, the face may be authenticated first and then tested for liveness, or tested for liveness first and then authenticated.
在本申请提供的实施例中,为保证数据的安全,在传输人脸识别模型的时候,可以将压缩后的人脸识别模型进行加密处理,并将加密处理后的人脸识别模型从第一运行环境传入到第二运行环境;在第二运行环境中对加密处理后的人脸识别模型进行解密处理,并将解密处理后的人脸识别模型进行存储。
第一运行环境可以是普通运行环境,第二运行环境为安全运行环境,第二运行环境的安全性要高于第一运行环境。第一运行环境一般用于对安全性较低的应用操作进行处理,第二运行环境一般用于对安全性较高的应用操作进行处理。例如,拍摄、游戏等安全性要求不高的操作可以在第一运行环境中进行,支付、解锁等安全性要求较高的操作可以在第二运行环境中进行。
第二运行环境一般用于进行安全性要求较高的应用操作,因此在向第二运行环境中发送人脸识别模型的时候,也需要保证人脸识别模型的安全性。在第一运行环境将人脸识别模型压缩处理之后,可将压缩后的人脸识别模型进行加密处理,然后将加密处理后的人脸识别模型通过共享缓冲区发送到第二运行环境中。
加密处理后的人脸识别模型从第一运行环境传入到共享缓冲区之后,再从共享缓冲区传入到第二运行环境中。第二运行环境再将接收到的加密处理后的人脸识别模型进行解密处理。对人脸识别模型进行加密处理的算法在本实施例中不做限定。例如,可以是根据DES(Data Encryption Standard,数据加密标准)、MD5(Message-Digest Algorithm 5,信息-摘要算法5)、Diffie-Hellman(密钥交换算法)等算法进行处理的。
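上述"先加密、经共享缓冲区传输、在第二运行环境解密后存储"的流程可以用下面的示意代码概括。这里用基于hashlib摘要的异或流变换代替正文列举的DES等算法,仅作占位示意;密钥的派生与管理方式均为本文之外的假设。

```python
import hashlib
import itertools

def _keystream(key: bytes):
    # 以key的SHA-256摘要作为循环密钥流(占位示意,并非正文规定的加密方案)
    digest = hashlib.sha256(key).digest()
    return itertools.cycle(digest)

def encrypt(model_bytes: bytes, key: bytes) -> bytes:
    # 第一运行环境侧:模型在传入共享缓冲区之前先加密
    return bytes(b ^ k for b, k in zip(model_bytes, _keystream(key)))

def decrypt(cipher: bytes, key: bytes) -> bytes:
    # 第二运行环境侧:从共享缓冲区取出后解密,再进行存储
    return bytes(b ^ k for b, k in zip(cipher, _keystream(key)))
```

用法上,第二运行环境以同一密钥调用decrypt即可还原模型字节流。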
在一个实施例中,在第二运行环境中生成人脸识别模型之后,还可以包括:当检测到人脸识别模型未被调用的时长超过时长阈值,或检测到终端被关闭时,将第二运行环境中的人脸识别模型删除。这样可以将第二运行环境中的存储空间释放,以节省电子设备的空间。
进一步地,可以在电子设备运行过程中检测运行情况,根据电子设备的运行情况将人脸识别模型占用的存储空间进行释放。具体的,当检测到电子设备处于卡顿状态,且人脸识别模型未被调用的时长超过时长阈值时,将第二运行环境中的人脸识别模型删除。
人脸识别模型被释放之后,可以在检测到电子设备恢复正常运行状态,或检测到人脸识别指令时,获取第一运行环境中存储的人脸识别模型;在第一运行环境中将人脸识别模型进行初始化,并将初始化后的人脸识别模型进行压缩处理;将压缩后的人脸识别模型从第一运行环境传入到第二运行环境中进行存储。
上述实施例提供的数据处理方法,可以将人脸识别模型存储在第一运行环境中,然后在第一运行环境中将人脸识别模型初始化之后,再将初始化后的人脸识别模型压缩之后传到第二运行环境中。由于第二运行环境中的存储空间小于第一运行环境中的存储空间,所以在第一运行环境中将人脸识别模型进行初始化,可以提高人脸识别模型的初始化效率,降低第二运行环境中的资源占用率,提高数据处理速度。同时将人脸识别模型压缩之后再传到第二运行环境中,进一步地提高了数据处理速度。另外,根据人脸识别指令的安全等级选择在第一运行环境或第二运行环境中进行处理,避免将所有应用都放在第二运行环境中处理,可以降低第二运行环境的资源占用率。
应该理解的是,虽然图13、图14、图7、图9的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,图13、图14、图7、图9中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
图10为一个实施例中实现数据处理方法的硬件结构图。如图10所示,该电子设备中可包括摄像头模组810、中央处理器(Central Processing Unit,CPU)820和微控制单元(Microcontroller Unit,MCU)830,上述摄像头模组810中包括激光摄像头812、泛光灯814、RGB摄像头816和镭射灯818。微控制单元830包括PWM(Pulse Width Modulation,脉冲宽度调制)模块832、SPI/I2C(Serial Peripheral Interface/Inter-Integrated Circuit,串行外设接口/双向二线制同步串行接口)模块834、RAM(Random Access Memory,随机存取存储器)模块836、Depth Engine模块838。其中,中央处理器820可以为多核运行模式,中央处理器820中的CPU内核可以在TEE或REE下运行。TEE和REE均为ARM模块(Advanced RISC Machines,高级精简指令集处理器)的运行模式。中央处理器820中的自然运行环境822可为第一运行环境,安全性较低。中央处理器820中的可信运行环境824为第二运行环境,安全性较高。可理解的是,由于微控制单元830是独立于中央处理器820的处理模块,且其输入和输出都是由可信运行环境824下的中央处理器820来控制的,所以微控制单元830也是安全性较高的处理模块,可认为微控制单元830也是处于安全运行环境中的,即微控制单元830也处于第二运行环境中。
通常情况下,安全性要求较高的操作行为需要在第二运行环境中执行,其他操作行为则可在第一运行环境下执行。本申请实施例中,中央处理器820可通过可信运行环境824控制SECURE SPI/I2C向微控制单元830中的SPI/I2C模块834发送人脸识别指令。微控制单元830在接收到人脸识别指令后,若判断人脸识别指令的安全等级高于等级阈值,则通过PWM模块832发射脉冲波控制摄像头模组810中泛光灯814开启来采集红外图像、控制摄像头模组810中镭射灯818开启来采集散斑图像。摄像头模组810可将采集到的红外图像和散斑图像传送给微控制单元830中Depth Engine模块838,Depth Engine模块838可根据散斑图像计算深度图像,并将红外图像和深度图像发送给中央处理器820的可信运行环境824中。中央处理器820的可信运行环境824会根据接收到的红外图像和深度图像进行人脸识别处理。
若判断人脸识别指令的安全等级低于等级阈值,则通过PWM模块832发射脉冲波控制摄像头模组810中镭射灯818开启来采集散斑图像,并通过RGB摄像头816来采集可见光图像。摄像头模组810将采集的可见光图像直接发送到中央处理器820的自然运行环境822中,将散斑图像传送给微控制单元830中Depth Engine模块838,Depth Engine模块838可根据散斑图像计算深度图像,并将深度图像发送给中央处理器820的可信运行环境824。再由可信运行环境824将深度图像发送到自然运行环境822中,在自然运行环境822中根据可见光图像和深度图像进行人脸识别处理。
图11为一个实施例中数据处理装置的结构示意图。如图11所示,该数据处理装置900包括模型获取模块902、模型传输模块904和模型存储模块906。其中:
模型获取模块902,用于获取第一运行环境中存储的人脸识别模型。
模型传输模块904,用于在所述第一运行环境中将所述人脸识别模型进行初始化,并将初始化后的人脸识别模型进行压缩处理。
模型存储模块906,用于将所述压缩后的人脸识别模型从所述第一运行环境传入到第二运行环境中进行存储;其中,所述第一运行环境的存储空间大于所述第二运行环境的存储空间,所述人脸识别模型用于对图像进行人脸识别处理。即:模型总处理模块包括模型传输模块和模型存储模块。
上述实施例提供的数据处理装置,可以将人脸识别模型存储在第一运行环境中,然后在第一运行环境中将人脸识别模型初始化之后,再将初始化后的人脸识别模型压缩之后传到第二运行环境中。由于第二运行环境中的存储空间小于第一运行环境中的存储空间,所以在第一运行环境中将人脸识别模型进行初始化,可以提高人脸识别模型的初始化效率,降低第二运行环境中的资源占用率,提高数据处理速度。同时将人脸识别模型压缩之后再传到第二运行环境中,进一步地提高了数据处理速度。
图16为另一个实施例中数据处理装置的结构示意图。如图16所示,该数据处理装置1030包括模型获取模块1032、模型传输模块1034、模型存储模块1036和人脸识别模块1038。其中:
模型获取模块1032,用于获取第一运行环境中存储的人脸识别模型。
模型传输模块1034,用于在所述第一运行环境中将所述人脸识别模型进行初始化,并将初始化后的人脸识别模型进行压缩处理。
模型存储模块1036,用于将所述压缩后的人脸识别模型从所述第一运行环境传入到第二运行环境中进行存储;其中,所述第一运行环境的存储空间大于所述第二运行环境的存储空间,所述人脸识别模型用于对图像进行人脸识别处理。
人脸识别模块1038,用于当检测到人脸识别指令时,判断所述人脸识别指令的安全等级;若所述安全等级低于等级阈值,则在所述第一运行环境中根据所述人脸识别模型进行人脸识别处理;若所述安全等级高于等级阈值,则在所述第二运行环境中根据所述人脸识别模型进行人脸识别处理;其中,所述第二运行环境的安全性高于所述第一运行环境的安全性。
上述实施例提供的数据处理方法,可以将人脸识别模型存储在第一运行环境中,然后在第一运行环境中将人脸识别模型初始化之后,再将初始化后的人脸识别模型压缩之后传到第二运行环境中。由于第二运行环境中的存储空间小于第一运行环境中的存储空间,所以在第一运行环境中将人脸识别模型进行初始化,可以提高人脸识别模型的初始化效率,降低第二运行环境中的资源占用率,提高数据处理速度。同时将人脸识别模型压缩之后再传到第二运行环境中,进一步地提高了数据处理速度。另外,根据人脸识别指令的安全等级选择在第一运行环境或第二运行环境中进行处理,避免将所有应用都放在第二运行环境中处理,可以降低第二运行环境的资源占用率。
在一个实施例中,模型传输模块1034还用于获取第二运行环境中用于存储人脸识别模型的目标空间容量,以及初始化后的人脸识别模型的数据量;
根据所述目标空间容量和数据量计算压缩系数;
将初始化后的人脸识别模型进行所述压缩系数对应的压缩处理。
在一个实施例中,模型传输模块1034还用于若所述目标空间容量小于所述数据量,则将所述目标空间容量与数据量的比值作为压缩系数。
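模型传输模块计算压缩系数并据此压缩的逻辑可以概括为下面的示意代码:目标空间容量小于数据量时,压缩系数取二者比值。其中把压缩系数映射为zlib压缩级别属于示意性假设,正文并未限定具体压缩算法。

```python
import zlib

def compression_ratio(target_capacity: int, data_size: int) -> float:
    # 若目标空间容量小于数据量,则将目标空间容量与数据量的比值作为压缩系数
    if target_capacity < data_size:
        return target_capacity / data_size
    return 1.0  # 空间充足时无需压缩

def compress_model(model_bytes: bytes, ratio: float) -> bytes:
    # 示意:压缩系数越小说明需要压得越小,这里映射为越高的zlib压缩级别(1~9);
    # 该映射及zlib的使用均为假设
    if ratio >= 1.0:
        return model_bytes
    level = min(9, max(1, round((1.0 - ratio) * 9)))
    return zlib.compress(model_bytes, level)
```

例如目标空间容量为10M、初始化后的模型数据量为20M时,压缩系数为0.5。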
在一个实施例中,模型存储模块1036还用于将压缩后的人脸识别模型从所述第一运行环境传入到共享缓冲区,并将所述压缩后的人脸识别模型从所述共享缓冲区传入到第二运行环境中进行存储。
在一个实施例中,模型存储模块1036还用于将压缩后的人脸识别模型进行加密处理,并将加密处理后的人脸识别模型从所述第一运行环境传入到第二运行环境;在所述第二运行环境中对所述加密处理后的人脸识别模型进行解密处理,并将解密处理后的人脸识别模型进行存储。
在一个实施例中,人脸识别模块1038还用于控制摄像头模组采集第一目标图像和散斑图像,并将所述第一目标图像发送到第一运行环境中,将所述散斑图像发送到所述第二运行环境中;在所述第二运行环境中根据所述散斑图像计算得到深度图像,并将所述深度图像发送到所述第一运行环境中;通过所述第一运行环境中的人脸识别模型,对所述第一目标图像和深度图像进行人脸识别处理。
在一个实施例中,人脸识别模块1038还用于控制摄像头模组采集第二目标图像和散斑图像,并将第二目标图像和散斑图像发送到所述第二运行环境中;在所述第二运行环境中根据所述散斑图像计算得到深度图像;通过所述第二运行环境中的人脸识别模型,对所述第二目标图像和深度图像进行人脸识别处理。
上述数据处理装置中各个模块的划分仅用于举例说明,在其他实施例中,可将数据处理装置按照需要划分为不同的模块,以完成上述数据处理装置的全部或部分功能。
第三实施方式:
图3为一个实施例中电子设备的内部结构示意图。如图3所示,该电子设备100包括通过系统总线140连接的处理器110、存储器120和网络接口130。其中,该处理器110用于提供计算和控制能力,支撑整个电子设备100的运行。存储器120用于存储数据、程序等,存储器120上存储至少一个计算机程序1224,该计算机程序1224可被处理器110执行,以实现本申请实施例中提供的适用于电子设备100的数据处理方法。存储器120可包括磁碟、光盘、只读存储记忆体(Read-Only Memory,ROM)等非易失性存储介质,或随机存储记忆体(Random Access Memory,RAM)等。例如,在一个实施例中,存储器120包括非易失性存储介质122及内存储器124。非易失性存储介质122存储有操作系统1222和计算机程序1224。该计算机程序1224可被处理器110所执行,以用于实现以下各个实施例所提供的一种数据处理方法。内存储器124为非易失性存储介质122中的操作系统1222、计算机程序1224提供高速缓存的运行环境。网络接口130可以是以太网卡或无线网卡等,用于与外部的电子设备进行通信。该电子设备100可以是手机、平板电脑或者个人数字助理或穿戴式设备等。
图17为一个实施例中数据处理方法的流程图。如图17所示,该数据处理方法包括步骤3202至步骤3206。其中:
步骤3202,获取第一运行环境中存储的人脸识别模型。
具体的,电子设备可包括处理器,处理器可以对数据进行存储、计算、传输等处理。电子设备中的处理器可以在不同的环境中运行,例如处理器可以在TEE(Trusted Execution Environment,可信执行环境)中运行,也可以在REE(Rich Execution Environment,自然运行环境)中运行,在TEE中运行时,数据的安全性更高;在REE中运行时,数据的安全性更低。
电子设备可以对处理器的资源进行分配,对不同的运行环境划分不同的资源。例如,一般情况下电子设备中的安全性要求较高的进程会比较少,普通进程会比较多,那么电子设备就可以将处理器的小部分资源划分到安全性较高的运行环境中,将大部分资源划分到安全性没那么高的运行环境中。
人脸识别模型是用于对图像中的人脸进行识别处理的算法模型,一般通过文件的形式进行存储。可以理解的是,由于对图像中的人脸进行识别的算法比较复杂,所以存储人脸识别模型时所占用的存储空间也比较大。电子设备对处理器划分不同的运行环境后,划分到第一运行环境中的存储空间要多于划分到第二运行环境中的存储空间,因此电子设备会将人脸识别模型存储在第一运行环境中,以保证第二运行环境中有足够的空间来对数据进行处理。
步骤3204,在第一运行环境中将人脸识别模型进行初始化,并将初始化后的人脸识别模型分割成至少两个模型数据包。
需要说明的是,在对图像进行人脸识别处理之前,需要将人脸识别模型进行初始化。如果将人脸识别模型存储在第二运行环境中,那么存储人脸识别模型需要占用第二运行环境中的存储空间,对人脸识别模型进行初始化也需要占用第二运行环境中的存储空间,这样就会造成第二运行环境的资源消耗过大,影响数据处理的效率。
例如,人脸识别模型占用20M内存,对人脸识别模型进行初始化时需要另外的10M内存,如果存储和初始化都在第二运行环境中进行,那么就需要占用第二运行环境的总共30M内存。而如果将人脸识别模型存储在第一运行环境中,并在第一运行环境中初始化,再将初始化后的人脸识别模型发送到第二运行环境中,那么就只需要占用第二运行环境中的10M内存,大大减少了第二运行环境中的资源占用率。
电子设备将人脸识别模型存储在第一运行环境中,然后在第一运行环境中将人脸识别模型进行初始化,再将初始化之后的人脸识别模型传到第二运行环境中,可以减少对第二运行环境中的存储空间的占用。进一步的,在将人脸识别模型初始化之后,可以将初始化后的人脸识别模型分割成至少两个模型数据包,从而将初始化后的人脸识别模型进行分段传输。
步骤3206,依次将模型数据包从第一运行环境传入到第二运行环境,并在第二运行环境中根据模型数据包生成目标人脸识别模型;其中,第一运行环境的存储空间大于第二运行环境的存储空间,目标人脸识别模型用于对图像进行人脸识别处理。即:在第一运行环境中将人脸识别模型进行初始化,并将初始化后的人脸识别模型传入到第二运行环境中进行存储,包括:在第一运行环境中将人脸识别模型进行初始化,并将初始化后的人脸识别模型分割成至少两个模型数据包;依次将模型数据包从第一运行环境传入到第二运行环境,并在第二运行环境中根据模型数据包生成目标人脸识别模型;其中,目标人脸识别模型用于对图像进行人脸识别处理。
具体的,人脸识别模型是以文件的形式存储的,第一运行环境将初始化后的人脸识别模型分割成模型数据包之后,会将得到的模型数据包依次发送给第二运行环境。模型数据包传到第二运行环境后,会将模型数据包拼接到一起,生成目标人脸识别模型。例如,可以将人脸识别模型按照不同功能模块进行分割,传到第二运行环境后,再将各个功能模块对应的模型数据包进行拼接,生成最后的目标人脸识别模型。
在一个实施例中,可以在检测到满足初始化条件时,开始执行步骤3202。例如,把人脸识别模型存储在第一运行环境中,电子设备可以在开机的时候将人脸识别模型进行初始化,也可以在检测到需要进行人脸识别处理的应用程序被打开时就将人脸识别模型初始化,还可以在检测到人脸识别指令的时候将人脸识别模型进行初始化,然后将初始化好的人脸识别模型压缩之后再传入第二运行环境中。
在本申请提供的其他实施例中,在第一运行环境将人脸识别模型进行初始化之前,可以获取第二运行环境中的剩余存储空间;若剩余存储空间小于空间阈值,则在第一运行环境中将人脸识别模型进行初始化,并将初始化后的人脸识别模型分割成至少两个模型数据包。空间阈值可以根据需要进行设置,一般为人脸识别模型所占用的存储空间与对人脸识别模型进行初始化时所占用的存储空间的总和。
若第二运行环境中的剩余存储空间比较大,可以将人脸识别模型直接发送到第二运行环境中,并在第二运行环境中进行初始化处理,初始化完成后再将原始的人脸识别模型删除,这样可以保证数据的安全性。则上述数据处理方法具体还可以包括:若剩余存储空间大于或等于空间阈值,则在第一运行环境中将人脸识别模型分割成至少两个模型数据包,并将上述模型数据包传入到第二运行环境中;在第二运行环境中根据所述模型数据包生成目标人脸识别模型,并将上述目标人脸识别模型进行初始化;将初始化之前的目标人脸识别模型删除,保留初始化之后的目标人脸识别模型。第二运行环境中生成目标人脸识别模型之后,就可以直接根据目标人脸识别模型进行人脸识别处理。
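上述"先查询第二运行环境剩余空间、再决定初始化位置"的判断可以概括为如下示意代码。其中空间阈值取模型占用空间与初始化额外开销之和(与正文一致),函数名与返回值为示意性假设。

```python
def choose_init_environment(free_space: int, model_size: int, init_overhead: int) -> str:
    # 空间阈值 = 人脸识别模型占用的存储空间 + 初始化时额外占用的存储空间
    threshold = model_size + init_overhead
    if free_space < threshold:
        # 剩余空间小于空间阈值:在第一运行环境初始化,再分包传入第二运行环境
        return "init_in_first_env"
    # 剩余空间充足:整体传入第二运行环境初始化,随后删除初始化之前的副本
    return "init_in_second_env"
```

以正文的例子为例:模型占20M、初始化另需10M,当第二运行环境剩余空间不足30M时选择在第一运行环境初始化。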
可以理解的是,人脸识别模型中一般可以包括多个处理模块,每个处理模块完成的处理不同,这多个处理模块可以是相互独立的。例如,可包括人脸检测模块、人脸匹配模块和活体检测模块。其中,一部分模块可能对安全性要求比较低,一部分模块可能对安全性要求比较高。因此,可以将安全性要求比较低的处理模块放在第一运行环境中进行初始化,安全性要求比较高的处理模块放在第二运行环境中进行初始化。
具体的,步骤3204可以包括:在第一运行环境中对人脸识别模型中的第一模块进行第一初始化,并将第一初始化后的人脸识别模型分割成至少两个模型数据包。步骤3206之后还可以包括:将目标人脸识别模型中的第二模块进行第二初始化,其中第二模块为人脸识别模型中除第一模块之外的模块,第一模块的安全性低于第二模块的安全性。例如,第一模块可以是人脸检测模块,第二模块可以是人脸匹配模块和活体检测模块,第一模块对安全的要求比较低,所以放在第一运行环境中初始化;第二模块对安全的要求比较高,所以放在第二运行环境中初始化。
上述实施例提供的数据处理方法,可以将人脸识别模型存储在第一运行环境中,然后在第一运行环境中将人脸识别模型初始化之后,再将初始化后的人脸识别模型分成至少两个模型数据包,然后再将数据包传到第二运行环境中。由于第二运行环境中的存储空间小于第一运行环境中的存储空间,所以在第一运行环境中将人脸识别模型进行初始化,可以提高人脸识别模型的初始化效率,降低第二运行环境中的资源占用率,提高数据处理速度。同时将人脸识别模型分成多个数据包进行传输,提高了数据传输的效率。
图18为另一个实施例中数据处理方法的流程图。如图18所示,该数据处理方法包括步骤3302至步骤3314。其中:
步骤3302,获取第一运行环境中存储的人脸识别模型。
一般情况下,在进行人脸识别处理之前,会将人脸识别模型进行训练,使人脸识别模型的识别精度更高。在对模型进行训练的过程中,会获取一个训练图像集合,将训练图像集合中的图像作为模型的输入,并根据训练过程中得到的训练结果不断调整模型的训练参数,以此得到模型的最佳参数。训练图像集合中包含的图像越多,训练得到的模型越精确,但耗时也会相应地增加。
在一个实施例中,电子设备可以是与用户交互的终端,而由于终端资源有限,所以可在服务器上将人脸识别模型进行训练。服务器将人脸识别模型训练好之后,再将训练好的人脸识别模型发送给终端。终端接收到训练好后的人脸识别模型之后,再将训练好的人脸识别模型存储到第一运行环境中。则步骤3302之前还可以包括:终端接收服务器发送的人脸识别模型,并将人脸识别模型存储到终端的第一运行环境中。
终端中可以包括第一运行环境和第二运行环境,终端可在第二运行环境中对图像进行人脸识别处理,但由于终端划分到第一运行环境下的存储空间比划分到第二运行环境中的存储空间大,所以终端可以将接收到的人脸识别模型存放在第一运行环境的存储空间中。在一个实施例中,可以在每次检测到终端重启的时候,再将第一运行环境中存储的人脸识别模型加载到第二运行环境中,这样需要对图像进行人脸识别处理时,就可以直接调用第二运行环境中加载好的人脸识别模型进行处理。则步骤3302具体可以包括:当检测到终端重启时,获取第一运行环境中存储的人脸识别模型。
可以理解的是,人脸识别模型是可以更新的,当人脸识别模型更新时,服务器会将更新后的人脸识别模型发送给终端,终端接收到更新之后的人脸识别模型后,将更新之后的人脸识别模型存储在第一运行环境中,覆盖原来的人脸识别模型。然后控制终端进行重启,终端重启后,再获取更新后的人脸识别模型,并将更新后的人脸识别模型进行初始化。
步骤3304,在第一运行环境中将人脸识别模型进行初始化,获取共享缓冲区的空间容量,并根据空间容量将初始化后的人脸识别模型分割成至少两个模型数据包。
在通过人脸识别模型进行人脸识别处理之前,需要将人脸识别模型进行初始化。初始化过程中,可将人脸识别模型中的参数、模块等设置为默认状态。由于对模型进行初始化的过程也需要占用内存,因此终端可在第一运行环境中将人脸识别模型进行初始化,然后将初始化后的人脸识别模型发送到第二运行环境中,这样就可以直接在第二运行环境中进行人脸识别处理,而不需要占用额外的内存去对模型进行初始化。
人脸识别模型是以文件形式存储的,也可以是以其他形式存储的,在此不做限定。人脸识别模型一般可以包括多个功能模块,例如可以包括人脸检测模块、人脸匹配模块、活体检测模块等。那么在对人脸识别模型进行切割的时候,可以按照各个功能模块分割成至少两个模型数据包,这样方便后续重组生成目标人脸识别模型。在其他实施例中,还可以按照其他方式进行分割,不做限定。
步骤3306,对每个模型数据包赋予对应的数据编号,按照数据编号依次将模型数据包从第一运行环境传入到第二运行环境。
可以理解的是,在存储数据的时候,一般会将数据按照存储的时间先后,按照顺序写入到连续的存储地址中。对人脸识别模型进行分割之后,可以对分割得到的模型数据包进行编号,然后可按照数据编号依次将模型数据包传入到第二运行环境中进行存储。在模型数据包传输完成之后,再依次将模型数据包进行拼接,生成目标人脸识别模型。
在一个实施例中,第一运行环境和第二运行环境之间的数据传输可以通过共享缓冲区(Share Buffer)来实现,那么第一运行环境切割人脸识别模型的时候,就可以根据共享缓冲区的容量来进行切割。具体的,获取共享缓冲区的空间容量,并根据空间容量将人脸识别模型分割成至少两个模型数据包;其中,模型数据包的数据量小于或等于空间容量。
需要说明的是,共享缓冲区是第一运行环境和第二运行环境传输数据的通道,第一运行环境和第二运行环境都可以对共享缓冲区进行访问。电子设备可以对共享缓冲区进行配置,可根据需求设置共享缓冲区的空间大小。例如,电子设备可以将共享缓冲区的存储空间设置为5M,也可以设置为10M。在数据传输的时候,将人脸识别模型根据共享缓冲区的容量进行切割之后再传输,就不需要额外地对共享缓冲区配置更大的容量来传输数据,减少了电子设备的资源占用。
通过共享缓冲区传输人脸识别模型的时候,具体包括:依次将模型数据包从第一运行环境传入到共享缓冲区,并将模型数据包从共享缓冲区传入到第二运行环境。步骤3306具体可以包括:对每个模型数据包赋予对应的数据编号,按照数据编号依次将模型数据包从第一运行环境传入到共享缓冲区,然后将模型数据包从共享缓冲区传入到第二运行环境。
图5为一个实施例中实现数据处理方法的系统示意图。如图5所示,该系统中包括第一运行环境302、共享缓冲区304和第二运行环境306。第一运行环境302和第二运行环境306可以通过共享缓冲区304进行数据传输。人脸识别模型存储在第一运行环境302中,系统可以获取第一运行环境302中存储的人脸识别模型,并对获取的人脸识别模型进行初始化处理,然后将初始化后的人脸识别模型进行分割,并将分割后形成的模型数据包传入到共享缓冲区304中,通过共享缓冲区304将模型数据包传入到第二运行环境306中。最后在第二运行环境306中将模型数据包拼接成目标人脸识别模型。
步骤3308,在第二运行环境中根据数据编号将模型数据包进行拼接,生成目标人脸识别模型。
具体的,数据编号可以用于表示模型数据包的排列顺序,模型数据包传入到第二运行环境中之后,按照数据编号将模型数据包进行顺序排列,然后按照排列顺序进行拼接,生成目标人脸识别模型。
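步骤3304至步骤3308的"按共享缓冲区容量分包、编号传输、按编号拼接"可以用下面的示意代码表达。其中模型以字节串代替文件、函数命名均为示意性简化。

```python
def split_model(model_bytes: bytes, buffer_capacity: int) -> list:
    # 按共享缓冲区容量分割初始化后的模型,并对每个模型数据包赋予对应的数据编号;
    # 每个模型数据包的数据量小于或等于共享缓冲区的空间容量
    packets = []
    for number, start in enumerate(range(0, len(model_bytes), buffer_capacity)):
        packets.append((number, model_bytes[start:start + buffer_capacity]))
    return packets

def reassemble_model(packets: list) -> bytes:
    # 第二运行环境侧:按照数据编号排序后依次拼接,生成目标人脸识别模型
    ordered = sorted(packets, key=lambda item: item[0])
    return b"".join(chunk for _, chunk in ordered)
```

即使模型数据包到达第二运行环境时顺序被打乱,按数据编号排序后仍能还原出原模型。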
图19为一个实施例中分割人脸识别模型的示意图。如图19所示,人脸识别模型3502以文件形式存储,将人脸识别模型3502分割成3个模型数据包3504,该模型数据包3504也可以是文件形式。分割后的模型数据包3504的数据量小于人脸识别模型3502的数据量,各个模型数据包3504的数据量可以相同,也可以不同。例如,人脸识别模型3502总共30M,则可以按照数据量大小平均分割,每个模型数据包就为10M。
步骤3310,当检测到人脸识别指令时,判断人脸识别指令的安全等级。
第一运行环境和第二运行环境中都存储了人脸识别模型,终端可在第一运行环境中进行人脸识别处理,也可以在第二运行环境中进行人脸识别处理。具体的,终端可根据触发人脸识别处理的人脸识别指令,判断是在第一运行环境中进行人脸识别处理,还是在第二运行环境中进行人脸识别处理。
人脸识别指令是由终端的上层应用发起的,上层应用发起人脸识别指令时,可以将发起人脸识别指令的时间、应用标识、操作标识等信息写入到人脸识别指令中。应用标识可用于标示发起人脸识别指令的应用程序,操作标识可用于标示需要人脸识别结果进行的应用操作。例如,可以通过人脸识别结果进行支付、解锁、美颜等应用操作,则人脸识别指令中的操作标识就用于标示支付、解锁、美颜等应用操作。
安全等级用于表示应用操作的安全性高低,安全等级越高,则应用操作对安全性的要求越高。例如,支付操作对安全性的要求比较高,美颜操作对安全性的要求就比较低,那么支付操作的安全等级就高于美颜操作的安全等级。安全等级可以直接写入到人脸识别指令中,终端检测到人脸识别指令后,直接读取人脸识别指令中的安全等级。也可以预先建立操作标识与安全等级的对应关系,在检测到人脸识别指令后,通过人脸识别指令中的操作标识获取对应的安全等级。
步骤3312,若安全等级低于等级阈值,则在第一运行环境中根据人脸识别模型进行人脸识别处理。
当检测到安全等级低于等级阈值时,认为发起人脸识别处理的应用操作的安全性要求较低,则可以直接在第一运行环境中根据人脸识别模型进行人脸识别处理。具体的,人脸识别处理可以但不限于包含人脸检测、人脸匹配、活体检测中的一种或多种,人脸检测是指检测图像中是否存在人脸的过程,人脸匹配是指将检测到的人脸与预设的人脸进行匹配的过程,活体检测是指检测图像中的人脸是否为活体的过程。
步骤3314,若安全等级高于等级阈值,则在第二运行环境中根据人脸识别模型进行人脸识别处理;其中,第二运行环境的安全性高于第一运行环境的安全性。
当检测到安全等级高于等级阈值时,认为发起人脸识别处理的应用操作的安全性要求较高,则可以在第二运行环境中根据人脸识别模型进行人脸识别处理。具体的,终端可将人脸识别指令发送给第二运行环境,通过第二运行环境来控制摄像头模组采集图像。采集的图像会首先发送到第二运行环境中,在第二运行环境中判断应用操作的安全等级,若安全等级低于等级阈值,则将采集的图像发送到第一运行环境中进行人脸识别处理;若安全等级高于等级阈值,则在第二运行环境中对采集的图像进行人脸识别处理。
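步骤3310至步骤3314的路由判断可以概括为如下示意代码:根据人脸识别指令携带的操作标识查出安全等级,低于等级阈值在第一运行环境处理,高于阈值在第二运行环境处理。其中操作标识到安全等级的映射表及具体数值均为示意性假设。

```python
# 操作标识到安全等级的对应关系(数值为示意性假设)
SECURITY_LEVELS = {"beautify": 1, "unlock": 8, "payment": 9}
LEVEL_THRESHOLD = 5

def route_face_recognition(operation: str) -> str:
    # 未知操作按高安全等级处理,避免误入安全性较低的环境
    level = SECURITY_LEVELS.get(operation, LEVEL_THRESHOLD + 1)
    if level < LEVEL_THRESHOLD:
        # 安全等级低于等级阈值:在第一运行环境中根据人脸识别模型处理
        return "first_environment"
    # 安全等级高于等级阈值:在安全性更高的第二运行环境中处理
    return "second_environment"
```

例如美颜操作被路由到第一运行环境,支付、解锁操作被路由到第二运行环境。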
具体的,如图7所示,在第一运行环境中进行人脸识别处理时,包括:
步骤502,控制摄像头模组采集第一目标图像和散斑图像,并将第一目标图像发送到第一运行环境中,将散斑图像发送到第二运行环境中。
终端中安装的应用程序可发起人脸识别指令,并将人脸识别指令发送到第二运行环境中。在第二运行环境中检测到人脸识别指令的安全等级低于等级阈值时,就可以控制摄像头模组采集第一目标图像和散斑图像。摄像头模组采集到的第一目标图像可以直接发给第一运行环境,并将采集的散斑图像发送到第二运行环境中。
在一个实施例中,第一目标图像可以是可见光图像,也可以是其他类型的图像,在此不做限定。当第一目标图像为可见光图像时,摄像头模组中可包括RGB(Red Green Blue,红绿蓝)摄像头,通过RGB摄像头采集第一目标图像。摄像头模组中还可包括镭射灯和激光摄像头,终端可控制镭射灯开启,然后通过激光摄像头采集镭射灯发射的激光散斑照射到物体上所形成的散斑图像。
具体的,当激光照射在平均起伏大于波长数量级的光学粗糙表面上时,这些表面上无规分布的面元散射的子波相互叠加使反射光场具有随机的空间光强分布,呈现出颗粒状的结构,这就是激光散斑。形成的激光散斑具有高度随机性,因此不同的激光发射器发射出来的激光所生成的激光散斑不同。当形成的激光散斑照射到不同深度和形状的物体上时,生成的散斑图像是不一样的。通过不同的镭射灯形成的激光散斑具有唯一性,从而得到的散斑图像也具有唯一性。
步骤504,在第二运行环境中根据散斑图像计算得到深度图像,并将深度图像发送到第一运行环境中。
终端为了保护数据的安全,会保证散斑图像一直在安全的环境中进行处理,所以终端会将散斑图像传到第二运行环境下进行处理。深度图像是用于表示被拍摄物体深度信息的图像,根据散斑图像计算可以得到深度图像。终端可以控制摄像头模组同时采集第一目标图像和散斑图像,根据散斑图像计算得到的深度图像就可以表示第一目标图像中的物体的深度信息。
可在第二运行环境中根据散斑图像和参考图像计算得到深度图像。参考图像是激光散斑照射到参考平面时所采集的图像,所以参考图像是带有参考深度信息的图像。首先可根据散斑图像中的散斑点相对于参考图像中的散斑点的位置偏移量计算相对深度,相对深度可以表示实际拍摄物体到参考平面的深度信息。然后再根据获取的相对深度和参考深度计算物体的实际深度信息。具体的,将参考图像与散斑图像进行比较得到偏移信息,偏移信息用于表示散斑图像中散斑点相对于参考图像中对应散斑点的水平偏移量;根据偏移信息和参考深度信息计算得到深度图像。
图8为一个实施例中计算深度信息的原理图。如图8所示,镭射灯602可以生成激光散斑,激光散斑经过物体进行反射后,通过激光摄像头604获取形成的图像。在摄像头的标定过程中,镭射灯602发射的激光散斑会经过参考平面608进行反射,然后通过激光摄像头604采集反射光线,通过成像平面610成像得到参考图像。参考平面608到镭射灯602的参考深度为L,该参考深度为已知的。在实际计算深度信息的过程中,镭射灯602发射的激光散斑会经过物体606进行反射,再由激光摄像头604采集反射光线,通过成像平面610成像得到实际的散斑图像。则可以得到实际的深度信息的计算公式为:
Dis = (CD × L × f) / (CD × f + AB × L)
其中,L是镭射灯602与参考平面608之间的距离,f为激光摄像头604中透镜的焦距,CD为镭射灯602到激光摄像头604之间的距离,AB为物体606的成像与参考平面608的成像之间的偏移距离。AB可为像素偏移量n与像素点的实际距离p的乘积。当物体606到镭射灯602之间的距离Dis大于参考平面608到镭射灯602之间的距离L时,AB为负值;当物体606到镭射灯602之间的距离Dis小于参考平面608到镭射灯602之间的距离L时,AB为正值。
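正文所述的深度计算及其符号约定可以用下面的示意代码验证。其中公式形式 Dis = CD·L·f / (CD·f + AB·L) 系依据正文的变量定义与符号约定重建的假设,AB取像素偏移量n与像素点实际距离p的乘积。

```python
def depth_from_disparity(L: float, f: float, CD: float, n: float, p: float) -> float:
    # L:镭射灯与参考平面之间的距离;f:透镜焦距;CD:镭射灯到激光摄像头的距离
    # AB为像素偏移量n与像素点实际距离p的乘积,带符号
    AB = n * p
    # Dis = CD*L*f / (CD*f + AB*L):AB为负时分母变小,Dis大于L;AB为正时Dis小于L
    return (CD * L * f) / (CD * f + AB * L)
```

可以看到该式与正文的符号约定一致:偏移为负值时计算出的物体深度大于参考深度,偏移为正值时小于参考深度。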
步骤506,通过第一运行环境中的人脸识别模型,对第一目标图像和深度图像进行人脸识别处理。
在第二运行环境中计算得到深度图像之后,可以将计算得到的深度图像发送到第一运行环境中,然后在第一运行环境中根据第一目标图像和深度图像进行人脸识别处理,第一运行环境再将人脸识别结果发送给上层应用,上层应用可以根据人脸识别结果进行相应的应用操作。
例如,在对图像进行美颜处理的时候,通过第一目标图像可以检测到人脸所在的位置和区域。由于第一目标图像和深度图像是对应的,那么就可以通过深度图像的对应区域获取人脸的深度信息,通过人脸的深度信息可以构建人脸三维特征,从而根据人脸三维特征对人脸进行美颜处理。
在本申请提供的其他实施例中,如图9所示,在第二运行环境中进行人脸识别处理时,具体包括:
步骤702,控制摄像头模组采集第二目标图像和散斑图像,并将第二目标图像和散斑图像发送到第二运行环境中。
在一个实施例中,第二目标图像可以为红外图像,摄像头模组中可包括泛光灯、镭射灯和激光摄像头,终端可控制泛光灯开启,然后通过激光摄像头采集泛光灯照射物体所形成的红外图像作为第二目标图像。终端还可以控制镭射灯开启,然后通过激光摄像头采集镭射灯照射物体所形成的散斑图像。
采集第二目标图像和散斑图像之间的时间间隔要比较短,才能保证采集到的第二目标图像和散斑图像的一致性,避免第二目标图像和散斑图像之间存在较大的误差,从而提高对图像处理的准确性。具体地,控制摄像头模组采集第二目标图像,并控制摄像头模组采集散斑图像;其中,采集第二目标图像的第一时刻与采集散斑图像的第二时刻之间的时间间隔小于第一阈值。
可分别设置泛光灯控制器和镭射灯控制器,通过两路PWM(Pulse Width Modulation,脉冲宽度调制)分别连接泛光灯控制器和镭射灯控制器,当需要控制泛光灯开启或镭射灯开启时,可通过PWM向泛光灯控制器发射脉冲波控制泛光灯开启或向镭射灯控制器发射脉冲波控制镭射灯开启,通过PWM分别向两个控制器发射脉冲波来控制采集第二目标图像和散斑图像之间的时间间隔。可以理解的是,第二目标图像可以为红外图像,也可以是其他类型的图像,在此不做限定。例如,第二目标图像也可以为可见光图像。
步骤704,在第二运行环境中根据散斑图像计算得到深度图像。
需要说明的是,当人脸识别指令的安全等级高于等级阈值时,认为发起人脸识别指令的应用操作的安全性要求较高,则需要在安全性较高的环境中进行人脸识别处理,才能保证数据处理的安全性。摄像头模组采集的第二目标图像和散斑图像直接发送到第二运行环境,然后在第二运行环境中根据散斑图像计算深度图像。
步骤706,通过第二运行环境中的人脸识别模型,对第二目标图像和深度图像进行人脸识别处理。
在一个实施例中,在第二运行环境中进行人脸识别处理时,可根据第二目标图像进行人脸检测,检测第二目标图像中是否包含目标人脸。若第二目标图像中包含目标人脸,则将检测到的目标人脸与预设人脸进行匹配。若检测到的目标人脸与预设人脸匹配,再根据深度图像获取目标人脸的目标深度信息,根据目标深度信息检测目标人脸是否为活体。
在对目标人脸进行匹配的时候,可以提取目标人脸的人脸属性特征,再将提取的人脸属性特征与预设人脸的人脸属性特征进行匹配,若匹配值超过匹配阈值,则认为人脸匹配成功。例如,可以提取人脸的偏转角度、亮度信息、五官特征等特征作为人脸属性特征,若目标人脸的人脸属性特征与预设人脸的人脸属性特征匹配度超过90%,则认为人脸匹配成功。
一般地,在人脸认证的过程中,若拍摄的为照片或雕塑中的人脸,提取的人脸属性特征也可能认证成功。那么为了提高准确率,可以根据采集的深度图像进行活体检测处理,这样必须保证采集的人脸是活体人脸才能认证成功。可以理解的是,采集的第二目标图像可以表示人脸的细节信息,采集的深度图像则可以表示对应的深度信息,根据深度图像就可以进行活体检测。例如,被拍摄的人脸为照片中的人脸的话,根据深度图像就可以判断采集的人脸不是立体的,则可以认为采集的人脸为非活体的人脸。
具体地,根据上述深度图像进行活体检测包括:在深度图像中查找与上述目标人脸对应的人脸深度信息,若上述深度图像中存在与上述目标人脸对应的人脸深度信息,且上述人脸深度信息符合人脸立体规则,则上述目标人脸为活体人脸。上述人脸立体规则是带有人脸三维深度信息的规则。
在一个实施例中,还可以采用人工智能模型对上述第二目标图像和深度图像进行人工智能识别,获取目标人脸对应的活体属性特征,并根据获取的活体属性特征判断上述目标人脸是否为活体人脸图像。活体属性特征可以包括目标人脸对应的肤质特征、纹理的方向、纹理的密度、纹理的宽度等,若上述活体属性特征符合人脸活体规则,则认为上述目标人脸具有生物活性,即为活体人脸。
可以理解的是,在进行人脸检测、人脸匹配、活体检测等处理时,处理顺序可以根据需要进行调换。例如,可以先对人脸进行认证,再检测人脸是否为活体。也可以先检测人脸是否为活体,再对人脸进行认证。
在本申请提供的实施例中,为保证数据的安全,在传输人脸识别模型的时候,可以将压缩后的人脸识别模型进行加密处理,并将加密处理后的人脸识别模型从第一运行环境传入到第二运行环境;在第二运行环境中对加密处理后的人脸识别模型进行解密处理,并将解密处理后的人脸识别模型进行存储。
第一运行环境可以是普通运行环境,第二运行环境为安全运行环境,第二运行环境的安全性要高于第一运行环境。第一运行环境一般用于对安全性较低的应用操作进行处理,第二运行环境一般用于对安全性较高的应用操作进行处理。例如,拍摄、游戏等安全性要求不高的操作可以在第一运行环境中进行,支付、解锁等安全性要求较高的操作可以在第二运行环境中进行。
第二运行环境一般用于进行安全性要求较高的应用操作,因此在向第二运行环境中发送人脸识别模型的时候,也需要保证人脸识别模型的安全性。在第一运行环境将人脸识别模型压缩处理之后,可将压缩后的人脸识别模型进行加密处理,然后将加密处理后的人脸识别模型通过共享缓冲区发送到第二运行环境中。
加密处理后的人脸识别模型从第一运行环境传入到共享缓冲区之后,再从共享缓冲区传入到第二运行环境中。第二运行环境再将接收到的加密处理后的人脸识别模型进行解密处理。对人脸识别模型进行加密处理的算法在本实施例中不做限定。例如,可以是根据DES(Data Encryption Standard,数据加密标准)、MD5(Message-Digest Algorithm 5,信息-摘要算法5)、Diffie-Hellman(密钥交换算法)等算法进行处理的。
在一个实施例中,在第二运行环境中生成目标人脸识别模型之后,还可以包括:当检测到目标人脸识别模型未被调用的时长超过时长阈值,或检测到终端被关闭时,将第二运行环境中的目标人脸识别模型删除。这样可以将第二运行环境中的存储空间释放,以节省电子设备的空间。
进一步地,可以在电子设备运行过程中检测运行情况,根据电子设备的运行情况将目标人脸识别模型占用的存储空间进行释放。具体的,当检测到电子设备处于卡顿状态,且目标人脸识别模型未被调用的时长超过时长阈值时,将第二运行环境中的目标人脸识别模型删除。
目标人脸识别模型被释放之后,可以在检测到电子设备恢复正常运行状态,或检测到人脸识别指令时,获取第一运行环境中存储的人脸识别模型;然后在第一运行环境中将人脸识别模型进行初始化,并将初始化后的人脸识别模型分割成至少两个模型数据包;依次将模型数据包从第一运行环境传入到第二运行环境,并在第二运行环境中根据模型数据包生成目标人脸识别模型。
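上述"超时未调用即删除、需要时再从第一运行环境重新加载"的管理策略可以概括为如下示意代码。其中时长阈值的取值、时间来源及回调函数的形式均为示意性假设。

```python
import time

class SecureModelCache:
    # 第二运行环境中目标人脸识别模型的简化管理:超时未调用则删除,使用时再重新加载
    def __init__(self, idle_threshold_seconds, loader):
        self.idle_threshold = idle_threshold_seconds
        self.loader = loader      # 回调:从第一运行环境重新初始化、分包传入并拼接出模型
        self.model = None
        self.last_used = None

    def get_model(self, now=None):
        now = time.monotonic() if now is None else now
        if self.model is None:
            self.model = self.loader()   # 检测到人脸识别指令时重新生成目标人脸识别模型
        self.last_used = now
        return self.model

    def evict_if_idle(self, now=None):
        now = time.monotonic() if now is None else now
        if self.model is not None and self.last_used is not None \
                and now - self.last_used > self.idle_threshold:
            self.model = None            # 删除模型,释放第二运行环境中的存储空间
```

加载回调只在模型不存在时被触发,因此空闲删除后的下一次识别请求会触发一次重新加载。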
上述实施例提供的数据处理方法,可以将人脸识别模型存储在第一运行环境中,然后在第一运行环境中将人脸识别模型初始化之后,再将初始化后的人脸识别模型分成至少两个模型数据包,然后再将数据包传到第二运行环境中。由于第二运行环境中的存储空间小于第一运行环境中的存储空间,所以在第一运行环境中将人脸识别模型进行初始化,可以提高人脸识别模型的初始化效率,降低第二运行环境中的资源占用率,提高数据处理速度。同时将人脸识别模型分成多个数据包进行传输,提高了数据传输的效率。另外,根据人脸识别指令的安全等级选择在第一运行环境或第二运行环境中进行处理,避免将所有应用都放在第二运行环境中处理,可以降低第二运行环境的资源占用率。
应该理解的是,虽然图17、图18、图7、图9的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,图17、图18、图7、图9中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
图10为一个实施例中实现数据处理方法的硬件结构图。如图10所示,该电子设备中可包括摄像头模组810、中央处理器(Central Processing Unit,CPU)820和微控制单元(Microcontroller Unit,MCU)830,上述摄像头模组810中包括激光摄像头812、泛光灯814、RGB摄像头816和镭射灯818。微控制单元830包括PWM(Pulse Width Modulation,脉冲宽度调制)模块832、SPI/I2C(Serial Peripheral Interface/Inter-Integrated Circuit,串行外设接口/双向二线制同步串行接口)模块834、RAM(Random Access Memory,随机存取存储器)模块836、Depth Engine模块838。其中,中央处理器820可以为多核运行模式,中央处理器820中的CPU内核可以在TEE或REE下运行。TEE和REE均为ARM模块(Advanced RISC Machines,高级精简指令集处理器)的运行模式。中央处理器820中的自然运行环境822可为第一运行环境,安全性较低。中央处理器820中的可信运行环境824为第二运行环境,安全性较高。可理解的是,由于微控制单元830是独立于中央处理器820的处理模块,且其输入和输出都是由可信运行环境824下的中央处理器820来控制的,所以微控制单元830也是安全性较高的处理模块,可认为微控制单元830也是处于安全运行环境中的,即微控制单元830也处于第二运行环境中。
通常情况下,安全性要求较高的操作行为需要在第二运行环境中执行,其他操作行为则可在第一运行环境下执行。本申请实施例中,中央处理器820可通过可信运行环境824控制SECURE SPI/I2C向微控制单元830中的SPI/I2C模块834发送人脸识别指令。微控制单元830在接收到人脸识别指令后,若判断人脸识别指令的安全等级高于等级阈值,则通过PWM模块832发射脉冲波控制摄像头模组810中泛光灯814开启来采集红外图像、控制摄像头模组810中镭射灯818开启来采集散斑图像。摄像头模组810可将采集到的红外图像和散斑图像传送给微控制单元830中Depth Engine模块838,Depth Engine模块838可根据散斑图像计算深度图像,并将红外图像和深度图像发送给中央处理器820的可信运行环境824中。中央处理器820的可信运行环境824会根据接收到的红外图像和深度图像进行人脸识别处理。
若判断人脸识别指令的安全等级低于等级阈值,则通过PWM模块832发射脉冲波控制摄像头模组810中镭射灯818开启来采集散斑图像,并通过RGB摄像头816来采集可见光图像。摄像头模组810将采集的可见光图像直接发送到中央处理器820的自然运行环境822中,将散斑图像传送给微控制单元830中Depth Engine模块838,Depth Engine模块838可根据散斑图像计算深度图像,并将深度图像发送给中央处理器820的可信运行环境824。再由可信运行环境824将深度图像发送到自然运行环境822中,在自然运行环境822中根据可见光图像和深度图像进行人脸识别处理。
图20为一个实施例中数据处理装置的结构示意图。如图20所示,该数据处理装置1020包括模型获取模块1022、模型分割模块1024和模型传输模块1026。其中:
模型获取模块1022,用于获取第一运行环境中存储的人脸识别模型。
模型分割模块1024,用于在所述第一运行环境中将所述人脸识别模型进行初始化,并将初始化后的人脸识别模型分割成至少两个模型数据包。
模型传输模块1026,用于依次将所述模型数据包从所述第一运行环境传入到第二运行环境,并在所述第二运行环境中根据所述模型数据包生成目标人脸识别模型;其中,所述第一运行环境的存储空间大于所述第二运行环境的存储空间,所述目标人脸识别模型用于对图像进行人脸识别处理。即:模型总处理模块包括模型分割模块和模型传输模块。
上述实施例提供的数据处理装置,可以将人脸识别模型存储在第一运行环境中,然后在第一运行环境中将人脸识别模型初始化之后,再将初始化后的人脸识别模型分成至少两个模型数据包,然后再将数据包传到第二运行环境中。由于第二运行环境中的存储空间小于第一运行环境中的存储空间,所以在第一运行环境中将人脸识别模型进行初始化,可以提高人脸识别模型的初始化效率,降低第二运行环境中的资源占用率,提高数据处理速度。同时将人脸识别模型分成多个数据包进行传输,提高了数据传输的效率。
图21为另一个实施例中数据处理装置的结构示意图。如图21所示,该数据处理装置1040包括模型获取模块1042、模型分割模块1044、模型传输模块1046和人脸识别模块1048。其中:
模型获取模块1042,用于获取第一运行环境中存储的人脸识别模型。
模型分割模块1044,用于在所述第一运行环境中将所述人脸识别模型进行初始化,并将初始化后的人脸识别模型分割成至少两个模型数据包。
模型传输模块1046,用于依次将所述模型数据包从所述第一运行环境传入到第二运行环境,并在所述第二运行环境中根据所述模型数据包生成目标人脸识别模型;其中,所述第一运行环境的存储空间大于所述第二运行环境的存储空间,所述目标人脸识别模型用于对图像进行人脸识别处理。
人脸识别模块1048,用于当检测到人脸识别指令时,判断所述人脸识别指令的安全等级;若所述安全等级低于等级阈值,则在所述第一运行环境中根据所述人脸识别模型进行人脸识别处理;若所述安全等级高于等级阈值,则在所述第二运行环境中根据所述人脸识别模型进行人脸识别处理;其中,所述第二运行环境的安全性高于所述第一运行环境的安全性。
上述实施例提供的数据处理方法,可以将人脸识别模型存储在第一运行环境中,然后在第一运行环境中将人脸识别模型初始化之后,再将初始化后的人脸识别模型分成至少两个模型数据包,然后再将数据包传到第二运行环境中。由于第二运行环境中的存储空间小于第一运行环境中的存储空间,所以在第一运行环境中将人脸识别模型进行初始化,可以提高人脸识别模型的初始化效率,降低第二运行环境中的资源占用率,提高数据处理速度。同时将人脸识别模型分成多个数据包进行传输,提高了数据传输的效率。另外,根据人脸识别指令的安全等级选择在第一运行环境或第二运行环境中进行处理,避免将所有应用都放在第二运行环境中处理,可以降低第二运行环境的资源占用率。
在一个实施例中,模型分割模块1044还用于获取所述共享缓冲区的空间容量,并根据所述空间容量将所述人脸识别模型分割成至少两个模型数据包;其中,所述模型数据包的数据量小于或等于所述空间容量。
在一个实施例中,模型传输模块1046还用于依次将所述模型数据包从所述第一运行环境传入到共享缓冲区,并将所述模型数据包从所述共享缓冲区传入到第二运行环境。
在一个实施例中,模型传输模块1046还用于对每个模型数据包赋予对应的数据编号,按照所述数据编号依次将所述模型数据包从所述第一运行环境传入到第二运行环境;在所述第二运行环境中根据所述数据编号将所述模型数据包进行拼接,生成目标人脸识别模型。
在一个实施例中,模型传输模块1046还用于将所述模型数据包进行加密处理,并将加密处理后的模型数据包从所述第一运行环境传入到第二运行环境;在所述第二运行环境中对所述加密处理后的模型数据包进行解密处理。
在一个实施例中,人脸识别模块1048还用于控制摄像头模组采集第一目标图像和散斑图像,并将所述第一目标图像发送到第一运行环境中,将所述散斑图像发送到所述第二运行环境中;在所述第二运行环境中根据所述散斑图像计算得到深度图像,并将所述深度图像发送到所述第一运行环境中;通过所述第一运行环境中的人脸识别模型,对所述第一目标图像和深度图像进行人脸识别处理。
在一个实施例中,人脸识别模块1048还用于控制摄像头模组采集第二目标图像和散斑图像,并将第二目标图像和散斑图像发送到所述第二运行环境中;在所述第二运行环境中根据所述散斑图像计算得到深度图像;通过所述第二运行环境中的人脸识别模型,对所述第二目标图像和深度图像进行人脸识别处理。
上述数据处理装置中各个模块的划分仅用于举例说明,在其他实施例中,可将数据处理装置按照需要划分为不同的模块,以完成上述数据处理装置的全部或部分功能。
本申请实施例还提供了一种计算机可读存储介质。一个或多个包含计算机可执行指令的非易失性计算机可读存储介质,当所述计算机可执行指令被一个或多个处理器执行时,使得所述处理器执行上述第一实施方式、第二实施方式和第三实施方式提供的数据处理方法。
本申请实施例还提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述第一实施方式、第二实施方式和第三实施方式提供的数据处理方法。
请参阅图22,本申请实施例还提供了一种计算机可读存储介质300,其上存储有计算机程序,计算机程序被处理器210执行时实现上述第一实施方式、第二实施方式和第三实施方式的数据处理方法。
请参阅图23,本申请实施例还提供了一种电子设备400,电子设备400包括存储器420及处理器410,存储器420中储存有计算机可读指令,指令被处理器410执行时,使得处理器410执行上述第一实施方式、第二实施方式和第三实施方式的数据处理方法。
本申请所使用的对存储器、存储、数据库或其它介质的任何引用可包括非易失性和/或易失性存储器。合适的非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM),它用作外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDR SDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对本申请专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。

Claims (28)

  1. 一种数据处理方法,其特征在于,所述方法包括:
    获取第一运行环境中存储的人脸识别模型;
    在所述第一运行环境中将所述人脸识别模型进行初始化,并将初始化后的人脸识别模型传入到第二运行环境中进行存储;其中,所述第一运行环境中的存储空间大于所述第二运行环境中的存储空间。
  2. 根据权利要求1所述的方法,其特征在于,所述在所述第一运行环境中将所述人脸识别模型进行初始化,并将初始化后的人脸识别模型传入到第二运行环境中进行存储,包括:
    在所述第一运行环境中将所述人脸识别模型进行初始化,并将初始化后的人脸识别模型传入到共享缓冲区;
    将所述初始化后的人脸识别模型从所述共享缓冲区传入到第二运行环境中进行存储,所述人脸识别模型用于对图像进行人脸识别处理。
  3. 根据权利要求2所述的方法,其特征在于,所述获取第一运行环境中存储的人脸识别模型之前,还包括:
    终端接收服务器发送的人脸识别模型,并将所述人脸识别模型存储到所述终端的第一运行环境中;
    所述获取第一运行环境中存储的人脸识别模型,包括:
    当检测到所述终端重启时,获取所述第一运行环境中存储的人脸识别模型。
  4. 根据权利要求2所述的方法,其特征在于,所述将初始化后的人脸识别模型传入到共享缓冲区,包括:
    将初始化后的人脸识别模型进行加密处理,并将加密处理后的人脸识别模型传入到共享缓冲区;
    所述将所述初始化后的人脸识别模型从所述共享缓冲区传入到第二运行环境中进行存储,包括:
    将所述加密处理后的人脸识别模型从所述共享缓冲区传入到第二运行环境中进行存储,并在所述第二运行环境中对所述加密处理后的人脸识别模型进行解密处理。
  5. 根据权利要求2所述的方法,其特征在于,所述在所述第一运行环境中将所述人脸识别模型进行初始化,并将初始化后的人脸识别模型传入到共享缓冲区,包括:
    获取第二运行环境中的剩余存储空间;
    若所述剩余存储空间小于空间阈值,则在所述第一运行环境中将所述人脸识别模型进行初始化,并将初始化后的人脸识别模型传入到共享缓冲区;
    所述方法,还包括:
    若所述剩余存储空间大于或等于空间阈值,则将所述人脸识别模型传入到共享缓冲区,并将所述人脸识别模型从所述共享缓冲区传入到第二运行环境中;
    在所述第二运行环境中将所述人脸识别模型进行初始化,并将初始化之前的人脸识别模型删除,保留初始化之后的人脸识别模型。
  6. 根据权利要求2至5任一项所述的方法,其特征在于,所述将所述初始化后的人脸识别模型从所述共享缓冲区传入到第二运行环境中进行存储之后,还包括:
    当检测到人脸识别指令时,判断所述人脸识别指令的安全等级;
    若所述安全等级低于等级阈值,则在所述第一运行环境中根据所述人脸识别模型进行人脸识别处理;
    若所述安全等级高于等级阈值,则在所述第二运行环境中根据所述人脸识别模型进行人脸识别处理;其中,所述第二运行环境的安全性高于所述第一运行环境的安全性。
  7. 根据权利要求6所述的方法,其特征在于,所述在所述第一运行环境中根据所述人脸识别模型进行人脸识别处理,包括:
    控制摄像头模组采集第一目标图像和散斑图像,并将所述第一目标图像发送到第一运行环境中,将所述散斑图像发送到所述第二运行环境中;
    在所述第二运行环境中根据所述散斑图像计算得到深度图像,并将所述深度图像发送到所述第一运行环境中;
    通过所述第一运行环境中的人脸识别模型,对所述第一目标图像和深度图像进行人脸识别处理。
  8. 根据权利要求6所述的方法,其特征在于,所述在所述第二运行环境中根据所述人脸识别模型进行人脸识别处理,包括:
    控制摄像头模组采集第二目标图像和散斑图像,并将第二目标图像和散斑图像发送到所述第二运行环境中;
    在所述第二运行环境中根据所述散斑图像计算得到深度图像;
    通过所述第二运行环境中的人脸识别模型,对所述第二目标图像和深度图像进行人脸识别处理。
  9. 根据权利要求1所述的方法,其特征在于,所述在所述第一运行环境中将所述人脸识别模型进行初始化,并将初始化后的人脸识别模型传入到第二运行环境中进行存储,包括:
    在所述第一运行环境中将所述人脸识别模型进行初始化,并将初始化后的人脸识别模型进行压缩处理;
    将所述压缩后的人脸识别模型从所述第一运行环境传入到第二运行环境中进行存储,所述人脸识别模型用于对图像进行人脸识别处理。
  10. 根据权利要求9所述的方法,其特征在于,所述将初始化后的人脸识别模型进行压缩处理,包括:
    获取第二运行环境中用于存储人脸识别模型的目标空间容量,以及初始化后的人脸识别模型的数据量;
    根据所述目标空间容量和数据量计算压缩系数;
    将初始化后的人脸识别模型进行所述压缩系数对应的压缩处理。
  11. 根据权利要求10所述的方法,其特征在于,所述根据所述目标空间容量和数据量计算压缩系数,包括:
    若所述目标空间容量小于所述数据量,则将所述目标空间容量与数据量的比值作为压缩系数。
  12. 根据权利要求9所述的方法,其特征在于,所述将所述压缩后的人脸识别模型从所述第一运行环境传入到第二运行环境中进行存储,包括:
    将压缩后的人脸识别模型从所述第一运行环境传入到共享缓冲区,并将所述压缩后的人脸识别模型从所述共享缓冲区传入到第二运行环境中进行存储。
  13. 根据权利要求9所述的方法,其特征在于,所述将所述压缩后的人脸识别模型从所述第一运行环境传入到第二运行环境中进行存储,包括:
    将压缩后的人脸识别模型进行加密处理,并将加密处理后的人脸识别模型从所述第一运行环境传入到第二运行环境;
    在所述第二运行环境中对所述加密处理后的人脸识别模型进行解密处理,并将解密处理后的人脸识别模型进行存储。
  14. 根据权利要求9至13中任一项所述的方法,其特征在于,所述将所述压缩后的人脸识别模型从所述第一运行环境传入到第二运行环境中进行存储之后,还包括:
    当检测到人脸识别指令时,判断所述人脸识别指令的安全等级;
    若所述安全等级低于等级阈值,则在所述第一运行环境中根据所述人脸识别模型进行人脸识别处理;
    若所述安全等级高于等级阈值,则在所述第二运行环境中根据所述人脸识别模型进行人脸识别处理;其中,所述第二运行环境的安全性高于所述第一运行环境的安全性。
  15. 根据权利要求14所述的方法,其特征在于,所述在所述第一运行环境中根据所述人脸识别模型进行人脸识别处理,包括:
    控制摄像头模组采集第一目标图像和散斑图像,并将所述第一目标图像发送到第一运行环境中,将所述散斑图像发送到所述第二运行环境中;
    在所述第二运行环境中根据所述散斑图像计算得到深度图像,并将所述深度图像发送到所述第一运行环境中;
    通过所述第一运行环境中的人脸识别模型,对所述第一目标图像和深度图像进行人脸识别处理;
    所述在所述第二运行环境中根据所述人脸识别模型进行人脸识别处理,包括:
    控制摄像头模组采集第二目标图像和散斑图像,并将第二目标图像和散斑图像发送到所述第二运行环境中;
    在所述第二运行环境中根据所述散斑图像计算得到深度图像;
    通过所述第二运行环境中的人脸识别模型,对所述第二目标图像和深度图像进行人脸识别处理。
  16. 根据权利要求1所述的方法,其特征在于,所述在所述第一运行环境中将所述人脸识别模型进行初始化,并将初始化后的人脸识别模型传入到第二运行环境中进行存储,包括:
    在所述第一运行环境中将所述人脸识别模型进行初始化,并将初始化后的人脸识别模型分割成至少两个模型数据包;
    依次将所述模型数据包从所述第一运行环境传入到第二运行环境,并在所述第二运行环境中根据所述模型数据包生成目标人脸识别模型,所述目标人脸识别模型用于对图像进行人脸识别处理。
  17. 根据权利要求16所述的方法,其特征在于,所述依次将所述模型数据包从所述第一运行环境传入到第二运行环境,包括:
    依次将所述模型数据包从所述第一运行环境传入到共享缓冲区,并将所述模型数据包从所述共享缓冲区传入到第二运行环境。
  18. 根据权利要求17所述的方法,其特征在于,所述将初始化后的人脸识别模型分割成至少两个模型数据包,包括:
    获取所述共享缓冲区的空间容量,并根据所述空间容量将所述人脸识别模型分割成至少两个模型数据包;其中,所述模型数据包的数据量小于或等于所述空间容量。
  19. 根据权利要求16所述的方法,其特征在于,所述依次将所述模型数据包从所述第一运行环境传入到第二运行环境,并在所述第二运行环境中根据所述模型数据包生成目标人脸识别模型包括:
    对每个模型数据包赋予对应的数据编号,按照所述数据编号依次将所述模型数据包从所述第一运行环境传入到第二运行环境;
    在所述第二运行环境中根据所述数据编号将所述模型数据包进行拼接,生成目标人脸识别模型。
  20. 根据权利要求16至19中任一项所述的方法,其特征在于,所述依次将所述模型数据包从所述第一运行环境传入到第二运行环境,包括:
    将所述模型数据包进行加密处理,并将加密处理后的模型数据包从所述第一运行环境传入到第二运行环境;
    在所述第二运行环境中对所述加密处理后的模型数据包进行解密处理。
  21. 根据权利要求20所述的方法,其特征在于,所述将所述初始化后的人脸识别模型从所述共享缓冲区传入到第二运行环境中进行存储之后,还包括:
    当检测到人脸识别指令时,判断所述人脸识别指令的安全等级;
    若所述安全等级低于等级阈值,则在所述第一运行环境中根据所述人脸识别模型进行人脸识别处理;
    若所述安全等级高于等级阈值,则在所述第二运行环境中根据所述人脸识别模型进行人脸识别处理;其中,所述第二运行环境的安全性高于所述第一运行环境的安全性。
  22. 根据权利要求21所述的方法,其特征在于,所述在所述第一运行环境中根据所述人脸识别模型进行人脸识别处理,包括:
    控制摄像头模组采集第一目标图像和散斑图像,并将所述第一目标图像发送到第一运行环境中,将所述散斑图像发送到所述第二运行环境中;
    在所述第二运行环境中根据所述散斑图像计算得到深度图像,并将所述深度图像发送到所述第一运行环境中;
    通过所述第一运行环境中的人脸识别模型,对所述第一目标图像和深度图像进行人脸识别处理;
    所述在所述第二运行环境中根据所述人脸识别模型进行人脸识别处理,包括:
    控制摄像头模组采集第二目标图像和散斑图像,并将第二目标图像和散斑图像发送到所述第二运行环境中;
    在所述第二运行环境中根据所述散斑图像计算得到深度图像;
    通过所述第二运行环境中的人脸识别模型,对所述第二目标图像和深度图像进行人脸识别处理。
  23. 一种数据处理装置,其特征在于,所述装置包括:
    模型获取模块,用于获取第一运行环境中存储的人脸识别模型;
    模型总处理模块,用于在所述第一运行环境中将所述人脸识别模型进行初始化,并将初始化后的人脸识别模型传入到第二运行环境中进行存储;其中,所述第一运行环境中的存储空间大于所述第二运行环境中的存储空间。
  24. 根据权利要求23所述的装置,其特征在于,所述模型总处理模块包括:
    模型传输模块,用于在所述第一运行环境中将所述人脸识别模型进行初始化,并将初始化后的人脸识别模型传入到共享缓冲区;
    模型存储模块,用于将所述初始化后的人脸识别模型从所述共享缓冲区传入到第二运行环境中进行存储,所述人脸识别模型用于对图像进行人脸识别处理。
  25. 根据权利要求23所述的装置,其特征在于,所述模型总处理模块包括:
    模型传输模块,用于在所述第一运行环境中将所述人脸识别模型进行初始化,并将初始化后的人脸识别模型进行压缩处理;
    模型存储模块,用于将所述压缩后的人脸识别模型从所述第一运行环境传入到第二运行环境中进行存储,所述人脸识别模型用于对图像进行人脸识别处理。
  26. 根据权利要求23所述的装置,其特征在于,所述模型总处理模块包括:
    模型分割模块,用于在所述第一运行环境中将所述人脸识别模型进行初始化,并将初始化后的人脸识别模型分割成至少两个模型数据包;
    模型传输模块,用于依次将所述模型数据包从所述第一运行环境传入到第二运行环境,并在所述第二运行环境中根据所述模型数据包生成目标人脸识别模型,所述目标人脸识别模型用于对图像进行人脸识别处理。
  27. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现权利要求1至22中任一项所述的方法。
  28. 一种电子设备,包括存储器及处理器,所述存储器中储存有计算机可读指令,所述指令被所述处理器执行时,使得所述处理器执行权利要求1至22中任一项所述的方法。
PCT/CN2019/082696 2018-08-01 2019-04-15 数据处理方法、装置、计算机可读存储介质和电子设备 WO2020024619A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19843800.4A EP3671551A4 (en) 2018-08-01 2019-04-15 DATA PROCESSING METHOD AND DEVICE, COMPUTER-READABLE STORAGE MEDIUM AND ELECTRONIC DEVICE
US16/740,374 US11373445B2 (en) 2018-08-01 2020-01-10 Method and apparatus for processing data, and computer readable storage medium

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN201810864802.5A CN109145772B (zh) 2018-08-01 2018-08-01 数据处理方法、装置、计算机可读存储介质和电子设备
CN201810866139.2 2018-08-01
CN201810864804.4 2018-08-01
CN201810864802.5 2018-08-01
CN201810866139.2A CN108985255B (zh) 2018-08-01 2018-08-01 数据处理方法、装置、计算机可读存储介质和电子设备
CN201810864804.4A CN109213610B (zh) 2018-08-01 2018-08-01 数据处理方法、装置、计算机可读存储介质和电子设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/740,374 Continuation US11373445B2 (en) 2018-08-01 2020-01-10 Method and apparatus for processing data, and computer readable storage medium

Publications (1)

Publication Number Publication Date
WO2020024619A1 true WO2020024619A1 (zh) 2020-02-06

Family

ID=69230488

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/082696 WO2020024619A1 (zh) 2018-08-01 2019-04-15 数据处理方法、装置、计算机可读存储介质和电子设备

Country Status (3)

Country Link
US (1) US11373445B2 (zh)
EP (1) EP3671551A4 (zh)
WO (1) WO2020024619A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114612972A (zh) * 2022-03-07 2022-06-10 北京拙河科技有限公司 光场相机的人脸识别方法及系统
CN117633841A (zh) * 2023-12-12 2024-03-01 上海合芯数字科技有限公司 加密模块控制器、加密模块、加密系统和加密处理方法

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140231501A1 (en) * 2013-02-15 2014-08-21 Samsung Electronics Co., Ltd. Object tracking methdo and appapratus
CN106682650A (zh) * 2017-01-26 2017-05-17 北京中科神探科技有限公司 基于嵌入式深度学习技术的移动终端人脸识别方法和系统
CN107766713A (zh) * 2017-10-18 2018-03-06 广东欧珀移动通信有限公司 人脸模板数据录入控制方法及相关产品
CN108985255A (zh) * 2018-08-01 2018-12-11 Oppo广东移动通信有限公司 数据处理方法、装置、计算机可读存储介质和电子设备
CN109145772A (zh) * 2018-08-01 2019-01-04 Oppo广东移动通信有限公司 数据处理方法、装置、计算机可读存储介质和电子设备
CN109213610A (zh) * 2018-08-01 2019-01-15 Oppo广东移动通信有限公司 数据处理方法、装置、计算机可读存储介质和电子设备

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004061702A1 (en) 2002-12-26 2004-07-22 The Trustees Of Columbia University In The City Of New York Ordered data compression system and methods
JP4582186B2 (ja) 2008-04-24 2010-11-17 ソニー株式会社 駆動制御装置、駆動制御方法及びプログラム
US8850535B2 (en) 2011-08-05 2014-09-30 Safefaces LLC Methods and systems for identity verification in a social network using ratings
CN102393970B (zh) 2011-12-13 2013-06-19 北京航空航天大学 一种物体三维建模与渲染系统及三维模型生成、渲染方法
US9779381B1 (en) 2011-12-15 2017-10-03 Jda Software Group, Inc. System and method of simultaneous computation of optimal order point and optimal order quantity
CN102402788A (zh) 2011-12-22 2012-04-04 华南理工大学 一种三维超声图像的分割方法
US10747563B2 (en) 2014-03-17 2020-08-18 Vmware, Inc. Optimizing memory sharing in a virtualized computer system with address space layout randomization (ASLR) enabled in guest operating systems wherein said ASLR is enable during initialization of a virtual machine, in a group, when no other virtual machines are active in said group
CN105446713B (zh) 2014-08-13 2019-04-26 阿里巴巴集团控股有限公司 安全存储方法及设备
CN104361311B (zh) 2014-09-25 2017-09-12 南京大学 多模态在线增量式来访识别系统及其识别方法
US20170154269A1 (en) * 2015-11-30 2017-06-01 Seematics Systems Ltd System and method for generating and using inference models
CN105930731B (zh) 2015-12-21 2018-12-28 中国银联股份有限公司 一种安全应用ta交互的方法及装置
US20170255941A1 (en) 2016-03-01 2017-09-07 Google Inc. Facial Template And Token Pre-Fetching In Hands Free Service Requests
CN105930732B (zh) 2016-04-12 2018-11-06 中国电子科技集团公司第五十四研究所 一种适合vpx设备业务板卡的可信启动方法
CN105930733A (zh) 2016-04-18 2016-09-07 浪潮集团有限公司 一种信任链构建方法和装置
CN107451510B (zh) * 2016-05-30 2023-07-21 北京旷视科技有限公司 活体检测方法和活体检测系统
CN107992729A (zh) 2016-10-26 2018-05-04 中国移动通信有限公司研究院 一种控制方法、终端及用户识别模块卡
CN107169343A (zh) 2017-04-25 2017-09-15 深圳市金立通信设备有限公司 一种控制应用程序的方法及终端
CN107341481A (zh) * 2017-07-12 2017-11-10 深圳奥比中光科技有限公司 利用结构光图像进行识别
CN107729889B (zh) 2017-11-27 2020-01-24 Oppo广东移动通信有限公司 图像处理方法和装置、电子设备、计算机可读存储介质
CN108009999A (zh) 2017-11-30 2018-05-08 广东欧珀移动通信有限公司 图像处理方法、装置、计算机可读存储介质和电子设备
US20190236416A1 (en) * 2018-01-31 2019-08-01 Microsoft Technology Licensing, Llc Artificial intelligence system utilizing microphone array and fisheye camera

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140231501A1 (en) * 2013-02-15 2014-08-21 Samsung Electronics Co., Ltd. Object tracking method and apparatus
CN106682650A (zh) * 2017-01-26 2017-05-17 北京中科神探科技有限公司 基于嵌入式深度学习技术的移动终端人脸识别方法和系统
CN107766713A (zh) * 2017-10-18 2018-03-06 广东欧珀移动通信有限公司 人脸模板数据录入控制方法及相关产品
CN108985255A (zh) * 2018-08-01 2018-12-11 Oppo广东移动通信有限公司 数据处理方法、装置、计算机可读存储介质和电子设备
CN109145772A (zh) * 2018-08-01 2019-01-04 Oppo广东移动通信有限公司 数据处理方法、装置、计算机可读存储介质和电子设备
CN109213610A (zh) * 2018-08-01 2019-01-15 Oppo广东移动通信有限公司 数据处理方法、装置、计算机可读存储介质和电子设备

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3671551A4 *

Also Published As

Publication number Publication date
EP3671551A4 (en) 2020-12-30
EP3671551A1 (en) 2020-06-24
US20200151436A1 (en) 2020-05-14
US11373445B2 (en) 2022-06-28

Similar Documents

Publication Publication Date Title
TWI736883B (zh) Image processing method and electronic device
CN109213610B (zh) Data processing method and apparatus, computer-readable storage medium, and electronic device
CN108804895B (zh) Image processing method and apparatus, computer-readable storage medium, and electronic device
CN109145653B (zh) Data processing method and apparatus, electronic device, and computer-readable storage medium
US11275927B2 (en) Method and device for processing image, computer readable storage medium and electronic device
CN108985255B (zh) Data processing method and apparatus, computer-readable storage medium, and electronic device
CN108805024B (zh) Image processing method and apparatus, computer-readable storage medium, and electronic device
CN110324521B (zh) Method and apparatus for controlling a camera, electronic device, and storage medium
JP6756037B2 (ja) User identity verification method, apparatus, and system
US20200151425A1 (en) Image Processing Method, Image Processing Device, Computer Readable Storage Medium and Electronic Device
US20210158509A1 (en) Liveness test method and apparatus and biometric authentication method and apparatus
CN108711054B (zh) Image processing method and apparatus, computer-readable storage medium, and electronic device
WO2019196684A1 (zh) Data transmission method and apparatus, computer-readable storage medium, electronic device, and mobile terminal
WO2020024619A1 (zh) Data processing method and apparatus, computer-readable storage medium, and electronic device
TW201944290A (zh) Face recognition method and mobile terminal
CN108712400B (zh) Data transmission method and apparatus, computer-readable storage medium, and electronic device
WO2020015403A1 (zh) Image processing method and apparatus, computer-readable storage medium, and electronic device
WO2019196669A1 (zh) Laser-based security verification method, apparatus, and terminal device
CN108846310B (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN109145772B (zh) Data processing method and apparatus, computer-readable storage medium, and electronic device
US11170204B2 (en) Data processing method, electronic device and computer-readable storage medium
US11308636B2 (en) Method, apparatus, and computer-readable storage medium for obtaining a target image
CN110770742A (zh) Shaking motion recognition system and method based on facial feature points
WO2019244663A1 (ja) Face authentication system, terminal device, face authentication method, and computer-readable recording medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19843800

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019843800

Country of ref document: EP

Effective date: 20200320

NENP Non-entry into the national phase

Ref country code: DE