CN109145772B - Data processing method and device, computer readable storage medium and electronic equipment - Google Patents


Info

Publication number
CN109145772B
CN109145772B (Application No. CN201810864802.5A)
Authority
CN
China
Prior art keywords
face recognition
operating environment
recognition model
image
environment
Prior art date
Legal status
Active
Application number
CN201810864802.5A
Other languages
Chinese (zh)
Other versions
CN109145772A (en)
Inventor
郭子青
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810864802.5A
Publication of CN109145772A
Priority to EP19843800.4A
Priority to PCT/CN2019/082696
Priority to US16/740,374 (US11373445B2)
Application granted
Publication of CN109145772B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/94 Hardware or software architectures specially adapted for image or video understanding
    • G06V10/95 Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a data processing method and apparatus, a computer-readable storage medium, and an electronic device. The method comprises the following steps: acquiring a face recognition model stored in a first operating environment; initializing the face recognition model in the first operating environment and compressing the initialized face recognition model; and transmitting the compressed face recognition model from the first operating environment to a second operating environment for storage. The storage space of the first operating environment is larger than that of the second operating environment, and the stored face recognition model serves as a target face recognition model for performing face recognition processing on images. The data processing method and apparatus, computer-readable storage medium, and electronic device can improve data processing efficiency.

Description

Data processing method and device, computer readable storage medium and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a data processing method and apparatus, a computer-readable storage medium, and an electronic device.
Background
Face recognition technology is increasingly used in people's work and life; for example, face images can be collected for payment authentication and unlocking authentication, and captured face images can be beautified. Face recognition technology can detect a face in an image and identify whose face it is, thereby recognizing the user's identity. Because face recognition algorithms are complex, the algorithm models used for face recognition processing also occupy a large amount of storage space.
Disclosure of Invention
The embodiment of the application provides a data processing method and device, a computer readable storage medium and electronic equipment, which can improve the data processing efficiency.
A method of data processing, the method comprising:
acquiring a face recognition model stored in a first operating environment;
initializing the face recognition model in the first operating environment, and compressing the initialized face recognition model;
transmitting the compressed face recognition model from the first operating environment to a second operating environment for storage; wherein the storage space of the first operating environment is larger than that of the second operating environment, and the stored face recognition model serves as a target face recognition model for performing face recognition processing on images.
A data processing apparatus, the apparatus comprising:
the model acquisition module is used for acquiring a face recognition model stored in a first operating environment;
the model transmission module is used for initializing the face recognition model in the first operating environment and compressing the initialized face recognition model;
the model storage module is used for transmitting the compressed face recognition model from the first operating environment to a second operating environment for storage; wherein the storage space of the first operating environment is larger than that of the second operating environment, and the stored face recognition model serves as a target face recognition model for performing face recognition processing on images.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a face recognition model stored in a first operating environment;
initializing the face recognition model in the first operating environment, and compressing the initialized face recognition model;
transmitting the compressed face recognition model from the first operating environment to a second operating environment for storage; wherein the storage space of the first operating environment is larger than that of the second operating environment, and the stored face recognition model serves as a target face recognition model for performing face recognition processing on images.
An electronic device comprising a memory and a processor, the memory having stored therein computer-readable instructions that, when executed by the processor, cause the processor to perform the steps of:
acquiring a face recognition model stored in a first operating environment;
initializing the face recognition model in the first operating environment, and compressing the initialized face recognition model;
transmitting the compressed face recognition model from the first operating environment to a second operating environment for storage; wherein the storage space of the first operating environment is larger than that of the second operating environment, and the stored face recognition model serves as a target face recognition model for performing face recognition processing on images.
The data processing method and apparatus, computer-readable storage medium, and electronic device described above can store the face recognition model in the first operating environment, initialize it there, compress the initialized model, and transmit the compressed model to the second operating environment. Because the storage space of the second operating environment is smaller than that of the first operating environment, initializing the face recognition model in the first operating environment improves initialization efficiency, reduces resource occupancy in the second operating environment, and increases data processing speed. Compressing the face recognition model before transmitting it to the second operating environment further improves data processing speed.
Drawings
To illustrate the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram showing an internal structure of an electronic apparatus according to an embodiment;
FIG. 2 is a flow diagram of a data processing method in one embodiment;
FIG. 3 is a flow chart of a data processing method in another embodiment;
FIG. 4 is a diagram illustrating compression of a face recognition model in one embodiment;
FIG. 5 is a system diagram illustrating a method for implementing data processing in one embodiment;
FIG. 6 is a flowchart of a data processing method in yet another embodiment;
FIG. 7 is a schematic diagram of computing depth information in one embodiment;
FIG. 8 is a flowchart of a data processing method in yet another embodiment;
FIG. 9 is a diagram of a hardware configuration for implementing a data processing method in one embodiment;
FIG. 10 is a schematic diagram showing the structure of a data processing apparatus according to an embodiment;
fig. 11 is a schematic structural diagram of a data processing apparatus according to another embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first client may be referred to as a second client, and similarly, a second client may be referred to as a first client, without departing from the scope of the present application. Both the first client and the second client are clients, but they are not the same client.
Fig. 1 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in fig. 1, the electronic device includes a processor, a memory, and a network interface connected by a system bus. The processor provides computing and control capabilities and supports the operation of the whole electronic device. The memory stores data, programs, and the like; at least one computer program is stored on the memory and can be executed by the processor to implement the data processing method, applicable to the electronic device, provided in the embodiments of the present application. The memory may include a non-volatile storage medium such as a magnetic disk, an optical disc, or a Read-Only Memory (ROM), as well as a Random Access Memory (RAM). For example, in one embodiment, the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by a processor to implement the data processing method provided in the following embodiments. The internal memory provides a cached execution environment for the operating system and computer programs in the non-volatile storage medium. The network interface may be an Ethernet card, a wireless network card, or the like, and is used to communicate with external electronic devices. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
FIG. 2 is a flow diagram of a data processing method in one embodiment. As shown in fig. 2, the data processing method includes steps 202 to 206. Wherein:
step 202, a face recognition model stored in a first operating environment is obtained.
In particular, the electronic device may include a processor, and the processor can store, compute, and transmit data. The processor in the electronic device can operate in different environments; for example, it can operate in a TEE (Trusted Execution Environment) or an REE (Rich Execution Environment). When the processor runs in the TEE, data security is higher; when it runs in the REE, data security is lower.
The electronic device can allocate the processor's resources, dividing different resources among different operating environments. For example, since processes with high security requirements are generally few in the electronic device while ordinary processes are many, the electronic device can assign a small part of the processor's resources to the higher-security operating environment and a large part to the lower-security operating environment.
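As a toy illustration of this resource split (the total size, the fraction, and the environment names below are invented for the example, not values from the patent):

```python
# Hypothetical sketch: a small share of processor storage goes to the
# high-security (second) environment, e.g. a TEE, and the large remainder
# goes to the ordinary (first) environment, e.g. an REE.
TOTAL_STORAGE_MB = 128  # assumed total, for illustration only

def partition_storage(total_mb, secure_fraction=0.125):
    """Return an assumed split of storage between the two environments."""
    secure = int(total_mb * secure_fraction)
    return {"first_env": total_mb - secure, "second_env": secure}

partition = partition_storage(TOTAL_STORAGE_MB)
# The first environment ends up with far more storage than the second.
```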
The face recognition model is an algorithm model for recognizing faces in images and is generally stored as a file. Understandably, because the algorithm for recognizing faces in images is relatively complex, the face recognition model occupies a relatively large amount of storage space. After the electronic device divides the processor into different operating environments, the storage space allocated to the first operating environment is larger than that allocated to the second operating environment, so the electronic device can store the face recognition model in the first operating environment to ensure that the second operating environment has enough space to process data.
Step 204, initializing the face recognition model in the first operating environment, and compressing the initialized face recognition model.
Before the face recognition processing is performed on the image, the face recognition model needs to be initialized. If the face recognition model is stored in the second operating environment, the storage space in the second operating environment needs to be occupied for storing the face recognition model, and the storage space in the second operating environment needs to be occupied for initializing the face recognition model, so that the resource consumption of the second operating environment is too large, and the efficiency of data processing is influenced.
For example, the face recognition model occupies 20M of memory, an additional 10M of memory is required for initializing the face recognition model, and if the storage and initialization are both performed in the second operating environment, a total of 30M of memory of the second operating environment is required. If the face recognition model is stored in the first operating environment, initialized in the first operating environment and then sent to the second operating environment, only 10M of memory in the second operating environment needs to be occupied, and the resource occupancy rate in the second operating environment is greatly reduced.
The electronic equipment stores the face recognition model in the first operating environment, initializes the face recognition model in the first operating environment, and transmits the initialized face recognition model to the second operating environment, so that the occupation of the storage space in the second operating environment can be reduced. After the face recognition model is initialized, the initialized face recognition model can be further compressed, and then the compressed face recognition model is sent to the second operating environment to be stored, so that the resource occupation in the second operating environment is further reduced, and the data processing speed is improved.
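The memory accounting in the 20M/10M example above can be sketched as follows (the figures come from the text; the function and strategy names are illustrative):

```python
MODEL_MB = 20          # storage occupied by the face recognition model (example from text)
INIT_OVERHEAD_MB = 10  # extra memory needed while initializing it (example from text)

def second_env_cost(strategy):
    """Memory the second environment must supply under each strategy."""
    if strategy == "store_and_init_in_second":
        # Model storage plus initialization overhead both land in the second env.
        return MODEL_MB + INIT_OVERHEAD_MB
    if strategy == "init_in_first_then_transmit":
        # Only the initialized model's footprint reaches the second env.
        return INIT_OVERHEAD_MB
    raise ValueError(strategy)
```

Initializing in the first environment and transmitting afterwards cuts the second environment's cost from 30M to 10M in this example.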
Step 206, transmitting the compressed face recognition model from the first operating environment to the second operating environment for storage; the storage space of the first operating environment is larger than that of the second operating environment, and the stored face recognition model serves as a target face recognition model for performing face recognition processing on images.
In one embodiment, step 202 may be executed when an initialization condition is detected to be satisfied. For example, with the face recognition model stored in the first operating environment, the electronic device may initialize the model at startup, when it detects that an application requiring face recognition processing has been opened, or when it detects a face recognition instruction; it then compresses the initialized model and transmits the compressed model to the second operating environment.
In other embodiments provided by the application, before the face recognition model is initialized in the first operating environment, the remaining storage space in the second operating environment may be obtained; and if the residual storage space is smaller than the space threshold, initializing the face recognition model in the first operating environment, and compressing the initialized face recognition model. The space threshold may be set as required, and is generally the sum of the storage space occupied by the face recognition model and the storage space occupied when the face recognition model is initialized.
If the remaining storage space in the second operating environment is large, the face recognition model can be directly sent to the second operating environment, initialization processing is carried out in the second operating environment, and the original face recognition model is deleted after initialization is completed, so that the data security can be ensured. The data processing method may further include: if the residual storage space is larger than or equal to the space threshold, compressing the face recognition model in the first operating environment, and transmitting the compressed face recognition model into the second operating environment; initializing the compressed face recognition model in a second running environment, deleting the face recognition model before initialization, and keeping the face recognition model after initialization.
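The space-threshold decision described above might be sketched like this (the function and path labels are hypothetical; the threshold follows the text's suggestion of model size plus initialization overhead):

```python
def choose_init_path(remaining_mb, model_mb, init_overhead_mb):
    """Decide where to initialize the model, per the space threshold rule.

    The threshold is the sum of the model's storage footprint and the
    memory needed to initialize it, as the text suggests.
    """
    space_threshold = model_mb + init_overhead_mb
    if remaining_mb < space_threshold:
        # Not enough room in the second env: initialize in the first env,
        # then compress and transmit the initialized model.
        return "init_in_first_then_compress_and_transmit"
    # Plenty of room: transmit first, initialize in the second env,
    # and delete the pre-initialization copy afterwards.
    return "transmit_then_init_in_second"
```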
It will be appreciated that a face recognition model may generally include a plurality of processing modules, each performing a different process, and that the plurality of processing modules may be independent of each other. For example, a face detection module, a face matching module, and a liveness detection module may be included. Some of the modules may have relatively low security requirements, and some of the modules may have relatively high security requirements. Therefore, the processing module with lower security requirement can be initialized in the first operating environment, and the processing module with higher security requirement can be initialized in the second operating environment.
Specifically, step 204 may include: and performing first initialization on a first module in the face recognition model in a first running environment, and compressing the face recognition model after the first initialization. Step 206 may also be followed by: and performing second initialization on a second module in the compressed face recognition model, wherein the second module is a module except the first module in the face recognition model, and the security of the first module is lower than that of the second module. For example, the first module may be a face detection module, the second module may be a face matching module and a living body detection module, and the first module has a low requirement on security and is initialized in the first operating environment. The second module has a high requirement on security and is initialized in the second operating environment.
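A minimal sketch of routing modules to an environment by security requirement (the module names follow the example in the text; the rest is assumed):

```python
# Modules with low security requirements are initialized in the first
# (ordinary) environment; the rest in the second (secure) environment.
LOW_SECURITY = {"face_detection"}
HIGH_SECURITY = {"face_matching", "liveness_detection"}

def init_environment(module):
    """Return which environment should initialize the given module."""
    return "first_env" if module in LOW_SECURITY else "second_env"
```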
The data processing method provided by the above embodiment can store the face recognition model in the first operating environment, initialize it there, compress the initialized model, and transmit the compressed model to the second operating environment. Because the storage space of the second operating environment is smaller than that of the first operating environment, initializing the face recognition model in the first operating environment improves initialization efficiency, reduces resource occupancy in the second operating environment, and increases data processing speed. Compressing the face recognition model before transmitting it to the second operating environment further improves data processing speed.
Fig. 3 is a flowchart of a data processing method in another embodiment. As shown in fig. 3, the data processing method includes steps 302 to 316. Wherein:
step 302, a face recognition model stored in a first operating environment is obtained.
Generally, before face recognition processing, a face recognition model is trained, so that the recognition accuracy of the face recognition model is higher. In the process of training the model, a training image set is obtained, images in the training image set are used as the input of the model, and the training parameters of the model are continuously adjusted according to the training result obtained in the training process, so that the optimal parameters of the model are obtained. The more images included in the training image set, the more accurate the model obtained by training, but the time consumption is increased correspondingly.
In one embodiment, the electronic device may be a terminal that interacts with the user, and the face recognition model may be trained on the server due to limited terminal resources. And after the face recognition model is trained by the server, the trained face recognition model is sent to the terminal. And after the terminal receives the trained face recognition model, storing the trained face recognition model in a first operating environment. Step 302 may also be preceded by: the terminal receives the face recognition model sent by the server and stores the face recognition model into a first operating environment of the terminal.
The terminal may include a first operating environment and a second operating environment, and can perform face recognition processing on images in the second operating environment. However, because the storage space allocated to the first operating environment is larger than that allocated to the second, the terminal stores the received face recognition model in the storage space of the first operating environment. In one embodiment, each time a restart of the terminal is detected, the face recognition model stored in the first operating environment may be loaded into the second operating environment, so that when face recognition processing is needed, the model already loaded in the second operating environment can be called directly. Step 302 may specifically include: when a restart of the terminal is detected, acquiring the face recognition model stored in the first operating environment.
It can be understood that the face recognition model can be updated. When the face recognition model is updated, the server sends the updated model to the terminal; after receiving it, the terminal stores the updated model in the first operating environment, overwriting the original face recognition model. The terminal is then controlled to restart, and after the restart it acquires and initializes the updated face recognition model.
Step 304, initializing the face recognition model in the first operating environment, and acquiring the target space capacity for storing the face recognition model in the second operating environment and the data volume of the initialized face recognition model.
Before the face recognition processing is performed by the face recognition model, the face recognition model needs to be initialized. In the initialization process, parameters, modules and the like in the face recognition model can be set to be in default states. Because the memory is also occupied in the process of initializing the model, the terminal can initialize the face recognition model in the first operating environment and then send the initialized face recognition model to the second operating environment, so that the face recognition processing can be directly carried out in the second operating environment without occupying extra memory to initialize the model.
After the face recognition model is initialized, the initialized model can be further compressed. Specifically, the target space capacity for storing the face recognition model in the second operating environment and the data volume of the initialized face recognition model can be acquired, and the compression can be performed according to these two values. It should be noted that a storage space dedicated to storing the face recognition model can be reserved in the second operating environment, so that other data cannot occupy it. The target space capacity is the capacity of this dedicated storage space, and the data volume is the size of the data of the initialized face recognition model.
Step 306, calculating a compression coefficient according to the target space capacity and the data volume.
And calculating a compression coefficient according to the target space capacity and the data volume, and then compressing the face recognition model according to the calculated compression coefficient. When the target space capacity is smaller than the data volume, it is indicated that there is not enough storage space for storing the face recognition model in the second operating environment, the face recognition model may be correspondingly compressed according to the target space capacity and the data volume, and then the compressed face recognition model is stored in the second operating environment.
In one embodiment, the step of calculating the compression coefficient may specifically include: if the target space capacity is smaller than the data volume, taking the ratio of the data volume to the target space capacity as the compression coefficient. For example, if the target space capacity is 20M and the data volume of the face recognition model is 31.5M, the compression coefficient is 31.5/20 = 1.575; that is, the face recognition model is compressed by a factor of 1.575. When the target space capacity is greater than or equal to the data volume, the face recognition model may be compressed according to a preset compression coefficient in order to increase the data transmission speed, or may not be compressed at all, which is not limited herein.
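The coefficient calculation can be expressed as follows (a sketch; the fallback value when the model already fits is one of the options the text leaves open):

```python
def compression_coefficient(target_capacity_mb, model_size_mb, preset=1.0):
    """Coefficient by which the model must shrink to fit the reserved space.

    When the model exceeds the reserved space, the coefficient is the ratio
    of the model's data volume to the target capacity (e.g. 31.5 / 20 = 1.575).
    Otherwise a preset coefficient (here 1.0, i.e. no compression) is used.
    """
    if target_capacity_mb < model_size_mb:
        return model_size_mb / target_capacity_mb
    return preset
```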
Step 308, performing compression processing corresponding to the compression coefficient on the initialized face recognition model.
After the compression coefficient is obtained, the initialized face recognition model can be compressed according to it, and the compressed model can be stored in the second operating environment. Understandably, compressing the face recognition model reduces the accuracy of the corresponding face recognition processing. Therefore, to guarantee face recognition accuracy, a maximum compression limit can be set, and compression of the face recognition model must not exceed this limit.
In one embodiment, a compression threshold may be set, and when the compression coefficient is greater than the compression threshold, the accuracy of the face recognition processing performed by the compressed face recognition model is considered to be low. Specifically, step 308 may include: if the compression coefficient is smaller than the compression threshold, performing compression processing corresponding to the compression coefficient on the initialized face recognition model; and if the compression coefficient is greater than or equal to the compression threshold, performing compression processing corresponding to the compression threshold on the initialized face recognition model. After the compression processing is performed according to the compression threshold, the electronic device may reallocate the storage space for storing the compressed face recognition model in the second operating environment according to the size of the data of the compressed face recognition model.
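The threshold clamp described in this step might look like this (names are illustrative):

```python
def effective_coefficient(coefficient, compression_threshold):
    """Clamp the compression coefficient so accuracy is not degraded too far.

    Below the threshold, compress by the computed coefficient; at or above
    it, compress only by the threshold itself (the second environment then
    reallocates storage to fit the resulting model size).
    """
    if coefficient < compression_threshold:
        return coefficient
    return compression_threshold
```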
FIG. 4 is a diagram illustrating compression of a face recognition model in one embodiment. As shown in fig. 4, the face recognition model 402 is stored in a file for a total of 30M. After the face recognition model 402 is compressed, a compressed face recognition model 404 is formed. The compressed face recognition model 404 is also stored in a file, for a total of 20M.
Step 310, transmitting the compressed face recognition model from the first operating environment to a shared buffer, and transmitting the compressed face recognition model from the shared buffer to the second operating environment for storage.
The shared Buffer (Share Buffer) is a channel for the first operating environment and the second operating environment to transmit data, and the first operating environment and the second operating environment can both access the shared Buffer. It should be noted that the electronic device may configure the shared buffer, and may set the space size of the shared buffer according to the requirement. For example, the electronic device may set the storage space of the shared buffer to be 5M or 10M.
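A toy model of moving the model file through a fixed-size shared buffer in chunks (this illustrates the channel's role only; the actual Share Buffer mechanism is hardware/OS specific):

```python
def transmit_via_shared_buffer(model_bytes, buffer_size):
    """Simulate chunked transfer through a buffer both environments can access."""
    received = bytearray()
    for offset in range(0, len(model_bytes), buffer_size):
        chunk = model_bytes[offset:offset + buffer_size]  # first env writes a chunk
        received.extend(chunk)                            # second env reads it out
    return bytes(received)
```

A 5M or 10M buffer, as in the text's example, would move a 20M compressed model in a few passes.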
FIG. 5 is a system diagram illustrating a method for implementing data processing in one embodiment. As shown in FIG. 5, the system includes a first runtime environment 502, a shared buffer 504, and a second runtime environment 506. The first runtime environment 502 and the second runtime environment 506 may perform data transfers through the shared buffer 504. The face recognition model is stored in the first operating environment 502, and the system may acquire the face recognition model stored in the first operating environment 502, initialize the acquired face recognition model, compress the initialized face recognition model, transmit the compressed face recognition model into the shared buffer 504, and transmit the initialized face recognition model into the second operating environment 506 through the shared buffer 504.
In step 312, when the face recognition command is detected, the security level of the face recognition command is determined.
The face recognition models are stored in the first running environment and the second running environment, and the terminal can perform face recognition processing in the first running environment and can also perform face recognition processing in the second running environment. Specifically, the terminal may determine whether to perform face recognition processing in the first operating environment or to perform face recognition processing in the second operating environment according to a face recognition instruction that triggers the face recognition processing.
The face recognition instruction is initiated by an upper-layer application of the terminal; when the upper-layer application initiates the face recognition instruction, information such as the time of initiating the instruction, an application identifier, and an operation identifier may be written into the face recognition instruction. The application identifier may identify the application program that initiates the face recognition instruction, and the operation identifier may identify the application operation that requires the face recognition result. For example, application operations such as payment, unlocking, and beautification can be performed based on the face recognition result, and the operation identifier in the face recognition instruction indicates which of these application operations is to be performed.
The security level indicates how high the security requirement of the application operation is: the higher the security level, the higher the requirement of the application operation on security. For example, if the security requirement of a payment operation is high and the security requirement of a beautification operation is low, the security level of the payment operation is higher than that of the beautification operation. The security level may be written directly into the face recognition instruction, and after the terminal detects the face recognition instruction, it directly reads the security level from the instruction. A correspondence between operation identifiers and security levels may also be established in advance; after the face recognition instruction is detected, the corresponding security level is obtained from the operation identifier in the instruction.
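The two ways of obtaining the security level can be sketched as follows; the operation identifiers and numeric levels are hypothetical, since the patent does not define concrete values.

```python
# Hypothetical mapping from operation identifiers to security levels.
OPERATION_SECURITY_LEVELS = {"payment": 3, "unlock": 2, "beauty": 1}

def security_level_of(instruction: dict) -> int:
    # A level written directly into the face recognition instruction takes
    # precedence; otherwise it is looked up from the operation identifier.
    if "security_level" in instruction:
        return instruction["security_level"]
    return OPERATION_SECURITY_LEVELS[instruction["operation_id"]]
```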
And step 314, if the security level is lower than the level threshold, performing face recognition processing according to the face recognition model in the first operating environment.
When the security level is lower than the level threshold value, the security requirement of the application operation initiating the face recognition processing is considered to be low, and the face recognition processing can be directly performed in the first running environment according to the face recognition model. Specifically, the face recognition processing may include, but is not limited to, one or more of face detection, face matching, and living body detection, where the face detection refers to a process of detecting whether a face exists in an image, the face matching refers to a process of matching a detected face with a preset face, and the living body detection refers to a process of detecting whether a face in an image is a living body.
Step 316, if the security level is higher than the level threshold, performing face recognition processing according to the face recognition model in a second operating environment; the safety of the second operation environment is higher than that of the first operation environment.
When the security level is higher than the level threshold, the security requirement of the application operation initiating the face recognition processing is considered to be high, and the face recognition processing can be performed according to the face recognition model in the second running environment. Specifically, the terminal can send the face recognition instruction to a second operation environment, and the camera module is controlled to collect images through the second operation environment. The collected image is firstly sent to a second running environment, the safety level of the application operation is judged in the second running environment, and if the safety level is lower than a level threshold value, the collected image is sent to the first running environment for face recognition processing; and if the safety level is higher than the level threshold, performing face recognition processing on the acquired image in a second running environment.
Specifically, as shown in fig. 6, when performing the face recognition processing in the first operating environment, the method includes:
step 602, controlling the camera module to collect a first target image and a speckle image, and sending the first target image to a first operating environment and sending the speckle image to a second operating environment.
An application installed in the terminal can initiate a face recognition instruction and send it to the second operating environment. When the security level of the detected face recognition instruction in the second operating environment is lower than the level threshold, the camera module can be controlled to collect the first target image and the speckle image. The first target image collected by the camera module can be sent directly to the first operating environment, and the collected speckle image is sent to the second operating environment.
In one embodiment, the first target image may be a visible light image or another type of image, which is not limited herein. When the first target image is a visible light image, the camera module may include an RGB (Red Green Blue) camera, and the first target image is collected by the RGB camera. The camera module may further include a laser lamp and a laser camera; the terminal can control the laser lamp to turn on, and then collect, through the laser camera, the speckle image formed when the laser speckle emitted by the laser lamp irradiates an object.
Specifically, when laser is irradiated on an optically rough surface with average fluctuation larger than the wavelength order, wavelets scattered by randomly distributed surface elements on the surface are mutually superposed to enable a reflected light field to have random spatial light intensity distribution and present a granular structure, namely laser speckle. The laser speckles formed are highly random, and therefore, the laser speckles generated by the laser emitted by different laser emitters are different. When the resulting laser speckle is projected onto objects of different depths and shapes, the resulting speckle images are not identical. The laser speckles formed by different laser lamps have uniqueness, so that the obtained speckle images also have uniqueness.
And step 604, calculating to obtain a depth image according to the speckle image in the second operating environment, and sending the depth image to the first operating environment.
The terminal can ensure that the speckle images are processed in a safe environment all the time in order to protect the safety of data, so the terminal can transmit the speckle images to a second operating environment for processing. The depth image is an image representing depth information of a subject, and is calculated from the speckle image. The terminal can control the camera module to simultaneously acquire the first target image and the speckle image, and the depth information of the object in the first target image can be represented according to the depth image obtained by the speckle image calculation.
A depth image may be computed from the speckle image and a reference image in the second operating environment. The reference image is an image acquired when the laser speckle is irradiated onto a reference plane, so the reference image carries reference depth information. First, the relative depth can be calculated according to the positional offset of the speckle points in the speckle image relative to the corresponding speckle points in the reference image; the relative depth represents the depth of the actual photographed object relative to the reference plane. The actual depth information of the object is then calculated from the obtained relative depth and the reference depth. Specifically, the reference image is compared with the speckle image to obtain offset information, the offset information representing the horizontal offset of a speckle point in the speckle image relative to the corresponding speckle point in the reference image; the depth image is then calculated from the offset information and the reference depth information.
FIG. 6 is a schematic diagram of computing depth information in one embodiment. As shown in fig. 6, the laser lamp 602 may generate laser speckles, which are reflected by an object and then captured by the laser camera 604 to form an image. In the calibration process of the camera, laser speckles emitted by the laser lamp 602 are reflected by the reference plane 608, reflected light is collected by the laser camera 604, and a reference image is obtained by imaging through the imaging plane 610. The reference depth L from the reference plane 608 to the laser lamp 602 is known. In the process of actually calculating the depth information, laser speckles emitted by the laser lamp 602 are reflected by the object 606, reflected light is collected by the laser camera 604, and an actual speckle image is obtained by imaging through the imaging plane 610. The calculation formula for obtaining the actual depth information is as follows:
Dis = (CD × L × f) / (CD × f + AB × L)
where L is the distance between the laser lamp 602 and the reference plane 608, f is the focal length of the lens in the laser camera 604, CD is the distance between the laser lamp 602 and the laser camera 604, and AB is the offset distance between the image of the object 606 and the image of the reference plane 608. AB may be the product of the pixel offset n and the actual distance p between adjacent pixels. When the distance Dis between the object 606 and the laser lamp 602 is greater than the distance L between the reference plane 608 and the laser lamp 602, AB is a negative value; when the distance Dis between the object 606 and the laser lamp 602 is less than the distance L between the reference plane 608 and the laser lamp 602, AB is a positive value.
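Using the symbols defined above, the depth relation can be checked numerically; the sample values below are illustrative only. Note that with AB = 0 (object on the reference plane) the formula reduces to Dis = L, and a negative AB yields Dis > L, matching the sign convention in the text.

```python
def actual_depth(L: float, f: float, CD: float, AB: float) -> float:
    """Depth Dis from the object to the laser lamp.

    L  - reference depth (laser lamp to reference plane)
    f  - focal length of the laser camera lens
    CD - baseline between the laser lamp and the laser camera
    AB - imaged speckle offset (n * p); negative when the object is
         farther than the reference plane, positive when nearer.
    """
    return (CD * L * f) / (CD * f + AB * L)
```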
And 606, performing face recognition processing on the first target image and the depth image through a face recognition model in the first running environment.
After the depth image is obtained through calculation in the second running environment, the depth image obtained through calculation can be sent to the first running environment, then face recognition processing is carried out according to the first target image and the depth image in the first running environment, the first running environment sends a face recognition result to the upper layer application, and the upper layer application can carry out corresponding application operation according to the face recognition result.
For example, when the image is subjected to the beauty processing, the position and the area where the face is located can be detected by the first target image. Because the first target image and the depth image are corresponding, the depth information of the face can be obtained through the corresponding area of the depth image, the three-dimensional feature of the face can be constructed through the depth information of the face, and therefore the face can be beautified according to the three-dimensional feature of the face.
In other embodiments provided in the present application, as shown in fig. 8, when performing face recognition processing in the second operating environment, the method specifically includes:
and step 802, controlling the camera module to collect a second target image and a speckle image, and sending the second target image and the speckle image to a second operating environment.
In one embodiment, the second target image can be an infrared image, the camera module can comprise a floodlight, a laser lamp and a laser camera, the floodlight can be controlled by the terminal to be turned on, and then the infrared image formed by irradiating an object through the floodlight is collected through the laser camera to serve as the second target image. The terminal can also control the laser lamp to be started, and then a laser camera is used for collecting speckle images formed by the laser lamp irradiating objects.
The time interval between collecting the second target image and the speckle image should be short, so as to ensure the consistency of the collected second target image and speckle image, avoid a large error between the two, and improve the accuracy of image processing. Specifically, the camera module is controlled to collect the second target image and the speckle image such that the time interval between a first moment of acquiring the second target image and a second moment of acquiring the speckle image is less than a first threshold.
A floodlight controller and a laser lamp controller may be provided separately and connected through two channels of pulse width modulation (PWM). When the floodlight or the laser lamp needs to be turned on, a pulse wave may be transmitted through PWM to the corresponding controller to turn it on, and the time interval between acquiring the second target image and the speckle image is controlled by the pulse waves transmitted to the two controllers. It is understood that the second target image may be an infrared image or another type of image, which is not limited herein; for example, the second target image may also be a visible light image.
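The consistency check on the two capture moments can be sketched as below; the numeric value of the first threshold is an assumption, as the patent does not specify one.

```python
FIRST_THRESHOLD_MS = 20.0  # assumed value for the "first threshold"

def frames_consistent(target_time_ms: float, speckle_time_ms: float) -> bool:
    # The second target image and the speckle image must be captured close
    # enough in time for their contents to correspond to the same scene.
    return abs(target_time_ms - speckle_time_ms) < FIRST_THRESHOLD_MS
```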
And step 804, calculating to obtain a depth image according to the speckle image in the second operating environment.
It should be noted that, when the security level of the face recognition instruction is higher than the level threshold, it is considered that the security requirement of the application operation initiating the face recognition instruction is higher, and then the face recognition processing needs to be performed in an environment with higher security, so as to ensure the security of data processing. And the second target image and the speckle image acquired by the camera module are directly sent to a second operation environment, and then the depth image is calculated according to the speckle image in the second operation environment.
Step 806, performing face recognition processing on the second target image and the depth image through the face recognition model in the second operating environment.
In one embodiment, when the face recognition processing is performed in the second operating environment, the face detection may be performed according to the second target image, and whether the second target image includes the target face or not may be detected. And if the second target image contains the target face, matching the detected target face with a preset face. And if the detected target face is matched with the preset face, acquiring target depth information of the target face according to the depth image, and detecting whether the target face is a living body according to the target depth information.
When the target face is matched, the face attribute features of the target face can be extracted, the extracted face attribute features are matched with the face attribute features of the preset face, and if the matching value exceeds the matching threshold value, the face matching is considered to be successful. For example, the characteristics of the human face, such as the deflection angle, the brightness information, the facial features and the like, can be extracted as the human face attribute characteristics, and if the matching degree of the human face attribute characteristics of the target human face and the human face attribute characteristics of the preset human face exceeds 90%, the human face matching is considered to be successful.
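Threshold-based matching of face attribute features can be sketched as follows. The 90% threshold follows the example in the text; cosine similarity is an assumed choice of matching function, and the feature vectors are purely illustrative.

```python
import math

def match_score(target_features, preset_features) -> float:
    # Cosine similarity between two face attribute feature vectors.
    dot = sum(a * b for a, b in zip(target_features, preset_features))
    norm = math.sqrt(sum(a * a for a in target_features))
    norm *= math.sqrt(sum(b * b for b in preset_features))
    return dot / norm

def faces_match(target_features, preset_features, threshold=0.9) -> bool:
    # Matching succeeds when the score exceeds the threshold.
    return match_score(target_features, preset_features) > threshold
```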
Generally, in face authentication, if a face in a photograph or a sculpture is captured, the extracted face attribute features might still pass authentication. To improve accuracy, living body detection may be performed according to the acquired depth image, so that the captured face must be a living face before authentication succeeds. It can be understood that the acquired second target image represents detail information of the face, the acquired depth image represents the corresponding depth information, and living body detection may be performed according to the depth image. For example, if the photographed face is a face in a photograph, it can be determined from the depth image that the captured face is not three-dimensional, and the captured face can be considered a non-living face.
Specifically, performing the living body detection according to the depth image includes: searching the depth image for face depth information corresponding to the target face; if face depth information corresponding to the target face exists in the depth image and conforms to the face stereo rule, the target face is a living face. The face stereo rule is a rule requiring three-dimensional depth information of a face.
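A minimal sketch of the stereo rule: a real face exhibits depth variation across the face region, while a flat photograph yields near-constant depth. The 5 mm spread threshold is an assumed illustrative value, not one from the patent.

```python
def is_live_face(face_region_depths) -> bool:
    # Depth values (in millimetres) sampled from the target face's region
    # of the depth image.
    if not face_region_depths:
        return False  # no face depth information found in the depth image
    # A live face shows more than ~5 mm of depth spread across the region.
    return max(face_region_depths) - min(face_region_depths) > 5.0
```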
In an embodiment, an artificial intelligence model may be further used to perform artificial intelligence recognition on the second target image and the depth image, acquire a living body attribute feature corresponding to the target face, and determine whether the target face is a living body face image according to the acquired living body attribute feature. The living body attribute features may include skin characteristics, a direction of a texture, a density of the texture, a width of the texture, and the like corresponding to the target face, and if the living body attribute features conform to a living body rule of the face, the target face is considered to have biological activity, that is, the target face is the living body face.
It is to be understood that, when processing such as face detection, face matching, and living body detection is performed, the processing order may be changed as necessary. For example, the human face may be authenticated first, and then whether the human face is a living body may be detected. Or whether the human face is a living body can be detected firstly, and then the human face is authenticated.
In the embodiment provided by the application, in order to ensure the safety of data, when the face recognition model is transmitted, the compressed face recognition model can be encrypted, and the encrypted face recognition model is transmitted from the first operating environment to the second operating environment; and decrypting the face recognition model after the encryption processing in a second running environment, and storing the face recognition model after the decryption processing.
The first operating environment may be a normal operating environment and the second operating environment a secure operating environment, the second operating environment being more secure than the first. The first operating environment is generally configured to process application operations with lower security requirements, and the second operating environment to process application operations with higher security requirements. For example, operations with low security requirements, such as shooting and gaming, may be performed in the first operating environment, and operations with high security requirements, such as payment and unlocking, may be performed in the second operating environment.
The second operating environment is generally used for performing application operations with high security requirements, and therefore, when the face recognition model is sent to the second operating environment, the security of the face recognition model also needs to be ensured. After the face recognition model is compressed in the first operating environment, the compressed face recognition model may be encrypted, and then the encrypted face recognition model may be sent to the second operating environment through the shared buffer.
After the encrypted face recognition model is transmitted from the first operating environment into the shared buffer, it is transmitted from the shared buffer into the second operating environment, where the received encrypted face recognition model is decrypted. The algorithm used to encrypt the face recognition model is not limited in this embodiment; for example, the encryption processing may be performed according to an algorithm such as DES (Data Encryption Standard), MD5 (Message-Digest Algorithm 5, a hash algorithm), HAVAL (a hash algorithm), or Diffie-Hellman (a key exchange algorithm).
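The compress-encrypt-transfer-decrypt flow can be sketched as below. The XOR stream cipher is a deliberately insecure toy stand-in for a real algorithm such as DES, used only to show the data flow; `zlib` is likewise an assumed compression stand-in.

```python
import zlib

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher; NOT secure, for illustration of the flow only.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def transfer_encrypted_model(model: bytes, key: bytes) -> bytes:
    compressed = zlib.compress(model)         # compress in the first environment
    encrypted = xor_cipher(compressed, key)   # encrypt before the shared buffer
    # ... encrypted bytes cross the shared buffer ...
    decrypted = xor_cipher(encrypted, key)    # decrypt in the second environment
    return zlib.decompress(decrypted)         # restore the model for storage
```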
In one embodiment, after the target face recognition model is generated in the second operating environment, the method may further include: deleting the target face recognition model in the second operating environment when it is detected that the target face recognition model has not been called for longer than a duration threshold, or that the terminal is shut down. This releases the storage space occupied in the second operating environment and saves space on the electronic device.
Furthermore, the operation condition can be detected in the operation process of the electronic equipment, and the storage space occupied by the target face recognition model is released according to the operation condition of the electronic equipment. Specifically, when it is detected that the electronic device is in a stuck state and the time length of the non-invoked target face recognition model exceeds a time length threshold, the target face recognition model in the second operating environment is deleted.
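The release condition described above can be sketched as a predicate; the idle threshold value is an illustrative assumption.

```python
IDLE_THRESHOLD_S = 600.0  # assumed 10-minute idle threshold

def should_release_model(last_called_s: float, device_stuck: bool,
                         now_s: float) -> bool:
    # Release the model's storage when the device is in a stuck state and
    # the model has been idle for longer than the duration threshold.
    return device_stuck and (now_s - last_called_s) > IDLE_THRESHOLD_S
```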
After the target face recognition model is released, the face recognition model stored in the first operating environment can be acquired when the electronic equipment is detected to recover to a normal operating state or a face recognition instruction is detected; initializing a face recognition model in a first operating environment, and compressing the initialized face recognition model; and transmitting the compressed face recognition model from the first operating environment to the second operating environment for storage.
The data processing method provided by the above embodiment may store the face recognition model in the first operating environment, initialize the face recognition model in the first operating environment, compress the initialized face recognition model, and transmit the compressed face recognition model to the second operating environment. Because the storage space in the second operating environment is smaller than that in the first operating environment, the face recognition model is initialized in the first operating environment, so that the initialization efficiency of the face recognition model can be improved, the resource occupancy rate in the second operating environment is reduced, and the data processing speed is increased. Meanwhile, the human face recognition model is compressed and then transmitted to a second running environment, and the data processing speed is further improved. In addition, the processing is carried out in the first running environment or the second running environment according to the safety level selection of the face recognition instruction, all the applications are prevented from being processed in the second running environment, and the resource occupancy rate of the second running environment can be reduced.
It should be understood that, although the steps in the flowcharts of fig. 2, 3, 6, and 8 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, these steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in fig. 2, 3, 6, and 8 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; the order of performing these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Fig. 9 is a hardware configuration diagram for implementing the data processing method in one embodiment. As shown in fig. 9, the electronic device may include a camera module 910, a central processing unit (CPU) 920 and a micro control unit (MCU) 930. The camera module 910 includes a laser camera 912, a floodlight 914, an RGB camera 916 and a laser lamp 918. The micro control unit 930 includes a PWM (Pulse Width Modulation) module 932, an SPI/I2C (Serial Peripheral Interface/Inter-Integrated Circuit) module 934, a RAM (Random Access Memory) module 936, and a Depth Engine module 938. The central processing unit 920 may operate in a multi-core mode, and a CPU core in the central processing unit 920 may operate under a TEE or a REE; both the TEE and the REE are operating modes of an ARM (Advanced RISC Machines) module. The natural operating environment in the central processing unit 920 may be the first operating environment, with lower security; the trusted operating environment in the central processing unit 920 is the second operating environment, with higher security. It is understood that, since the micro control unit 930 is a processing module independent of the central processing unit 920, and its input and output are controlled by the central processing unit 920 under the trusted operating environment, the micro control unit 930 is also a processing module with higher security; it can be considered to be in the secure operating environment, i.e. the micro control unit 930 is also in the second operating environment.
Generally, the operation behavior with higher security requirement needs to be executed in the second operation environment, and other operation behaviors can be executed in the first operation environment. In this embodiment, the central processing unit 920 may send a face recognition instruction to the SPI/I2C module 934 in the micro control unit 930 through the trusted operating environment control SECURE SPI/I2C. After receiving the face recognition instruction, if the safety level of the face recognition instruction is determined to be higher than the level threshold, the micro control unit 930 transmits a pulse wave through the PWM module 932 to control the opening of the floodlight 914 in the camera module 910 to collect an infrared image, and controls the opening of the laser light 918 in the camera module 910 to collect a speckle image. The camera module 910 can transmit the collected infrared image and speckle image to a Depth Engine module 938 in the micro-control unit 930, and the Depth Engine module 938 can calculate a Depth image according to the speckle image and transmit the infrared image and the Depth image to a trusted operating environment of the central processor 920. The trusted operating environment of the cpu 920 performs face recognition processing according to the received infrared image and depth image.
If the security level of the face recognition instruction is lower than the level threshold, the PWM module 932 emits pulse waves to control the laser lamp 918 in the camera module 910 to turn on and collect speckle images, and the RGB camera 916 collects visible light images. The camera module 910 sends the collected visible light image directly to the natural operating environment of the central processing unit 920 and transmits the speckle image to the Depth Engine module 938 in the micro control unit 930; the Depth Engine module 938 can calculate the depth image according to the speckle image and send it to the trusted operating environment of the central processing unit 920. The trusted operating environment then sends the depth image to the natural operating environment, and face recognition processing is performed in the natural operating environment according to the visible light image and the depth image.
FIG. 10 is a block diagram of a data processing apparatus according to an embodiment. As shown in fig. 10, the data processing apparatus 1000 includes a model acquisition module 1002, a model transmission module 1004, and a model storage module 1006. Wherein:
a model obtaining module 1002, configured to obtain a face recognition model stored in a first operating environment.
A model transmission module 1004, configured to initialize the face recognition model in the first operating environment, and compress the initialized face recognition model.
A model storage module 1006, configured to transfer the compressed face recognition model from the first operating environment to a second operating environment for storage; the storage space of the first operating environment is larger than that of the second operating environment, and the face recognition model is used for performing face recognition processing on images.
The data processing apparatus provided in the above embodiment may store the face recognition model in the first operating environment, initialize the face recognition model in the first operating environment, compress the initialized face recognition model, and transmit the compressed face recognition model to the second operating environment. Because the storage space in the second operating environment is smaller than that in the first operating environment, the face recognition model is initialized in the first operating environment, so that the initialization efficiency of the face recognition model can be improved, the resource occupancy rate in the second operating environment is reduced, and the data processing speed is increased. Meanwhile, the human face recognition model is compressed and then transmitted to a second running environment, and the data processing speed is further improved.
Fig. 11 is a schematic structural diagram of a data processing apparatus according to another embodiment. As shown in fig. 11, the data processing apparatus 1100 includes a model acquisition module 1102, a model transmission module 1104, a model storage module 1106, and a face recognition module 1108. Wherein:
a model obtaining module 1102, configured to obtain a face recognition model stored in a first operating environment.
A model transmission module 1104, configured to initialize the face recognition model in the first operating environment, and compress the initialized face recognition model.
A model storage module 1106, configured to transfer the compressed face recognition model from the first operating environment to a second operating environment for storage; the storage space of the first operating environment is larger than that of the second operating environment, and the face recognition model is used for performing face recognition processing on images.
A face recognition module 1108, configured to determine a security level of a face recognition instruction when the face recognition instruction is detected; if the safety level is lower than a level threshold, performing face recognition processing according to the face recognition model in the first operating environment; if the safety level is higher than a level threshold, performing face recognition processing according to the face recognition model in the second operating environment; and the safety of the second operation environment is higher than that of the first operation environment.
The data processing apparatus provided in the above embodiment stores the face recognition model in the first operating environment, initializes it there, compresses the initialized model, and transmits the compressed model to the second operating environment. Because the storage space of the second operating environment is smaller than that of the first, initializing the face recognition model in the first operating environment improves initialization efficiency, reduces resource occupancy in the second operating environment, and increases the data processing speed. Compressing the face recognition model before transmitting it to the second operating environment further increases the data processing speed. In addition, whether processing occurs in the first or the second operating environment is selected according to the security level of the face recognition instruction, which avoids handling every application in the second operating environment and reduces its resource occupancy.
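The security-level dispatch just described amounts to a threshold comparison. The sketch below assumes a numeric level scale and a threshold value, neither of which the patent specifies; the two callbacks stand in for entry points into the ordinary and trusted environments. The patent leaves the equal-to-threshold case unstated, so this sketch routes it to the trusted environment as the safer default.

```python
LEVEL_THRESHOLD = 2  # assumed value; the patent does not define the scale

def route_face_recognition(instruction_level, run_in_first_env, run_in_second_env):
    """Dispatch recognition to the ordinary (first) or trusted (second)
    environment based on the instruction's security level.
    The callbacks are placeholders for the real environment entry points."""
    if instruction_level < LEVEL_THRESHOLD:
        return run_in_first_env()    # e.g. a beautification app: speed over isolation
    return run_in_second_env()       # e.g. payment unlock: run inside the trusted env
```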
In one embodiment, the model transmission module 1104 is further configured to obtain the target space capacity reserved for storing the face recognition model in the second operating environment and the data volume of the initialized face recognition model;
calculate a compression coefficient according to the target space capacity and the data volume;
and perform compression processing corresponding to the compression coefficient on the initialized face recognition model.
In one embodiment, the model transmission module 1104 is further configured to use the ratio of the target space capacity to the data volume as the compression coefficient if the target space capacity is smaller than the data volume.
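The coefficient rule above reduces to a single comparison and a ratio. The sketch below follows the stated rule; the no-compression-needed branch (coefficient 1 when the model already fits) is an assumption, since the patent only defines the undersized case.

```python
def compression_coefficient(target_capacity: int, data_size: int) -> float:
    """Per the scheme: when the space reserved in the second environment is
    smaller than the initialized model, use capacity/size as the coefficient.
    Returning 1.0 when the model already fits is an assumed convention."""
    if target_capacity < data_size:
        return target_capacity / data_size
    return 1.0
```

For example, a 100 MB initialized model with only 50 MB reserved yields a coefficient of 0.5, i.e. the model must be compressed to half its size.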
In one embodiment, the model storage module 1106 is further configured to transfer the compressed face recognition model from the first operating environment to a shared buffer, and to transfer it from the shared buffer to the second operating environment for storage.
In one embodiment, the model storage module 1106 is further configured to encrypt the compressed face recognition model, transfer the encrypted model from the first operating environment to the second operating environment, decrypt the encrypted model in the second operating environment, and store the decrypted model.
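The patent does not name a cipher for this encrypt-before-transfer step. The toy sketch below uses a SHA-256 counter-mode keystream purely to show the symmetric pattern (the same call encrypts in the first environment and decrypts in the second); a real implementation would use an authenticated cipher such as AES-GCM, and this code must not be used for actual security.

```python
import hashlib
import itertools

def _keystream(key: bytes, length: int) -> bytes:
    # SHA-256 in counter mode: a stand-in for a real cipher, which the
    # patent does not specify. Illustration only, not secure in practice.
    out = b""
    for counter in itertools.count():
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        if len(out) >= length:
            return out[:length]

def transform(key: bytes, data: bytes) -> bytes:
    """XOR with the keystream; applying it twice with the same key
    restores the original bytes (encrypt in env 1, decrypt in env 2)."""
    ks = _keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))
```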
In one embodiment, the face recognition module 1108 is further configured to control the camera module to acquire a first target image and a speckle image, send the first target image to the first operating environment and the speckle image to the second operating environment, compute a depth image from the speckle image in the second operating environment, send the depth image to the first operating environment, and perform face recognition processing on the first target image and the depth image through the face recognition model in the first operating environment.
In one embodiment, the face recognition module 1108 is further configured to control the camera module to acquire a second target image and a speckle image, send the second target image and the speckle image to the second operating environment, compute a depth image from the speckle image in the second operating environment, and perform face recognition processing on the second target image and the depth image through the face recognition model in the second operating environment.
The division of the modules in the data processing apparatus is only for illustration, and in other embodiments, the data processing apparatus may be divided into different modules as needed to complete all or part of the functions of the data processing apparatus.
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the data processing methods provided by the above-described embodiments.
A computer program product comprising instructions which, when run on a computer, cause the computer to perform the data processing method provided by the above embodiments.
Any reference to memory, storage, database, or other medium used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application, and while their description is specific and detailed, it is not to be construed as limiting the scope of the application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within its scope of protection. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method of data processing, the method comprising:
acquiring a face recognition model stored in a first operating environment;
initializing the face recognition model in the first operating environment, and compressing the initialized face recognition model;
transmitting the compressed face recognition model from the first operating environment to a second operating environment for storage; the storage space of the first operating environment is larger than that of the second operating environment, and the face recognition model is used for carrying out face recognition processing on the image.
2. The method according to claim 1, wherein the compressing the initialized face recognition model comprises:
acquiring target space capacity used for storing the face recognition model in a second operating environment and data volume of the initialized face recognition model;
calculating a compression coefficient according to the target space capacity and the data quantity;
and performing compression processing corresponding to the compression coefficient on the initialized face recognition model.
3. The method of claim 2, wherein calculating the compression factor based on the target spatial capacity and the amount of data comprises:
and if the target space capacity is smaller than the data volume, taking the ratio of the target space capacity to the data volume as a compression coefficient.
4. The method of claim 1, wherein the passing the compressed face recognition model from the first operating environment to a second operating environment for storage comprises:
and transmitting the compressed face recognition model from the first running environment to a shared buffer area, and transmitting the compressed face recognition model from the shared buffer area to a second running environment for storage.
5. The method of claim 1, wherein the passing the compressed face recognition model from the first operating environment to a second operating environment for storage comprises:
encrypting the compressed face recognition model, and transmitting the encrypted face recognition model from the first operating environment to a second operating environment;
and decrypting the face recognition model after the encryption processing in the second running environment, and storing the face recognition model after the decryption processing.
6. The method according to any one of claims 1 to 5, wherein after the transferring the compressed face recognition model from the first operating environment to a second operating environment for storage, further comprising:
when a face recognition instruction is detected, determining a security level of the face recognition instruction;
if the security level is lower than a level threshold, performing face recognition processing according to the face recognition model in the first operating environment;
if the security level is higher than the level threshold, performing face recognition processing according to the face recognition model in the second operating environment; and the security of the second operating environment is higher than that of the first operating environment.
7. The method of claim 6, wherein performing a face recognition process according to the face recognition model in the first operating environment comprises:
controlling a camera module to collect a first target image and a speckle image, sending the first target image to a first operating environment, and sending the speckle image to a second operating environment;
calculating to obtain a depth image according to the speckle image in the second operating environment, and sending the depth image to the first operating environment;
performing face recognition processing on the first target image and the depth image through a face recognition model in the first operating environment;
the performing, in the second operating environment, face recognition processing according to the face recognition model includes:
controlling a camera module to acquire a second target image and a speckle image and sending the second target image and the speckle image to the second operating environment;
calculating to obtain a depth image according to the speckle image in the second operating environment;
and carrying out face recognition processing on the second target image and the depth image through the face recognition model in the second running environment.
8. A data processing apparatus, characterized in that the apparatus comprises:
the model acquisition module is used for acquiring a face recognition model stored in a first operating environment;
the model transmission module is used for initializing the face recognition model in the first operating environment and compressing the initialized face recognition model;
the model storage module is used for transmitting the compressed face recognition model from the first operating environment to a second operating environment for storage; the storage space of the first operating environment is larger than that of the second operating environment, and the face recognition model is used for carrying out face recognition processing on the image.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
10. An electronic device comprising a memory and a processor, the memory having stored therein computer-readable instructions that, when executed by the processor, cause the processor to perform the method of any of claims 1-7.
CN201810864802.5A 2018-08-01 2018-08-01 Data processing method and device, computer readable storage medium and electronic equipment Active CN109145772B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201810864802.5A CN109145772B (en) 2018-08-01 2018-08-01 Data processing method and device, computer readable storage medium and electronic equipment
EP19843800.4A EP3671551A4 (en) 2018-08-01 2019-04-15 Data processing method and apparatus, computer-readable storage medium and electronic device
PCT/CN2019/082696 WO2020024619A1 (en) 2018-08-01 2019-04-15 Data processing method and apparatus, computer-readable storage medium and electronic device
US16/740,374 US11373445B2 (en) 2018-08-01 2020-01-10 Method and apparatus for processing data, and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810864802.5A CN109145772B (en) 2018-08-01 2018-08-01 Data processing method and device, computer readable storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN109145772A CN109145772A (en) 2019-01-04
CN109145772B true CN109145772B (en) 2021-02-02

Family

ID=64798679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810864802.5A Active CN109145772B (en) 2018-08-01 2018-08-01 Data processing method and device, computer readable storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN109145772B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3671551A4 (en) 2018-08-01 2020-12-30 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Data processing method and apparatus, computer-readable storage medium and electronic device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004061702A1 (en) * 2002-12-26 2004-07-22 The Trustees Of Columbia University In The City Of New York Ordered data compression system and methods
WO2017151815A1 (en) * 2016-03-01 2017-09-08 Google Inc. Facial template and token pre-fetching in hands free service requests
CN107729889B (en) * 2017-11-27 2020-01-24 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN108009999A (en) * 2017-11-30 2018-05-08 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and electronic equipment

Also Published As

Publication number Publication date
CN109145772A (en) 2019-01-04

Similar Documents

Publication Publication Date Title
CN109213610B (en) Data processing method and device, computer readable storage medium and electronic equipment
CN108985255B (en) Data processing method and device, computer readable storage medium and electronic equipment
CN110324521B (en) Method and device for controlling camera, electronic equipment and storage medium
CN111126146B (en) Image processing method, image processing device, computer readable storage medium and electronic apparatus
TWI736883B (en) Method for image processing and electronic device
CN108804895B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN108805024B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
US11275927B2 (en) Method and device for processing image, computer readable storage medium and electronic device
CN109145653B (en) Data processing method and device, electronic equipment and computer readable storage medium
CN108573170B (en) Information processing method and device, electronic equipment and computer readable storage medium
US11256903B2 (en) Image processing method, image processing device, computer readable storage medium and electronic device
KR20190038923A (en) Method, apparatus and system for verifying user identity
CN110191266B (en) Data processing method and device, electronic equipment and computer readable storage medium
CN108711054B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN108650472B (en) Method and device for controlling shooting, electronic equipment and computer-readable storage medium
TW201944290A (en) Face recognition method and apparatus, and mobile terminal and storage medium
CN108833887B (en) Data processing method and device, electronic equipment and computer readable storage medium
US11373445B2 (en) Method and apparatus for processing data, and computer readable storage medium
CN109145772B (en) Data processing method and device, computer readable storage medium and electronic equipment
CN108846310B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
US11170204B2 (en) Data processing method, electronic device and computer-readable storage medium
US11308636B2 (en) Method, apparatus, and computer-readable storage medium for obtaining a target image
CN113065507B (en) Method and device for realizing face authentication

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant