CN108564032B - Image processing method, image processing device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN108564032B
CN108564032B (application CN201810327216.7A)
Authority
CN
China
Prior art keywords
image
processing unit
corrected
face
speckle
Prior art date
Legal status
Active
Application number
CN201810327216.7A
Other languages
Chinese (zh)
Other versions
CN108564032A (en)
Inventor
周海涛
郭子青
欧锦荣
惠方方
谭筱
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810327216.7A priority Critical patent/CN108564032B/en
Priority to CN202010344912.6A priority patent/CN111523499B/en
Publication of CN108564032A publication Critical patent/CN108564032A/en
Priority to PCT/CN2019/080428 priority patent/WO2019196683A1/en
Priority to EP19784735.3A priority patent/EP3654243A4/en
Priority to US16/740,914 priority patent/US11256903B2/en
Application granted granted Critical
Publication of CN108564032B publication Critical patent/CN108564032B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/81 Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the application provides an image processing method, an image processing apparatus, an electronic device and a computer-readable storage medium. The method comprises the following steps: if a first processing unit receives an image acquisition instruction sent by a second processing unit, controlling a camera module to acquire a target image according to the image acquisition instruction; correcting the target image to obtain a corrected target image; and sending the corrected target image to the second processing unit, where the corrected target image is used for at least one of face detection and face depth information acquisition. The method improves the efficiency with which the second processing unit processes images.

Description

Image processing method, image processing device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the rapid development of intelligent electronic devices and structured light technology, applying structured light in intelligent electronic devices is becoming more and more common. An electronic device can perform face recognition, living body detection, face depth information acquisition and the like according to an infrared image acquired with structured light, which in turn supports operations such as face unlocking, face payment, 3D face beautification and other face-based features of the electronic device.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, electronic equipment and a computer readable storage medium, which can improve the efficiency of data processing.
An image processing method comprising:
if the first processing unit receives an image acquisition instruction sent by the second processing unit, controlling a camera module to acquire a target image according to the image acquisition instruction;
correcting the target image to obtain a corrected target image;
and sending the corrected target image to the second processing unit, wherein the corrected target image is used for at least one of face detection and face depth information acquisition.
An image processing apparatus comprising:
the acquisition module is used for controlling the camera module to acquire a target image according to an image acquisition instruction if the first processing unit receives the image acquisition instruction sent by the second processing unit;
the correction module is used for correcting the target image to obtain a corrected target image;
and the sending module is used for sending the corrected target image to the second processing unit, and the corrected target image is used for at least one of face detection and face depth information acquisition.
An electronic device, comprising a first processing unit, a second processing unit and a camera module, wherein the first processing unit is connected to the second processing unit and the camera module respectively;
the first processing unit is used for receiving an image acquisition instruction sent by the second processing unit and controlling the camera module to acquire a target image according to the image acquisition instruction;
the first processing unit is further used for correcting the target image to obtain a corrected target image;
the first processing unit is further used for sending the corrected target image to the second processing unit;
the second processing unit is used for at least one of detecting the human face and acquiring the depth information of the human face according to the corrected target image.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
if the first processing unit receives an image acquisition instruction sent by the second processing unit, controlling a camera module to acquire a target image according to the image acquisition instruction;
correcting the target image to obtain a corrected target image;
and sending the corrected target image to the second processing unit, wherein the corrected target image is used for at least one of face detection and face depth information acquisition.
According to the method, the apparatus, the electronic device and the computer-readable storage medium in the embodiments of the application, the first processing unit is connected between the second processing unit and the camera module. The first processing unit can preprocess the image acquired by the camera module and then send the preprocessed image to the second processing unit, which improves the efficiency with which the second processing unit processes the image.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. The drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from them without creative effort.
FIG. 1 is a diagram of an exemplary embodiment of an application of an image processing method;
FIG. 2 is a flow diagram of a method of image processing in one embodiment;
FIG. 3 is a flow chart of an image processing method in another embodiment;
FIG. 4 is a flowchart of an image processing method in another embodiment;
FIG. 5 is a flowchart of an image processing method in another embodiment;
FIG. 6 is a diagram illustrating a software architecture for implementing an image processing method according to an embodiment;
FIG. 7 is a block diagram showing the configuration of an image processing apparatus according to an embodiment;
FIG. 8 is a block diagram showing the construction of an image processing apparatus according to another embodiment;
FIG. 9 is a block diagram showing the configuration of an image processing apparatus according to another embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Fig. 1 is a diagram illustrating an application scenario of an image processing method according to an embodiment. As shown in fig. 1, the electronic device 10 may include a camera module 110, a second processing unit 120 and a first processing unit 130. The second processing unit 120 may be a Central Processing Unit (CPU) module. The first processing unit 130 may be a Microcontroller Unit (MCU) module. The first processing unit 130 is connected between the second processing unit 120 and the camera module 110; the first processing unit 130 can control the laser camera 112, the floodlight 114 and the laser lamp 118 in the camera module 110, and the second processing unit 120 can control the RGB (Red/Green/Blue) camera 116 in the camera module 110.
The camera module 110 includes a laser camera 112, a floodlight 114, an RGB camera 116 and a laser lamp 118. The laser camera 112 is an infrared camera and is configured to acquire infrared images. The floodlight 114 is a surface light source capable of emitting infrared light; the laser lamp 118 is a point light source that emits patterned laser light. When the floodlight 114 emits its surface light, the laser camera 112 can obtain an infrared image from the reflected light. When the laser lamp 118 emits its point light, the laser camera 112 can obtain a speckle image from the reflected light. The speckle image is the image obtained when the patterned light emitted by the laser lamp 118 is reflected and the pattern is deformed.
The second processing unit 120 may include a CPU core that operates in a TEE (Trusted Execution Environment) and a CPU core that operates in a REE (Rich Execution Environment). The TEE and the REE are both operating modes of an ARM (Advanced RISC Machine) processor. The TEE has the higher security level, and only one CPU core in the second processing unit 120 can operate in the TEE at a time. Generally, operations with a higher security level in the electronic device 10 need to be executed on the CPU core in the TEE, while operations with a lower security level can be executed on the CPU core in the REE.
The first processing unit 130 includes a PWM (Pulse Width Modulation) module 132, an SPI/I2C (Serial Peripheral Interface/Inter-Integrated Circuit) interface 134, a RAM (Random Access Memory) module 136 and a depth engine 138. The PWM module 132 can emit pulses to the camera module to turn on the floodlight 114 or the laser lamp 118, so that the laser camera 112 can collect infrared images or speckle images. The SPI/I2C interface 134 is used for receiving the image acquisition instruction sent by the second processing unit 120. The depth engine 138 can process speckle images to obtain a depth disparity map.
When the second processing unit 120 receives a data acquisition request from an application program, for example when the application needs to perform face unlocking or face payment, it may send an image acquisition instruction to the first processing unit 130 through the CPU core operating in the TEE. After the first processing unit 130 receives the image acquisition instruction, the PWM module 132 emits pulse waves to turn on the floodlight 114 in the camera module 110 and collect an infrared image through the laser camera 112, and to turn on the laser lamp 118 in the camera module 110 and collect a speckle image through the laser camera 112. The camera module 110 may send the collected infrared image and speckle image to the first processing unit 130. The first processing unit 130 may process the received infrared image to obtain an infrared disparity map, and process the received speckle image to obtain a speckle disparity map or a depth disparity map. Here, processing the infrared image and the speckle image means correcting them to remove the influence of the internal and external parameters of the camera module 110 on the images. The first processing unit 130 can be set to different modes, and the images output in the different modes differ. When the first processing unit 130 is set to the speckle map mode, it processes the speckle image to obtain a speckle disparity map, from which a target speckle image can be obtained; when it is set to the depth map mode, it processes the speckle image to obtain a depth disparity map, from which a depth image, i.e. an image carrying depth information, can be obtained. The first processing unit 130 may send the infrared disparity map and the speckle disparity map to the second processing unit 120, or it may send the infrared disparity map and the depth disparity map to the second processing unit 120. The second processing unit 120 may obtain a target infrared image from the infrared disparity map and a depth image from the depth disparity map. Further, the second processing unit 120 may perform face recognition, face matching and living body detection according to the target infrared image and the depth image, and acquire depth information of the detected face.
The first processing unit 130 and the second processing unit 120 communicate through fixed secure interfaces to ensure the security of the transmitted data. As shown in fig. 1, the data sent by the second processing unit 120 to the first processing unit 130 passes through SECURE SPI/I2C 140, and the data sent by the first processing unit 130 to the second processing unit 120 passes through a secure Mobile Industry Processor Interface (SECURE MIPI) 150.
In an embodiment, the first processing unit 130 may also obtain a target infrared image according to the infrared disparity map, calculate and obtain a depth image according to the depth disparity map, and then send the target infrared image and the depth image to the second processing unit 120.
In one embodiment, the first processing unit 130 may also perform face recognition, face matching and living body detection according to the target infrared image and the depth image, and acquire depth information of the detected face. When the first processing unit 130 sends images to the second processing unit 120, it sends them to the CPU core of the second processing unit 120 that runs in the TEE.
In the embodiment of the application, the electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
FIG. 2 is a flow diagram of a method of image processing in one embodiment. As shown in fig. 2, an image processing method includes:
Step 202, if the first processing unit receives the image acquisition instruction sent by the second processing unit, controlling the camera module to acquire the target image according to the image acquisition instruction.
The first processing unit refers to a processor for processing data, such as the MCU module 130 in fig. 1. The second processing unit also refers to a processor for processing data, such as the CPU module 120 in fig. 1. The first processing unit is connected between the second processing unit and the camera module and can control the camera module according to instructions from the second processing unit. The second processing unit can operate in a first operating environment; when the first processing unit receives an image acquisition instruction sent by the second processing unit in the first operating environment, it can control the camera module to acquire a target image according to the received instruction. The first operating environment refers to an operating environment with a higher security level, such as the TEE. Optionally, the electronic device further provides a second operating environment with a lower security level, such as the REE. The target image includes an infrared image and a speckle image.
When an application program in the electronic device needs to acquire face depth information, it may send a data acquisition request to the second processing unit; the request may include a face depth information acquisition instruction, an RGB image acquisition instruction and the like. When the second processing unit receives the data acquisition request and detects that it includes a face depth information acquisition instruction, the second processing unit switches to the first operating environment and, from the first operating environment, sends an image acquisition instruction to the first processing unit. The image acquisition instruction may cover acquiring an infrared image and a speckle image. Optionally, the image acquisition instruction may further cover acquiring an RGB image.
When the first processing unit receives the image acquisition instruction, it can control the floodlight in the camera module to turn on and collect an infrared image through the laser camera, and control the laser lamp in the camera module to turn on and collect a speckle image through the laser camera. The first processing unit turns on the floodlight or the laser lamp by emitting pulses. The floodlight emits infrared light, and the laser lamp emits laser light. The laser emitted by the laser lamp is diffracted by a collimating mirror and a DOE (Diffractive Optical Element) in the structured light module, and the laser camera generates a speckle image from the reflected, diffracted pattern.
Step 204, correcting the target image to obtain a corrected target image.
After the laser camera acquires the infrared image and the speckle image, it can send them to the first processing unit, and the first processing unit can correct the infrared image and the speckle image respectively to obtain a corrected infrared image and a corrected speckle image. Correcting the infrared image and the speckle image means correcting for the internal and external parameters that affect them, such as the deflection angle of the laser camera. Correcting the infrared image yields a corrected infrared image, namely an infrared disparity map; correcting the speckle image yields a speckle disparity map or a depth disparity map. A disparity map is an image that records disparity values relative to a standard image, and the standard image, i.e. the image with the internal and external parameters corrected, can be recovered from the disparity values in the disparity map. For example, a target infrared image can be obtained from the infrared disparity map, a target speckle image from the speckle disparity map, and a depth image from the depth disparity map. The target infrared image is the infrared image after correction of the internal and external parameters, the target speckle image is the speckle image after correction of the internal and external parameters, and the depth image is an image carrying depth information after correction of the internal and external parameters.
The first processing unit can operate in different modes, and it processes the speckle image differently in each mode. When the first processing unit is set to the depth map mode, it processes the speckle image to obtain a depth disparity map; when it is set to the speckle map mode, it processes the speckle image to obtain a speckle disparity map, from which a target speckle image can be obtained.
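For illustration only, the following Python sketch shows how such a mode switch could look. The helper names, the per-pixel difference standing in for real block matching against a calibrated reference speckle pattern, and the triangulation constants are assumptions and are not taken from the patent.

# Hypothetical sketch of mode-dependent handling of a speckle image.
from enum import Enum
import numpy as np

class OutputMode(Enum):
    SPECKLE_MAP = 1   # produce a speckle disparity map
    DEPTH_MAP = 2     # produce a depth disparity map / depth image

def speckle_disparity(captured: np.ndarray, reference: np.ndarray) -> np.ndarray:
    # Real systems block-match the captured pattern against a calibrated
    # reference pattern; a per-pixel difference stands in for that search here.
    return captured.astype(np.float32) - reference.astype(np.float32)

def disparity_to_depth(disparity: np.ndarray, baseline_mm=30.0, focal_px=500.0) -> np.ndarray:
    # Standard triangulation: depth is inversely proportional to disparity.
    safe = np.where(np.abs(disparity) < 1e-3, 1e-3, disparity)
    return baseline_mm * focal_px / safe

def process_speckle_image(captured, reference, mode):
    disparity = speckle_disparity(captured, reference)
    if mode is OutputMode.SPECKLE_MAP:
        return disparity                      # speckle disparity map
    return disparity_to_depth(disparity)      # depth map derived from the disparity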
Step 206, sending the corrected target image to the second processing unit, where the corrected target image is used for at least one of face detection and face depth information acquisition.
The first processing unit may send the corrected infrared image and the corrected speckle image to the second processing unit in the first operating environment. For example, the first processing unit sends the infrared disparity map and the depth disparity map, or the infrared disparity map and the speckle disparity map, to the second processing unit operating in the TEE. All communication channels between the first processing unit and the second processing unit are secure channels. For example, the second processing unit sends the image acquisition instruction to the first processing unit through SECURE SPI/I2C, and the first processing unit sends images to the second processing unit through SECURE MIPI. Because the first processing unit only exchanges data with the second processing unit in the first operating environment, the security of the data exchange can be ensured.
After the first processing unit sends the corrected infrared image and the corrected speckle image to the second processing unit in the first operating environment, the second processing unit can obtain a target infrared image according to the corrected infrared image and obtain a target speckle image or a depth image according to the corrected speckle image. The second processing unit can perform face detection according to the infrared image and the depth image, and the face detection can comprise face recognition, face matching and living body detection. The face recognition means recognizing whether a face exists in an image, the face matching means matching the face in the image with a pre-stored face, and the living body detection means detecting whether the face in the image has biological activity. When the human face is detected to exist in the image and the human face has biological activity, the second processing unit can also acquire the depth information of the detected human face according to the infrared image and the depth image.
The second processing unit may send the depth information of the face to an application program after acquiring the depth information of the detected face. The application program can perform face unlocking, face payment, face 3D beauty, three-dimensional modeling and the like according to the received depth information of the face.
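As a rough illustration of the flow just described, the Python sketch below strings the steps together; face_detect, face_match and is_live are placeholder stubs for whatever recognizers the device actually uses, not the patented implementations.

# Illustrative end-to-end flow in the second processing unit (stubbed detectors).
import numpy as np

def face_detect(infrared):                 # placeholder: return a face region or None
    return (0, 0, infrared.shape[0], infrared.shape[1])

def face_match(face_region, stored_face):  # placeholder matcher against a stored face
    return True

def is_live(face_region, depth_region):    # placeholder liveness check
    return np.count_nonzero(depth_region) > 0

def handle_corrected_images(target_ir, depth_image, stored_face):
    region = face_detect(target_ir)
    if region is None:
        return None
    y0, x0, y1, x1 = region
    if not face_match(target_ir[y0:y1, x0:x1], stored_face):
        return None
    if not is_live(target_ir[y0:y1, x0:x1], depth_image[y0:y1, x0:x1]):
        return None
    # Depth information of the detected face, ready to hand to the application.
    return depth_image[y0:y1, x0:x1]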
Generally, when the second processing unit in the electronic device operates in the first operating environment, its processing speed is limited and its data processing efficiency is low. Taking the CPU cores of the electronic device as an example, there is one and only one CPU core in the TEE, that is, only one CPU core can process data in the TEE at a time, so the efficiency of processing data is low.
In the method in the embodiment of the application, the first processing unit is connected between the second processing unit and the camera module, and the first processing unit can be used for preprocessing the image acquired by the camera module and then sending the preprocessed image to the second processing unit, so that the processing efficiency of the second processing unit is improved.
In one embodiment, the corrected target image includes a corrected infrared image and a corrected speckle image; the method for detecting the human face according to the corrected target image comprises the following steps:
and carrying out face recognition according to the corrected infrared image, and detecting whether a first face exists or not. And if the first face exists, acquiring a depth image according to the corrected speckle image. And performing living body detection according to the corrected infrared image and the depth image.
After receiving the corrected infrared image and the corrected speckle image, the second processing unit can obtain a target infrared image from the corrected infrared image, perform face recognition on the target infrared image, and detect whether a first face exists in it. The first face is a face present in the target infrared image. When the first face exists in the target infrared image, the second processing unit can obtain a depth image from the corrected speckle image, i.e. from the depth disparity map, and perform living body detection according to the depth image. Performing living body detection according to the depth image includes: searching the depth image for the face region corresponding to the first face region, and detecting whether that face region contains depth information and whether the depth information conforms to a three-dimensional face rule. If the face region corresponding to the first face region in the depth image contains depth information and the depth information conforms to the three-dimensional face rule, the first face has biological activity. The three-dimensional face rule is a rule describing the three-dimensional depth information of a face. Optionally, the second processing unit may also apply an artificial intelligence model to the target infrared image and the depth image, extract the texture of the surface of the first face, and detect whether the direction, density and width of the texture conform to face rules; if they do, the first face is determined to have biological activity.
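The depth-based check could be sketched as follows; the coverage and relief thresholds are invented for illustration, since the patent does not specify the concrete rule.

# A hedged sketch of the depth-based liveness rule: the face region must contain
# depth information and show genuine relief (a real face is not planar).
import numpy as np

def depth_liveness(depth_face: np.ndarray, min_coverage=0.6, min_relief_mm=5.0) -> bool:
    valid = depth_face > 0                   # pixels that carry depth information
    if valid.mean() < min_coverage:          # a flat photo yields little or no depth
        return False
    relief = depth_face[valid].max() - depth_face[valid].min()
    return relief >= min_relief_mm           # enough front-to-back variation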
In one embodiment, before acquiring the depth image from the corrected speckle image, the method further includes:
matching the first face with the second face, and determining that the first face is successfully matched with the second face; the second face is a stored face.
After detecting that the first face exists in the target infrared image, the second processing unit can also match the first face with the second face. The second face is a stored face. For example, the face of the owner of the electronic device. The second face may be a face stored on the electronic device side or a face stored on the server side. The second processing unit may use the first face successfully matched with the second face as the target face. And after the first face and the second face are successfully matched, the second processing unit acquires the depth image again, and detects whether the target face has bioactivity according to the target infrared image and the depth image. And when the target face is detected to have biological activity, acquiring the depth information of the target face, and sending the depth information of the target face to an application program.
Optionally, after obtaining the first face, the second processing unit may perform living body detection on the first face to detect whether it has biological activity. When the first face is detected to have biological activity, the first face is matched with the second face to obtain a successfully matched target face. The depth information of the target face is then obtained from the depth image and sent to the application program.
When the second processing unit receives the data acquisition request, whether the application program only needs the depth information of the face or the depth information of the target face can be identified according to the data acquisition request. For example, when depth information of a face is required for 3D beauty, the second processing unit only needs to send the depth information of the face recognized to the application program, and does not need to recognize whether or not it is a target face. When the depth information of the face is required to be used for unlocking the face, the second processing unit is also required to detect whether the identified face is a target face after identifying the face, and then the depth information of the target face is sent to the application program when the identified face is the target face.
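A minimal sketch of this request-dependent branching is given below; the purpose names are hypothetical.

NEEDS_IDENTITY = {"face_unlock", "face_payment"}   # illustrative purpose names

def depth_for_request(purpose: str, matches_stored_face: bool, depth_face):
    # Identity-sensitive purposes require the detected face to match the stored
    # (target) face before its depth information is released to the application.
    if purpose in NEEDS_IDENTITY and not matches_stored_face:
        return None
    return depth_face    # e.g. 3D beautification only needs depth of any detected face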
In the method in the embodiment of the application, the second processing unit can determine the target face through the steps of face recognition, face matching, living body detection and the like, and the method is favorable for quickly acquiring the depth information of the target face.
In one embodiment, controlling the camera module to capture the target image according to the image capture instruction comprises:
and controlling the camera module to collect the infrared image according to the image collection instruction. And controlling a camera module to collect speckle images according to the image collecting instruction. And the time interval between the first moment of acquiring the infrared image and the second moment of acquiring the speckle image is smaller than a first threshold value.
The first processing unit can control the floodlight in the camera module to turn on and collect an infrared image through the laser camera, and it can likewise control the laser lamp in the camera module to turn on and collect a speckle image through the laser camera. To keep the content of the infrared image and the speckle image consistent, the time interval between the first moment at which the camera module collects the infrared image and the second moment at which it collects the speckle image is smaller than a first threshold. For example, the time interval between the first moment and the second moment is less than 5 milliseconds.
The first processing unit can control the camera module to collect the infrared image and the speckle image in either of the following ways (an illustrative timing sketch is given after the two options):
(1) A floodlight controller and a laser lamp controller are arranged in the camera module, and the first processing unit is connected to the two controllers through two separate PWM channels. When the first processing unit needs to turn on the floodlight, it emits a pulse wave to the floodlight controller through one PWM channel; when it needs to turn on the laser lamp, it emits a pulse wave to the laser lamp controller through the other PWM channel. By controlling the interval between the pulse waves emitted on the two PWM channels, the first processing unit keeps the time interval between the first moment and the second moment smaller than the first threshold.
(2) A single controller is arranged in the camera module to control both the floodlight and the laser lamp, and the first processing unit is connected to this controller through one PWM channel. When the floodlight needs to be turned on, the first processing unit emits a pulse wave to the controller through the PWM channel to turn on the floodlight; when the laser lamp needs to be turned on, the first processing unit switches the PWM channel and emits a pulse wave to turn on the laser lamp. The first processing unit controls the switching interval of the PWM channel so that the time interval between the first moment and the second moment is smaller than the first threshold.
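As an illustrative timing sketch for option (1), the following Python code keeps the two capture moments within the first threshold; pulse_floodlight and pulse_laser are placeholders for the actual PWM register writes, and the 5 millisecond value merely echoes the example above.

import time

FIRST_THRESHOLD_S = 0.005    # e.g. 5 ms between the two capture moments

def pulse_floodlight():      # placeholder for the floodlight-controller PWM pulse
    pass

def pulse_laser():           # placeholder for the laser-lamp-controller PWM pulse
    pass

def capture_pair():
    t_infrared = time.monotonic()
    pulse_floodlight()                       # laser camera grabs the infrared frame
    pulse_laser()                            # laser camera grabs the speckle frame
    t_speckle = time.monotonic()
    assert t_speckle - t_infrared < FIRST_THRESHOLD_S, "frames too far apart"
    return t_infrared, t_speckle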
According to the method in the embodiment of the application, the time interval between the collected infrared image and the speckle image is lower than the first threshold value, so that the consistency between the collected infrared image and the collected speckle image can be ensured, a larger error between the infrared image and the speckle image is avoided, and the accuracy of data processing is improved.
In one embodiment, the target image includes an infrared image and a speckle image; controlling the camera module to acquire the target image according to the image acquisition instruction comprises the following steps:
and acquiring a time stamp in the image acquisition instruction. Determining that a time interval between a first time of acquiring the infrared image and the time stamp is less than a second threshold. It is determined that a time interval between the second time instant at which the speckle image is acquired and the timestamp is less than a third threshold.
The image acquisition instruction received by the first processing unit also carries a timestamp, which may record the time at which the application program sent the data acquisition request. After receiving the data acquisition request, the second processing unit can send the image acquisition instruction to the first processing unit, and the first processing unit controls the camera module to collect the infrared image and the speckle image according to the instruction. When doing so, it needs to determine that the time interval between the first moment at which the infrared image is collected and the timestamp is smaller than the second threshold, and that the time interval between the second moment at which the speckle image is collected and the timestamp is smaller than the third threshold. The second threshold and the third threshold may be the same value or different values, for example 3 seconds or 5 seconds.
If the time interval between the first moment at which the infrared image is collected and the timestamp exceeds the second threshold, or the time interval between the second moment at which the speckle image is collected and the timestamp exceeds the third threshold, an invalid instruction can be returned to the second processing unit, and the second processing unit can return the invalid instruction to the application program that sent the data acquisition request, so that the application program resends the data acquisition request.
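A minimal sketch of the timestamp check, assuming the capture moments and the timestamp share one clock; the threshold values are illustrative, not mandated by the patent.

SECOND_THRESHOLD_S = 3.0
THIRD_THRESHOLD_S = 3.0

def captures_are_timely(timestamp, t_infrared, t_speckle) -> bool:
    # Both capture moments must be close enough to the request timestamp,
    # otherwise an invalid result is propagated back and the request is resent.
    return (t_infrared - timestamp < SECOND_THRESHOLD_S
            and t_speckle - timestamp < THIRD_THRESHOLD_S)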
According to the method in the embodiment of the application, the timeliness of the collected infrared image and the collected speckle image can be ensured by controlling the time interval between the first moment of collecting the infrared image and the time stamp in the image collecting instruction and controlling the time interval between the second moment of collecting the speckle image and the time stamp in the image collecting instruction.
In one embodiment, an image processing method includes:
Step 302, if the first processing unit receives the image acquisition instruction sent by the second processing unit, controlling the camera module to acquire the target image according to the image acquisition instruction.
Step 304, correcting the target image to obtain a corrected target image.
Step 306, sending the corrected target image to the second processing unit, where the corrected target image is used for at least one of face detection and face depth information acquisition.
Step 308, if the image acquisition instruction includes acquiring a visible light image, controlling the camera module to simultaneously acquire the infrared image and the visible light image according to the image acquisition instruction.
When the image acquisition instruction also includes acquiring a visible light image, the second processing unit can control the RGB camera in the camera module to collect the visible light image. The first processing unit controls the laser camera to collect the infrared image and the speckle image, while the second processing unit controls the RGB camera to collect the visible light image. To keep the collected images consistent, a timing synchronization line can be added between the laser camera and the RGB camera so that the camera module collects the infrared image and the visible light image simultaneously.
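A rough sketch of the synchronized trigger, using a shared software event to stand in for the hardware timing synchronization line; the camera names and the placeholder frame grab are assumptions.

import threading
import time

def capture(camera_name, trigger, results):
    trigger.wait()                            # both cameras start on the same edge
    results[camera_name] = time.monotonic()   # placeholder for the real frame grab

def capture_ir_and_rgb():
    trigger, results = threading.Event(), {}
    threads = [threading.Thread(target=capture, args=(name, trigger, results))
               for name in ("laser_camera", "rgb_camera")]
    for t in threads:
        t.start()
    trigger.set()                             # simultaneous acquisition
    for t in threads:
        t.join()
    return results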
According to the method, the camera module is controlled to simultaneously acquire the infrared image and the visible light image, so that the acquired infrared image and the acquired visible light image are consistent, and the accuracy of image processing is improved.
In one embodiment, an image processing method includes:
and step 402, if the first processing unit receives an image acquisition instruction sent by the second processing unit, controlling the camera module to acquire a target image according to the image acquisition instruction.
And step 404, correcting the target image to obtain a corrected target image.
And step 406, sending the corrected target image to a second processing unit, wherein the corrected target image is used for at least one of face detection and face depth information acquisition.
In step 408, if a data acquisition request of the application program is received, the security level of the application program is acquired.
At step 410, an accuracy level corresponding to the security level is found.
And step 412, adjusting the precision of the depth image according to the precision level, and sending the adjusted depth image to the application program.
If the second processing unit receives a data acquisition request of the application program, the security level of the application program can be detected. The electronic device can set corresponding security levels for the application programs, and the data precision levels corresponding to the application programs with different security levels are different. For example, the security level of the payment software in the electronic device is higher, the data sent by the second processing unit to the payment software is higher in precision, the security level of the image software is lower, and the data sent by the second processing unit to the image software is lower in precision.
After obtaining the security level of the application program, the second processing unit can look up the precision level corresponding to that security level. The security level is positively correlated with the precision level, that is, the higher the security level of the application program, the higher the corresponding precision level. The higher the precision level, the clearer the resulting image. After obtaining the precision level corresponding to the security level of the application program, the second processing unit can adjust the precision of the depth image according to the precision level and then send the adjusted depth image to the application program, so that the application program can use the depth image for face unlocking, face payment, 3D face beautification and the like.
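A minimal sketch of steps 408 to 412, assuming named security levels and a simple downsampling step; neither the lookup table nor the downsampling factors come from the patent.

import numpy as np

PRECISION_BY_SECURITY = {"high": 3, "medium": 2, "low": 1}   # illustrative mapping

def adjust_precision(depth_image: np.ndarray, precision_level: int) -> np.ndarray:
    # Illustrative: level 3 keeps full resolution, lower levels downsample.
    if precision_level >= 3:
        return depth_image
    step = 4 - precision_level          # level 2 -> every 2nd pixel, level 1 -> every 3rd
    return depth_image[::step, ::step]

def reply_to_app(depth_image: np.ndarray, security_level: str) -> np.ndarray:
    precision_level = PRECISION_BY_SECURITY.get(security_level, 1)   # step 410
    return adjust_precision(depth_image, precision_level)            # step 412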
In one embodiment, adjusting the precision of the target image according to the precision level includes:
(1) and adjusting the resolution of the depth image according to the precision level.
(2) And adjusting the number of scattered spots in the speckle image collected by the camera module according to the precision level.
The second processing unit can adjust the resolution of the depth image when adjusting the precision of the depth image. When the precision level of the depth image is high, the resolution of the depth image is high; when the level of precision of the depth image is low, the resolution of the depth image is low. The resolution of the image can be adjusted by adjusting the number of pixels in the image.
The laser lamp in the camera module can be provided with different preset DOE diffractive elements, and different DOE elements form different numbers of speckle points by diffraction. When the precision level corresponding to the application program is high, the laser lamp can emit laser through a DOE element that forms a large number of speckle points, so that a speckle image with more speckle points is obtained; when the precision level corresponding to the application program is low, the laser lamp can emit laser through a DOE element that forms a small number of speckle points, so that a speckle image with fewer speckle points is obtained.
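The two adjustment knobs could be sketched as follows; the DOE identifiers are invented, since real hardware would select a diffractive element through its own driver interface.

import numpy as np

DOE_TABLE = {"high": "doe_dense_spots", "low": "doe_sparse_spots"}   # made-up names

def adjust_resolution(depth_image: np.ndarray, high_precision: bool) -> np.ndarray:
    # Knob (1): keep or reduce the resolution of the depth image.
    return depth_image if high_precision else depth_image[::2, ::2]

def select_doe(high_precision: bool) -> str:
    # Knob (2): choose a DOE that diffracts more or fewer speckle points.
    return DOE_TABLE["high"] if high_precision else DOE_TABLE["low"]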
According to the method in the embodiment of the application, the precision of the depth image is adjusted according to the security level of the application program, so that applications with different security levels obtain depth images with different precision. This reduces the risk of data leakage by applications with a lower security level and improves the security of the data.
In one embodiment, an image processing method includes:
and step 502, if the first processing unit receives an image acquisition instruction sent by the second processing unit, controlling the camera module to acquire a target image according to the image acquisition instruction.
And step 504, correcting the target image to obtain a corrected target image.
Step 506, the corrected target image is sent to the second processing unit, and the corrected target image is used for at least one of face detection and face depth information acquisition.
Step 508, if a data obtaining request of the application program is received, obtaining the security level of the application program.
Step 510, determining a data channel corresponding to the security level of the application program.
And step 512, sending the depth image to the application program through the corresponding data transmission channel.
The second processing unit can identify the security level of the application program after receiving the data acquisition request of the application program. The second processing unit can transmit the depth image to the application program through the safe channel or the common channel. The security level of the secure channel is different from that of the normal channel. Optionally, the security level of the secure channel is higher, and the security level of the normal channel is lower. When the data is transmitted in the secure channel, the data can be encrypted, so that the data is prevented from being leaked or stolen. The electronic device may set the corresponding data channel according to the security level of the application. Alternatively, the application program with a high security level may correspond to a secure channel, and the application program with a low security level may correspond to a normal channel. For example, the payment class application corresponds to a secure channel and the image class application corresponds to a normal channel. After the data channel corresponding to the security level of the application program is acquired, the second processing unit can send the depth image to the application program through the corresponding data channel, so that the application program can perform the next operation according to the depth image.
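A minimal sketch of the channel selection, assuming a trivial placeholder cipher for the secure channel; the real encryption scheme and channel names are not specified here.

def send_depth(depth_bytes: bytes, security_level: str):
    if security_level == "high":                       # e.g. payment applications
        key = 0x5A
        payload = bytes(b ^ key for b in depth_bytes)  # stand-in for real encryption
        return "secure_channel", payload
    return "normal_channel", depth_bytes               # e.g. image applications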
According to the method, the corresponding data channel is selected to transmit data according to the security level of the application program, and the security of the application program with higher security level when the data is transmitted is guaranteed. For the application program with low security level, the data is directly transmitted without encryption operation, and the data transmission speed of the application program with low security level is improved.
In one embodiment, an image processing method includes:
(1) and if the first processing unit receives the image acquisition instruction sent by the second processing unit, controlling the camera module to acquire the target image according to the image acquisition instruction.
(2) And correcting the target image to obtain a corrected target image.
(3) And sending the corrected target image to a second processing unit, wherein the corrected target image is used for at least one of face detection and face depth information acquisition.
Optionally, the corrected target image comprises a corrected infrared image and a corrected speckle image; the method for detecting the human face according to the corrected target image comprises the following steps: carrying out face recognition according to the corrected infrared image, and detecting whether a first face exists or not; if the first face exists, acquiring a depth image according to the corrected speckle image; and performing living body detection according to the corrected infrared image and the depth image.
Optionally, before acquiring the depth image from the corrected speckle image, the method further comprises: matching the first face with the second face; determining that the first face and the second face are successfully matched; the second face is a stored face.
Optionally, the controlling the camera module to acquire the target image according to the image acquisition instruction includes: controlling a camera module to acquire an infrared image according to an image acquisition instruction; controlling a camera module to collect speckle images according to the image collecting instruction; and the time interval between the first moment of acquiring the infrared image and the second moment of acquiring the speckle image is smaller than a first threshold value.
Optionally, the controlling the camera module to acquire the target image according to the image acquisition instruction includes: acquiring a timestamp in an image acquisition instruction; determining that a time interval between a first moment of acquiring the infrared image and the timestamp is smaller than a second threshold; it is determined that a time interval between the second time instant at which the speckle image is acquired and the timestamp is less than a third threshold.
Optionally, the target image comprises an infrared image; the method further comprises the following steps: and if the image acquisition instruction comprises the acquisition of a visible light image, controlling the camera module to simultaneously acquire an infrared image and a visible light image according to the image acquisition instruction.
Optionally, the method further includes: if a data acquisition request of an application program is received, acquiring the security level of the application program; searching for a precision level corresponding to the security level; and adjusting the precision of the depth image according to the precision level, and sending the adjusted depth image to an application program.
Optionally, adjusting the precision of the target image according to the precision level comprises: adjusting the resolution of the depth image according to the precision level; or the number of scattered spots in the speckle image collected by the camera module is adjusted according to the precision level.
Optionally, the method further includes: if a data acquisition request of an application program is received, acquiring the security level of the application program; determining a data channel corresponding to the security level of the application program; and sending the depth image to the application program through the corresponding data transmission channel.
According to the method in the embodiment of the application, the first processing unit is connected between the second processing unit and the camera module, the first processing unit can preprocess the image acquired by the camera module, and then sends the preprocessed image to the second processing unit, so that the processing efficiency of the second processing unit is improved, and the first processing unit only performs data interaction with the second processing unit in the first operating environment, so that the safety of the data interaction can be ensured.
FIG. 6 is a diagram illustrating a software architecture for implementing an image processing method according to an embodiment. As shown in fig. 6, the software architecture includes an application layer 610, an operating system 620 and a first execution environment 630, where the first execution environment 630 is a trusted execution environment. The hardware layer includes a floodlight & laser lamp 631, a camera 632 and a micro control unit 633. A security service module 634 and an encryption module 635 can run in the first execution environment; the security service module 634 may be the second processing unit running in the first operating environment, for example a CPU core running in the TEE. The operating system 620 includes a security management module 621, a face management module 622, a camera driver 623 and a camera framework 624; the application layer 610 includes an application program 611. The application program 611 can initiate an image acquisition instruction, and the electronic device drives the floodlight & laser lamp 631 and the camera 632 to work through the image acquisition instruction. For example, when payment, unlocking, beautification and similar operations are performed by acquiring a face, the application program initiates an image acquisition instruction for acquiring a face image. After the camera acquires the infrared image and the speckle image, whether the currently acquired images are used for a secure application operation or a non-secure application operation is judged according to the image acquisition instruction. When the acquired depth image is used for secure application operations such as payment and unlocking, the acquired infrared image and speckle image are sent to the micro control unit 633 through a secure channel; the micro control unit 633 calculates a depth disparity map from the speckle image, then calculates a depth image from the depth disparity map, and transmits the calculated depth image and the infrared image to the security service module 634. It is understood that the process of calculating the depth image from the speckle image may also be performed in the security service module 634. The security service module 634 sends the infrared image and the depth image to the encryption module 635. The encryption module 635 may encrypt the depth image and the infrared image according to a pre-stored speckle image, or according to a speckle image acquired in real time, and then send the encrypted depth image and infrared image to the security management module 621. Generally, each application program 611 has a corresponding security management module 621; the security management module 621 decrypts the encrypted depth image and infrared image and sends the decrypted images to the corresponding face management module 622. The face management module 622 performs face detection, recognition, verification and other processing according to the infrared image and the depth image, and then sends the processing result to the upper-layer application program 611, which performs a secure application operation according to the result.
When the acquired depth image is used for non-secure applications such as beautification and AR (Augmented Reality), the infrared image and the speckle image acquired by the camera 632 can be sent directly to the camera driver 623 through a non-secure channel. The camera driver 623 can calculate a disparity map from the speckle image and then calculate a depth image from the disparity map. The camera driver 623 can send the infrared image and the depth image to the camera framework 624, which then sends them to the face management module 622 or the application program 611. Switching between the secure channel and the non-secure channel is performed by the micro control unit 633.
FIG. 7 is a block diagram showing an example of the structure of an image processing apparatus. As shown in fig. 7, an image processing apparatus includes:
The acquisition module 702 is configured to, if the first processing unit receives an image acquisition instruction sent by the second processing unit, control the camera module to acquire a target image according to the image acquisition instruction;
A correcting module 704, configured to correct the target image to obtain a corrected target image;
a sending module 706, configured to send the corrected target image to the second processing unit, where the corrected target image is used for at least one of face detection and obtaining depth information of a face.
Fig. 8 is a block diagram showing the configuration of an image processing apparatus according to another embodiment. As shown in fig. 8, an image processing apparatus includes: an acquisition module 802, a correction module 804, a sending module 806, and a detection module 808. The acquisition module 802, the correction module 804, and the sending module 806 have the same functions as the corresponding modules in fig. 7.
The corrected target image comprises a corrected infrared image and a corrected speckle image; the method for detecting the human face by the detection module 808 according to the corrected target image comprises the following steps: carrying out face recognition according to the corrected infrared image, and detecting whether a first face exists or not; if the first face exists, acquiring a depth image according to the corrected speckle image; and performing living body detection according to the corrected infrared image and the depth image.
In one embodiment, the detection module 808 is further configured to match the first face with a second face before the depth image is acquired according to the corrected speckle image, and to determine that the first face and the second face are successfully matched, where the second face is a stored face.
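The two paragraphs above can be read as a single verification flow, sketched below. The detector, matcher, depth reconstruction, and liveness model are placeholder callables, not the implementations disclosed in this application.

```python
# Sketch of the verification flow of detection module 808; all callables are placeholders.
def verify_face(corrected_ir, corrected_speckle, stored_face,
                detect_face, match_faces, speckle_to_depth, liveness_check):
    first_face = detect_face(corrected_ir)            # face recognition on the corrected IR image
    if first_face is None:                            # no first face -> nothing more to do
        return {"face_present": False, "matched": False, "live": False}
    matched = match_faces(first_face, stored_face)    # compare with the stored second face
    if not matched:
        return {"face_present": True, "matched": False, "live": False}
    depth = speckle_to_depth(corrected_speckle)       # depth image only after a successful match
    live = liveness_check(corrected_ir, depth)        # living body detection on IR + depth
    return {"face_present": True, "matched": True, "live": live}
```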
In one embodiment, the acquisition module 802 controlling the camera module to acquire the target image according to the image acquisition instruction includes: controlling the camera module to acquire an infrared image according to the image acquisition instruction; and controlling the camera module to acquire a speckle image according to the image acquisition instruction, where the time interval between the first moment of acquiring the infrared image and the second moment of acquiring the speckle image is less than a first threshold.
In one embodiment, the target image includes an infrared image and a speckle image, and the acquisition module 802 controlling the camera module to acquire the target image according to the image acquisition instruction includes: acquiring a timestamp in the image acquisition instruction; determining that a time interval between the first moment of acquiring the infrared image and the timestamp is less than a second threshold; and determining that a time interval between the second moment of acquiring the speckle image and the timestamp is less than a third threshold.
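A hedged sketch of these timing checks follows; the millisecond thresholds are placeholders chosen for illustration, not values taken from the disclosure.

```python
# Placeholder thresholds; the disclosure only requires "less than a threshold".
FIRST_THRESHOLD_MS = 20    # max gap between the infrared frame and the speckle frame
SECOND_THRESHOLD_MS = 100  # max gap between the instruction timestamp and the infrared frame
THIRD_THRESHOLD_MS = 100   # max gap between the instruction timestamp and the speckle frame

def frames_are_valid(instruction_ts, ir_ts, speckle_ts):
    """Accept the frame pair only if the two frames were captured close together
    and close to the timestamp carried in the image acquisition instruction."""
    return (abs(ir_ts - speckle_ts) < FIRST_THRESHOLD_MS
            and abs(ir_ts - instruction_ts) < SECOND_THRESHOLD_MS
            and abs(speckle_ts - instruction_ts) < THIRD_THRESHOLD_MS)
```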
In one embodiment, the target image comprises an infrared image; the acquisition module 802 is further configured to, if the image acquisition instruction includes acquiring a visible light image, control the camera module to simultaneously acquire the infrared image and the visible light image according to the image acquisition instruction.
FIG. 9 is a block diagram showing the configuration of an image processing apparatus according to another embodiment. As shown in FIG. 9, the image processing apparatus includes: an acquisition module 902, a correction module 904, a sending module 906, an obtaining module 908, and a searching module 910. The acquisition module 902, the correction module 904, and the sending module 906 have the same functions as the corresponding modules in FIG. 7.
The obtaining module 908 is configured to obtain the security level of an application program if a data acquisition request of the application program is received.
The searching module 910 is configured to search for a precision level corresponding to the security level.
The sending module 906 is further configured to adjust the precision of the depth image according to the precision level and send the adjusted depth image to the application program.
In one embodiment, the sending module 906 adjusting the precision of the depth image according to the precision level includes: adjusting the resolution of the depth image according to the precision level; or adjusting the number of scattered spots in the speckle image collected by the camera module according to the precision level.
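For illustration, the sketch below implements the first option (resolution adjustment) by simple decimation; the mapping from precision level to decimation step is an assumption, and the depth image is assumed to be a 2-D numpy array.

```python
# Assumed mapping from precision level to decimation step.
PRECISION_TO_STEP = {"high": 1, "medium": 2, "low": 4}

def adjust_depth_precision(depth, precision_level):
    """Downsample the depth image so that lower precision levels receive a
    lower-resolution depth map."""
    step = PRECISION_TO_STEP[precision_level]
    return depth if step == 1 else depth[::step, ::step]
```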
In one embodiment, the obtaining module 908 is further configured to obtain the security level of the application program if a data acquisition request of the application program is received.
The searching module 910 is further configured to determine a data channel corresponding to the security level of the application program.
The sending module 906 is further configured to send the depth image to the application program through the corresponding data channel.
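A small sketch of the security-level-to-channel lookup follows; the level names, channel objects, and the application object's attributes are hypothetical and only indicate the shape of the mapping.

```python
# Hypothetical lookup table from security level to delivery channel.
CHANNEL_BY_SECURITY_LEVEL = {"high": "secure_channel", "low": "normal_channel"}

def send_depth_to_app(depth, app, channels):
    """Deliver the depth image over the data channel that matches the security
    level obtained from the application's data acquisition request."""
    channel = channels[CHANNEL_BY_SECURITY_LEVEL[app.security_level]]
    channel.send(app.package_name, depth)
```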
The division of the modules in the image processing apparatus is only for illustration, and in other embodiments, the image processing apparatus may be divided into different modules as needed to complete all or part of the functions of the image processing apparatus.
For specific limitations of the image processing apparatus, reference may be made to the above limitations of the image processing method, which are not repeated here. Each module in the image processing apparatus may be implemented wholly or partially by software, by hardware, or by a combination thereof. The modules may be embedded in, or independent of, a processor of the electronic device in hardware form, or may be stored in a memory of the electronic device in software form, so that the processor can invoke and execute the operations corresponding to the modules.
Each module in the image processing apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on a terminal or a server, and the program modules constituted by the computer program may be stored in the memory of the terminal or the server. When the computer program is executed by a processor, the steps of the method described in the embodiments of the present application are performed.
An embodiment of the present application further provides an electronic device, where the electronic device includes: a first processing unit 120, a second processing unit 130 and a camera module 110.
The first processing unit 120 is respectively connected to the second processing unit 130 and the camera module 110.
The first processing unit 120 is configured to receive an image acquisition instruction sent by the second processing unit 130, and control the camera module 110 to acquire a target image according to the image acquisition instruction.
The first processing unit 120 is further configured to correct the target image to obtain a corrected target image.
The first processing unit 120 is further configured to send the corrected target image to the second processing unit 130.
The second processing unit 130 is configured to perform at least one of face detection and obtaining depth information of a face according to the corrected target image.
In one embodiment, the corrected target image includes a corrected infrared image and a corrected speckle image; the method for the second processing unit 130 to perform face detection according to the corrected target image includes: carrying out face recognition according to the corrected infrared image, and detecting whether a first face exists or not; if the first face exists, acquiring a depth image according to the corrected speckle image; and performing living body detection according to the corrected infrared image and the depth image.
In one embodiment, the second processing unit 130 is further configured to match the first face with a second face before the depth image is acquired according to the corrected speckle image, and to determine that the first face and the second face are successfully matched, where the second face is a stored face.
In one embodiment, the controlling, by the first processing unit 120, the camera module 110 to capture the target image according to the image capturing instruction includes: the first processing unit 120 controls the camera module 110 to collect the infrared image according to the image collecting instruction; the first processing unit 120 controls the camera module 110 to collect speckle images according to the image collecting instruction; and the time interval between the first moment of acquiring the infrared image and the second moment of acquiring the speckle image is smaller than a first threshold value.
In one embodiment, the target image includes an infrared image and a speckle image, and the first processing unit 120 controlling the camera module 110 to acquire the target image according to the image acquisition instruction includes: acquiring a timestamp in the image acquisition instruction; determining that a time interval between the first moment of acquiring the infrared image and the timestamp is less than a second threshold; and determining that a time interval between the second moment of acquiring the speckle image and the timestamp is less than a third threshold.
In one embodiment, the target image comprises an infrared image; the first processing unit 120 is further configured to, if the image acquisition instruction includes acquiring a visible light image, control the camera module 110 to simultaneously acquire the infrared image and the visible light image according to the image acquisition instruction.
In one embodiment, the second processing unit 130 is further configured to: obtain the security level of the application program if a data acquisition request of the application program is received; search for a precision level corresponding to the security level; and adjust the precision of the depth image according to the precision level and send the adjusted depth image to the application program.
In one embodiment, the second processing unit 130 adjusting the precision of the depth image according to the precision level includes: adjusting the resolution of the depth image according to the precision level; or adjusting the number of scattered spots in the speckle image collected by the camera module 110 according to the precision level.
In one embodiment, the second processing unit 130 is further configured to: obtain the security level of the application program if a data acquisition request of the application program is received; determine a data channel corresponding to the security level of the application program; and send the depth image to the application program through the corresponding data channel.
An embodiment of the present application further provides a computer-readable storage medium, namely one or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the image processing method.
An embodiment of the present application further provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the image processing method.
Any reference to memory, storage, database, or other medium used herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application and are described in relative detail, but they are not to be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (18)

1. An image processing method, characterized by being applied to electronic equipment, wherein the electronic equipment at least comprises a camera module, a first processing unit and a second processing unit, the first processing unit is connected between the second processing unit and the camera module, and the second processing unit is in a trusted execution environment (TEE); the method comprises the following steps:
if the first processing unit receives an image acquisition instruction sent by the second processing unit through a secure serial peripheral interface or a secure bidirectional two-wire system synchronous serial bus interface, controlling the camera module to acquire an infrared image and a speckle image according to the image acquisition instruction; the first processing unit performs data interaction with the second processing unit under the TEE; the time interval between the first moment of acquiring the infrared image and the second moment of acquiring the speckle image is smaller than a first threshold value; the image acquisition instruction is sent by an application program of the electronic equipment;
the first processing unit corrects the internal and external parameters of the infrared image and the speckle image to obtain a corrected infrared image and a corrected speckle image; the corrected infrared image and the corrected speckle image are corresponding disparity maps;
the first processing unit sends the corrected infrared image and the corrected speckle image to the second processing unit through a secure mobile industry processor interface, the corrected infrared image and the corrected speckle image are used for instructing the second processing unit to perform at least one of face detection on the corrected infrared image and the corrected speckle image in the TEE and acquisition of depth information of a face, and the obtained result is sent to the application program through a secure channel or a common channel; the secure channel or the common channel is associated with a security level of the application program.
2. The method of claim 1, wherein the method for face detection based on the corrected infrared image and the corrected speckle image comprises:
carrying out face recognition according to the corrected infrared image, and detecting whether a first face exists or not;
if the first face exists, acquiring a depth image according to the corrected speckle image;
and performing living body detection according to the corrected infrared image and the depth image.
3. The method of claim 2, wherein prior to said acquiring a depth image from the corrected speckle image, the method further comprises:
matching the first face with a second face;
determining that the first face and the second face are successfully matched; the second face is a stored face.
4. The method of claim 1, wherein the controlling the camera module to acquire the infrared image and the speckle image according to the image acquisition instruction further comprises:
acquiring a timestamp in the image acquisition instruction;
determining that a time interval between the first moment of acquiring the infrared image and the timestamp is less than a second threshold;
determining that a time interval between the second moment of acquiring the speckle image and the timestamp is less than a third threshold.
5. The method of claim 1, further comprising:
and if the image acquisition instruction comprises the acquisition of a visible light image, controlling the camera module to simultaneously acquire the infrared image and the visible light image according to the image acquisition instruction.
6. A method according to claim 2 or 3, characterized in that the method further comprises:
if a data acquisition request of an application program is received, acquiring the security level of the application program;
searching for a precision level corresponding to the security level;
and adjusting the precision of the depth image according to the precision level, and sending the adjusted depth image to the application program.
7. The method of claim 6, wherein the adjusting the precision of the depth image according to the precision level comprises:
adjusting the resolution of the depth image according to the precision level;
or adjusting the number of scattered spots in the speckle image collected by the camera module according to the precision level.
8. A method according to claim 2 or 3, characterized in that the method further comprises:
if a data acquisition request of an application program is received, acquiring the security level of the application program;
determining a data channel corresponding to the security level of the application program;
and sending the depth image to the application program through the corresponding data channel.
9. An image processing apparatus characterized by comprising:
the acquisition module is used for controlling the camera module to acquire the infrared image and the speckle image according to an image acquisition instruction if the first processing unit receives the image acquisition instruction sent by the second processing unit through a secure serial peripheral interface or a secure bidirectional two-wire system synchronous serial bus interface; the first processing unit performs data interaction with the second processing unit under the TEE; the time interval between the first moment of acquiring the infrared image and the second moment of acquiring the speckle image is smaller than a first threshold value; the image acquisition instruction is sent by an application program of the electronic equipment;
the correction module is used for correcting the internal and external parameters of the infrared image and the speckle image by the first processing unit to obtain a corrected infrared image and a corrected speckle image; the corrected infrared image and the corrected speckle image are corresponding disparity maps;
a sending module, configured to send, by the first processing unit, the corrected infrared image and the corrected speckle image to the second processing unit through a secure mobile industry processor interface, where the corrected infrared image and the corrected speckle image are used to instruct the second processing unit to perform at least one of face detection on the corrected infrared image and the corrected speckle image in the TEE and acquisition of depth information of a face, and to send an obtained result to the application program through a secure channel or a common channel; the secure channel or the common channel is associated with a security level of the application program.
10. An electronic device, comprising: the camera comprises a first processing unit, a second processing unit and a camera module, wherein the first processing unit is respectively connected with the second processing unit and the camera module;
the first processing unit is used for receiving an image acquisition instruction sent by the second processing unit through a secure serial peripheral interface or a secure bidirectional two-wire system synchronous serial bus interface and controlling the camera module to acquire an infrared image and a speckle image according to the image acquisition instruction; the first processing unit performs data interaction with the second processing unit under the TEE; the time interval between the first moment of acquiring the infrared image and the second moment of acquiring the speckle image is smaller than a first threshold value; the image acquisition instruction is sent by an application program of the electronic equipment;
the first processing unit is also used for correcting the internal and external parameters of the infrared image and the speckle image to obtain a corrected infrared image and a corrected speckle image; the corrected infrared image and the corrected speckle image are corresponding disparity maps;
the first processing unit is also used for sending the corrected infrared image and the corrected speckle image to the second processing unit through a secure mobile industry processor interface;
the second processing unit is used for carrying out at least one of face detection and face depth information acquisition according to the corrected infrared image and the corrected speckle image, and sending an obtained result to the application program through a secure channel or a common channel; the secure channel or the common channel is associated with a security level of the application program.
11. The electronic device of claim 10, wherein the method for detecting a human face by the second processing unit according to the corrected infrared image and the corrected speckle image comprises:
carrying out face recognition according to the corrected infrared image, and detecting whether a first face exists or not;
if the first face exists, acquiring a depth image according to the corrected speckle image;
and performing living body detection according to the corrected infrared image and the depth image.
12. The electronic device of claim 11, wherein: the second processing unit is further configured to match the first face with a second face before the depth image is acquired according to the corrected speckle image; determining that the first face and the second face are successfully matched; the second face is a stored face.
13. The electronic device of claim 10, wherein the first processing unit controlling the camera module to acquire the infrared image and the speckle image according to the image acquisition instruction further comprises:
acquiring a timestamp in the image acquisition instruction;
determining that a time interval between the first moment of acquiring the infrared image and the timestamp is less than a second threshold;
determining that a time interval between the second moment of acquiring the speckle image and the timestamp is less than a third threshold.
14. The electronic device of claim 10, wherein:
the first processing unit is further used for controlling the camera module to simultaneously acquire the infrared image and the visible light image according to the image acquisition instruction if the image acquisition instruction comprises acquiring a visible light image.
15. The electronic device according to claim 11 or 12, characterized in that:
the second processing unit is further used for acquiring the security level of the application program if a data acquisition request of the application program is received; searching for a precision level corresponding to the security level; and adjusting the precision of the depth image according to the precision level, and sending the adjusted depth image to the application program.
16. The electronic device of claim 15, wherein the second processing unit to adjust the precision of the depth image according to the precision level comprises:
adjusting the resolution of the depth image according to the precision level;
or adjusting the number of scattered spots in the speckle image collected by the camera module according to the precision level.
17. The electronic device according to claim 11 or 12, characterized in that:
the second processing unit is further used for acquiring the security level of the application program if a data acquisition request of the application program is received; determining a data channel corresponding to the security level of the application program; and sending the depth image to the application program through the corresponding data channel.
18. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of:
if the first processing unit receives an image acquisition instruction sent by the second processing unit through a secure serial peripheral interface or a secure bidirectional two-wire system synchronous serial bus interface, controlling the camera module to acquire an infrared image and a speckle image according to the image acquisition instruction; the first processing unit performs data interaction with the second processing unit under the TEE; the time interval between the first moment of acquiring the infrared image and the second moment of acquiring the speckle image is smaller than a first threshold value; the image acquisition instruction is sent by an application program of the electronic equipment;
the first processing unit corrects the internal and external parameters of the infrared image and the speckle image to obtain a corrected infrared image and a corrected speckle image; the corrected infrared image and the corrected speckle image are corresponding disparity maps;
the first processing unit sends the corrected infrared image and the corrected speckle image to the second processing unit through a secure mobile industry processor interface, the corrected infrared image and the corrected speckle image are used for instructing the second processing unit to perform at least one of face detection on the corrected infrared image and the corrected speckle image in the TEE and acquisition of depth information of a face, and the obtained result is sent to the application program through a secure channel or a common channel; the secure channel or the common channel is associated with a security level of the application program.
CN201810327216.7A 2018-04-12 2018-04-12 Image processing method, image processing device, electronic equipment and computer readable storage medium Active CN108564032B (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN201810327216.7A CN108564032B (en) 2018-04-12 2018-04-12 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN202010344912.6A CN111523499B (en) 2018-04-12 2018-04-12 Image processing method, apparatus, electronic device, and computer-readable storage medium
PCT/CN2019/080428 WO2019196683A1 (en) 2018-04-12 2019-03-29 Method and device for image processing, computer-readable storage medium, and electronic device
EP19784735.3A EP3654243A4 (en) 2018-04-12 2019-03-29 Method and device for image processing, computer-readable storage medium, and electronic device
US16/740,914 US11256903B2 (en) 2018-04-12 2020-01-13 Image processing method, image processing device, computer readable storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810327216.7A CN108564032B (en) 2018-04-12 2018-04-12 Image processing method, image processing device, electronic equipment and computer readable storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202010344912.6A Division CN111523499B (en) 2018-04-12 2018-04-12 Image processing method, apparatus, electronic device, and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN108564032A CN108564032A (en) 2018-09-21
CN108564032B true CN108564032B (en) 2020-05-22

Family

ID=63534859

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201810327216.7A Active CN108564032B (en) 2018-04-12 2018-04-12 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN202010344912.6A Active CN111523499B (en) 2018-04-12 2018-04-12 Image processing method, apparatus, electronic device, and computer-readable storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202010344912.6A Active CN111523499B (en) 2018-04-12 2018-04-12 Image processing method, apparatus, electronic device, and computer-readable storage medium

Country Status (1)

Country Link
CN (2) CN108564032B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019196683A1 (en) 2018-04-12 2019-10-17 Oppo广东移动通信有限公司 Method and device for image processing, computer-readable storage medium, and electronic device
CN108696682B (en) * 2018-04-28 2019-07-09 Oppo广东移动通信有限公司 Data processing method, device, electronic equipment and computer readable storage medium
ES2938471T3 (en) 2018-04-28 2023-04-11 Guangdong Oppo Mobile Telecommunications Corp Ltd Data processing method, electronic device and computer-readable storage medium
JP7327355B2 (en) * 2020-11-05 2023-08-16 トヨタ自動車株式会社 Map update device and map update method
CN113014782B (en) * 2021-03-19 2022-11-01 展讯通信(上海)有限公司 Image data processing method and device, camera equipment, terminal and storage medium
CN112967328A (en) * 2021-03-20 2021-06-15 杭州知存智能科技有限公司 Image depth information local dynamic generation method and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3241151A1 (en) * 2014-12-29 2017-11-08 Keylemon SA An image face processing method and apparatus
CN105184246B (en) * 2015-08-28 2020-05-19 北京旷视科技有限公司 Living body detection method and living body detection system
CN106682522A (en) * 2016-11-29 2017-05-17 大唐微电子技术有限公司 Fingerprint encryption device and implementation method thereof
CN107105217B (en) * 2017-04-17 2018-11-30 深圳奥比中光科技有限公司 Multi-mode depth calculation processor and 3D rendering equipment
CN107341481A (en) * 2017-07-12 2017-11-10 深圳奥比中光科技有限公司 It is identified using structure light image
CN107832677A (en) * 2017-10-19 2018-03-23 深圳奥比中光科技有限公司 Face identification method and system based on In vivo detection

Also Published As

Publication number Publication date
CN111523499B (en) 2023-07-18
CN111523499A (en) 2020-08-11
CN108564032A (en) 2018-09-21

Similar Documents

Publication Publication Date Title
CN108564032B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN108549867B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN108764052B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN108804895B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN108805024B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN110191266B (en) Data processing method and device, electronic equipment and computer readable storage medium
CN110324521B (en) Method and device for controlling camera, electronic equipment and storage medium
CN108573170B (en) Information processing method and device, electronic equipment and computer readable storage medium
CN110248111B (en) Method and device for controlling shooting, electronic equipment and computer-readable storage medium
US11256903B2 (en) Image processing method, image processing device, computer readable storage medium and electronic device
CN108650472B (en) Method and device for controlling shooting, electronic equipment and computer-readable storage medium
CN108711054B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
EP3624006A1 (en) Image processing method, apparatus, computer-readable storage medium, and electronic device
CN109213610B (en) Data processing method and device, computer readable storage medium and electronic equipment
CN108985255B (en) Data processing method and device, computer readable storage medium and electronic equipment
CN108833887B (en) Data processing method and device, electronic equipment and computer readable storage medium
CN108846310B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN108712400B (en) Data transmission method and device, computer readable storage medium and electronic equipment
EP3621294B1 (en) Method and device for image capture, computer readable storage medium and electronic device
US11170204B2 (en) Data processing method, electronic device and computer-readable storage medium
CN109064503B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN108986153B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN108810516B (en) Data processing method and device, electronic equipment and computer readable storage medium
CN108965716B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN108881712B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant