CN108573170B - Information processing method and device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN108573170B
CN108573170B (application CN201810326603.9A)
Authority
CN
China
Prior art keywords
image
processing unit
depth
target
application program
Prior art date
Legal status
Active
Application number
CN201810326603.9A
Other languages
Chinese (zh)
Other versions
CN108573170A (en)
Inventor
周海涛
郭子青
欧锦荣
惠方方
谭筱
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810326603.9A priority Critical patent/CN108573170B/en
Publication of CN108573170A publication Critical patent/CN108573170A/en
Priority to EP19784735.3A priority patent/EP3654243A4/en
Priority to PCT/CN2019/080428 priority patent/WO2019196683A1/en
Priority to US16/740,914 priority patent/US11256903B2/en
Application granted granted Critical
Publication of CN108573170B publication Critical patent/CN108573170B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification


Abstract

The application relates to an information processing method, an information processing apparatus, an electronic device, and a computer-readable storage medium. If a first processing unit receives an image acquisition instruction sent by a second processing unit, it controls a camera module to acquire a target image according to the image acquisition instruction, acquires a first image according to the target image, sends the first image to a security processing unit, processes the first image through the security processing unit to obtain a processing result, and sends the processing result to the second processing unit. Because the first processing unit and the security processing unit process the target image together, the data processing speed is improved.

Description

Information processing method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an information processing method, an information processing apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of electronic equipment and face recognition technology, more and more electronic equipment can support functions such as face unlocking and face payment. In general, in order to ensure the security of personal information, payment security, and the like of a user in an electronic device, it is necessary to ensure the security of collected face data.
Disclosure of Invention
The embodiment of the application provides an information processing method, an information processing device, electronic equipment and a computer readable storage medium, which can improve the data processing speed.
An information processing method comprising:
if the first processing unit receives an image acquisition instruction sent by the second processing unit, controlling a camera module to acquire a target image according to the image acquisition instruction;
acquiring a first image according to the target image;
and sending the first image to a security processing unit, processing the first image through the security processing unit to obtain a processing result, and sending the processing result to the second processing unit.
An information processing apparatus comprising:
the acquisition module is used for controlling the camera module to acquire a target image according to an image acquisition instruction when the first processing unit receives the image acquisition instruction sent by the second processing unit;
the processing module is used for acquiring a first image according to the target image;
and the transmission module is used for sending the first image to a security processing unit, processing the first image through the security processing unit to obtain a processing result, and sending the processing result to the second processing unit.
An electronic device, comprising: a first processing unit, a second processing unit, a security processing unit and a camera module, wherein the first processing unit is connected to the second processing unit, the security processing unit and the camera module, respectively, and the security processing unit is connected to the second processing unit;
the first processing unit is used for receiving the image acquisition instruction sent by the second processing unit, controlling the camera module to acquire a target image according to the image acquisition instruction, acquiring a first image according to the target image and sending the first image to the safety processing unit;
the safety processing unit is used for processing the first image to obtain a processing result and sending the processing result to the second processing unit.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the steps of the information processing method.
According to the information processing method and apparatus, the electronic device and the computer-readable storage medium in the embodiments of the application, after the target image is acquired according to the image acquisition instruction, the target image is processed by the first processing unit and the security processing unit together, so that the data processing speed is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a diagram of an application environment of an information processing method in one embodiment;
FIG. 2 is a schematic diagram showing an internal configuration of an electronic apparatus according to an embodiment;
FIG. 3 is a flow diagram of a method of information processing in one embodiment;
FIG. 4 is a flowchart of an information processing method in another embodiment;
FIG. 5 is a flowchart of an information processing method in another embodiment;
FIG. 6 is a diagram showing a software architecture of an information processing method in one embodiment;
FIG. 7 is a block diagram showing the configuration of an information processing apparatus according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first processing unit may be referred to as a second processing unit, and similarly, a second processing unit may be referred to as a first processing unit, without departing from the scope of the present application. The first processing unit and the second processing unit are both processing units, but they are not the same processing unit.
Fig. 1 is a diagram illustrating an application scenario of an information processing method according to an embodiment. As shown in fig. 1, the application environment includes an electronic device 100. The electronic device 100 may receive an image acquisition instruction in a trusted operating environment, acquire an infrared image of a user's face and a target speckle image formed by structured light through the camera module, and process the infrared image and the target speckle image through the first processing unit and the security processing unit to obtain an infrared image and a depth image. The electronic device 100 may be a smartphone, a tablet computer, a personal digital assistant, a wearable device, or the like. Because the first processing unit and the security processing unit process the data together, the processing speed is increased, and because the data processing takes place in a secure environment, the security of the data is ensured.
Fig. 2 is a block diagram of an internal configuration of an electronic device in one embodiment. As shown in fig. 2, the electronic device 200 may include a camera module 210, a first processing unit 220, a second processing unit 230, a security processing unit 240, and the like. The first processing unit 220 is connected to the camera module 210, the second processing unit 230 and the security processing unit 240, respectively.
The camera module 210 may include a first image collector, a first projector, a second image collector, and a second projector. The first image collector, the first projector and the second projector are connected to the first processing unit 220, respectively. The second image collector may be connected to the first processing unit 220 or to the second processing unit 230. The first image collector may be a laser camera 212. The first projector may be a floodlight 214. The second image collector may be an RGB (Red/Green/Blue color mode) camera 216. The second projector may be a laser lamp 218. The laser camera 212 and the RGB camera 216 may each include elements such as a lens and an image sensor. The image sensor is typically a Complementary Metal Oxide Semiconductor (CMOS) or a Charge Coupled Device (CCD). The surface of the image sensor in the laser camera 212 is provided with optical filters corresponding to the pixels one-to-one, so that the intensity of light of different wavelengths can be extracted and the laser camera 212 can collect invisible light images of different wavelengths. A filter may pass light whose wavelength is consistent with the wavelength of the light emitted by the laser lamp 218, such as infrared light or ultraviolet light. The RGB camera 216 may use a Bayer filter to obtain the light intensity information of the three channels (R/G/B) and acquire a color image of the target object. The floodlight 214 may be a laser diode, an LED, or the like. The wavelength of the light emitted by the floodlight 214 is the same as the wavelength of the laser lamp 218. The second projector may include a light source, a lens and a structured light pattern generator, wherein the light source may be a vertical cavity surface emitting laser (VCSEL) or a VCSEL array, and the structured light pattern generator may be ground glass, a diffractive optical element (DOE), or a combination thereof.
The first processing unit 220 may be an MCU (Micro Controller Unit). The MCU may include a PWM (Pulse Width Modulation) module 222, an SPI/I2C (Serial Peripheral Interface/Inter-Integrated Circuit) interface 224, a RAM (Random Access Memory) 226, and a Depth Engine 228. The MCU can control the floodlight 214 and the laser camera 212 to work synchronously through PWM: the floodlight 214 emits flood light to illuminate the target object, and the laser camera 212 collects a flood image; if the floodlight 214 emits infrared light, an infrared image is collected. The MCU can also control the laser lamp 218 and the laser camera 212 to work synchronously through PWM: the laser lamp 218 projects a structured light pattern onto the target object, which is collected by the laser camera 212 to obtain a target speckle image.
In one embodiment, the laser lamp 218 projects a structured light pattern (with a pattern of speckle particles) onto a reference plane at a known distance from the electronic device, and the structured light pattern is collected by the laser camera 212 as a reference speckle image and stored in the memory of the first processing unit 220, the second processing unit 230, or the security processing unit 240. The memory is a non-volatile memory.
The second processing unit 230 may be a CPU processor. The second processing unit 230 includes a CPU core that runs in a TEE (Trusted Execution Environment) and a CPU core that runs in a REE (Rich Execution Environment, i.e. the normal operating environment). Both the TEE and the REE are operating modes of the ARM (Advanced RISC Machine) architecture. Generally, operations with higher security requirements in the electronic device need to be executed in the TEE, while other operations can be executed in the REE. In the embodiment of the present application, when the second processing unit 230 receives a request of an application program for face information, for example when an application program needs face information for unlocking or for payment, the CPU core running in the TEE may send an image acquisition instruction to the SPI/I2C interface 224 in the first processing unit 220 through the SECURE SPI/I2C bus 250, and may transmit a pulse wave through the PWM module 222 to control the floodlight 214 in the camera module 210 to be turned on to acquire an infrared image and to control the laser lamp 218 in the camera module 210 to be turned on to acquire a target speckle image. The camera module 210 may transmit the collected infrared image and target speckle image to the Depth Engine 228 in the first processing unit 220 for processing. The Depth Engine 228 may calculate the collected target speckle image against the reference speckle image to obtain a parallax image containing the offset information of corresponding points in the target speckle image and the reference speckle image, and process the parallax image to obtain a depth image. Alternatively, the first processing unit 220 may send the parallax image to the security processing unit 240 through a Mobile Industry Processor Interface (MIPI) to be processed into a depth image.
The first processing unit 220 performs face recognition according to the acquired infrared image, detecting whether a face exists in the infrared image and whether the detected face matches a stored face; if the face passes the recognition, living body detection is performed according to the infrared image and the depth image to detect whether the face has biological activity. In one embodiment, after acquiring the infrared image and the depth image, the first processing unit 220 may perform living body detection first and then face recognition, or perform face recognition and living body detection simultaneously. After the face recognition passes and the detected face is found to have biological activity, the first processing unit 220 may send intermediate information of the infrared image and the depth image to the security processing unit 240. The security processing unit 240 calculates the depth information of the face from the intermediate information of the infrared image and the depth image, and sends the depth information to the CPU core in the TEE.
The security processing unit 240 may be an independent processor, or may be a secure area formed in the second processing unit 230 through hardware and software isolation. For example, the second processing unit 230 may be a multi-core processor, and one of its cores may be defined as the security processing unit and used to calculate the depth information of the face, match a captured infrared image with a stored infrared image, match a captured depth image with a stored depth image, and the like. The security processing unit 240 may process data in parallel or serially.
FIG. 3 is a flow diagram of a method for information processing in one embodiment. As shown in fig. 3, an information processing method, which can be executed on the electronic device of fig. 1 or fig. 2, includes steps 302 to 306. Wherein:
step 302, if the first processing unit receives an image acquisition instruction sent by the second processing unit, controlling the camera module to acquire a target image according to the image acquisition instruction.
In particular, the first processing unit 220 is a processor for processing data, which may be, for example, an MCU processor. By controlling the input and output of the first processing unit 220, the first processing unit 220 can safely process data. The second processing unit 230 is a processor for processing data, which may be a CPU processor, for example.
The CPU in the electronic device has two operating modes: the TEE and the REE, wherein the TEE is a trusted execution environment and the REE is a rich execution environment, i.e. the normal operating environment. Normally, the CPU runs in the REE, but when the electronic device needs to acquire data with a higher security level, for example when it needs to acquire face data, the CPU may switch from the REE to the TEE. When the CPU in the electronic device is single-core, that core is switched directly from the REE to the TEE; when the CPU has multiple cores, the electronic device switches one core from the REE to the TEE while the other cores keep running in the REE. The target image refers to an image of the target object collected by the camera, and may include at least one of an infrared image and a target speckle image.
When the second processing unit 230 receives a request for face data from an application program, one core of the CPU is switched from the REE to the TEE, and the CPU core switched to the TEE sends the image acquisition instruction to the first processing unit, thereby ensuring that the instruction received by the first processing unit is secure. When the first processing unit receives the image acquisition instruction, the first processing unit 220 can control the floodlight 214 in the camera module 210 to be turned on to acquire an infrared image and control the laser lamp 218 in the camera module 210 to be turned on to acquire a depth image. The floodlight 214 is a point light source that illuminates uniformly in all directions; the light it emits may be infrared light, so that the electronic device can capture the face to obtain an infrared image. The laser emitted by the laser lamp 218 is diffracted by the lens and the DOE to generate a pattern of speckle particles, which is projected onto the target object; because each point of the target object is at a different distance from the electronic device, the speckle pattern is shifted, and the laser camera 212 captures the target object to obtain a target speckle image. The laser camera 212 transmits the collected target speckle image to the first processing unit 220.
Step 304, acquiring a first image according to the target image.
In particular, the target image may include at least one of an infrared image and a target speckle image. The first processing unit processes the collected infrared image to obtain an infrared intermediate image, and processes the collected target speckle image to obtain a parallax image containing the offset information of corresponding points in the target speckle image and the reference speckle image, or a depth image with depth information. The first image may include one or more of the infrared intermediate image, the parallax image, the depth image, and the like.
Step 306, sending the first image to a security processing unit, processing the first image by the security processing unit to obtain a processing result, and sending the processing result to the second processing unit.
Specifically, after the first processing unit 220 sends the first image to the security processing unit 240, the security processing unit 240 processes the first image to obtain a corresponding processing result, for example, processes the parallax image to obtain a depth image, and sends the infrared image and the depth image to the second processing unit 230 for storage. The second processing unit 230 may be in a trusted operating environment or an untrusted operating environment, and when the data is saved in the trusted operating environment, the data may be secured.
In the information processing method in this embodiment, the first processing unit 220 controls the camera module to acquire the target image by receiving the image acquisition instruction sent by the second processing unit 230, and then obtains the first image according to the target image, and sends the first image to the security processing unit for processing, and the acquired target image can be subjected to data processing by the first processing unit and the security processing unit together, so that the data processing speed is increased.
In one embodiment, the information processing method further includes: and if the first processing unit receives an image acquisition instruction sent by the second processing unit in the trusted operating environment, controlling the camera module to acquire the target image according to the image acquisition instruction. Therefore, the acquisition of the instructions and the data processing are carried out in a safe environment, and the safety of the data can be ensured.
In one embodiment, the target image comprises a target speckle image, and said acquiring a first image from the target image comprises: acquiring a reference speckle image with reference depth information; matching the reference speckle image with the target speckle image to obtain a matching result; and obtaining the first image according to the matching result, wherein the first image is a parallax image with offset information of corresponding points in the target speckle image and the reference speckle image.
Specifically, the depth information is a depth value. For each point (x, y) in the target speckle image, a pixel block of a preset size, for example 31 pixels x 31 pixels, is selected with the point (x, y) as its center, and a matching block is searched for in the reference speckle image. The lateral offset between the coordinates of the matching point in the reference speckle image and the coordinates (x, y) is then calculated, where a rightward offset is positive and a leftward offset is negative. Substituting the calculated offset into formula (1) gives the depth value of the point (x, y), and calculating the depth value of each point in turn yields a depth image with the depth information of every point in the target speckle image. The first processing unit can also use the lateral offset of each point to obtain a parallax image with the offsets of the corresponding points in the target speckle image and the reference speckle image, as sketched below.
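For illustration only, the block-matching step can be sketched in Python as follows. This is not the patented implementation; the sum-of-absolute-differences cost, the search range, the sampling step and the function and variable names are assumptions, and only the lateral (horizontal) search and the sign convention follow the description above.

```python
import numpy as np

def parallax_image(target, reference, block=31, search=30, step=8):
    """Estimate the lateral offset of sampled points in the target speckle
    image relative to the reference speckle image (illustrative sketch)."""
    half = block // 2
    h, w = target.shape
    offsets = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half, step):
        for x in range(half, w - half, step):
            patch = target[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_dx = np.inf, 0
            # Search only along the horizontal direction, as in the description.
            for dx in range(-search, search + 1):
                xr = x + dx
                if xr - half < 0 or xr + half + 1 > w:
                    continue
                ref_patch = reference[y - half:y + half + 1,
                                      xr - half:xr + half + 1]
                cost = np.abs(patch.astype(np.int32)
                              - ref_patch.astype(np.int32)).sum()
                if cost < best_cost:
                    best_cost, best_dx = cost, dx
            # Rightward offset positive, leftward offset negative.
            offsets[y, x] = best_dx
    return offsets
```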
In one embodiment, the target image comprises a target speckle image, and said acquiring a first image from the target image comprises: acquiring a reference speckle image with reference depth information; matching the reference speckle image with the target speckle image to obtain a matching result; and obtaining the first image according to the reference depth information and the matching result, wherein the first image is a depth map with depth information.
Specifically, the first processing unit 220 or the security processing unit 240 calculates the offset between the target speckle image and the reference speckle image, and from this offset the depth value Z_D of the spatial point corresponding to each pixel in the target speckle image, measured from the first image collector of the electronic device, can be calculated as:
Z_D = (L x f x Z_0) / (L x f + P x Z_0)    (1)
where L is the distance between the first image collector and the second projector, f is the focal length of the lens in the first image collector, Z_0 is the distance between the reference plane and the first image collector when the reference speckle image was acquired by the electronic device, and P is the offset between corresponding points in the target speckle image and the reference speckle image. P is obtained by multiplying the offset, in pixels, of the corresponding points in the target speckle image and the reference speckle image by the actual distance represented by one pixel. When the distance between the target object and the first image collector is larger than the distance between the reference plane and the first image collector, P is negative; when it is smaller, P is positive.
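For illustration only, formula (1) can be exercised with a short sketch; the parameter values below (baseline, focal length, reference distance, pixel pitch) are purely assumed and are not taken from the patent.

```python
def depth_from_offset(p_pixels, pixel_size, L, f, z0):
    """Convert a speckle offset (in pixels) into a depth value Z_D using the
    triangulation relation of formula (1) (illustrative values only)."""
    P = p_pixels * pixel_size            # offset expressed as a physical distance
    return (L * f * z0) / (L * f + P * z0)

# Assumed example parameters: baseline 40 mm, focal length 2.4 mm,
# reference plane at 400 mm, pixel pitch 3 um (0.003 mm).
z = depth_from_offset(p_pixels=-5, pixel_size=0.003, L=40.0, f=2.4, z0=400.0)
print(round(z, 1))  # about 426.7 mm: a negative offset (object farther than
                    # the reference plane) gives Z_D greater than z0
```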
The first processing unit 220 calculates the target speckle image and the reference speckle image to obtain a parallax image with offset information. The security processing unit 240 performs calculation to obtain a depth image from the parallax image with the offset information. Wherein the parallax image is an intermediate image.
Specifically, for each point (x, y) in the target speckle image, a pixel block of a preset size, for example 31 pixels x 31 pixels, is selected with that point as its center, a matching block is searched for in the reference speckle image, and the lateral offset between the coordinates of the matching point in the reference speckle image and the coordinates (x, y) is calculated, with a rightward offset counted as positive and a leftward offset as negative. Substituting the calculated offset into formula (1) gives the depth value of the point (x, y); calculating the depth value of each point in turn yields a depth image with the depth information of every point in the target speckle image.
In one embodiment, the target image includes an infrared image, and the information processing method further includes: carrying out face recognition according to the infrared image; when the first image is a parallax image, acquiring a corresponding depth image according to the first image through the safety processing unit, and performing living body detection according to the infrared image and the depth image; and when the first image is a depth image, performing living body detection according to the infrared image and the first image through the safety processing unit.
After the camera module 210 collects the infrared image and the target speckle image, it can transmit the target speckle image and the infrared image to the first processing unit 220. After receiving the infrared image and the depth image, the first processing unit 220 may perform face detection on them. Performing face detection according to the infrared image includes: the first processing unit 220 performs face recognition on the infrared image and detects whether a face exists in the infrared image; if a face exists, it is matched against the face stored in the electronic device, and if the match succeeds, that is, the detected face is consistent with the stored face, the face recognition passes. The first processing unit 220 may also perform living body detection based on the infrared image and the depth image, which includes: the first processing unit 220 detects whether a face exists according to the infrared image, and detects whether that face has depth information according to the depth image. If a face exists in the infrared image and that face has depth information in the depth image, it can be determined that the face has biological activity. Further, the first processing unit 220 may use an artificial intelligence model to analyze the infrared image and the depth image, obtain the texture of the detected face, and judge from the face texture whether the face has biological activity. The order in which the first processing unit 220 performs face recognition on the infrared image and living body detection on the infrared image and the depth image can be changed: the first processing unit 220 may perform face recognition first and then living body detection, or living body detection first and then face recognition. Further, the security processing unit 240 may perform both face recognition and living body detection; or the first processing unit 220 may perform face recognition while the security processing unit 240 performs living body detection; or the first processing unit 220 may perform living body detection while the security processing unit 240 performs face recognition; the arrangement is not limited to these.
The method by which the first processing unit 220 or the security processing unit 240 performs living body detection according to the infrared image and the depth image includes: acquiring consecutive frames of infrared images and depth images, detecting whether the face has corresponding depth information according to the infrared images and the depth images, and, if it does, detecting through the consecutive frames whether the face changes, for example whether it blinks, swings the head, opens the mouth, and the like. If corresponding depth information of the face is detected and the face changes, the face has biological activity and the face detection passes (a schematic sketch follows). When the first processing unit 220 performs face detection, if the face recognition fails, the living body detection is not performed; likewise, if the living body detection fails, the face recognition is not performed.
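For illustration only, the combined face recognition and living body detection flow can be sketched as follows; the three callables stand in for the matching, depth-consistency and motion checks described above and are hypothetical placeholders, not APIs defined in the patent.

```python
from typing import Callable, Sequence

def verify_face(ir_frames: Sequence, depth_frames: Sequence,
                match_face: Callable, has_depth: Callable,
                face_motion: Callable) -> bool:
    """Schematic flow only: face recognition on the infrared image plus
    living body detection over consecutive infrared/depth frames."""
    # 1. Face recognition: a face must exist in the infrared image and
    #    match the face stored on the electronic device.
    if not match_face(ir_frames[0]):
        return False
    # 2. Living body detection: the detected face must have corresponding
    #    depth information in every frame ...
    if not all(has_depth(ir, d) for ir, d in zip(ir_frames, depth_frames)):
        return False
    # ... and must change across consecutive frames (blink, swing, mouth open).
    return face_motion(ir_frames, depth_frames)
```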
According to the method in the embodiment of the application, the infrared image and the depth image are subjected to face recognition and living body detection, whether the detected face is a real person or not can be judged, and the accuracy of face detection is improved.
In an embodiment, the first processing unit 220 may also directly process the infrared image and the target speckle image to obtain depth information of the human face, and then send the obtained infrared image, the obtained target speckle image, and the obtained depth information of the human face to the security processing unit 240.
In one embodiment, controlling the camera module 210 to collect the infrared image and the depth image includes: the time interval between the camera module collecting the infrared image and the depth image is lower than a first threshold value.
The way the electronic device collects the infrared image and the depth image is as follows: the electronic device turns on the floodlight 214 and collects an infrared image through the laser camera 212; the electronic device turns on the laser lamp 218 and collects the depth image through the laser camera 212. When the electronic device collects the infrared image and the depth image, the smaller the time interval between the two acquisitions, the better, in order to ensure the accuracy of the collected data.
The method for acquiring the infrared image and the depth image by the camera module comprises the following steps:
(1) A floodlight controller and a laser lamp controller are provided separately, and the first processing unit is connected to the floodlight controller and the laser lamp controller through two PWM paths, respectively. When the first processing unit needs to turn on the floodlight or the laser lamp, it can transmit a pulse wave through PWM to the floodlight controller to turn on the floodlight, or transmit a pulse wave through PWM to the laser lamp controller to turn on the laser lamp, and by sending pulse waves to the two controllers through PWM it controls the time interval between collecting the infrared image and the depth image.
(2) A single controller is provided for controlling both the floodlight and the laser lamp, and the first processing unit is connected to this controller through one PWM path. When the first processing unit needs to turn on the floodlight or the laser lamp, it transmits a pulse wave to the controller through PWM to turn on the floodlight or the laser lamp, and the second processing unit 230 controls the switching between the floodlight and the laser lamp. By controlling the time interval of switching between the floodlight and the laser lamp, the time interval between collecting the infrared image and the depth image is kept below the first threshold (a timing sketch follows below). The first threshold may be a value set by the user or a value set by the electronic device, for example 1 ms.
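For illustration only, the timing constraint can be sketched as follows; the controller and camera objects and their pulse()/grab() methods are hypothetical placeholders, and the 1 ms threshold is the example value mentioned above.

```python
import time

FIRST_THRESHOLD_S = 0.001  # assumed first threshold of 1 ms, as in the text

def capture_ir_then_depth(pwm_floodlight, pwm_laser, camera):
    """Illustrative timing sketch only: capture the infrared image, then the
    target speckle image, keeping the interval below the first threshold."""
    pwm_floodlight.pulse()         # turn on the floodlight via one PWM path
    ir_image = camera.grab()       # collect the infrared image
    t_ir = time.monotonic()

    pwm_laser.pulse()              # turn on the laser lamp via the other PWM path
    speckle_image = camera.grab()  # collect the target speckle image
    t_depth = time.monotonic()

    # Keep the interval between the two captures below the first threshold.
    assert t_depth - t_ir < FIRST_THRESHOLD_S, "capture interval exceeds threshold"
    return ir_image, speckle_image
```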
According to the method in the embodiment of the application, the time interval between the collected infrared image and the collected depth image is lower than the first threshold value, the consistency between the collected infrared image and the collected depth image can be ensured, a large error between the infrared image and the collected depth image is avoided, and the accuracy of data processing is improved.
In one embodiment, as shown in fig. 4, the information processing method further includes:
step 402, receiving a data acquisition request of an application program, and acquiring a security level of the application program.
Specifically, the second processing unit receives a data acquisition request initiated by an application program, and then acquires the security level of the application program. An application is a program installed on an electronic device to provide a certain service. The security level of the application may be preset on the electronic device. For example, the security level of the unlock class application, the payment class application, and the like is high. Applications that are not authorized for authentication, such as third party applications that process images, have a low level of security.
Step 404, finding the precision level corresponding to the security level.
Specifically, different security levels are preset to correspond to different image precision levels, and the precision level corresponding to the security level of the application program is looked up.
And step 406, adjusting the precision of the depth image according to the precision level, and sending the adjusted depth image to the application program.
Specifically, after the security level corresponding to the application program is obtained, the second processing unit may adjust the accuracy of the depth image according to the security level of the application program. The precision of the depth image is in a direct proportion relation with the security level of the application program, namely the higher the security level of the application program is, the higher the precision of the depth image is; the lower the security level of the application, the lower the accuracy of the depth image. And after the precision of the depth image is adjusted, the second processing unit sends the adjusted depth image to the application program.
The security level of the application program can be preset by the system, and can adopt a white list form to respectively set the security levels of different application programs. For example, a security level of a first application program authorized to provide beauty treatment may be set high, and a security level of a second application program unauthorized to provide beauty treatment may be set low.
In the embodiment of the application, the depth images with different precision levels are provided according to the security level of the application program, so that the depth image data can be effectively controlled, the security of the depth image data is ensured, and the depth image data and the like are prevented from being acquired by illegal application programs.
In one embodiment, adjusting the precision of the depth image according to the precision level comprises at least one of:
(1) adjusting the resolution of the depth image according to the precision level;
(2) and adjusting the number of the points in the target speckle image collected by the camera module according to the precision level.
When the second processing unit 230 adjusts the precision of the depth image, the resolution of the depth image may be adjusted. When the precision of the depth image is high, the resolution of the depth image is high; when the accuracy of the depth image is low, the resolution of the depth image is low. The resolution of the image can be adjusted by adjusting the number of pixels in the image. The second processing unit 230 may also adjust the accuracy of the depth image by reducing the number of points in the target speckle image collected by the camera module. When the number of points in the target speckle image is large, the precision of the depth image is high; when the target speckle image has fewer points, the depth image has lower precision.
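For illustration only, lowering the resolution of the depth image according to a precision level can be sketched as follows; the level-to-scale mapping and the subsampling approach are assumptions, and adjusting the number of speckle points would instead be done at capture time.

```python
import numpy as np

# Assumed mapping from application security level to depth-image precision.
PRECISION_BY_SECURITY = {"high": 1.0, "medium": 0.5, "low": 0.25}

def adjust_depth_precision(depth: np.ndarray, security_level: str) -> np.ndarray:
    """One possible way (illustrative only) to lower the resolution of a depth
    image for an application with a lower security level."""
    scale = PRECISION_BY_SECURITY[security_level]
    if scale >= 1.0:
        return depth                    # high security level: full precision
    step = int(round(1.0 / scale))
    # Simple subsampling; a real implementation could use proper resampling.
    return depth[::step, ::step]
```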
According to the method, the precision of the depth image is adjusted according to the security level of the application program, so that the application programs with different security levels can obtain the depth images with different precisions, the risk that the application program with lower security level reveals data is reduced, and the security of the data is improved.
In one embodiment, as shown in fig. 5, the information processing method further includes:
step 502, receiving a data obtaining request of an application program, and obtaining a security level of the application program.
And 504, selecting a corresponding data transmission channel according to the security level, wherein the data transmission channel comprises a security channel and a common channel.
And step 506, sending the depth image to the application program through the data transmission channel corresponding to the security level.
The second processing unit 230 may identify the security level of the application program after receiving the data acquisition request of the application program. When the data is transmitted from the second processing unit 230 to the application program, a secure channel and a normal channel may be set, where the secure channel has a higher security level and the normal channel has a lower security level. When the data is transmitted in the secure channel, the data can be encrypted. The electronic equipment can set corresponding data transmission channels for application programs with different security levels. The application program with high security level corresponds to the secure channel, and the application program with low security level corresponds to the common channel. For example, the data transmission of the payment application program can adopt a secure channel, and the data transmission of the image application program can adopt a common channel. After acquiring the data transmission channel corresponding to the security level of the application program, the second processing unit 230 sends the depth image to the application program through the corresponding data transmission channel, so that the application program processes the depth image.
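For illustration only, selecting a transmission channel by security level can be sketched as follows; the channel objects and their send() method are hypothetical placeholders for the secure and common channels described above.

```python
def send_depth_image(depth_image: bytes, app_security_level: str,
                     secure_channel, normal_channel) -> None:
    """Schematic dispatch only: high-security applications (e.g. payment) use
    the secure channel, on which data may additionally be encrypted."""
    if app_security_level == "high":
        secure_channel.send(depth_image, encrypted=True)
    else:
        normal_channel.send(depth_image)
```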
In one embodiment, the information processing method further includes: when the first processing unit receives an image acquisition instruction sent by the second processing unit in a first operating environment according to a verification request of an application program, controlling the camera module to acquire an infrared image and a target speckle image according to the image acquisition instruction; acquiring a depth image according to the target speckle image, sending the acquired infrared image and the acquired depth image to the safety processing unit, comparing the acquired infrared image with a stored infrared image by the safety processing unit, comparing the acquired depth image with the stored depth image to obtain a verification result, and sending the verification result to the second processing unit in the first operating environment; and sending the verification result to a server corresponding to the application program through a data security transmission channel.
Specifically, the authentication request of the application program may be a request requiring face authentication. The application program can be an unlocking application, a payment application and the like which need a verification result. The verification result is verification passing or verification failure. The verification passing means that the infrared image is used for face verification passing, and the depth image is used for living body detection passing. The verification failure refers to the failure of infrared image or depth image verification. The first operating environment may be a trusted operating environment.
The second processing unit 230 encrypts the verification result with a key agreed in advance with the application program to obtain an encrypted file, and sends the encrypted file to the server corresponding to the application program through the data security transmission channel. For example, for the Alipay application, the encrypted file is sent to the server corresponding to Alipay, as sketched below.
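For illustration only, encrypting the verification result with a pre-agreed key and sending it to the application's server can be sketched as follows; the use of the third-party cryptography package (Fernet), the payload format and the URL are assumptions, not the scheme specified by the patent.

```python
import json
import urllib.request
from cryptography.fernet import Fernet  # third-party package, used only as a stand-in

def send_verification_result(result: bool, shared_key: bytes, server_url: str) -> None:
    """Illustrative only: encrypt the verification result with a key agreed in
    advance with the application and post it to the application's server."""
    token = Fernet(shared_key).encrypt(json.dumps({"verified": result}).encode())
    req = urllib.request.Request(server_url, data=token,
                                 headers={"Content-Type": "application/octet-stream"})
    urllib.request.urlopen(req)  # the data security transmission channel in the text
```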
In the embodiment of the application, the infrared image and the target speckle image are collected according to the verification request of the application program, face recognition verification is performed according to the infrared image, living body detection is performed according to the depth image, the verification result is sent to the application program, and the server corresponding to the application program determines, according to the verification result, whether to execute the operation that required verification.
FIG. 6 is a diagram of a software architecture of an information processing method according to an embodiment. As shown in fig. 6, the software architecture includes an application layer 610, an operating system 620, and a trusted execution environment 630. The modules in the trusted execution environment 630 include a security service module 634. The hardware layer comprises a floodlight & laser light 631, an infrared camera 632, a micro control unit 633, and the like. The micro control unit 633 can secure data by controlling its input and output. The micro control unit 633 can collect a secure infrared image and target speckle image by controlling the floodlight & laser light 631 and the infrared camera 632, and then transmit them to the security service module 634 of the trusted execution environment 630. The operating system 620 comprises a security management module 621, a face management module 622, a camera driver 623 and a camera framework 624; the application layer 610 includes an application 611. The application 611 may initiate an image capture instruction, and the electronic device drives the floodlight & laser light 631 and the infrared camera 632 to operate through the image capture instruction. For example, when a face is captured for payment, unlocking, beautification or similar operations, the application program may initiate an image capture instruction for capturing a face image. After the camera acquires the infrared image and the target speckle image, whether the currently acquired image is used for a secure application operation or a non-secure application operation is judged according to the image capture instruction. When the acquired depth image is used for a secure application operation, the acquired infrared image and target speckle image are sent to the micro control unit 633 through a secure channel; the micro control unit 633 calculates a disparity map from the target speckle image and the reference speckle image, calculates a depth image from the disparity map, and sends the calculated depth image and the infrared image to the security service module 634. It is understood that the process of calculating the depth map from the target speckle image may also be performed in the security service module 634. The security service module 634 sends the infrared image and the depth image to the security management module 621. Generally, different applications 611 have corresponding security management modules 621, and the security management module 621 sends the depth image and the infrared image to the corresponding face management module 622. The face management module 622 performs face detection, recognition, verification and other processing according to the infrared image and the depth image, and then sends the processing result to the upper-layer application 611, which performs a secure application operation according to the processing result. When the acquired depth image is used for non-secure applications such as beautification or AR (augmented reality), the infrared image and the target speckle image acquired by the infrared camera 632 may be sent directly to the camera driver 623 through a non-secure channel, and the camera driver 623 may calculate a disparity map from the target speckle image and a depth map from the disparity map.
The camera driver 623 may send the infrared image and the depth image to the camera framework 624, and then the camera framework 624 sends the infrared image and the depth image to the human face management module 622 or the application 611. Wherein, the switching between the safe channel and the non-safe channel is completed by the micro control unit 633.
It should be understood that, although the steps in the flowcharts of fig. 3, 4 and 5 are shown in sequence as indicated by the arrows, these steps are not necessarily performed in that sequence. Unless explicitly stated herein, the execution of these steps is not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least some of the steps in fig. 3, 4 and 5 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and the order of their execution is not necessarily sequential; they may be performed in turns or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Fig. 7 is a block diagram showing the configuration of an information processing apparatus according to an embodiment. As shown in fig. 7, an information processing apparatus 700 includes an acquisition module 710, a processing module 720, and a transmission module 730.
The acquisition module 710 is configured to control the camera module to acquire a target image according to an image acquisition instruction sent by the second processing unit if the first processing unit receives the image acquisition instruction;
the processing module 720 is configured to obtain a first image according to the target image.
The transmission module 730 is configured to send the first image to a security processing unit, process the first image through the security processing unit to obtain a processing result, and send the processing result to a second processing unit in the first operating environment.
In one embodiment, the target image includes a target speckle image, and the processing module 720 is further configured to obtain a reference speckle image with reference depth information; matching the reference speckle image with the target speckle image to obtain a matching result; and obtaining the first image according to the matching result, wherein the first image is a parallax image with offset information of corresponding points in the target speckle image and the reference speckle image.
In one embodiment, the target image includes a target speckle image, and the processing module 720 is further configured to obtain a reference speckle image with reference depth information; matching the reference speckle image with the target speckle image to obtain a matching result; and obtaining the first image according to the reference depth information and the matching result, wherein the first image is a depth image with depth information.
In one embodiment, the processing module 720 is further configured to perform face recognition based on the infrared image; when the first image is a parallax image, acquiring a corresponding depth image according to the first image through the safety processing unit, and performing living body detection according to the infrared image and the depth image; and when the first image is a depth image, performing living body detection according to the infrared image and the first image through the safety processing unit.
In one embodiment, the processing module 720 is further configured to receive a data obtaining request of an application program, and obtain the security level of the application program; searching for a precision level corresponding to the security level; and adjusting the precision of the depth image according to the precision level, and sending the adjusted depth image to the application program.
In one embodiment, the processing module 720 is further configured to adjust the resolution of the depth image according to the accuracy level; or adjusting the number of the points in the target speckle image collected by the camera module according to the precision level.
In one embodiment, the processing module 720 is further configured to receive a data obtaining request of an application program, and obtain the security level of the application program; determining a corresponding data transmission channel according to the security level; and sending the first image to the application program through a corresponding data transmission channel.
In one embodiment, the processing module 720 is further configured to, when the first processing unit receives an image acquisition instruction sent by the second processing unit in the first operating environment according to the verification request of the application program, control the camera module to acquire the infrared image and the target speckle image according to the image acquisition instruction; acquiring a depth image according to the target speckle image, sending the acquired infrared image and the acquired depth image to the safety processing unit, comparing the acquired infrared image with a stored infrared image by the safety processing unit, comparing the acquired depth image with the stored depth image to obtain a verification result, and sending the verification result to the second processing unit in the first operating environment; and sending the verification result to a server corresponding to the application program through a data security transmission channel.
The division of the modules in the information processing apparatus is only for illustration, and in other embodiments, the information processing apparatus may be divided into different modules as needed to complete all or part of the functions of the information processing apparatus.
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the information processing method.
The embodiment of the application also provides a computer program product. A computer program product containing instructions which, when run on a computer, cause the computer to perform an information processing method.
The embodiment of the application also provides the electronic equipment. The internal structure of the electronic device is shown in fig. 2. The electronic device includes a camera module 210, a first processing unit 220, a second processing unit 230, and a security processing unit 240. The first processing unit 220 is connected to the camera module 210, the second processing unit 230 and the security processing unit 240, respectively. The secure processing unit 240 is connected to the second processing unit 230.
The first processing unit 220 is configured to receive an image acquisition instruction sent by the second processing unit 230, control the camera module 210 to acquire a target image according to the image acquisition instruction, acquire a first image according to the target image, and send the first image to the security processing unit 240.
The security processing unit 240 is configured to process the first image to obtain a processing result, and send the processing result to the second processing unit 230.
Specifically, the first processing unit 220 may be a microprocessor, the second processing unit 230 may be a central processing unit, and the security processing unit 240 may be an independent embedded processor or may be a secure single-core processor divided among multi-core processors of the second processing unit 230, and the like, and may perform parallel or serial processing on data. The second processing unit 230 performs serial processing on data in the first operating environment. The first operating environment is referred to as a TEE environment. The second processing unit 230 may also process data in an REE environment. The first processing unit 220 calculates the target speckle image and the reference speckle image to obtain a parallax image with offset information. The security processing unit 240 performs calculation to obtain a depth image from the parallax image with the offset information.
In the electronic device in the embodiment of the application, the first processing unit 220 controls the camera module 210 to acquire the target image by receiving the image acquisition instruction sent by the second processing unit 230, and then obtains the first image according to the target image, and sends the first image to the security processing unit for processing, and the acquired target image can be subjected to data processing by the first processing unit 220 and the security processing unit 240 together, so that the speed of data processing is increased.
In one embodiment, the first processing unit 220 may receive an image acquisition instruction sent by the second processing unit 230 in the trusted operating environment and control the camera module 210 to acquire the target image according to the image acquisition instruction. In this way, both instruction delivery and data processing take place in a secure environment, which ensures the security of the data.
In one embodiment, the camera module 210 includes a first projector, a second projector and a first image collector, each connected to the first processing unit 220.
The first processing unit 220 is further adapted to control the first projector to emit infrared light and the second projector to emit a structured light pattern; the first image collector is used for collecting the infrared image and the target speckle image and sending the collected infrared image and the collected target speckle image to the first processing unit.
The first image collector may be a laser camera. The first projector may be a floodlight. The second projector may be a laser lamp. The laser lamp may include a light source, a lens, and a structured light pattern generator. The first image collector and the second projector are positioned on the same plane.
In one embodiment, the target image comprises a target speckle image; the first processing unit 220 is further configured to obtain a reference speckle image, where the reference speckle image has reference depth information, match the reference speckle image with the target speckle image to obtain a matching result, and obtain the first image according to the matching result, where the first image is a parallax image with offset information of corresponding points of the target speckle image and the reference speckle image.
In one embodiment, the target image comprises a target speckle image; the first processing unit 220 is further configured to obtain a reference speckle image, where the reference speckle image has reference depth information, match the reference speckle image with the target speckle image to obtain a matching result, and obtain the first image according to the reference depth information and the matching result, where the first image is a depth image with depth information.
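As a minimal sketch of the kind of matching that could produce the offset values described above, the following block-matches the target speckle image against the reference speckle image using a sum-of-absolute-differences (SAD) search along one axis. The block size, search range and SAD criterion are illustrative assumptions, not values taken from the patent.

    import numpy as np

    def match_speckle(target, reference, block=8, search=16):
        """Block-match a target speckle image against the reference speckle
        image and return a per-block disparity (offset) map."""
        h, w = target.shape
        disp = np.zeros((h // block, w // block), dtype=np.float32)
        for by in range(h // block):
            for bx in range(w // block):
                y, x = by * block, bx * block
                patch = target[y:y + block, x:x + block].astype(np.int32)
                best_off, best_sad = 0, None
                for off in range(-search, search + 1):
                    rx = x + off
                    if rx < 0 or rx + block > w:
                        continue
                    ref_patch = reference[y:y + block, rx:rx + block].astype(np.int32)
                    sad = np.abs(patch - ref_patch).sum()
                    if best_sad is None or sad < best_sad:
                        best_sad, best_off = sad, off
                disp[by, bx] = best_off
        return disp

    # Toy example: shift a random speckle pattern by 3 pixels and recover it.
    rng = np.random.default_rng(0)
    reference = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    target = np.roll(reference, -3, axis=1)
    print(match_speckle(target, reference).mean())  # ~3 for most blocks (wrapped edge blocks may deviate)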
In one embodiment, controlling the camera module to acquire the infrared image and the depth image includes keeping the time interval between the acquisition of the infrared image and the acquisition of the depth image below a first threshold.
To acquire the infrared image and the depth image, the electronic device turns on the floodlight 214 and collects the infrared image through the laser camera 212, and turns on the laser lamp 218 and captures the depth image through the laser camera 212. To ensure the accuracy of the collected data, the shorter the time interval between the collection of the infrared image and the collection of the depth image, the better.
In one embodiment, the electronic device may further include a first projector controller and a second projector controller, and the first processing unit is connected to the two controllers through two separate PWM paths. When the first processing unit needs to turn on the first projector or the second projector, it transmits pulse waves through PWM to the first projector controller to turn on the floodlight, or to the second projector controller to turn on the laser lamp. By transmitting pulse waves to the two controllers through PWM, the first processing unit can also control the time interval between the collection of the infrared image and the collection of the depth image.
In one embodiment, the electronic device may include a single controller that controls both the first projector and the second projector, and the first processing unit is connected to this controller through one PWM path. When the first processing unit needs to turn on the first projector or the second projector, it transmits a pulse wave to the controller through the PWM to turn on the corresponding projector, and the second processing unit controls the switching between the first projector and the second projector. By controlling the time interval of this switching, the time interval between the acquisition of the infrared image and the acquisition of the depth image is kept below the first threshold. The first threshold may be a value set by the user or by the electronic device, for example 1 ms.
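Purely as an illustration of the sequencing described above, the following sketch triggers the two projectors back to back and reports whether the interval between the two triggers stays below the threshold. The helper names are hypothetical stand-ins; real PWM/GPIO register access is platform specific and is not described in the patent, and the camera readout itself is omitted.

    import time

    FIRST_THRESHOLD_S = 0.001  # e.g. 1 ms, the example threshold given above

    def pulse(projector):
        """Stand-in for sending a PWM pulse to a projector controller;
        only records when the trigger was issued."""
        t = time.monotonic()
        print(f"pulse -> {projector} at t={t:.6f}")
        return t

    def capture_frame_pair():
        """Trigger the floodlight, then the laser projector, and check the
        interval between the two triggers against the first threshold."""
        t_ir = pulse("floodlight (infrared frame)")
        t_speckle = pulse("laser projector (speckle frame)")
        interval = t_speckle - t_ir
        print(f"interval = {interval * 1e3:.3f} ms, "
              f"{'OK' if interval < FIRST_THRESHOLD_S else 'too large'}")

    capture_frame_pair()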
In one embodiment, the security processing unit 240 is further configured to perform face recognition according to the infrared image;
when the first image is a parallax image, the security processing unit obtains the corresponding depth image from the first image and performs living body detection according to the infrared image and the depth image;
and when the first image is a depth image, the security processing unit performs living body detection according to the infrared image and the first image.
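A minimal sketch of this dispatch inside the security processing unit is shown below. The callables `to_depth`, `recognize_face` and `detect_liveness` are injected placeholders for steps the patent does not detail (parallax-to-depth conversion, face matching and living body detection); their names are assumptions.

    def process_in_security_unit(infrared_image, first_image, is_parallax,
                                 to_depth, recognize_face, detect_liveness):
        """Dispatch logic of the security processing unit (sketch only)."""
        # Face recognition always runs on the infrared image.
        face_ok = recognize_face(infrared_image)

        # Living body detection needs a depth image; derive one first when only
        # a parallax (offset) image was received from the first processing unit.
        depth_image = to_depth(first_image) if is_parallax else first_image
        live_ok = detect_liveness(infrared_image, depth_image)

        return face_ok and live_ok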
In one embodiment, the second processing unit 230 is further configured to receive a data acquisition request of an application program, acquire the security level of the application program, look up the precision level corresponding to the security level, adjust the precision of the depth image according to the precision level, and send the adjusted depth image to the application program.
In one embodiment, the second processing unit 230 is further configured to adjust a resolution of the depth image according to the precision level.
In one embodiment, the second processing unit 230 is further configured to adjust the number of points in the target speckle image collected by the camera module 210 according to the precision level.
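The resolution-based variant of this precision adjustment can be sketched as follows. The mapping from security level to scale factor and the decimation strategy are illustrative assumptions; the alternative described above (reducing the number of speckle points collected) would instead take effect at acquisition time.

    import numpy as np

    # Hypothetical mapping from application security level to a precision level;
    # the patent only states that such a correspondence is looked up.
    PRECISION_BY_SECURITY = {"high": 1.0, "medium": 0.5, "low": 0.25}

    def adjust_depth_precision(depth_image, security_level):
        """Reduce the resolution of the depth image for less trusted apps."""
        scale = PRECISION_BY_SECURITY[security_level]
        if scale >= 1.0:
            return depth_image
        step = int(round(1.0 / scale))
        return depth_image[::step, ::step]  # simple decimation

    depth = np.arange(64, dtype=np.float32).reshape(8, 8)
    print(adjust_depth_precision(depth, "low").shape)  # (2, 2)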
In one embodiment, the second processing unit 230 is further configured to receive a data obtaining request of an application program, obtain a security level of the application program, determine a corresponding data transmission channel according to the security level, and send the first image to the application program through the corresponding data transmission channel.
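A small sketch of this channel selection follows. The patent only states that the channel is determined by the security level; routing high-security applications to a secure channel is an assumption, and the channel objects are placeholders.

    class PrintChannel:
        """Minimal stand-in for a data transmission channel."""
        def __init__(self, name):
            self.name = name

        def send(self, data):
            print(f"[{self.name}] sending {len(data)} bytes")

    def send_first_image(first_image, security_level, secure_channel, normal_channel):
        # Treating "high" security level as the secure-channel case is an assumption.
        channel = secure_channel if security_level == "high" else normal_channel
        channel.send(first_image)

    send_first_image(b"depth-bytes", "high",
                     PrintChannel("secure channel"), PrintChannel("normal channel"))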
In one embodiment, the first processing unit 220 is further configured to receive an image acquisition instruction sent by the second processing unit 230 in the first operating environment according to a verification request of an application program, control the camera module 210 to acquire an infrared image and a target speckle image according to the image acquisition instruction, acquire a depth image according to the target speckle image, and send the acquired infrared image and the acquired depth image to the security processing unit 240;
the security processing unit 240 is further configured to compare the acquired infrared image with a stored infrared image, compare the acquired depth image with a stored depth image to obtain a verification result, and send the verification result to the second processing unit 230 in the first operating environment;
the second processing unit 230 sends the verification result to the server corresponding to the application program through a secure data transmission channel.
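The end-to-end verification flow described above can be summarised in the following sketch. The four collaborators are placeholder objects and their method names are assumptions chosen only to show the order of the steps; they are not a real driver or TrustZone API.

    def run_verification(first_unit, security_unit, second_unit, server_channel,
                         verification_request):
        """Sketch of the verification flow, step by step."""
        # 1. The CPU, running in the first (trusted) operating environment, turns
        #    the app's verification request into an image acquisition instruction;
        #    the MCU then captures the infrared image and the target speckle image.
        infrared, speckle = first_unit.capture(verification_request)

        # 2. The MCU derives the depth image from the target speckle image.
        depth = first_unit.depth_from_speckle(speckle)

        # 3. The security unit compares both images against the stored templates.
        result = (security_unit.match_infrared(infrared) and
                  security_unit.match_depth(depth))

        # 4. The result returns to the CPU in the trusted environment and is then
        #    forwarded to the application's server over a secure channel.
        second_unit.receive_result(result)
        server_channel.send(result)
        return result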
Any reference to memory, storage, database, or other medium used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application and are described in relative detail, but they should not be construed as limiting the scope of the application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within the scope of protection of the application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (19)

1. An information processing method characterized by comprising:
if the first processing unit receives an image acquisition instruction sent by the second processing unit, the first processing unit controls the camera module to acquire a target image according to the image acquisition instruction;
the first processing unit acquires a first image according to the target image;
the first processing unit sends the first image to a safety processing unit, the safety processing unit processes the first image to obtain a processing result, and the processing result is sent to the second processing unit;
the image acquisition instruction is sent to the first processing unit by the second processing unit after a natural operating environment is switched to a trusted operating environment, the first processing unit is an MCU, the second processing unit is a CPU, and the safety processing unit is a secure region formed in the second processing unit through hardware and software isolation.
2. The method of claim 1, wherein the target image comprises a target speckle image;
the acquiring of the first image according to the target image comprises:
acquiring a reference speckle image with reference depth information;
matching the reference speckle image with the target speckle image to obtain a matching result;
and obtaining the first image according to the matching result, wherein the first image is a parallax image with offset information of corresponding points in the target speckle image and the reference speckle image.
3. The method of claim 1, wherein the target image comprises a target speckle image;
the acquiring of the first image according to the target image comprises:
acquiring a reference speckle image with reference depth information;
matching the reference speckle image with the target speckle image to obtain a matching result;
and obtaining the first image according to the reference depth information and the matching result, wherein the first image is a depth image with depth information.
4. The method of claim 2 or 3, wherein the target image further comprises an infrared image; the method further comprises the following steps:
carrying out face recognition according to the infrared image;
when the first image is a parallax image, acquiring a corresponding depth image according to the first image through the safety processing unit, and performing living body detection according to the infrared image and the depth image;
and when the first image is a depth image, performing living body detection according to the infrared image and the first image through the safety processing unit.
5. The method of claim 1, further comprising:
receiving a data acquisition request of an application program, and acquiring the security level of the application program;
searching for a precision level corresponding to the security level;
and adjusting the precision of the depth image according to the precision level, and sending the adjusted depth image to the application program.
6. The method of claim 5, wherein the adjusting the precision of the depth image according to the precision level comprises:
adjusting the resolution of the depth image according to the precision level;
or adjusting the number of the points in the target speckle image collected by the camera module according to the precision level.
7. The method of claim 1, further comprising:
receiving a data acquisition request of an application program, and acquiring the security level of the application program;
determining a corresponding data transmission channel according to the security level;
and sending the first image to the application program through a corresponding data transmission channel.
8. The method of claim 1, further comprising:
when the first processing unit receives an image acquisition instruction sent by the second processing unit in a first operating environment according to a verification request of an application program, controlling the camera module to acquire an infrared image and a target speckle image according to the image acquisition instruction;
acquiring a depth image according to the target speckle image, sending the acquired infrared image and the acquired depth image to the safety processing unit, comparing the acquired infrared image with a stored infrared image by the safety processing unit, comparing the acquired depth image with the stored depth image to obtain a verification result, and sending the verification result to the second processing unit in the first operating environment;
and sending the verification result to a server corresponding to the application program through a data security transmission channel.
9. An information processing apparatus characterized by comprising:
the acquisition module is used for controlling the camera module to acquire a target image according to an image acquisition instruction through the first processing unit if the first processing unit receives the image acquisition instruction sent by the second processing unit;
the processing module is used for acquiring a first image according to the target image through the first processing unit;
the transmission module is used for sending the first image to a security processing unit through the first processing unit, processing the first image through the security processing unit to obtain a processing result, and sending the processing result to the second processing unit;
the image acquisition instruction is sent to the first processing unit by the second processing unit after a natural operating environment is switched to a trusted operating environment, the first processing unit is an MCU, the second processing unit is a CPU, and the safety processing unit is a secure region formed in the second processing unit through hardware and software isolation.
10. An electronic device, comprising: the camera comprises a first processing unit, a second processing unit, a safety processing unit and a camera module, wherein the first processing unit is respectively connected with the second processing unit, the safety processing unit and the camera module, and the safety processing unit is connected with the second processing unit;
the first processing unit is used for receiving an image acquisition instruction sent by the second processing unit, controlling a camera module to acquire a target image according to the image acquisition instruction, acquiring a first image according to the target image and sending the first image to the safety processing unit;
the safety processing unit is used for processing the first image to obtain a processing result and sending the processing result to the second processing unit;
the image acquisition instruction is sent to the first processing unit by the second processing unit after a natural operating environment is switched to a trusted operating environment, the first processing unit is an MCU, the second processing unit is a CPU, and the safety processing unit is a secure region formed in the second processing unit through hardware and software isolation.
11. The electronic device of claim 10, wherein the camera module comprises a first projector, a second projector and a first image collector respectively connected to the first processing unit;
the first processing unit is further configured to control the first projector to emit infrared light and the second projector to emit a structured light pattern; the first image collector is used for collecting the infrared image and the target speckle image and sending the collected infrared image and the collected target speckle image to the first processing unit.
12. The electronic device of claim 11, wherein the first processing unit is further configured to obtain a reference speckle image, where the reference speckle image has reference depth information, match the reference speckle image with the target speckle image to obtain a matching result, and obtain the first image according to the matching result, where the first image is a parallax image having offset information of corresponding points of the target speckle image and the reference speckle image.
13. The electronic device of claim 11, wherein the first processing unit is further configured to obtain a reference speckle image with reference depth information, match the reference speckle image with the target speckle image to obtain a matching result, and obtain the first image according to the reference depth information and the matching result, wherein the first image is a depth image with depth information.
14. The electronic device according to claim 12 or 13, wherein the security processing unit is further configured to perform face recognition based on the infrared image;
when the first image is a parallax image, the safety processing unit acquires a corresponding depth image according to the first image, and performs living body detection according to the infrared image and the depth image;
and when the first image is a depth image, the safety processing unit carries out living body detection according to the infrared image and the first image.
15. The electronic device according to claim 10, wherein the second processing unit is further configured to receive a data acquisition request from an application program, acquire a security level of the application program, search for a precision level corresponding to the security level, adjust precision of the depth image according to the precision level, and send the adjusted depth image to the application program.
16. The electronic device of claim 15, wherein the second processing unit is further configured to adjust a resolution of the depth image according to the level of precision;
or the second processing unit is further configured to adjust the number of the points in the target speckle image collected by the camera module according to the accuracy level.
17. The electronic device according to claim 10, wherein the second processing unit is further configured to receive a data acquisition request of an application program, acquire a security level of the application program, determine a corresponding data transmission channel according to the security level, and send the first image to the application program through the corresponding data transmission channel.
18. The electronic device according to claim 10, wherein the first processing unit is further configured to receive an image acquisition instruction sent by the second processing unit in the first operating environment according to a verification request of an application program, control the camera module to acquire an infrared image and a target speckle image according to the image acquisition instruction, acquire a depth image according to the target speckle image, and send the acquired infrared image and the acquired depth image to the security processing unit;
the safety processing unit is further used for comparing the acquired infrared image with a stored infrared image, comparing the acquired depth image with a stored depth image to obtain a verification result, and sending the verification result to the second processing unit in the first operating environment;
and the second processing unit sends the verification result to a server corresponding to the application program through a data security transmission channel.
19. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN201810326603.9A 2018-04-12 2018-04-12 Information processing method and device, electronic equipment and computer readable storage medium Active CN108573170B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201810326603.9A CN108573170B (en) 2018-04-12 2018-04-12 Information processing method and device, electronic equipment and computer readable storage medium
EP19784735.3A EP3654243A4 (en) 2018-04-12 2019-03-29 Method and device for image processing, computer-readable storage medium, and electronic device
PCT/CN2019/080428 WO2019196683A1 (en) 2018-04-12 2019-03-29 Method and device for image processing, computer-readable storage medium, and electronic device
US16/740,914 US11256903B2 (en) 2018-04-12 2020-01-13 Image processing method, image processing device, computer readable storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810326603.9A CN108573170B (en) 2018-04-12 2018-04-12 Information processing method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN108573170A CN108573170A (en) 2018-09-25
CN108573170B true CN108573170B (en) 2020-06-12

Family

ID=63574804

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810326603.9A Active CN108573170B (en) 2018-04-12 2018-04-12 Information processing method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN108573170B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019196683A1 (en) * 2018-04-12 2019-10-17 Oppo广东移动通信有限公司 Method and device for image processing, computer-readable storage medium, and electronic device
ES2938471T3 (en) 2018-04-28 2023-04-11 Guangdong Oppo Mobile Telecommunications Corp Ltd Data processing method, electronic device and computer-readable storage medium
CN108711054B (en) * 2018-04-28 2020-02-11 Oppo广东移动通信有限公司 Image processing method, image processing device, computer-readable storage medium and electronic equipment
WO2019205889A1 (en) * 2018-04-28 2019-10-31 Oppo广东移动通信有限公司 Image processing method, apparatus, computer-readable storage medium, and electronic device
CN110191266B (en) * 2018-04-28 2021-08-31 Oppo广东移动通信有限公司 Data processing method and device, electronic equipment and computer readable storage medium
US11403884B2 (en) 2019-01-16 2022-08-02 Shenzhen GOODIX Technology Co., Ltd. Anti-spoofing face ID sensing
CN112215113A (en) * 2020-09-30 2021-01-12 张成林 Face recognition method and device
CN113014782B (en) * 2021-03-19 2022-11-01 展讯通信(上海)有限公司 Image data processing method and device, camera equipment, terminal and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106200891B (en) * 2015-05-08 2019-09-06 阿里巴巴集团控股有限公司 Show the method, apparatus and system of user interface
CN106682522A (en) * 2016-11-29 2017-05-17 大唐微电子技术有限公司 Fingerprint encryption device and implementation method thereof

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105373924A (en) * 2015-10-10 2016-03-02 北京思比科微电子技术股份有限公司 System facing terminal equipment and providing safety payment function
WO2018051336A1 (en) * 2016-09-18 2018-03-22 Yeda Research And Development Co. Ltd. Systems and methods for generating 3d images based on fluorescent illumination
CN107292283A (en) * 2017-07-12 2017-10-24 深圳奥比中光科技有限公司 Mix face identification method
CN107341481A (en) * 2017-07-12 2017-11-10 深圳奥比中光科技有限公司 It is identified using structure light image

Also Published As

Publication number Publication date
CN108573170A (en) 2018-09-25

Similar Documents

Publication Publication Date Title
CN108573170B (en) Information processing method and device, electronic equipment and computer readable storage medium
US11256903B2 (en) Image processing method, image processing device, computer readable storage medium and electronic device
CN108549867B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN108764052B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN108804895B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN108769509B (en) Control method, apparatus, electronic equipment and the storage medium of camera
CN108805024B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN110191266B (en) Data processing method and device, electronic equipment and computer readable storage medium
CN109213610B (en) Data processing method and device, computer readable storage medium and electronic equipment
US11275927B2 (en) Method and device for processing image, computer readable storage medium and electronic device
CN108564032B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN108716982B (en) Optical element detection method, optical element detection device, electronic equipment and storage medium
CN108711054B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN108985255B (en) Data processing method and device, computer readable storage medium and electronic equipment
KR20180134280A (en) Apparatus and method of face recognition verifying liveness based on 3d depth information and ir information
TW201944290A (en) Face recognition method and apparatus, and mobile terminal and storage medium
CN108830141A (en) Image processing method, device, computer readable storage medium and electronic equipment
US11170204B2 (en) Data processing method, electronic device and computer-readable storage medium
CN108764053A (en) Image processing method, device, computer readable storage medium and electronic equipment
CN108833887B (en) Data processing method and device, electronic equipment and computer readable storage medium
CN108846310B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN108924421B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
WO2020015403A1 (en) Method and device for image processing, computer readable storage medium and electronic device
CN108833885B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN108881712B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant