WO2019206020A1 - Image processing method and apparatus, computer-readable storage medium, and electronic device

Image processing method and apparatus, computer-readable storage medium, and electronic device

Info

Publication number
WO2019206020A1
WO2019206020A1 (application PCT/CN2019/083260)
Authority
WO
WIPO (PCT)
Prior art keywords
image
depth
speckle
instruction
application
Prior art date
Application number
PCT/CN2019/083260
Other languages
English (en)
French (fr)
Inventor
周海涛
郭子青
欧锦荣
惠方方
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201810404509.0A external-priority patent/CN108804895B/zh
Priority claimed from CN201810403000.4A external-priority patent/CN108830141A/zh
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority to EP19791784.2A (EP3624006A4)
Publication of WO2019206020A1
Priority to US16/671,856 (US11275927B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/44 Program or device authentication
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/10 Network architectures or network communication protocols for network security for controlling access to devices or network resources
    • H04L63/105 Multiple levels of security
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2221/00 Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/21 Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/2113 Multi-level security, e.g. mandatory access control
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2221/00 Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/21 Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/2151 Time stamp
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Definitions

  • the present application relates to the field of computer technology, and in particular, to an image processing method, an image processing apparatus, a computer readable storage medium, and an electronic device.
  • Because face features are unique, face recognition technology is used more and more widely in intelligent terminals.
  • Many applications on smart terminals use the face for authentication, for example unlocking the terminal with the face or authenticating a payment with the face.
  • The smart terminal can also process images containing faces, for example recognizing the facial features, creating emoticon packs according to facial expressions, or beautifying the face according to its facial features.
  • Embodiments of the present application provide an image processing method, an image processing apparatus, a computer readable storage medium, and an electronic device.
  • The image processing method of the embodiment of the present application includes: determining the security of the application operation corresponding to the image acquisition instruction if an image acquisition instruction is detected; and acquiring an image corresponding to the determination result according to the determination result.
  • The image processing apparatus of the embodiment of the present application includes an overall detection module and an overall acquisition module.
  • The overall detection module is configured to determine the security of the application operation corresponding to the image acquisition instruction if an image acquisition instruction is detected.
  • The overall acquisition module is configured to acquire an image corresponding to the determination result according to the determination result.
  • A computer readable storage medium of an embodiment of the present application stores a computer program.
  • When the computer program is executed by a processor, the image processing method described above is implemented.
  • An electronic device includes a memory and a processor, wherein the memory stores computer readable instructions, and when the instructions are executed by the processor, the processor executes the image processing method described above.
  • FIG. 1 is an application scenario diagram of an image processing method according to some embodiments of the present application.
  • FIGS. 2 to 4 are flowcharts of an image processing method according to some embodiments of the present application.
  • FIG. 5 is a schematic diagram of calculating depth information according to some embodiments of the present application.
  • FIGS. 6 to 11 are flowcharts of an image processing method according to some embodiments of the present application.
  • FIG. 12 is a hardware structural diagram for implementing an image processing method according to some embodiments of the present application.
  • FIG. 13 is a hardware structural diagram of an image processing method according to some embodiments of the present application.
  • FIG. 14 is a schematic diagram of a software architecture for implementing an image processing method according to some embodiments of the present application.
  • FIGS. 15 to 17 are schematic structural diagrams of an image processing apparatus according to some embodiments of the present application.
  • For example, a first client may be referred to as a second client, and similarly, a second client may be referred to as a first client, without departing from the scope of the present application. Both the first client and the second client are clients, but they are not the same client.
  • FIG. 1 is an application scenario diagram of an image processing method in an embodiment.
  • the electronic device 100 is included in the application scenario.
  • the camera module 10 can be installed in the electronic device 100, and a plurality of applications can also be installed.
  • When the electronic device 100 detects an image acquisition instruction, it determines the security of the application operation corresponding to the image acquisition instruction, and acquires an image corresponding to the determination result according to the determination result.
  • the electronic device 100 can be a smart phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
  • Specifically, the electronic device 100 detects an image acquisition instruction and determines whether the application operation corresponding to the instruction is a secure operation. If it is, the camera module 10 is controlled to collect an infrared image and a speckle image 900 according to the instruction. A target image is acquired based on the infrared image and the speckle image 900, and face recognition processing is performed on the target image in a secure operating environment. The face recognition result is then sent to the target application that initiated the image acquisition instruction, where it is used to instruct the target application to perform the application operation.
  • In another scenario, when the electronic device 100 detects an image acquisition instruction, it determines whether the corresponding application operation is a non-secure operation. If it is, the camera module 10 can be controlled to collect the speckle image 900 according to the instruction. A depth image is then calculated from the speckle image 900 and sent to the target application that initiated the instruction, and the target application can perform the application operation based on the depth image.
  • The image processing method includes steps 001 and 002, where step 001 includes step 011 and step 002 includes step 012. In this embodiment, the image processing method includes steps 011 to 014, wherein:
  • Step 011: If an image acquisition instruction is detected, it is determined whether the application operation corresponding to the image acquisition instruction is a secure operation.
  • The camera module 10 can be mounted on the electronic device 100, and images are captured by the camera in the installed camera module 10.
  • Cameras can be classified as a laser camera 112, a visible light camera, and the like according to the type of image they acquire.
  • the laser camera 112 can acquire an image formed by laser irradiation onto the object
  • the visible light camera can acquire an image formed by the visible light being irradiated onto the object.
  • a plurality of cameras can be mounted on the electronic device 100, and the location of the installation is not limited.
  • a camera can be mounted on the front panel of the electronic device 100, and two cameras can be mounted on the back panel.
  • The camera can also be built into the interior of the electronic device 100 and then brought into use by rotating or sliding it out.
  • the front camera and the rear camera can be installed on the electronic device 100, and the front camera and the rear camera can acquire images from different viewing angles.
  • the front camera can acquire images from the front view of the electronic device 100
  • The rear camera can acquire images from the rear-facing view of the electronic device 100.
  • the image acquisition instruction refers to an instruction for triggering an image acquisition operation.
  • an application operation refers to an operation that an application needs to complete. After the user opens the application, different application operations can be completed through the application.
  • the application operation may be a payment operation, a shooting operation, an unlocking operation, a game operation, or the like.
  • Application operations with higher security requirements are regarded as secure operations, and application operations with lower security requirements are regarded as non-secure operations.
  • Step 012: If the application operation corresponding to the image acquisition instruction is a secure operation, the camera module 10 is controlled to collect the infrared image and the speckle image according to the image acquisition instruction.
  • The processing unit of the electronic device 100 can receive an instruction from the upper-layer application.
  • When the processing unit receives the image acquisition instruction, the camera module 10 can be controlled to operate, and the infrared image and the speckle image are collected by the camera.
  • the processing unit is connected to the camera, and the image acquired by the camera can be transmitted to the processing unit, and processed by the processing unit for cropping, brightness adjustment, face detection, face recognition, and the like.
  • the camera module 10 can include, but is not limited to, a laser camera 112, a laser lamp 118, and a floodlight 114.
  • The processing unit controls the laser lamp 118 and the floodlight 114 to work in a time-shared manner: when the laser lamp 118 is turned on, the speckle image 900 is collected by the laser camera 112; when the floodlight 114 is turned on, an infrared image is collected by the laser camera 112.
  • When laser light illuminates an optically rough surface whose average height fluctuation is greater than the order of the wavelength, the randomly scattered sub-waves from the surface elements superimpose on one another, giving the reflected light field a random spatial intensity distribution with a granular structure; this is laser speckle.
  • The resulting laser speckle is highly random, so the laser beams emitted by different laser emitters (i.e., laser lamps 118) produce different laser speckle patterns.
  • When the formed laser speckle illuminates objects of different depths and shapes, the generated speckle image 900 differs.
  • The laser speckle formed by each laser emitter is unique, so the resulting speckle image 900 is also unique.
  • the laser speckle formed by the laser lamp 118 can be irradiated onto the object, and then the speckle image 900 formed by the laser speckle on the object is collected by the laser camera 112.
  • the first processing unit 30 and the second processing unit 22 may be included in the electronic device 100, and both the first processing unit 30 and the second processing unit 22 are operated in a secure operating environment.
  • the secure operating environment may include a first secure environment and a second secure environment, the first processing unit 30 operating in the first secure environment and the second processing unit 22 operating in the second secure environment.
  • the first processing unit 30 and the second processing unit 22 are processing units distributed on different processors and are in different security environments.
  • The first processing unit 30 may be an external MCU (Microcontroller Unit) module or a security processing module in a DSP (Digital Signal Processor), and the second processing unit 22 may be a CPU core running in a TEE (Trusted Execution Environment).
  • The CPU in the electronic device 100 has two operating modes: the TEE (Trusted Execution Environment) and the REE (Rich Execution Environment). Normally, the CPU runs in the REE, but when the electronic device 100 needs to acquire data with a higher security level, for example when it needs to acquire face data for identification and verification, the CPU can be switched to the TEE.
  • When the CPU in the electronic device 100 is single-core, the single core can be switched directly from the REE to the TEE; when the CPU is multi-core, the electronic device 100 switches one core from the REE to the TEE, and the other cores continue to run in the REE.
  • Step 013: Acquire a target image according to the infrared image and the speckle image 900, and perform face recognition processing on the target image in a secure operating environment.
  • the target image may include an infrared image and a depth image.
  • the image acquisition instruction initiated by the target application may be sent to the first processing unit 30.
  • After receiving the image acquisition instruction, the first processing unit 30 may control the camera module 10 to collect the speckle image 900 and the infrared image, and the depth image is calculated from the speckle image 900.
  • the depth image and the infrared image are then transmitted to the second processing unit 22, and the second processing unit 22 performs face recognition processing based on the depth image and the infrared image.
  • the laser lamp 118 can emit a plurality of laser scattered spots, and when the laser scattered spots are irradiated onto objects of different distances, the positions of the spots presented on the image are different.
  • the electronic device 100 may pre-acquire a standard reference image, which is an image formed by laser speckle illumination on a plane of a known distance, so the scattered spots on the reference image are generally evenly distributed. Then, the electronic device 100 establishes a correspondence relationship between each of the scattered spots in the reference image and the reference depth. It can be understood that the scattered spots on the reference image may not be evenly distributed, which is not limited herein.
  • The electronic device 100 controls the laser lamp 118 to emit laser speckle, and after the laser speckle illuminates the object, the speckle image 900 is acquired by the laser camera 112. Each scattered spot in the speckle image 900 is then compared with the scattered spots in the reference image to obtain the positional offset of the scattered spot in the speckle image 900 relative to the corresponding scattered spot in the reference image, and the positional offset together with the reference depth is used to obtain the actual depth information corresponding to the scattered spot.
  • the infrared image acquired by the laser camera 112 corresponds to the speckle image 900, and the speckle image 900 can be used to calculate depth information corresponding to each pixel in the infrared image.
  • the face can be detected and recognized by the infrared image, and the depth information corresponding to the face can be calculated according to the speckle image 900.
  • In the depth calculation, the relative depth is first calculated from the positional offset of the speckle image 900 relative to the scattered spots of the reference image; the relative depth represents the depth from the actually photographed object to the reference plane. The actual depth information of the object is then calculated based on the relative depth and the reference depth.
  • The depth image is used to represent the depth information corresponding to the infrared image; it may be the relative depth from the represented object to the reference plane, or the absolute depth from the object to the camera.
  • the face recognition processing refers to a process of recognizing a face included in an image.
  • Face detection may first be performed on the infrared image to extract the region where the face is located, and the extracted face is then recognized to determine its identity.
  • the depth image corresponds to the infrared image, and the depth information corresponding to the face can be obtained according to the depth image, thereby identifying whether the face is a living body.
  • the identity of the currently collected face can be authenticated.
  • Step 014: Send the face recognition result to the target application that initiated the image acquisition instruction; the face recognition result is used to instruct the target application to perform the application operation.
  • the second processing unit 22 may perform face recognition processing according to the depth image and the infrared image, and then transmit the face recognition result to the target application that initiated the image acquisition instruction. It can be understood that when the target application generates the image acquisition instruction, the target application identifier, the instruction initiation time, the acquired image type, and the like are written in the image acquisition instruction. When detecting the image capturing instruction, the electronic device 100 can acquire the corresponding target application according to the target application identifier contained therein.
  • The face recognition result may include a face matching result and a living body detection result: the face matching result indicates whether the face in the image matches the preset face, and the living body detection result indicates whether the face in the image is a living face.
  • The target application can perform the corresponding application operation based on the face recognition result. For example, unlocking is performed according to the face recognition result: when the face in the captured image matches the preset face and the face is a living face, the locked state of the electronic device 100 is released.
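  • As a minimal illustration of how a target application might act on this two-part result, the following Java sketch gates an unlock operation on both the match result and the liveness result (the class and field names are illustrative assumptions, not taken from the patent):

```java
// Hypothetical container for the two-part face recognition result
// described above: a face matching flag and a living body flag.
public class FaceRecognitionResult {
    private final boolean matchesPresetFace;
    private final boolean isLiveFace;

    public FaceRecognitionResult(boolean matchesPresetFace, boolean isLiveFace) {
        this.matchesPresetFace = matchesPresetFace;
        this.isLiveFace = isLiveFace;
    }

    // Unlocking proceeds only when the captured face matches the preset
    // face AND the face is determined to be a living face.
    public boolean allowsUnlock() {
        return matchesPresetFace && isLiveFace;
    }
}
```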
  • The image processing method provided by the embodiment of FIG. 3 determines, when an image acquisition instruction is detected, whether the application operation corresponding to the instruction is a secure operation. If it is, the infrared image and the speckle image 900 are collected according to the instruction, face recognition processing is performed on the captured images in a secure operating environment, and the face recognition result is sent to the target application. This ensures that images are processed in a highly secure environment when the target application performs secure operations, improving the security of image processing.
  • In some embodiments, step 012 includes steps 0121 and 0122, step 013 includes steps 0131 to 0135, and step 014 includes step 0141, wherein:
  • Step 011: If an image acquisition instruction is detected, it is determined whether the application operation corresponding to the image acquisition instruction is a secure operation.
  • Step 0121: If the application operation corresponding to the image acquisition instruction is a secure operation, the timestamp included in the image acquisition instruction is acquired; the timestamp indicates the time when the image acquisition instruction was initiated.
  • the application may write a timestamp in the image acquisition instruction, where the timestamp is used to record the time when the application initiates the image acquisition instruction.
  • the first processing unit 30 may acquire a time stamp from the image acquisition instruction, and determine the time at which the image acquisition instruction is generated according to the time stamp.
  • the application can read the time recorded by the clock of the electronic device 100 as a time stamp and write the acquired time stamp to the image capture instruction.
  • the system time can be obtained through the System.currentTimeMillis() function.
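  • A minimal sketch of how an application could stamp the instruction with System.currentTimeMillis(), as described above (the ImageCaptureInstruction class and its fields are illustrative assumptions):

```java
// Illustrative instruction object; the field names are assumptions,
// not the patent's actual data structure.
public class ImageCaptureInstruction {
    public final String targetAppId;   // identifier of the target application
    public final long timestampMillis; // time the instruction was initiated

    public ImageCaptureInstruction(String targetAppId) {
        this.targetAppId = targetAppId;
        // Record the initiation time from the device clock.
        this.timestampMillis = System.currentTimeMillis();
    }
}
```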
  • Step 0122: If the interval between the timestamp and the target time is less than the duration threshold, the camera module 10 is controlled to collect the infrared image and the speckle image 900 according to the image acquisition instruction; the target time indicates the time when the image acquisition instruction was detected.
  • the target time refers to the time when the electronic device 100 detects the image capturing instruction, specifically, the time when the first processing unit 30 detects the image capturing instruction.
  • The interval between the timestamp and the target time is the time from when the image acquisition instruction was initiated to when the electronic device 100 detected it. If this interval exceeds the duration threshold, the response to the instruction is considered abnormal; image acquisition can be stopped and an exception message returned to the application. If the interval is less than the duration threshold, the camera module 10 is controlled to collect the infrared image and the speckle image 900.
  • the camera module 10 is composed of a first camera module and a second camera module.
  • the first camera module is used to collect infrared images
  • the second camera module is used to collect the speckle image 900.
  • the first camera module is composed of a floodlight 114 and a laser camera 112.
  • the second camera module is composed of a laser lamp 118 and a laser camera 112.
  • The laser camera 112 of the first camera module and that of the second camera module may be the same laser camera or different laser cameras, which is not limited herein.
  • the first processing unit 30 controls the first camera module and the second camera module to operate.
  • The first camera module and the second camera module can work in parallel or in a time-shared manner, and their order of operation is not limited; for example, the first camera module may first be controlled to collect the infrared image, or the second camera module may first be controlled to collect the speckle image 900.
  • It can be understood that the infrared image and the speckle image 900 must correspond to each other, that is, their consistency must be guaranteed.
  • the time interval between the first time at which the infrared image is acquired and the second time at which the speckle image 900 is acquired is less than a first threshold.
  • The first threshold is generally a relatively small value, and it can also be adjusted according to how the photographed object changes: the faster the subject changes, the smaller the first threshold. If the subject remains stationary for a long time, the first threshold can be set to a larger value. Specifically, the speed of change of the photographed object is acquired, and a corresponding first threshold is obtained according to that speed, as sketched below.
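  • One hedged way to realize this rule is a simple monotone mapping from change speed to threshold; the cut-off values below are made-up placeholders, not taken from the patent:

```java
public final class ThresholdPolicy {
    private ThresholdPolicy() {}

    // Map the measured speed of change of the subject (e.g. inter-frame
    // motion in pixels per millisecond) to the first threshold in
    // milliseconds: the faster the subject changes, the smaller the
    // allowed interval between the infrared and speckle captures.
    public static long firstThresholdMillis(double changeSpeed) {
        if (changeSpeed > 10.0) return 1;  // fast-moving subject
        if (changeSpeed > 1.0)  return 5;  // moderate motion
        return 20;                         // nearly stationary subject
    }
}
```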
  • the user can click the unlock button to initiate an unlock command, and point the front camera to the face for shooting.
  • the mobile phone sends an unlock command to the first processing unit 30, and the first processing unit 30 controls the camera module 10 to operate.
  • the infrared image is acquired by the first camera module, and after the interval of 1 millisecond, the second camera module is controlled to collect the speckle image 900, and the acquired infrared image and the speckle image 900 are used for authentication and unlocking.
  • In one embodiment, the camera module 10 is controlled to collect the infrared image at a first moment and the speckle image at a second moment; the time interval between the first moment and the target time must be less than a second threshold, and the time interval between the second moment and the target time must be less than a third threshold. If the time interval between the first moment and the target time is less than the second threshold, the camera module 10 is controlled to collect the infrared image; if it is greater than the second threshold, a response-timeout prompt may be returned to the application, and the electronic device waits for the application to re-initiate the image acquisition instruction.
  • After the infrared image is collected, the first processing unit 30 can control the camera module 10 to collect the speckle image; the time interval between the second moment of collecting the speckle image 900 and the first moment must be less than the first threshold, and the time interval between the second moment and the target time must be less than the third threshold. If either interval exceeds its threshold, a response-timeout prompt may be returned to the application, and the electronic device waits for the application to re-initiate the image acquisition instruction. It can be understood that the second moment of collecting the speckle image 900 may be later or earlier than the first moment of collecting the infrared image, which is not limited herein.
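  • The three timing constraints above can be checked together; the following sketch is an assumption-level illustration (all variable names are ours), not the patent's implementation:

```java
// Validate the capture timing described above:
// |t1 - target| < secondThreshold, |t2 - target| < thirdThreshold,
// and |t2 - t1| < firstThreshold.
public final class CaptureTimingValidator {
    private CaptureTimingValidator() {}

    public static boolean timingValid(long firstMoment, long secondMoment,
                                      long targetTime, long firstThreshold,
                                      long secondThreshold, long thirdThreshold) {
        boolean infraredOnTime = Math.abs(firstMoment - targetTime) < secondThreshold;
        boolean speckleOnTime  = Math.abs(secondMoment - targetTime) < thirdThreshold;
        // The speckle image may be captured before or after the infrared
        // image, so only the magnitude of the interval matters.
        boolean consistentPair = Math.abs(secondMoment - firstMoment) < firstThreshold;
        return infraredOnTime && speckleOnTime && consistentPair;
    }
}
```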
  • The electronic device 100 may be provided with a separate floodlight controller and laser lamp controller, and the first processing unit 30 is connected to the two controllers through two PWM (Pulse Width Modulation) channels. When the first processing unit 30 needs to turn on the floodlight 114 or the laser lamp 118, it transmits pulse waves to the two controllers through the PWM 32, thereby controlling the time interval between collecting the infrared image and the speckle image 900.
  • Keeping the time interval between the collected infrared image and speckle image 900 below the first threshold ensures the consistency of the two images, avoids large errors between them, and improves the accuracy of image processing.
  • Step 0131: Obtain a reference image, where the reference image is an image with reference depth information obtained by calibration.
  • the electronic device 100 pre-calibrates the laser speckle to obtain a reference image, and stores the reference image in the electronic device 100.
  • the reference image is formed by irradiating a laser speckle to a reference plane, and the reference image is also an image with a plurality of scattered spots, each of which has corresponding reference depth information.
  • the actually collected speckle image 900 can be compared with the reference image, and the actual depth information is calculated according to the offset of the scattered speckle in the actually collected speckle image 900.
  • Figure 5 is a schematic diagram of calculating depth information in one embodiment.
  • the laser lamp 118 can generate a laser speckle, and after the laser speckle is reflected by the object, the formed image is acquired by the laser camera 112.
  • During calibration, the laser speckle emitted by the laser lamp 118 is reflected by the reference plane 910, the reflected light is collected by the laser camera 112, and the reference image is obtained by imaging on the imaging plane 920.
  • The reference depth L from the reference plane 910 to the laser lamp 118 is known.
  • In actual measurement, the laser speckle emitted by the laser lamp 118 is reflected by the object 930, the reflected light is collected by the laser camera 112, and the actual speckle image is obtained by imaging on the imaging plane 920. The actual depth information Dis can then be calculated by formula (1):
  • Dis = (CD × L × f) / (L × AB + CD × f)   (1)
  • where L is the distance between the laser lamp 118 and the reference plane 910, f is the focal length of the lens in the laser camera 112, CD is the distance between the laser lamp 118 and the laser camera 112, and AB is the offset distance between the imaging of the object 930 and the imaging of the reference plane 910. AB may be the product of the pixel offset n and the actual distance p per pixel.
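  • Formula (1) can be evaluated directly in code; the sketch below assumes all quantities are in consistent units and that AB carries the sign convention described above:

```java
public final class DepthFromSpeckle {
    private DepthFromSpeckle() {}

    // Evaluate formula (1): Dis = (CD * L * f) / (L * AB + CD * f).
    //   l  - reference depth L (laser lamp 118 to reference plane 910)
    //   f  - focal length of the lens in the laser camera 112
    //   cd - baseline between the laser lamp 118 and the laser camera 112
    //   ab - signed imaging offset, e.g. pixel offset n times pixel pitch p
    public static double actualDepth(double l, double f, double cd, double ab) {
        return (cd * l * f) / (l * ab + cd * f);
    }
}
```

  • Note that when the offset ab is zero the formula returns the reference depth L, as expected for an object lying on the reference plane.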
  • Step 0132: Compare the reference image with the speckle image 900 to obtain offset information indicating the horizontal offset of the scattered spots in the speckle image 900 relative to the corresponding scattered spots in the reference image.
  • Specifically, each pixel point (x, y) in the speckle image 900 is traversed, and a pixel block of preset size is selected centered on that pixel; for example, a block of 31×31 pixels may be selected. A matching pixel block is then searched for in the reference image, and the horizontal offset between the coordinates of the matching pixel in the reference image and the coordinates of the pixel point (x, y) is calculated, with an offset to the right recorded as positive and an offset to the left recorded as negative. The calculated horizontal offset is then substituted into formula (1) to obtain the depth information of the pixel point (x, y). By calculating the depth information of each pixel in the speckle image 900 in this way, the depth information corresponding to each pixel of the speckle image 900 is obtained.
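  • A minimal sketch of this per-pixel block search follows; sum-of-absolute-differences is an assumed similarity metric (the patent does not name one), and border handling is omitted for brevity:

```java
// For a pixel (x, y) of the speckle image, take a 31x31 block centered on
// it and slide the block along the same row of the reference image to
// find the best match; the returned offset is positive to the right and
// negative to the left, as described above.
public final class SpeckleMatcher {
    private static final int HALF = 15; // half-width of a 31x31 block

    public static int horizontalOffset(int[][] speckle, int[][] reference,
                                       int x, int y, int maxSearch) {
        int bestDx = 0;
        long bestCost = Long.MAX_VALUE;
        for (int dx = -maxSearch; dx <= maxSearch; dx++) {
            long cost = 0; // sum of absolute differences over the block
            for (int i = -HALF; i <= HALF; i++) {
                for (int j = -HALF; j <= HALF; j++) {
                    cost += Math.abs(speckle[y + i][x + j]
                                   - reference[y + i][x + j + dx]);
                }
            }
            if (cost < bestCost) { bestCost = cost; bestDx = dx; }
        }
        return bestDx;
    }
}
```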
  • Step 0133: Calculate a depth image according to the offset information and the reference depth information, and use the depth image and the infrared image as the target image.
  • The depth image may be used to represent the depth information corresponding to the infrared image, and each pixel of the depth image represents one piece of depth information.
  • Each scattered spot in the reference image corresponds to one piece of reference depth information. After the horizontal offset between a scattered spot in the reference image and the corresponding scattered spot in the speckle image 900 is obtained, the relative depth information from the object in the speckle image 900 to the reference plane can be calculated from that offset; then, based on the relative depth information and the reference depth information, the actual depth information from the object to the camera can be calculated, yielding the final depth image.
  • Step 0134: Correct the target image in a secure operating environment to obtain a corrected target image.
  • After the depth image is calculated from the speckle image 900, the infrared image and the depth image may each be corrected to obtain a corrected infrared image and a corrected depth image.
  • the face recognition processing is performed based on the corrected infrared image and the corrected depth image.
  • Correcting the infrared image and the depth image separately means correcting the internal and external parameters in the infrared image and the depth image.
  • For example, if the laser camera 112 is deflected, the acquired infrared image and depth image need to be corrected for the error caused by the deflection parallax to obtain a standard infrared image and depth image.
  • a corrected infrared image can be obtained after the above infrared image is corrected, and the depth image is corrected to obtain a corrected depth image.
  • the infrared parallax image can be calculated according to the infrared image, and then the internal and external parameter correction is performed according to the infrared parallax image to obtain a corrected infrared image.
  • the depth parallax image is calculated according to the depth image, and the internal and external parameter correction is performed according to the depth parallax image to obtain a corrected depth image.
  • Step 0135: Perform face recognition processing according to the corrected target image.
  • the first processing unit 30 may transmit the depth image and the infrared image to the second processing unit 22 for face recognition processing.
  • the second processing unit 22 corrects the depth image and the infrared image before performing face recognition to obtain a corrected depth image and a corrected infrared image, and then performs face recognition processing according to the corrected depth image and the corrected infrared image.
  • the process of face recognition includes a face authentication phase and a living body detection phase, and the face authentication phase refers to a process of recognizing a face identity, and the living body detection phase refers to a process of identifying whether a captured face is a living body.
  • Specifically, the second processing unit 22 may perform face detection on the corrected infrared image to detect whether it contains a face; if it does, the face image contained in the corrected infrared image is extracted and matched against the face image stored in the electronic device 100. If the match succeeds, face authentication succeeds.
  • When matching face images, the face attribute features of the face image may be extracted and matched against the face attribute features of the face image stored in the electronic device 100; if the matching value exceeds the matching threshold, face authentication is considered successful.
  • For example, features such as the deflection angle, brightness information, and facial features of the face in the face image may be extracted as face attribute features; if the degree of matching between the extracted and stored face attribute features exceeds 90%, face authentication is considered successful.
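  • The thresholded decision reads naturally as a small routine; the similarity function below is a placeholder assumption standing in for a real feature comparison:

```java
// Accept the face when the degree of matching between the extracted and
// stored attribute features exceeds 90%, as in the example above.
public final class FaceAuthenticator {
    private static final double MATCH_THRESHOLD = 0.90;

    public static boolean authenticate(double[] extracted, double[] stored) {
        return matchDegree(extracted, stored) > MATCH_THRESHOLD;
    }

    // Placeholder similarity: shrinks toward 0 as the mean absolute
    // difference between feature vectors grows.
    private static double matchDegree(double[] a, double[] b) {
        double diff = 0;
        for (int i = 0; i < a.length; i++) diff += Math.abs(a[i] - b[i]);
        return 1.0 / (1.0 + diff / a.length);
    }
}
```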
  • In the process of authenticating a face, whether the face image matches the preset face image can be verified from the acquired infrared image alone; however, authentication could then succeed even when a forged face, such as a photograph or a sculpture, is photographed. Therefore, living body detection must be performed according to the acquired depth image and infrared image, ensuring that authentication succeeds only when a living face has been collected.
  • the acquired infrared image can represent the detailed information of the face
  • the collected depth image can represent the depth information corresponding to the infrared image
  • the living body detection process can be performed according to the depth image and the infrared image. For example, if the face being photographed is a face in a photo, it can be judged that the collected face is not stereoscopic according to the depth image, and the collected face may be considered to be a non-living face.
  • Specifically, performing living body detection according to the corrected depth image includes: searching the corrected depth image for face depth information corresponding to the face image; if face depth information corresponding to the face image exists in the depth image and conforms to the face stereo rule, the face image is a living face image.
  • The face stereo rule described above is a rule characterizing the three-dimensional depth information of a real face.
  • The second processing unit 22 may further perform artificial intelligence recognition on the corrected infrared image and the corrected depth image using an artificial intelligence model, acquire the living attribute features corresponding to the face image, and determine from those features whether the face image is a living face image.
  • The living attribute features may include the skin texture feature corresponding to the face image, and the direction, density, and width of the texture; if the living attribute features conform to the rules for a living body, the face image is considered biologically active, that is, a living face image. It can be understood that when the second processing unit 22 performs face detection, face authentication, and living body detection, the order of processing can be changed as needed: the face can be authenticated first and then checked for liveness, or checked for liveness first and then authenticated.
  • The method by which the second processing unit 22 performs living body detection according to the infrared image and the depth image may include: acquiring consecutive multi-frame infrared images and depth images; detecting from them whether the face has corresponding depth information; and, if it does, detecting through the consecutive frames whether the face changes, for example whether it blinks, swings, or opens the mouth. If the face has corresponding depth information and changes across frames, it is determined to be a living face.
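  • The multi-frame check can be sketched as follows; both predicates are assumed helpers standing in for real depth and motion detectors:

```java
import java.util.List;

// A face is judged living only if every frame carries face depth
// information AND the face visibly changes (blinks, swings, opens the
// mouth, ...) across consecutive frames, as described above.
public final class LivenessDetector {
    public interface Frame {
        boolean faceHasDepthInformation();        // from the depth image
        boolean faceChangedSince(Frame previous); // from the infrared image
    }

    public static boolean isLiveFace(List<Frame> frames) {
        for (Frame f : frames) {
            if (!f.faceHasDepthInformation()) return false; // flat photo, etc.
        }
        for (int i = 1; i < frames.size(); i++) {
            if (frames.get(i).faceChangedSince(frames.get(i - 1))) return true;
        }
        return false; // no change observed: treat as non-living
    }
}
```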
  • The first processing unit 30 does not perform living body detection if face authentication fails, and does not perform face authentication if living body detection fails.
  • Step 0141: Encrypt the face recognition result, and send the encrypted face recognition result to the target application that initiated the image acquisition instruction.
  • The face recognition result is encrypted, and the specific encryption algorithm is not limited; for example, it may be based on DES (Data Encryption Standard), MD5 (Message-Digest Algorithm 5), or HAVAL.
  • the method for performing the encryption processing on the face recognition result in step 0141 may specifically include:
  • Step 01421: Obtain the network security level of the network environment in which the electronic device 100 is currently located.
  • When an application acquires an image for an operation, it generally needs to be networked. For example, when face-based payment authentication is performed, the face recognition result can be sent to the application and forwarded to the corresponding server to complete the payment operation. Since the application must connect to the network to send the face recognition result to the server, the face recognition result can be encrypted before transmission: the network security level of the network environment in which the electronic device 100 is currently located is detected, and encryption is performed according to that level.
  • Step 01422: Obtain the encryption level according to the network security level, and apply encryption processing corresponding to that encryption level to the face recognition result.
  • The electronic device 100 pre-establishes a correspondence between network security levels and encryption levels, so that the corresponding encryption level can be obtained according to the network security level and the face recognition result encrypted according to that encryption level.
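  • Such a correspondence might be held as a simple lookup table; the network categories and level numbers below are placeholder assumptions, since the patent does not enumerate them:

```java
import java.util.Map;

public final class EncryptionPolicy {
    // Illustrative mapping: a riskier network environment gets a higher
    // (stronger) encryption level.
    private static final Map<String, Integer> LEVEL_BY_NETWORK = Map.of(
            "trusted-wifi", 1,
            "cellular",     2,
            "public-wifi",  3);

    public static int encryptionLevelFor(String networkSecurityLevel) {
        // Default to the strongest level for unknown networks.
        return LEVEL_BY_NETWORK.getOrDefault(networkSecurityLevel, 3);
    }
}
```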
  • the face recognition result may be encrypted according to the acquired reference image.
  • the face recognition result may include one or more of a face authentication result, a living body detection result, an infrared image, a speckle image, and a depth image.
  • The reference image is a speckle image collected by the electronic device 100 when calibrating the camera module 10. Since the reference image is highly unique, reference images acquired by different electronic devices 100 are different, so the reference image itself can be used as an encryption key to encrypt data.
  • the electronic device 100 can store the reference image in a secure environment, which can prevent data leakage.
  • the acquired reference image is composed of a two-dimensional matrix of pixels, and each pixel has a corresponding pixel value.
  • the face recognition result may be encrypted according to all or part of the pixel points of the reference image.
  • the reference image can be directly superimposed with the target image to obtain an encrypted image.
  • the pixel matrix corresponding to the target image may be multiplied by the pixel matrix corresponding to the reference image to obtain an encrypted image.
  • the pixel value corresponding to one or more pixel points in the reference image may be used as an encryption key to perform encryption processing on the target image.
  • the specific encryption algorithm is not limited in this embodiment.
  • the reference image is generated at the electronic device 100, and the electronic device 100 may pre-store the reference image in a secure environment.
  • When the face recognition result needs to be encrypted, the reference image may be read in the secure environment, and the face recognition result is encrypted according to the reference image.
  • The same reference image is stored on the server corresponding to the target application. After receiving the encrypted face recognition result, the server of the target application acquires the reference image and decrypts the encrypted face recognition result according to the acquired reference image.
  • It can be understood that reference images collected by many different electronic devices may be stored in the server of the target application, and the reference image corresponding to each electronic device 100 is different. Therefore, the server can define a reference image identifier for each reference image, store the device identifier of the electronic device 100, and establish a correspondence between the reference image identifier and the device identifier.
  • When the server receives the face recognition result, the received face recognition result also carries the device identifier of the electronic device 100.
  • the server may search for the corresponding reference image identifier according to the device identifier, and find a corresponding reference image according to the reference image identifier, and then decrypt the face recognition result according to the found reference image.
  • Specifically, the method for performing encryption according to the reference image may include: acquiring the pixel matrix corresponding to the reference image; obtaining an encryption key according to the pixel matrix; and encrypting the face recognition result according to the encryption key.
  • the reference image is composed of a two-dimensional pixel matrix, and since the acquired reference image is unique, the pixel matrix corresponding to the reference image is also unique.
  • the pixel matrix itself can be used as an encryption key to encrypt the face recognition result, or the pixel matrix can be converted to obtain an encryption key, and the face recognition result is encrypted by the converted encryption key.
  • The pixel matrix is a two-dimensional matrix composed of a plurality of pixel values, and the position of each pixel value in the matrix can be represented by a two-dimensional coordinate. The corresponding pixel values can be obtained from one or more position coordinates, and the obtained pixel values are combined into an encryption key.
  • the face recognition result may be encrypted according to the encryption key.
  • the encryption algorithm is not limited in this embodiment.
  • the encryption key may be directly superimposed or multiplied with the data, or the encryption key may be inserted into the data as a value to obtain the final encrypted data.
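  • A sketch of deriving a key from chosen pixel coordinates and applying it follows; XOR is used purely as a placeholder cipher, since the patent explicitly leaves the algorithm open:

```java
public final class ReferenceImageCipher {
    // Build a key from the pixel values at chosen (row, col) coordinates
    // of the reference image's pixel matrix.
    public static byte[] deriveKey(int[][] pixelMatrix, int[][] coordinates) {
        byte[] key = new byte[coordinates.length];
        for (int i = 0; i < coordinates.length; i++) {
            key[i] = (byte) pixelMatrix[coordinates[i][0]][coordinates[i][1]];
        }
        return key;
    }

    // Encrypt (or, applied a second time, decrypt) the serialized face
    // recognition result with the derived key.
    public static byte[] apply(byte[] data, byte[] key) {
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++) {
            out[i] = (byte) (data[i] ^ key[i % key.length]);
        }
        return out;
    }
}
```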
  • The electronic device 100 can also adopt different encryption algorithms for different applications. Specifically, the electronic device 100 may pre-establish a correspondence between application identifiers and encryption algorithms. The image acquisition instruction may include the target application identifier of the target application; after the instruction is received, the target application identifier it contains can be acquired, the corresponding encryption algorithm obtained according to that identifier, and the face recognition result encrypted with the obtained algorithm.
  • the accuracy of the infrared image, the speckle image, and the depth image can also be adjusted before the infrared image, the speckle image, and the depth image are transmitted to the target application.
  • Specifically, the image processing method described above may further include: acquiring one or more of the infrared image, the speckle image 900, and the depth image as the image to be sent; acquiring the application level of the target application that initiated the image acquisition instruction; obtaining the corresponding accuracy level according to the application level; adjusting the accuracy of the image to be sent according to the accuracy level; and sending the adjusted image to the target application.
  • the application level can represent the importance level corresponding to the target application.
  • the electronic device 100 may preset an application level of the application, and establish a correspondence between the application level and the accuracy level, and the corresponding accuracy level may be acquired according to the application level.
  • the application can be divided into four application levels: system security application, system non-security application, third-party security application, and third-party non-security application, and the corresponding accuracy level is gradually reduced.
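  • One way to encode this four-tier correspondence is an enum lookup; the concrete accuracy numbers are assumptions for illustration:

```java
public final class AccuracyPolicy {
    public enum AppLevel {
        SYSTEM_SECURITY, SYSTEM_NON_SECURITY,
        THIRD_PARTY_SECURITY, THIRD_PARTY_NON_SECURITY
    }

    // Accuracy decreases down the list of application levels, e.g. fewer
    // scattered spots or lower resolution for less trusted applications.
    public static int accuracyLevelFor(AppLevel level) {
        switch (level) {
            case SYSTEM_SECURITY:      return 4;
            case SYSTEM_NON_SECURITY:  return 3;
            case THIRD_PARTY_SECURITY: return 2;
            default:                   return 1; // third-party non-security
        }
    }
}
```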
  • the accuracy of the image to be transmitted may be expressed as the resolution of the image, or the number of scattered spots contained in the speckle image 900, such that the accuracy of the depth image obtained from the speckle image 900 may also be different.
  • adjusting the image accuracy may include: adjusting a resolution of the image to be transmitted according to the accuracy level; or adjusting the number of scattered spots included in the collected speckle image 900 according to the accuracy level.
  • the number of scattered spots included in the speckle image can be adjusted by software or by hardware. When the software mode is adjusted, the scattered spots in the collected speckle image 900 can be directly detected, and some scattered spots are combined or eliminated, so that the number of scattered spots contained in the adjusted speckle image 900 is reduced.
  • When adjusting by hardware, the number of scattered spots generated by the laser can be adjusted directly; for example, 30,000 laser scattered spots may be generated for high accuracy and 20,000 for low accuracy, so that the accuracy of the corresponding depth image is reduced accordingly.
  • Different diffractive optical elements (DOEs) may be preset in the laser lamp 118, where the number of scattered spots formed by diffraction differs from DOE to DOE.
  • A different DOE is switched in for diffraction according to the accuracy level to generate the speckle image 900, and depth maps of different precision are obtained from the resulting speckle image 900.
  • When the application level of the target application is high, the corresponding accuracy level is also high, and the laser lamp 118 can use a DOE with a large number of scattered spots to emit the laser speckle, obtaining a speckle image with many scattered spots; when the application level is low, the corresponding accuracy level is also low, and the laser lamp 118 can use a DOE with fewer scattered spots, obtaining a speckle image 900 with fewer scattered spots.
  • the process of the face recognition processing in step 013 may further include:
  • Step 0136 Acquire an operating environment in which the electronic device 100 is currently located.
  • Step 0137 If the electronic device 100 is currently in a safe operating environment, the face recognition process is performed according to the target image in the safe operating environment.
  • the operating environment of the electronic device 100 includes a safe operating environment and a normal operating environment.
  • the operating environment of the CPU can be divided into TEE and REE.
  • TEE is a safe operating environment.
  • The REE is a non-secure operating environment. Application operations with high security requirements need to be completed in the secure operating environment, while application operations with lower security requirements can be completed in the non-secure operating environment.
  • Step 0138 If the electronic device 100 is currently in a non-secure operating environment, the electronic device 100 is switched from the non-secure operating environment to the safe operating environment, and the face recognition process is performed according to the target image in the safe operating environment.
  • The electronic device 100 may include the first processing unit 30 and the second processing unit 22; the first processing unit 30 may be an MCU processor, and the second processing unit 22 may be a CPU core. Since the MCU processor is external to the CPU processor, the MCU itself is in a secure environment. Specifically, if the application operation corresponding to the image acquisition instruction is determined to be a secure operation, it may be determined whether the first processing unit 30 is connected to the second processing unit 22 in the secure operating environment. If so, the acquired image is sent directly to the second processing unit 22 for processing; if not, the first processing unit 30 is first connected to the second processing unit 22 in the secure operating environment, and the acquired image is then sent to the second processing unit 22 for processing.
  • With the image processing method provided by the embodiments shown in FIGS. 3, 4, 6 and 7, when an image acquisition instruction is detected and the application operation corresponding to it is determined to be a secure operation, the timestamp included in the instruction is used to determine whether the instruction's response time has timed out; if it has not, the image is acquired according to the image acquisition instruction.
  • the captured image can be subjected to face recognition processing in a safe operating environment.
  • The face recognition result is then encrypted, and the encrypted face recognition result is sent to the target application. In this way, images are processed in a high-security environment during secure operations, and encryption protects the data during transmission, thereby improving the security of image processing.
  • step 001 includes step 021
  • step 002 includes step 022
  • the image processing method includes steps 021 to 024, wherein:
  • Step 021 If an image acquisition instruction is detected, determine whether the application operation corresponding to the image acquisition instruction is an unsafe operation.
  • the camera module 10 can be mounted on the electronic device 100, and images are captured by the camera in the installed camera module 10.
  • the camera can be divided into a laser camera 112, a visible light camera, and the like according to the acquired image.
  • the laser camera 112 can acquire an image formed by laser irradiation onto the object
  • the visible light camera can acquire an image formed by the visible light being irradiated onto the object.
  • a plurality of cameras can be mounted on the electronic device 100, and the location of the installation is not limited.
  • a camera can be mounted on the front panel of the electronic device 100, and two cameras can be mounted on the back panel.
  • the camera can also be mounted inside the electronic device 100 in a built-in manner, and the camera can then be turned on by rotating or sliding it out.
  • the front camera and the rear camera can be installed on the electronic device 100, and the front camera and the rear camera can acquire images from different viewing angles.
  • the front camera can acquire images from the front view of the electronic device 100
  • the rear camera can acquire an image from the back view of the electronic device 100.
  • the image acquisition instruction refers to an instruction for triggering an image acquisition operation.
  • an application operation refers to an operation that an application needs to complete. After the user opens the application, different application operations can be completed through the application.
  • the application operation may be a payment operation, a shooting operation, an unlocking operation, a game operation, or the like.
  • Application operations with higher security requirements are considered safe operations, and application operations with lower security requirements are considered to be non-secure operations.
  • Step 022 If the application operation corresponding to the image acquisition instruction is a non-secure operation, the camera module 10 is controlled to collect the speckle image 900 according to the image acquisition instruction.
  • the processing unit of the electronic device 100 can receive an instruction from the upper application.
  • when the instruction is received, the processing unit can control the camera module 10 to work and collect the speckle image 900 through the camera.
  • the processing unit is connected to the camera, and the image acquired by the camera can be transmitted to the processing unit, and processed by the processing unit for cropping, brightness adjustment, face detection, face recognition, and the like.
  • the camera module 10 can include, but is not limited to, a laser camera 112 and a laser light 118.
  • the processing unit controls the laser light 118 to turn on, and when the laser light 118 is turned on, the speckle image 900 is acquired by the laser camera 112.
  • the camera module 10 may further include a laser camera 112, a laser lamp 118, and a floodlight 114.
  • the processing unit controls the laser lamp 118 and the floodlight 114 to perform time sharing operation.
  • when the laser light 118 is turned on, the speckle image 900 is collected by the laser camera 112; when the floodlight 114 is turned on, an infrared image is acquired by the laser camera 112.
  • when a laser is irradiated onto an optically rough surface whose average height fluctuation is greater than the order of the wavelength, the randomly scattered sub-waves from the surface elements superimpose on each other and cause the reflected light field to have a random spatial light intensity distribution, showing a granular structure; this is the laser speckle.
  • the resulting laser speckles are highly random, so the laser speckles generated by the lasers emitted by different laser emitters (i.e., lasers 118) are different.
  • when the formed laser speckle is irradiated onto objects of different depths and shapes, the generated speckle image 900 is different.
  • the laser speckle formed by the different laser emitters is unique, so that the resulting speckle image 900 is also unique.
  • the laser speckle formed by the laser lamp 118 can be irradiated onto the object, and then the speckle image 900 formed by the laser speckle on the object is collected by the laser camera 112.
  • the first processing unit 30 and the second processing unit 22 may be included in the electronic device 100, and the first processing unit 30 operates in a secure operating environment.
  • the second processing unit 22 can operate in a secure operating environment or in a non-secure operating environment.
  • the first processing unit 30 and the second processing unit 22 are processing units distributed on different processors and are in different security environments.
  • the first processing unit 30 operates in a first secure environment and the second processing unit 22 can operate in a second secure environment.
  • the first processing unit 30 may be an external MCU (Microcontroller Unit) module or a security processing module in a DSP (Digital Signal Processor), and the second processing unit 22 may be a CPU (Central Processing Unit) core.
  • the CPU core can run in the TEE (Trusted Execution Environment) or in the REE (Rich Execution Environment).
  • the CPU can be switched to the TEE for operation.
  • when the CPU in the electronic device 100 is single-core, the single core can be directly switched from REE to TEE; when the CPU in the electronic device 100 is multi-core, the electronic device 100 switches one core from REE to TEE while the other cores still run in the REE.
  • the second processing unit 22 can receive the image acquisition instruction sent by the application, and send the image acquisition instruction to the first processing unit 30, and then the first processing unit 30 controls the camera module to collect the speckle image.
  • step 023 a depth image is calculated according to the speckle image.
  • the laser lamp 118 can emit a plurality of laser scattered spots, and when the laser scattered spots are irradiated onto objects of different distances, the positions of the spots presented on the image are different.
  • the electronic device 100 may pre-acquire a standard reference image, which is an image formed by laser speckle illumination on a plane. Therefore, the scattered spots on the reference image are generally evenly distributed, and then the correspondence between each scattered spot in the reference image and the reference depth is established.
  • the laser lamp 118 is controlled to emit laser speckle, and after the laser speckle is irradiated onto the object, the speckle image 900 is acquired by the laser camera 112.
  • each of the scattered spots in the speckle image 900 is compared with the scattered spots in the reference image to acquire the positional offset of each scattered spot in the speckle image 900 with respect to the corresponding scattered spot in the reference image; the positional offset and the reference depth are then used to obtain the actual depth information corresponding to the scattered spot.
  • specifically, the relative depth is first calculated according to the positional offset of the scattered speckle in the speckle image 900 relative to the reference image; the relative depth may represent the depth information from the actually photographed object to the reference plane. Then, the actual depth information of the object is calculated according to the obtained relative depth and the reference depth.
  • the depth image is used to represent the depth information of the captured object, which may be the relative depth from the object to the reference plane or the absolute depth from the object to the camera.
  • Step 024 sending the depth image to the target application that initiates the image acquisition instruction, and the depth image is used to indicate that the target application performs the application operation.
  • the acquired depth image is sent to the target application, and the target application can acquire depth information of the captured object according to the depth image, and then perform corresponding application operations according to the depth image.
  • the electronic device 100 can simultaneously acquire an RGB (Red Green Blue) image and the speckle image 900; the acquired RGB image and speckle image 900 correspond to each other, and the depth image calculated from the speckle image 900 therefore also corresponds to the above RGB image.
  • after the target application acquires the RGB image and the depth image, the depth value corresponding to each pixel in the RGB image can be obtained from the depth image; the RGB image can then be three-dimensionally modeled according to the obtained depth values, or subjected to AR (Augmented Reality), beautification and other processing.
  • in the image processing method provided by the embodiment shown in FIG. 8, when it is detected that the application operation corresponding to the image acquisition instruction is an unsafe operation, the electronic device 100 controls the camera module 10 to collect the speckle image 900 according to the image acquisition instruction, calculates a depth image according to the speckle image 900, and transmits the depth image to the target application for the corresponding application operation. In this way, the application operations of image acquisition instructions can be classified, and different operations can be performed for different image acquisition instructions. When the acquired image is used for a non-secure operation, the acquired image can be processed directly, which improves the efficiency of image processing.
  • step 022 includes step 0221 and step 0222
  • step 023 includes steps 0231 to 0233, wherein:
  • Step 021 If an image acquisition instruction is detected, determine whether the application operation corresponding to the image acquisition instruction is an unsafe operation.
  • step 0221 if the application operation corresponding to the image acquisition instruction is a non-secure operation, the timestamp included in the image acquisition instruction is acquired, and the timestamp is used to indicate the time when the image acquisition instruction is initiated.
  • the application may write a timestamp in the image acquisition instruction, where the timestamp is used to record the time when the application initiates the image acquisition instruction.
  • the first processing unit 30 may acquire a time stamp from the image acquisition instruction, and determine the time at which the image acquisition instruction is generated according to the time stamp.
  • the application can read the time recorded by the clock of the electronic device 100 as a time stamp and write the acquired time stamp to the image capture instruction.
  • the system time can be obtained through the System.currentTimeMillis() function.
  • step 0222 if the interval between the timestamp and the target time is less than the duration threshold, the camera module 10 is controlled to collect the speckle image 900 according to the image acquisition instruction; the target time is used to indicate the time when the image acquisition instruction is detected.
  • the target time refers to the time when the electronic device 100 detects the image capturing instruction, specifically, the time when the first processing unit 30 detects the image capturing instruction.
  • the interval duration between the time stamp and the target time is specifically the time interval from the time when the image acquisition command is initiated to the time when the electronic device 100 detects the image acquisition command. If the interval duration exceeds the duration threshold, it is considered that the response of the command is abnormal, and the acquisition of the image can be stopped, and an exception message is returned to the application. If the interval duration is less than the duration threshold, the camera is controlled to collect the speckle image 900.
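The timeout check above can be sketched as follows; this is an illustrative Python fragment, and the field name `timestamp_ms` and the 5-second threshold are assumptions rather than values fixed by this disclosure:

```python
import time

DURATION_THRESHOLD_MS = 5000  # assumed threshold; the disclosure does not fix one

def respond_to_capture_instruction(instruction, camera):
    # Target time: the moment the processing unit detects the instruction.
    target_time_ms = int(time.time() * 1000)
    if target_time_ms - instruction["timestamp_ms"] >= DURATION_THRESHOLD_MS:
        return {"error": "response timeout"}  # abnormal response, stop acquisition
    return camera.collect_speckle_image()
```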
  • the camera module 10 is composed of a first camera module and a second camera module.
  • the first camera module is used to collect RGB images
  • the second camera module is used to collect the speckle image 900.
  • the first camera module is configured to collect the RGB image according to the image capturing instruction
  • the second camera module is controlled to collect the speckle image 900, wherein the time interval between the first moment of acquiring the RGB image and the second moment of acquiring the speckle image 900 is less than the first threshold.
  • the acquired RGB image and the speckle image 900 are corresponding, that is, the consistency of the RGB image and the speckle image 900 must be guaranteed.
  • the time interval between the first time at which the RGB image is acquired and the second time at which the speckle image is acquired is less than the first threshold.
  • the first threshold is generally a relatively small value; when the time interval is less than the first threshold, the subject is considered not to have changed, and the acquired RGB image and speckle image 900 correspond to each other. It can be understood that the first threshold can also be adjusted according to the changing rule of the object to be photographed: assuming the subject stays stationary for a long time, the first threshold can be set to a larger value. Specifically, the speed of change of the object to be photographed is acquired, and a corresponding first threshold is obtained according to that speed.
  • the user can click the camera button to initiate a camera instruction, and point the front camera to the face for shooting.
  • the mobile phone sends the camera command to the first processing unit 30, and the first processing unit 30 controls the camera module 10 to operate.
  • the RGB image is acquired by the first camera module, and after the interval of 1 millisecond, the second camera module is controlled to collect the speckle image 900.
  • the depth image is then calculated from the speckle image 900, and the cosmetic processing is performed through the acquired RGB image and depth image.
  • the camera module 10 is controlled to acquire the RGB image at the first moment and to collect the speckle image 900 at the second moment; the time interval between the first moment and the target moment is less than the second threshold, and the time interval between the second moment and the target moment is less than the third threshold. If the time interval between the first moment and the target moment is less than the second threshold, the camera module 10 is controlled to collect the RGB image; if the time interval between the first moment and the target moment is greater than the second threshold, a response timeout prompt may be returned to the application, and the electronic device 100 waits for the application to re-initiate the image acquisition instruction.
  • the first processing unit 30 can control the camera module 10 to collect the speckle image 900, and the time interval between the second time and the first time of collecting the speckle image 900 is less than the first threshold.
  • the time interval between the second moment and the target moment is less than a third threshold. If the time interval between the second moment and the first moment is greater than the first threshold, or the time interval between the second moment and the target moment is greater than the third threshold, a response timeout prompt message may be returned to the application, and the electronic device 100 waits for the application to re-initiate the image acquisition instruction. It can be understood that the second moment of collecting the speckle image 900 may be later than the first moment of acquiring the RGB image, or may be earlier than the first moment of acquiring the RGB image, which is not limited herein.
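These timing constraints can be summarized in a small consistency check (illustrative Python; the three threshold values are placeholders, not values from the disclosure):

```python
FIRST_THRESHOLD_MS = 10    # max gap between RGB and speckle capture (placeholder)
SECOND_THRESHOLD_MS = 100  # max gap between instruction and RGB capture (placeholder)
THIRD_THRESHOLD_MS = 100   # max gap between instruction and speckle capture (placeholder)

def frames_are_consistent(t_target_ms, t_rgb_ms, t_speckle_ms):
    """True if the RGB image and the speckle image 900 can be treated as
    depicting the same, unchanged subject."""
    return (abs(t_speckle_ms - t_rgb_ms) < FIRST_THRESHOLD_MS
            and abs(t_rgb_ms - t_target_ms) < SECOND_THRESHOLD_MS
            and abs(t_speckle_ms - t_target_ms) < THIRD_THRESHOLD_MS)
```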
  • step 0231 a reference image is obtained, and the reference image is an image with reference depth information obtained by calibration.
  • the electronic device 100 pre-calibrates the laser speckle to obtain a reference image, and stores the reference image in the electronic device 100.
  • the reference image is formed by irradiating a laser speckle to a reference plane, and the reference image is also an image with a plurality of scattered spots, each of which has corresponding reference depth information.
  • the actually collected speckle image 900 can be compared with the reference image, and the actual depth information is calculated according to the offset of the scattered speckle in the actually collected speckle image 900.
  • Figure 5 is a schematic diagram of calculating depth information in one embodiment.
  • the laser lamp 118 can generate a laser speckle, and after the laser speckle is reflected by the object, the formed image is acquired by the laser camera 112.
  • the laser speckle emitted by the laser lamp 118 is reflected by the reference plane 910, and then the reflected light is collected by the laser camera 112, and the reference image is obtained by imaging the imaging plane 920.
  • the distance from the reference plane 910 to the laser lamp 118 is the reference depth L, which is known.
  • the laser speckle emitted by the laser lamp 118 is reflected by the object 930, the reflected light is collected by the laser camera 112, and the actual speckle image is obtained by imaging on the imaging plane 920. The actual depth information Dis can then be calculated as:
  • Dis = (CD × L × f) / (L × AB + CD × f)   (1)
  • where L is the distance between the laser lamp 118 and the reference plane 910, f is the focal length of the lens in the laser camera 112, CD is the distance between the laser lamp 118 and the laser camera 112, and AB is the offset distance between the imaging of the object 930 and the imaging of the reference plane 910; AB may be taken as the product of the pixel offset n and the actual distance p of one pixel.
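Formula (1) can be evaluated directly once the calibration constants are known; the following Python helper is a minimal sketch, and the commented example values are illustrative only:

```python
def actual_depth(L, f, CD, n, p):
    """Depth Dis of the object per formula (1).

    L:  reference depth, laser lamp 118 to reference plane 910
    f:  focal length of the lens in the laser camera 112
    CD: baseline between the laser lamp 118 and the laser camera 112
    n:  signed pixel offset of the speckle (rightward positive)
    p:  physical size of one pixel, so AB = n * p
    """
    AB = n * p
    return (CD * L * f) / (L * AB + CD * f)

# Illustrative call (values are made up, all lengths in millimeters):
# dis = actual_depth(L=500.0, f=2.0, CD=30.0, n=-3, p=0.002)
```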
  • Step 0232 comparing the reference image with the speckle image 900 to obtain offset information for indicating a horizontal offset of the scattered speckle in the speckle image 900 relative to the corresponding scattered speckle in the reference image.
  • each pixel point (x, y) in the speckle image 900 is traversed, and a pixel block of preset size is selected centered on that pixel point; for example, a block of 31 pixel × 31 pixel size may be selected. A matching pixel block is then searched for on the reference image, and the horizontal offset between the coordinates of the matched pixel point on the reference image and the coordinates of the pixel point (x, y) is calculated, where an offset to the right is recorded as positive and an offset to the left as negative. The calculated horizontal offset is then substituted into formula (1) to obtain the depth information of the pixel point (x, y). By calculating the depth information of each pixel in the speckle image 900 in this way in turn, the depth information corresponding to each pixel in the speckle image 900 can be obtained.
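A rough illustration of this per-pixel block matching is sketched below in NumPy; the sum-of-absolute-differences cost, the search range, and the assumption that (x, y) lies at least 15 pixels from the image border are choices made for clarity, and a real implementation would be heavily optimized:

```python
import numpy as np

def horizontal_offset(speckle, reference, x, y, half=15, search=32):
    """Signed horizontal offset of the 31x31 block centered at (x, y):
    rightward offsets are positive, leftward offsets negative."""
    block = speckle[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
    best_dx, best_cost = 0, np.inf
    for dx in range(-search, search + 1):
        x0 = x + dx - half
        if x0 < 0 or x0 + 2 * half + 1 > reference.shape[1]:
            continue  # candidate window falls outside the reference image
        cand = reference[y - half:y + half + 1, x0:x0 + 2 * half + 1].astype(np.int32)
        cost = np.abs(block - cand).sum()  # sum of absolute differences
        if cost < best_cost:
            best_cost, best_dx = cost, dx
    return best_dx
```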
  • Step 0233 calculating a depth image according to the offset information and the reference depth information.
  • the depth image may be used to represent depth information corresponding to the infrared image, and each pixel included in the depth image represents one piece of depth information.
  • each of the scattered spots in the reference image corresponds to one piece of reference depth information. After the horizontal offset between the scattered spots in the reference image and the scattered spots in the speckle image 900 is obtained, the relative depth information from the object in the speckle image 900 to the reference plane can be calculated according to the horizontal offset; then, based on the relative depth information and the reference depth information, the actual depth information from the object to the camera can be calculated, that is, the final depth image is obtained.
  • Step 024 sending the depth image to the target application that initiates the image acquisition instruction, and the depth image is used to indicate that the target application performs the application operation.
  • the depth parallax image may be calculated from the depth image.
  • correction can be performed to obtain a corrected depth image, and then an application operation is performed according to the corrected depth image.
  • correcting the depth image means correcting the internal and external parameters in the depth image. For example, the laser camera 112 may produce a deflection, and the acquired depth image then needs correction for the error produced by this deflection parallax to obtain a standard depth image.
  • Correcting the depth image described above results in a corrected depth image.
  • the depth image is corrected to obtain a corrected depth image, and the corrected depth image is transmitted to a target application that initiates an image acquisition instruction.
  • the depth parallax image can be calculated according to the depth image, and the internal and external parameter correction is performed according to the depth parallax image to obtain a corrected depth image.
  • the image processing method shown in FIG. 8 or FIG. 9 may further perform encryption processing on the depth image before transmitting it; that is, the image processing method shown in FIG. 8 or FIG. 9 further includes step 025 and step 026, and step 024 further includes step 0241, wherein:
  • Step 025 Obtain a network security level of the network environment where the electronic device 100 is currently located.
  • the application may need to be networked when acquiring images for operation. For example, when modeling an image in three dimensions, RGB images and depth images can be sent to the application's server for 3D modeling on the server. When the application sends the RGB image and the depth image, it needs to connect to the network, and then send the RGB image and the depth image to the corresponding server through the network. In order to prevent a malicious program from acquiring a depth image for malicious operations, the transmitted depth image may be encrypted before the image is transmitted.
  • Step 026 If the network security level is less than the level threshold, the depth image is encrypted.
  • specifically, the encryption level is obtained according to the network security level, and the depth image is encrypted according to the encryption level.
  • the electronic device 100 pre-establishes a correspondence between network security levels and encryption levels, obtains the corresponding encryption level according to the network security level, and encrypts the depth image according to that encryption level.
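One simple realization of such a correspondence is a lookup table (illustrative Python; the level values and cipher names are assumptions, since the disclosure does not fix them):

```python
# Hypothetical mapping: lower network security level -> stronger encryption.
ENCRYPTION_BY_NETWORK_LEVEL = {0: "aes-256", 1: "aes-128"}

def encryption_level_for(network_security_level, level_threshold=2):
    if network_security_level >= level_threshold:
        return None  # trusted enough: no extra encryption required
    return ENCRYPTION_BY_NETWORK_LEVEL.get(network_security_level, "aes-256")
```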
  • the depth image may be encrypted according to the acquired reference image.
  • the specific algorithm used to encrypt the depth image is not limited; for example, it may be based on DES (Data Encryption Standard), MD5 (Message-Digest Algorithm 5), or HAVAL (Diffie-Hellman, a key exchange algorithm).
  • the reference image is a speckle image that the electronic device 100 collects when calibrating the camera module 10. Since the reference image is highly unique, reference images acquired by different electronic devices 100 are different. Therefore, the reference image itself can be used as an encryption key to encrypt the data.
  • the electronic device 100 can store the reference image in a secure environment, which can prevent data leakage.
  • the acquired reference image is composed of a two-dimensional matrix of pixels, and each pixel has a corresponding pixel value.
  • the depth image may be encrypted according to all or part of the pixel points of the reference image. For example, the reference image can be directly superimposed with the depth image to obtain an encrypted image.
  • the pixel matrix corresponding to the depth image may be multiplied by the pixel matrix corresponding to the reference image to obtain an encrypted image.
  • the pixel value corresponding to one or more pixels in the reference image may be used as an encryption key to encrypt the depth image.
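A toy NumPy sketch of the superposition and multiplication ideas above follows; it is illustrative only, and a production system would use a vetted cipher rather than arithmetic on pixel matrices:

```python
import numpy as np

def encrypt_depth(depth, reference, mode="add"):
    """'Encrypt' a depth image using the reference image as key material."""
    ref = reference.astype(np.uint32)
    if mode == "add":   # superimpose the reference image on the depth image
        return depth.astype(np.uint32) + ref
    if mode == "mul":   # multiply the two pixel matrices element-wise
        return depth.astype(np.uint64) * (ref + 1)  # +1 so zero pixels lose nothing
    raise ValueError("unknown mode")
```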
  • the specific encryption algorithm is not limited in this embodiment.
  • the reference image is generated at the electronic device 100, and the electronic device 100 may pre-store the reference image in a secure environment. When the depth image needs to be encrypted, the reference image may be read in the secure environment, and the depth image is encrypted according to the reference image.
  • the same reference image is also stored on the server corresponding to the target application. When the server of the target application receives the encrypted depth image, it acquires the stored reference image and decrypts the encrypted data according to the obtained reference image.
  • a plurality of reference images acquired by different electronic devices 100 may be stored in the server of the target application, and the reference images corresponding to each electronic device 100 are different. Therefore, a reference image identifier can be defined for each reference image in the server, and the device identifier of the electronic device 100 is stored, and then the correspondence between the reference image identifier and the device identifier is established.
  • when the server receives the depth image, the received depth image simultaneously carries the device identifier of the electronic device 100.
  • the server may search for the corresponding reference image identifier according to the device identifier, and find a corresponding reference image according to the reference image identifier, and then decrypt the depth image according to the found reference image.
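Server-side, this two-step lookup can be organized as a pair of maps keyed by identifiers (a hypothetical Python sketch; the identifier formats and the injected `decrypt_fn` are inventions for illustration):

```python
# Populated at enrollment: device id -> reference image id -> reference image.
ref_id_by_device = {"device-001": "ref-abc"}
reference_images = {"ref-abc": b"...reference image data..."}

def decrypt_for_device(device_id, encrypted_depth, decrypt_fn):
    ref_id = ref_id_by_device[device_id]   # device identifier -> reference image id
    reference = reference_images[ref_id]   # reference image id -> reference image
    return decrypt_fn(encrypted_depth, reference)
```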
  • the method of performing the encryption processing according to the reference image may include: acquiring the pixel matrix corresponding to the reference image; acquiring an encryption key according to the pixel matrix; and encrypting the depth image according to the encryption key.
  • the reference image is composed of a two-dimensional pixel matrix, and since the acquired reference image is unique, the pixel matrix corresponding to the reference image is also unique.
  • the pixel matrix itself can encrypt the depth image as an encryption key, or can perform a certain conversion on the pixel matrix to obtain an encryption key, and then encrypt the depth image by using the converted encryption key.
  • the pixel matrix is a two-dimensional matrix composed of a plurality of pixel values; the position of each pixel value in the pixel matrix can be represented by a two-dimensional coordinate, so that one or more pixel values can be obtained from one or more position coordinates and combined into an encryption key.
  • after the encryption key is obtained, the depth image may be encrypted according to the encryption key.
  • the encryption algorithm is not limited in this embodiment.
  • the encryption key may be directly superimposed or multiplied with the data, or the encryption key may be inserted into the data as a value to obtain the final encrypted data.
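The key-derivation step might look like the following sketch (Python; the choice of coordinates, the SHA-256 digest, and the XOR application are assumptions made for illustration, not the disclosure's algorithm):

```python
import hashlib

def key_from_pixels(reference, coords):
    """Combine the pixel values at the given (row, col) coordinates of a
    2-D reference image array into a fixed-length encryption key."""
    values = bytes(int(reference[r, c]) & 0xFF for r, c in coords)
    return hashlib.sha256(values).digest()

def apply_key(data, key):
    # Repeat the key across the data and XOR (illustrative, not production crypto).
    keystream = (key * (len(data) // len(key) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, keystream))
```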
  • the electronic device 100 may also adopt different encryption algorithms for different applications. Specifically, the electronic device 100 may pre-establish a correspondence between an application identifier of the application and an encryption algorithm, where the image collection instruction may include a target application identifier of the target application. After receiving the image collection instruction, the target application identifier included in the image collection instruction may be acquired, and the corresponding encryption algorithm is obtained according to the target application identifier, and the face recognition result is encrypted according to the obtained encryption algorithm.
  • the accuracy of the depth image can also be adjusted before the depth image is sent to the target application.
  • the image processing method shown in FIG. 8 or FIG. 9 may further include: acquiring an application level of the target application that initiates the image acquisition instruction, acquiring a corresponding accuracy level according to the application level; and adjusting the accuracy of the depth image according to the accuracy level, The adjusted depth image is sent to the application.
  • the application level can represent the importance level of the target application; generally, the higher the application level of the target application, the higher the accuracy of the image sent to it.
  • the electronic device 100 may preset an application level of the application, and establish a correspondence between the application level and the accuracy level, and the corresponding accuracy level may be acquired according to the application level.
  • the application can be divided into four application levels: system security application, system non-security application, third-party security application, and third-party non-security application, and the corresponding accuracy level is gradually reduced.
  • the accuracy of the image can be expressed as the resolution of the depth image, or as the number of scattered spots contained in the speckle image 900; the accuracy of the depth image obtained from the speckle image 900 varies accordingly.
  • adjusting the image accuracy may include: adjusting a resolution of the image to be transmitted according to the accuracy level; or adjusting the number of scattered spots included in the collected speckle image 900 according to the accuracy level.
  • the number of scattered spots included in the speckle image 900 may be adjusted by software or by hardware. When adjusting by software, the scattered spots in the collected speckle image 900 can be directly detected, and some scattered spots are merged or eliminated, so that the number of scattered spots contained in the adjusted speckle image 900 is reduced.
  • the number of laser scattered spots generated by the laser lamp 118 can be adjusted. For example, when the accuracy is high, the number of generated laser scatter spots is 30,000; when the accuracy is low, the number of generated laser scatter spots is 20,000. Thus, the accuracy of the corresponding depth image is correspondingly reduced.
  • different diffractive optical elements (DOEs) may be preset in the laser lamp 118, where the number of scattered spots formed by diffraction differs between DOEs. Different DOEs are switched according to the accuracy level to generate the speckle by diffraction, and depth images of different precision are then obtained from the resulting speckle image 900.
  • if the application level of the application is high, the corresponding accuracy level is also relatively high, and the laser lamp 118 can control the DOE with a large number of scattered spots to emit the laser speckle, thereby obtaining the speckle image 900 with more scattered spots;
  • if the application level of the application is low, the corresponding accuracy level is also relatively low, and the laser lamp 118 can control the DOE with a small number of scattered spots to emit the laser speckle, thereby obtaining a speckle image 900 with fewer scattered spots.
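The application-level-to-accuracy correspondence described above could be stored as a simple table (illustrative Python; the resolutions are invented, while the 30,000/20,000 spot counts echo the example given earlier):

```python
# application level -> (depth image resolution, laser scattered-spot count)
ACCURACY_BY_APP_LEVEL = {
    "system_secure":          ((640, 480), 30_000),
    "system_non_secure":      ((480, 360), 28_000),
    "third_party_secure":     ((320, 240), 24_000),
    "third_party_non_secure": ((240, 180), 20_000),
}

def accuracy_for(app_level):
    # Unknown applications fall back to the lowest accuracy.
    return ACCURACY_BY_APP_LEVEL.get(app_level, ((240, 180), 20_000))
```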
  • Step 0241 Send the encrypted depth image to the target application that initiates the image acquisition instruction.
  • the process of transmitting the depth image in step 024 may further include:
  • Step 0242 Acquire an operating environment in which the electronic device 100 is currently located.
  • Step 0243 If the electronic device 100 is currently in a non-secure operating environment, the depth image is sent to the target application that initiates the image acquisition instruction in the non-secure operating environment.
  • the operating environment of the electronic device 100 includes a secure operating environment and a non-secure operating environment.
  • the operating environment of the CPU can be divided into TEE and REE.
  • TEE is a safe operating environment.
  • REE is a non-secure operating environment.
  • when the application operation corresponding to the image acquisition instruction is a non-secure operation, the collected depth image may be sent to the target application through the non-secure operating environment.
  • Step 0244 If the electronic device 100 is currently in a safe operating environment, the electronic device 100 is switched from the safe operating environment to the non-secure operating environment, and the depth image is sent to the target application that initiates the image capturing instruction in the non-secure operating environment.
  • the first processing unit 30 may be included in the electronic device 100, and the first processing unit 30 may be an MCU processor. Since the MCU processor is external to the CPU processor, the MCU itself is in a safe operating environment. Specifically, the first processing unit 30 can connect to either a secure transmission channel or a non-secure transmission channel. When an image acquisition instruction is detected and the first processing unit 30 determines that the application operation corresponding to the instruction is a non-secure operation, it connects to the non-secure transmission channel and transmits the depth image over that channel. The secure transmission channel is in the safe operating environment, where the security of image processing is high; the non-secure transmission channel is in the non-secure operating environment, where the security of image processing is lower.
  • whether the response time of the instruction has timed out may be determined according to the timestamp included in the image acquisition instruction. If the response time of the instruction has not timed out, the camera module 10 is controlled to collect the speckle image 900 according to the image acquisition instruction; the depth image is then calculated according to the speckle image 900 and sent to the target application for the corresponding application operation. In this way, the application operations of image acquisition instructions can be classified, and different operations can be performed for different image acquisition instructions. When the acquired image is used for a non-secure operation, the acquired image can be processed directly, which improves the efficiency of image processing.
  • although the steps in the flowcharts of FIGS. 2-4 and 6-11 are displayed sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated. Except as explicitly stated herein, the execution of these steps is not strictly limited in order, and the steps may be performed in other orders. Moreover, at least some of the steps in FIGS. 2-4 and 6-11 may include multiple sub-steps or multiple stages, which are not necessarily performed at the same time but may be executed at different moments; the execution order of these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with at least a portion of the sub-steps or stages of other steps.
  • FIG. 12 is a hardware structure diagram for implementing the image processing method of any of the above embodiments, in one embodiment.
  • the electronic device 100 may include a camera module 10, a central processing unit (CPU) 20, and a first processing unit 30.
  • the camera module 10 includes a laser camera 112, a floodlight 114, an RGB (Red/Green/Blue) camera 116, and a laser light 118.
  • the first processing unit 30 includes a PWM (Pulse Width Modulation) module 32, an SPI/I2C (Serial Peripheral Interface/Inter-Integrated Circuit) module 34, a RAM (Random Access Memory) module 36, and a Depth Engine module 38.
  • the second processing unit 22 may be a CPU core under a TEE (Trusted execution environment), and the first processing unit 30 is an MCU (Microcontroller Unit) processor.
  • the central processing unit 20 can be in a multi-core operation mode, and the CPU core in the central processing unit 20 can be operated under a TEE or REE (Rich Execution Environment).
  • TEE and REE are operating modes of the ARM module (Advanced RISC Machines).
  • when the central processing unit 20 receives the image acquisition instruction initiated by the target application, the CPU core running under the TEE, that is, the second processing unit 22, sends the image acquisition instruction through the SECURE SPI/I2C bus to the SPI/I2C module 34 of the first processing unit 30.
  • after receiving the image acquisition instruction, the first processing unit 30 determines the security of the application operation corresponding to the image acquisition instruction and, according to the determination result, controls the camera module 10 to collect the corresponding image.
  • if the first processing unit 30 determines, after receiving the image acquisition instruction, that the application operation corresponding to the instruction is a safe operation, it transmits pulse waves through the PWM module 32 to control the floodlight 114 in the camera module 10 to turn on to collect an infrared image, and to control the laser light 118 in the camera module 10 to turn on to collect the speckle image 900 (shown in FIG. 1).
  • the camera module 10 can transmit the acquired infrared image and the speckle image 900 to the Depth Engine module 38 in the first processing unit 30.
  • the Depth Engine module 38 can calculate the infrared parallax image according to the infrared image, and calculate the depth image according to the speckle image 900.
  • the infrared parallax image and the depth parallax image are then transmitted to the second processing unit 22 operating under the TEE.
  • the second processing unit 22 performs correction according to the infrared parallax image to obtain a corrected infrared image, and performs correction according to the depth parallax image to obtain a corrected depth image.
  • face recognition is performed according to the corrected infrared image, that is, detecting whether there is a face in the corrected infrared image and whether the detected face matches the stored face; if the face recognition passes, living body detection is performed according to the corrected infrared image and the corrected depth image to determine whether the detected face is a living face.
  • in other embodiments, the living body detection may be performed first and then the face recognition, or the face recognition and the living body detection may be performed simultaneously.
  • the second processing unit 22 may transmit one or more of the corrected infrared image, the corrected depth image, and the face recognition result to the target application.
  • if it is determined that the application operation corresponding to the image acquisition instruction is a non-secure operation, the first processing unit 30 transmits pulse waves through the PWM module 32 to control the laser light 118 in the camera module 10 to turn on.
  • the camera module 10 can transmit the collected speckle image 900 to the Depth Engine module 38 in the first processing unit 30.
  • the Depth Engine module 38 can calculate the depth image according to the speckle image 900 and obtain a depth disparity image according to the depth image. Then, the correction is performed according to the depth parallax image in a non-secure operation environment, a corrected depth image is obtained, and the corrected depth image is transmitted to the target application.
  • FIG. 13 is a hardware configuration diagram for realizing the image processing method shown in FIG. 2, FIG. 3, FIG. 6, or FIG. 7 in another embodiment.
  • the hardware structure includes a first processing unit 41, a camera module 10, and a second processing unit 42.
  • the camera module 10 includes a laser camera 112, a floodlight 114, an RGB camera 116, and a laser lamp 118.
  • the central processing unit 40 may include a CPU core under the TEE and a CPU core under the REE.
  • the first processing unit 41 is a DSP processing module opened up in the central processing unit 40, and the second processing unit 42 is the CPU core under the TEE; the second processing unit 42 and the first processing unit 41 can be connected through a secure buffer, which ensures security during image transmission.
  • the central processing unit 40 needs to switch the processor core to the TEE when processing a highly secure operation behavior, and the less secure operation behavior can be performed under the REE.
  • the image processing instruction sent by the upper layer application may be received by the second processing unit 42.
  • pulse waves may then be transmitted through the PWM module to control the floodlight 114 in the camera module 10 to turn on to capture an infrared image, and then to control the laser light 118 in the camera module 10 to turn on to acquire the speckle image 900 (shown in FIG. 1).
  • the camera module 10 can transmit the acquired infrared image and the speckle image 900 to the first processing unit 41.
  • the first processing unit 41 can calculate the depth image according to the speckle image 900, calculate the depth parallax image according to the depth image, and calculate an infrared parallax image from the infrared image.
  • the infrared parallax image and the depth parallax image are then transmitted to the second processing unit 42.
  • the second processing unit 42 may perform correction according to the infrared parallax image to obtain a corrected infrared image, and perform correction according to the depth parallax image to obtain a corrected depth image.
  • the second processing unit 42 performs face authentication according to the corrected infrared image, detecting whether there is a human face in the corrected infrared image and whether the detected face matches the stored face; if the face authentication passes, living body detection is performed according to the corrected infrared image and the corrected depth image to determine whether the face is a living face.
  • the processing result is sent to the target application, and the target application performs an application operation such as unlocking and payment according to the detection result.
  • FIG. 14 is a schematic diagram of a software architecture for implementing the image processing method of any of the above embodiments, in one embodiment.
  • the software architecture includes an application layer 910, an operating system 920, and a secure operating environment 930.
  • the module in the secure operating environment 930 includes a first processing unit 931, a camera module 932, a second processing unit 933, and an encryption module 934.
  • the operating system 920 includes a security management module 921, a face management module 922, a camera driver 923, and a camera frame 924;
  • the application layer 910 includes an application 911.
  • the application 911 can initiate an image acquisition instruction and send the image acquisition instruction to the first processing unit 931 for processing.
  • when performing operations such as payment, unlocking, beautification, and augmented reality (AR) that rely on collected faces, the application initiates an image acquisition instruction for collecting face images. It can be understood that the image acquisition instruction initiated by the application 911 can first be sent to the second processing unit 933 and then forwarded by the second processing unit 933 to the first processing unit 931.
  • after receiving the image acquisition instruction, if the first processing unit 931 determines that the application operation corresponding to the instruction is a security operation (such as a payment or unlocking operation), it controls the camera module 932 to collect the infrared image and the speckle image 900 (shown in FIG. 1) according to the instruction; the infrared image and the speckle image 900 acquired by the camera module 932 are transmitted to the first processing unit 931. The first processing unit 931 calculates a depth image including depth information from the speckle image 900, calculates a depth parallax image from the depth image, and calculates an infrared parallax image from the infrared image.
  • the depth disparity image and the infrared parallax image are then transmitted to the second processing unit 933 through the secure transmission channel.
  • the second processing unit 933 performs correction according to the infrared parallax image to obtain a corrected infrared image, and performs correction according to the depth parallax image to obtain a corrected depth image.
  • face authentication is performed according to the corrected infrared image, detecting whether there is a face in the corrected infrared image and whether the detected face matches the stored face; if the face authentication passes, living body detection is performed according to the corrected infrared image and the corrected depth image to determine whether the face is a living face.
  • the face recognition result obtained by the second processing unit 933 can be sent to the encryption module 934, and after being encrypted by the encryption module 934, the encrypted face recognition result is sent to the security management module 921.
  • each application 911 has a corresponding security management module 921.
  • the security management module 921 decrypts the encrypted face recognition result, and sends the face recognition result obtained after decryption to the corresponding face management module 922. The face management module 922 sends the face recognition result to the upper-layer application 911, and the application 911 performs corresponding operations according to the face recognition result.
  • the first processing unit 931 can control the camera module 932 to collect the speckle image 900, calculate a depth image according to the speckle image 900, and then obtain a depth parallax image from the depth image.
  • the first processing unit 931 sends the depth parallax image to the camera driver 923 through the non-secure transmission channel, and the camera driver 923 performs correction processing according to the depth parallax image to obtain a corrected depth image, and then sends the corrected depth image to the camera frame 924, and then The camera frame 924 is sent to the face management module 922 or the application 911.
  • Figure 15 is a block diagram showing the structure of an image processing apparatus 50 in an embodiment.
  • the image processing apparatus 50 includes a detection total module 501 and an acquisition total module 502.
  • the detecting total module 501 is configured to detect an image capturing instruction, and then determine the security of the application operation corresponding to the image capturing instruction.
  • the collection total module 502 is configured to collect an image corresponding to the determination result according to the determination result.
  • the detection total module 501 includes an instruction detection module 511
  • the acquisition total module 502 includes an image acquisition module 512
  • the image processing device 50 further includes a face recognition module 513 and a result transmission module 514.
  • the instruction detecting module 511 is configured to determine, if the image capturing instruction is detected, whether the application operation corresponding to the image capturing instruction is a security operation.
  • the image acquisition module 512 is configured to control the camera module 10 to collect the infrared image and the speckle image 900 according to the image acquisition instruction if the application operation corresponding to the image acquisition instruction is a safe operation.
  • the face recognition module 513 is configured to acquire a target image according to the infrared image and the speckle image 900, and perform face recognition processing according to the target image in a secure operating environment.
  • the result sending module 514 is configured to send the face recognition result to the target application that initiates the image capturing instruction, and the face recognition result is used to indicate that the target application performs the application operation.
  • in the image processing apparatus 50 described above, when an image acquisition instruction is detected, it is determined whether the application operation corresponding to the image acquisition instruction is a security operation. If the application operation corresponding to the image acquisition instruction is a safe operation, the infrared image and the speckle image 900 are acquired according to the image acquisition instruction; the captured images are then subjected to face recognition processing in a safe operating environment, and the face recognition result is sent to the target application. This ensures that the target application processes the image in a highly secure environment when performing security operations, thereby improving the security of image processing.
  • the image acquisition module 512 is further configured to acquire a timestamp included in the image acquisition instruction.
  • the timestamp is used to indicate the time at which the image acquisition instruction was initiated; if the interval between the timestamp and the target time is less than the duration threshold, the camera module 10 is controlled to collect the infrared image and the speckle image 900 according to the image acquisition instruction.
  • the target time is used to indicate the time at which the image acquisition instruction is detected.
  • the face recognition module 513 is further configured to: acquire a reference image, wherein the reference image is a calibrated image with reference depth information; and compare the reference image with the speckle image 900 to obtain offset information,
  • the offset information is used to represent the horizontal offset of the scattered speckles in the speckle image 900 with respect to the corresponding scattered speckles in the reference image;
  • the depth image is calculated from the offset information and the reference depth information, and the depth image and the infrared image are taken as the target image.
  • the face recognition module 513 is further configured to: acquire an operating environment in which the electronic device 100 is currently located; and if the electronic device 100 is currently in a safe operating environment, perform face recognition according to the target image in a safe operating environment. If the electronic device 100 is currently in a non-secure operating environment, the electronic device 100 is switched from the non-secure operating environment to the secure operating environment, and the face recognition processing is performed according to the target image in the safe operating environment.
  • the face recognition module 513 is further configured to: correct the target image in a safe operating environment to obtain a corrected target image; and perform face recognition processing according to the corrected target image.
  • the result sending module 514 is further configured to perform an encryption process on the face recognition result, and send the encrypted face recognition result to the target application that initiates the image collection instruction.
  • the result sending module 514 is further configured to: acquire the network security level of the network environment where the electronic device 100 is currently located; obtain an encryption level according to the network security level, and perform encryption processing corresponding to the encryption level on the face recognition result.
  • the detection total module 501 includes an instruction detection module 521
  • the acquisition total module 502 includes a speckle image acquisition module 522.
  • the image processing device 50 further includes a depth image acquisition module 523 and an image transmission module 524.
  • the instruction detecting module 521 is configured to determine, if the image capturing instruction is detected, whether the application operation corresponding to the image capturing instruction is an unsecure operation.
  • the speckle image acquisition module 522 is configured to control the camera module 10 to collect the speckle image according to the image acquisition instruction if the application operation corresponding to the image acquisition instruction is an unsafe operation.
  • the depth image acquisition module 523 is configured to calculate a depth image from the speckle image.
  • the image sending module 524 is configured to send the depth image to the target application that initiates the image capturing instruction, and the depth image is used to instruct the target application to perform the application operation.
  • in the image processing apparatus 50 described above, when it is detected that the application operation corresponding to the image acquisition instruction is an unsafe operation, the electronic device 100 controls the camera module 10 to collect the speckle image 900 according to the image acquisition instruction, calculates a depth image according to the speckle image 900, and transmits the depth image to the target application for the corresponding application operation. In this way, the application operations of image acquisition instructions can be classified, and different operations can be performed for different image acquisition instructions. When the acquired image is used for a non-secure operation, the acquired image can be processed directly, which improves the efficiency of image processing.
  • the speckle image acquisition module 522 is further configured to: acquire a timestamp included in the image acquisition instruction, where the timestamp is used to indicate the time when the image acquisition instruction is initiated; if the interval between the timestamp and the target time is less than the duration threshold, control the camera module 10 to collect the speckle image 900 according to the image acquisition instruction, where the target time is used to indicate the time at which the image acquisition instruction is detected.
  • the depth image acquisition module 523 is further configured to: acquire a reference image, the reference image is a calibrated image with reference depth information; and compare the reference image with the speckle image 900 to obtain offset information, offset The information is used to represent the horizontal offset of the scattered spots in the speckle image 900 relative to the corresponding scattered spots in the reference image; the depth image is calculated from the offset information and the reference depth information.
  • the image transmitting module 524 is further configured to correct the depth image to obtain a corrected depth image, and send the corrected depth image to the target application that initiates the image acquisition instruction.
  • the image sending module 524 is further configured to: acquire the current operating environment of the electronic device 100; if the electronic device 100 is currently in a non-secure operating environment, send the depth image in the non-secure operating environment to the target application that initiated the image acquisition instruction; if the electronic device 100 is currently in a safe operating environment, switch the electronic device 100 from the safe operating environment to the non-secure operating environment, and send the depth image in the non-secure operating environment to the target application that initiated the image acquisition instruction.
  • the image sending module 524 is further configured to: acquire the network security level of the network environment where the electronic device 100 is currently located; if the network security level is less than the level threshold, encrypt the depth image; and send the encrypted depth image to the target application that initiated the image acquisition instruction.
  • the image sending module 524 is further configured to obtain an encryption level according to a network security level, and perform encryption processing on the depth image according to the encryption level.
  • the division of the image processing apparatus 50 into the modules described above is for illustration only. In other embodiments, the image processing apparatus may be divided into different modules as needed to complete all or part of the functions of the image processing apparatus.
  • the embodiment of the present application also provides a computer readable storage medium.
  • a computer program is stored on the computer readable storage medium.
  • the image processing method described in any of the above embodiments is implemented when the computer program is executed by a processor.
  • the embodiment of the present application further provides an electronic device (which may be the electronic device 100 described in FIG. 1).
  • the electronic device includes a memory and a processor, the memory storing computer readable instructions. When the instructions are executed by the processor, the processor is caused to perform the image processing method described in any of the above embodiments.
  • the embodiment of the present application further provides a computer program product comprising instructions that, when run on a computer, cause the computer to execute the image processing method provided by any one of the above embodiments.
  • Non-volatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM), which acts as an external cache.
  • RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), memory bus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Optics & Photonics (AREA)
  • Computing Systems (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

An image processing method, an image processing apparatus (50), a computer readable storage medium, and an electronic device. The image processing method includes: (001) if an image acquisition instruction is detected, (002) determining the security of the application operation corresponding to the image acquisition instruction; and acquiring an image corresponding to the determination result according to the determination result.

Description

Image processing method and apparatus, computer readable storage medium, and electronic device
Priority Information
This application claims priority to and the benefit of Chinese patent applications No. 201810404509.0 and No. 201810403000.4, filed with the China National Intellectual Property Administration on April 28, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computer technology, and in particular to an image processing method, an image processing apparatus, a computer readable storage medium, and an electronic device.
Background
Because a human face is unique, face recognition technology is used ever more widely in smart terminals. Many applications on a smart terminal authenticate via the face, for example unlocking the terminal or verifying a payment through the face. A smart terminal can also process images containing faces, for example recognizing facial features, generating emoticon packs from facial expressions, or applying beautification based on facial features.
Summary
Embodiments of this application provide an image processing method, an image processing apparatus, a computer readable storage medium, and an electronic device.
The image processing method of the embodiments of this application includes: if an image acquisition instruction is detected, determining the security of the application operation corresponding to the image acquisition instruction; and acquiring an image corresponding to the determination result according to the determination result.
The image processing apparatus of the embodiments of this application includes a detection module and an acquisition module. The detection module is configured to determine, if an image acquisition instruction is detected, the security of the application operation corresponding to the image acquisition instruction. The acquisition module is configured to acquire an image corresponding to the determination result according to the determination result.
The computer readable storage medium of the embodiments of this application stores a computer program which, when executed by a processor, implements the image processing method described above.
The electronic device of the embodiments of this application includes a memory and a processor. The memory stores computer readable instructions which, when executed by the processor, cause the processor to perform the image processing method described above.
Additional aspects and advantages of embodiments of this application will be set forth in part in the following description, and in part will become apparent from the following description or be learned through practice of this application.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application or in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of this application; other drawings can be obtained from them by those of ordinary skill in the art without creative effort.
FIG. 1 is an application scenario diagram of an image processing method according to some embodiments of this application;
FIG. 2 to FIG. 4 are flowcharts of image processing methods according to some embodiments of this application;
FIG. 5 is a schematic diagram of the principle of calculating depth information according to some embodiments of this application;
FIG. 6 to FIG. 11 are flowcharts of image processing methods according to some embodiments of this application;
FIG. 12 is a hardware structure diagram for implementing an image processing method according to some embodiments of this application;
FIG. 13 is a hardware structure diagram for implementing an image processing method according to some embodiments of this application;
FIG. 14 is a schematic diagram of a software architecture for implementing an image processing method according to some embodiments of this application;
FIG. 15 to FIG. 17 are schematic structural diagrams of image processing apparatuses according to some embodiments of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, this application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it.
It can be understood that the terms "first", "second", and the like used in this application may be used herein to describe various elements, but these elements are not limited by the terms; the terms are only used to distinguish one element from another. For example, without departing from the scope of this application, a first client may be called a second client, and similarly a second client may be called a first client. The first client and the second client are both clients, but they are not the same client.
FIG. 1 is an application scenario diagram of the image processing method in one embodiment. As shown in FIG. 1, the scenario includes an electronic device 100. A camera module 10 and a number of applications may be installed on the electronic device 100. When the electronic device 100 detects an image acquisition instruction, it determines the security of the application operation corresponding to the instruction and acquires an image corresponding to the determination result. The electronic device 100 may be a smartphone, a tablet computer, a personal digital assistant, a wearable device, or the like.
In one example, when the electronic device 100 detects an image acquisition instruction, it determines whether the application operation corresponding to the instruction is a secure operation. If it is, the electronic device controls the camera module 10 to collect an infrared image and a speckle image 900 according to the instruction, obtains a target image from the infrared image and the speckle image 900, performs face recognition processing on the target image in a secure operating environment, and sends the face recognition result to the target application that initiated the instruction; the result is used to instruct the target application to perform the application operation.
In another example, when the electronic device 100 detects an image acquisition instruction, it determines whether the corresponding application operation is a non-secure operation. If it is, the electronic device can control the camera module 10 to collect a speckle image 900 according to the instruction, calculate a depth image from the speckle image 900, and send the depth image to the target application that initiated the instruction; the target application can then perform the application operation based on the depth image.
Referring to FIG. 2, this application provides an image processing method. The image processing method includes:
001: if an image acquisition instruction is detected, determining the security of the application operation corresponding to the image acquisition instruction; and
002: acquiring an image corresponding to the determination result according to the determination result.
Referring to FIG. 1, FIG. 3, and FIG. 12, in one example, step 001 includes step 011 and step 002 includes step 012. The image processing method includes steps 011 to 014, as follows:
Step 011: if an image acquisition instruction is detected, determine whether the application operation corresponding to the image acquisition instruction is a secure operation.
In one embodiment, a camera module 10 may be installed on the electronic device 100, and images are obtained through the cameras in the installed camera module 10. The cameras may be classified, according to the images they obtain, into types such as a laser camera 112 and a visible light camera: the laser camera 112 captures the image formed when laser light illuminates an object, and the visible light camera captures the image formed when visible light illuminates an object. Several cameras may be installed on the electronic device 100, and their positions are not limited. For example, one camera may be installed on the front panel of the electronic device 100 and two on the back panel; a camera may also be embedded inside the electronic device 100 and opened by rotating or sliding. Specifically, a front camera and a rear camera may be installed on the electronic device 100 to obtain images from different viewing angles: generally the front camera obtains images from the front of the electronic device 100, and the rear camera obtains images from the back.
An image acquisition instruction is an instruction used to trigger an image acquisition operation. For example, when a user unlocks a smartphone, a face image can be captured for verification and unlocking; when a user pays through the smartphone, a face image can be captured for authentication. An application operation is an operation that an application needs to complete; after opening an application, the user can perform different application operations through it, for example a payment operation, a photographing operation, an unlocking operation, or a game operation. Application operations with high security requirements are regarded as secure operations, and those with low security requirements are regarded as non-secure operations.
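As a concrete illustration of this classification, the minimal sketch below (in Java, since the description already references the Android System.currentTimeMillis() API) maps operation types to a security category. The operation names, the rule table, and the conservative default are illustrative assumptions, not part of the embodiments.

```java
import java.util.Map;

// A minimal sketch of classifying application operations by security
// requirement; the operation names and the rule table are illustrative only.
public final class OperationClassifier {

    public enum Security { SECURE, NON_SECURE }

    private static final Map<String, Security> RULES = Map.of(
            "PAYMENT", Security.SECURE,      // payment authentication
            "UNLOCK", Security.SECURE,       // device unlocking
            "BEAUTIFY", Security.NON_SECURE, // beautification
            "AR", Security.NON_SECURE);      // augmented reality

    // Unknown operations default to SECURE, the conservative choice.
    public static Security classify(String operation) {
        return RULES.getOrDefault(operation, Security.SECURE);
    }
}
```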
Step 012: if the application operation corresponding to the image acquisition instruction is a secure operation, control the camera module 10 to collect an infrared image and a speckle image according to the image acquisition instruction.
The processing unit of the electronic device 100 can receive instructions from upper-layer applications. When the processing unit receives an image acquisition instruction, it can control the camera module 10 to work and collect the infrared image and the speckle image through the cameras. The processing unit is connected to the cameras, so the images obtained by the cameras can be transmitted to the processing unit for processing such as cropping, brightness adjustment, face detection, and face recognition. Specifically, the camera module 10 may include, but is not limited to, a laser camera 112, a laser lamp 118, and a floodlight 114. When the processing unit receives the image acquisition instruction, it controls the laser lamp 118 and the floodlight 114 to work in a time-shared manner: when the laser lamp 118 is on, the speckle image 900 is collected through the laser camera 112; when the floodlight 114 is on, the infrared image is collected through the laser camera 112.
It can be understood that when laser light illuminates an optically rough surface whose average relief is on the order of the wavelength or greater, the wavelets scattered by the randomly distributed surface elements superpose, giving the reflected light field a random spatial intensity distribution with a granular structure; this is laser speckle. The laser speckle thus formed is highly random, so the speckle generated by the laser light emitted by different laser emitters (i.e., laser lamps 118) differs, and the speckle images 900 generated when the speckle illuminates objects of different depths and shapes are also different. The laser speckle formed by a given laser emitter is unique, so the resulting speckle image 900 is unique as well. The laser speckle formed by the laser lamp 118 can illuminate an object, and the laser camera 112 then collects the speckle image 900 formed when the laser speckle illuminates the object.
Specifically, the electronic device 100 may include a first processing unit 30 and a second processing unit 22, both running in a secure operating environment. The secure operating environment may include a first secure environment and a second secure environment, with the first processing unit 30 running in the first secure environment and the second processing unit 22 in the second. The first processing unit 30 and the second processing unit 22 are processing units distributed on different processors and located in different secure environments. For example, the first processing unit 30 may be an external MCU (Microcontroller Unit) module or a secure processing module in a DSP (Digital Signal Processor), and the second processing unit 22 may be a CPU (Central Processing Unit) core in a TEE (Trusted Execution Environment).
The CPU in the electronic device 100 has two operating modes: TEE and REE (Rich Execution Environment). Normally the CPU runs in the REE, but when the electronic device 100 needs to obtain data with a higher security level, for example face data for recognition and verification, the CPU can switch from the REE to the TEE. When the CPU of the electronic device 100 is single-core, that core is switched directly from the REE to the TEE; when it is multi-core, one core is switched from the REE to the TEE while the other cores remain running in the REE.
Step 013: obtain a target image from the infrared image and the speckle image 900, and perform face recognition processing on the target image in a secure operating environment.
In one embodiment, the target image may include the infrared image and a depth image. The image acquisition instruction initiated by the target application can be sent to the first processing unit 30. When the first processing unit 30 detects that the application operation corresponding to the instruction is a secure operation, it can control the camera module 10 to collect the speckle image 900 and the infrared image, and calculate the depth image from the speckle image 900. The depth image and the infrared image are then sent to the second processing unit 22, which performs face recognition processing based on them.
It can be understood that the laser lamp 118 emits many laser speckle points, and when the points illuminate objects at different distances, the positions of the spots presented in the image differ. The electronic device 100 can collect a standard reference image in advance: the image formed when the laser speckle illuminates a plane at a known distance, so the speckle points in the reference image are generally evenly distributed. The electronic device 100 then establishes the correspondence between each speckle point in the reference image and a reference depth. It can be understood that the speckle points in the reference image may also be unevenly distributed, which is not limited here. When a speckle image 900 needs to be collected, the electronic device 100 controls the laser lamp 118 to emit laser speckle; after the speckle illuminates the object, the speckle image 900 is collected through the laser camera 112. Each speckle point in the speckle image 900 is then compared with the speckle points in the reference image to obtain the positional offset of the speckle point in the speckle image 900 relative to the corresponding speckle point in the reference image, and the actual depth information of the speckle point is obtained from the positional offset and the reference depth.
The infrared image collected by the laser camera 112 corresponds to the speckle image 900, which can be used to calculate the depth information of every pixel in the infrared image. In this way, the face can be detected and recognized through the infrared image, and the depth information of the face can be calculated from the speckle image 900. Specifically, in the process of calculating depth information from the speckle image 900, the relative depth is first calculated from the positional offsets of the speckle points relative to the reference image; the relative depth represents the depth of the photographed object relative to the reference plane. The actual depth of the object is then calculated from the obtained relative depth and the reference depth. The depth image represents the depth information corresponding to the infrared image, which may be the relative depth of the object to the reference plane or the absolute depth of the object to the camera.
Face recognition processing refers to recognizing the face contained in an image. Specifically, face detection can first be performed on the infrared image to extract the region where the face is located, and the extracted face is then recognized to determine its identity. The depth image corresponds to the infrared image, so the depth information of the face can be obtained from the depth image, making it possible to determine whether the face belongs to a living body. Based on the face recognition processing, the identity of the currently captured face can be authenticated.
Step 014: send the face recognition result to the target application that initiated the image acquisition instruction; the face recognition result is used to instruct the target application to perform the application operation.
The second processing unit 22 can perform face recognition processing based on the depth image and the infrared image, and then send the face recognition result to the target application that initiated the image acquisition instruction. It can be understood that when generating the image acquisition instruction, the target application writes into it the target application identifier, the time at which the instruction was initiated, the type of image to be collected, and the like. When the electronic device 100 detects the image acquisition instruction, it can obtain the corresponding target application from the target application identifier contained in it.
The face recognition result may include a face matching result and a liveness detection result: the face matching result indicates whether the face in the image matches a preset face, and the liveness detection result indicates whether the face contained in the image is a live face. The target application can perform the corresponding application operation based on the face recognition result. For example, when unlocking according to the face recognition result, the locked state of the electronic device 100 is released when the face in the collected image matches the preset face and the face is a live face.
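A minimal sketch of how a target application might act on such a result follows; the two fields mirror the face matching result and the liveness detection result described above, while the class and method names are hypothetical.

```java
// A sketch of a target application's unlock decision based on the face
// recognition result; the result fields are assumptions from the text above.
public final class UnlockHandler {

    public static final class FaceRecognitionResult {
        public final boolean faceMatched;  // face matches the preset face
        public final boolean isLiveFace;   // face passed liveness detection

        public FaceRecognitionResult(boolean faceMatched, boolean isLiveFace) {
            this.faceMatched = faceMatched;
            this.isLiveFace = isLiveFace;
        }
    }

    // Unlock only when both the matching result and the liveness result pass.
    public static boolean shouldUnlock(FaceRecognitionResult result) {
        return result.faceMatched && result.isLiveFace;
    }
}
```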
With the image processing method provided by the embodiment of FIG. 3, when an image acquisition instruction is detected, whether the application operation corresponding to the instruction is a secure operation is determined. If it is, an infrared image and a speckle image 900 are collected according to the instruction, face recognition processing is then performed on the collected images in a secure operating environment, and the face recognition result is sent to the target application. This ensures that when the target application performs a secure operation, the images are processed in a high-security environment, thereby improving the security of image processing.
Referring to FIG. 1, FIG. 4, and FIG. 12, in another example, step 012 includes steps 0121 and 0122, step 013 includes steps 0131 to 0135, and step 014 includes step 0141, as follows:
Step 011: if an image acquisition instruction is detected, determine whether the application operation corresponding to the image acquisition instruction is a secure operation.
Step 0121: if the application operation corresponding to the image acquisition instruction is a secure operation, acquire the timestamp included in the image acquisition instruction; the timestamp indicates the time at which the image acquisition instruction was initiated.
Specifically, when generating the image acquisition instruction, the application can write a timestamp into it, the timestamp recording the time at which the application initiated the instruction. When the first processing unit 30 receives the image acquisition instruction, it can extract the timestamp from the instruction and determine from it when the instruction was generated. For example, when initiating an image acquisition instruction, the application can read the time recorded by the clock of the electronic device 100 as the timestamp and write it into the instruction; on the Android system, for instance, the system time can be obtained through the System.currentTimeMillis() function.
Step 0122: if the interval between the timestamp and a target time is less than a duration threshold, control the camera module 10 to collect the infrared image and the speckle image 900 according to the image acquisition instruction; the target time indicates the time at which the image acquisition instruction was detected.
The target time is the time at which the electronic device 100, specifically the first processing unit 30, detects the image acquisition instruction. The interval between the timestamp and the target time is the time elapsed from initiating the image acquisition instruction to its detection by the electronic device 100. If this interval exceeds the duration threshold, the response to the instruction is considered abnormal; image acquisition can be stopped and an exception message returned to the application. If the interval is less than the duration threshold, the camera module 10 is then controlled to collect the infrared image and the speckle image 900.
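The sketch below illustrates this timestamp check under stated assumptions: a millisecond clock as in System.currentTimeMillis(), and a 5-second duration threshold invented for the example, since the embodiments leave the threshold unspecified.

```java
// A sketch of the timestamp check described above: the instruction carries
// the time it was initiated, and the receiver rejects it if too much time
// has elapsed. The threshold value is an illustrative assumption.
public final class TimestampCheck {

    private static final long DURATION_THRESHOLD_MS = 5_000; // assumed value

    // Written into the instruction when the application initiates it.
    public static long newInstructionTimestamp() {
        return System.currentTimeMillis();
    }

    // targetTime: when the processing unit detected the instruction.
    public static boolean withinThreshold(long timestamp, long targetTime) {
        return targetTime - timestamp < DURATION_THRESHOLD_MS;
    }
}
```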
In one embodiment, the camera module 10 consists of a first camera module for collecting the infrared image and a second camera module for collecting the speckle image 900. When face recognition is performed based on the infrared image and the speckle image 900, their correspondence must be guaranteed, so the camera module 10 must collect the infrared image and the speckle image 900 at essentially the same time. Specifically, the first camera module is controlled to collect the infrared image and the second camera module to collect the speckle image 900 according to the image acquisition instruction, with the interval between the first moment at which the infrared image is collected and the second moment at which the speckle image is collected being less than a first threshold.
The first camera module consists of the floodlight 114 and the laser camera 112, and the second camera module consists of the laser lamp 118 and the laser camera 112; the laser camera of the first camera module and that of the second may be the same laser camera or different ones, which is not limited here. When the first processing unit 30 receives the image acquisition instruction, it controls the first camera module and the second camera module to work. The two camera modules can work in parallel or in a time-shared manner, and their working order is not limited: for example, the first camera module can be controlled to collect the infrared image first, or the second camera module can be controlled to collect the speckle image 900 first.
It can be understood that since the infrared image and the speckle image 900 correspond, their consistency must be guaranteed. If the first and second camera modules work in a time-shared manner, the interval between collecting the infrared image and the speckle image 900 must be very short: the interval between the first moment of collecting the infrared image and the second moment of collecting the speckle image 900 is less than the first threshold. The first threshold is generally a small value; when the interval is below it, the subject is considered not to have changed, and the collected infrared image and speckle image 900 correspond. It can be understood that the threshold can also be adjusted according to how the photographed object changes: the faster the object changes, the smaller the corresponding first threshold. If the object remains stationary for a long time, the first threshold can be set to a larger value. Specifically, the speed of change of the photographed object is obtained, and the corresponding first threshold is obtained from that speed.
For example, when a mobile phone needs to be unlocked through face authentication, the user can tap the unlock key to initiate an unlock instruction and point the front camera at the face. The phone sends the unlock instruction to the first processing unit 30, which then controls the camera module 10 to work: the infrared image is first collected through the first camera module, and after an interval of 1 millisecond the second camera module is controlled to collect the speckle image 900; authentication and unlocking are then performed with the collected infrared image and speckle image 900.
Further, the camera module 10 is controlled to collect the infrared image at the first moment and the speckle image at the second moment, where the interval between the first moment and the target time is less than a second threshold and the interval between the second moment and the target time is less than a third threshold. If the interval between the first moment and the target time is less than the second threshold, the camera module 10 is controlled to collect the infrared image; if it is greater, a response-timeout prompt can be returned to the application, and the application is waited on to re-initiate the image acquisition instruction.
After the camera module 10 collects the infrared image, the first processing unit 30 can control the camera module 10 to collect the speckle image; the interval between the second moment at which the speckle image 900 is collected and the first moment is less than the first threshold, and the interval between the second moment and the target time is less than the third threshold. If the interval between the second moment and the first moment exceeds the first threshold, or the interval between the second moment and the target time exceeds the third threshold, a response-timeout prompt can be returned to the application, and the application is waited on to re-initiate the image acquisition instruction. It can be understood that the second moment of collecting the speckle image 900 may be either later or earlier than the first moment of collecting the infrared image, which is not limited here.
Specifically, the electronic device 100 can be provided with separate floodlight and laser-lamp controllers, and the first processing unit 30 is connected to the two controllers through two PWM channels. When the first processing unit 30 needs to turn on the floodlight 114 or the laser lamp 118, it can transmit pulse waves through pulse width modulation (PWM) 32 to the floodlight controller to turn on the floodlight 114, or to the laser-lamp controller to turn on the laser lamp 118; the interval between collecting the infrared image and the speckle image 900 is controlled through the pulse waves that the PWM 32 sends to the two controllers. Keeping the interval between the collected infrared image and speckle image 900 below the first threshold guarantees their consistency, avoids large errors between the infrared image and the speckle image 900, and improves the accuracy of image processing.
Step 0131: obtain a reference image, the reference image being a calibrated image carrying reference depth information.
The electronic device 100 calibrates the laser speckle in advance to obtain a reference image and stores the reference image in the electronic device 100. Generally, the reference image is formed by illuminating a reference plane with the laser speckle; it is also an image with many speckle points, and each speckle point has corresponding reference depth information. When the depth information of a photographed object is needed, the actually collected speckle image 900 can be compared with the reference image, and the actual depth information calculated from the offsets of the speckle points in the actually collected speckle image 900.
FIG. 5 is a schematic diagram of calculating depth information in one embodiment. As shown in FIG. 5, the laser lamp 118 can generate laser speckle; after the speckle is reflected by an object, the formed image is captured by the laser camera 112. During camera calibration, the laser speckle emitted by the laser lamp 118 is reflected by the reference plane 910, the reflected light is collected by the laser camera 112, and the reference image is formed through the imaging plane 920. The reference depth from the reference plane 910 to the laser lamp 118 is L, which is known. In the actual depth calculation, the laser speckle emitted by the laser lamp 118 is reflected by the object 930, the reflected light is collected by the laser camera 112, and the actual speckle image is formed through the imaging plane 920. The formula for the actual depth information is:
Dis = (CD × L × f) / (L × AB + CD × f)    (1)
where L is the distance from the laser lamp 118 to the reference plane 910, f is the focal length of the lens in the laser camera 112, CD is the distance from the laser lamp 118 to the laser camera 112, and AB is the offset distance between the image of the object 930 and the image of the reference plane 910. AB is the product of the pixel offset n and the physical distance p per pixel. When the distance Dis from the object 930 to the laser lamp 118 is greater than the distance L from the reference plane 910 to the laser lamp 118, AB is negative; when Dis is less than L, AB is positive.
Step 0132: compare the reference image with the speckle image 900 to obtain offset information, the offset information representing the horizontal offset of the speckle points in the speckle image 900 relative to the corresponding speckle points in the reference image.
Specifically, for each pixel (x, y) in the speckle image 900, a pixel block of preset size, for example 31 pixel × 31 pixel, is selected centered on that pixel. A matching pixel block is then searched for in the reference image, and the horizontal offset between the coordinates of the matched pixel in the reference image and the coordinates of (x, y) is calculated, with rightward offsets counted as positive and leftward offsets as negative. Substituting the calculated horizontal offset into formula (1) gives the depth information of pixel (x, y). Calculating the depth information of each pixel of the speckle image 900 in this way yields the depth information corresponding to every pixel of the speckle image 900.
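The following sketch illustrates this block matching and formula (1) under explicit assumptions: the images are grayscale pixel matrices, the match cost is a plain sum of absolute differences, the calibration constants are placeholders supplied only so the code runs, and the caller keeps the block and search window inside both images.

```java
// A minimal sketch of the block matching and depth computation described
// above; block size follows the 31x31 example, everything else is assumed.
public final class DepthFromSpeckle {

    static final int BLOCK = 31;      // 31 x 31 pixel block, per the text
    static final int SEARCH = 64;     // horizontal search range (assumed)
    static final double L = 0.5;      // reference depth in meters (assumed)
    static final double F = 0.002;    // lens focal length (assumed)
    static final double CD = 0.03;    // laser-to-camera baseline (assumed)
    static final double P = 1e-5;     // physical size of one pixel (assumed)

    // Horizontal offset (in pixels) of the block centered at (x, y), found
    // by minimizing the sum of absolute differences against the reference.
    // Rightward offsets are positive, leftward offsets negative.
    static int horizontalOffset(int[][] speckle, int[][] reference, int x, int y) {
        int best = 0;
        long bestCost = Long.MAX_VALUE;
        for (int d = -SEARCH; d <= SEARCH; d++) {
            long cost = 0;
            for (int i = -BLOCK / 2; i <= BLOCK / 2; i++) {
                for (int j = -BLOCK / 2; j <= BLOCK / 2; j++) {
                    cost += Math.abs(speckle[y + i][x + j] - reference[y + i][x + j + d]);
                }
            }
            if (cost < bestCost) { bestCost = cost; best = d; }
        }
        return best;
    }

    // Formula (1): Dis = CD * L * f / (L * AB + CD * f), with AB = n * p.
    static double depthAt(int[][] speckle, int[][] reference, int x, int y) {
        double ab = horizontalOffset(speckle, reference, x, y) * P;
        return CD * L * F / (L * ab + CD * F);
    }
}
```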
Step 0133: calculate the depth image from the offset information and the reference depth information, and take the depth image and the infrared image as the target image.
The depth image can represent the depth information corresponding to the infrared image, and each pixel contained in the depth image represents one item of depth information. Specifically, each speckle point in the reference image corresponds to one item of reference depth information. After the horizontal offset between a speckle point in the reference image and the corresponding speckle point in the speckle image 900 is obtained, the relative depth of the object in the speckle image 900 to the reference plane can be calculated from that horizontal offset, and the actual depth of the object to the camera can then be calculated from the relative depth and the reference depth, giving the final depth image.
Step 0134: correct the target image in the secure operating environment to obtain a corrected target image.
In one embodiment, after the infrared image and the speckle image 900 are obtained, the depth image can be calculated from the speckle image 900. The infrared image and the depth image can also each be corrected to obtain a corrected infrared image and a corrected depth image, and face recognition processing is then performed based on the corrected infrared image and the corrected depth image. Correcting the infrared image and the depth image means correcting their internal and external parameters. For example, if the laser camera 112 is deflected, the obtained infrared image and depth image need to be corrected for the error caused by the deflection parallax to obtain standard infrared and depth images. The corrected infrared image is obtained by correcting the infrared image, and the corrected depth image by correcting the depth image. Specifically, an infrared parallax image can be calculated from the infrared image, and internal/external parameter correction performed on it to obtain the corrected infrared image; a depth parallax image can be calculated from the depth image and corrected likewise to obtain the corrected depth image.
Step 0135: perform face recognition processing according to the corrected target image.
After obtaining the depth image and the infrared image, the first processing unit 30 can send them to the second processing unit 22 for face recognition processing. Before recognition, the second processing unit 22 corrects the depth image and the infrared image to obtain the corrected depth image and the corrected infrared image, then performs face recognition processing based on them. Face recognition includes a face authentication stage and a liveness detection stage: the face authentication stage is the process of identifying the identity of the face, and the liveness detection stage is the process of determining whether the photographed face is live. In the face authentication stage, the second processing unit 22 can perform face detection on the corrected infrared image to check whether a face exists; if so, the face image contained in the corrected infrared image is extracted and matched against the face images stored in the electronic device 100; if the match succeeds, face authentication succeeds.
When matching face images, the facial attribute features of the face image can be extracted and matched against the facial attribute features of the face images stored in the electronic device 100; if the match value exceeds a matching threshold, face authentication is considered successful. For example, features such as the deflection angle of the face, brightness information, and facial-feature characteristics can be extracted from the face image as the facial attribute features; if the extracted attribute features match the stored attribute features to more than 90%, face authentication is considered successful.
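One possible reading of this attribute matching is sketched below with a cosine-similarity score compared against the 90% threshold; the feature vector and the similarity measure are illustrative choices, not the method fixed by the embodiments.

```java
// A sketch of attribute-based face matching with the 90% threshold above;
// the feature representation and similarity measure are assumptions.
public final class FaceMatcher {

    private static final double MATCH_THRESHOLD = 0.90;

    // Cosine similarity between an extracted feature vector and a stored one.
    static double similarity(double[] extracted, double[] stored) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < extracted.length; i++) {
            dot += extracted[i] * stored[i];
            na += extracted[i] * extracted[i];
            nb += stored[i] * stored[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    static boolean matches(double[] extracted, double[] stored) {
        return similarity(extracted, stored) >= MATCH_THRESHOLD;
    }
}
```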
Generally, in the face authentication process, whether the face image matches a preset face image is verified from the collected infrared image. If a photo or a sculpture of a face is photographed, authentication might also succeed. Therefore, liveness detection may need to be performed based on the collected depth image and infrared image, so that authentication can succeed only when a live face is captured. It can be understood that the collected infrared image can represent the detail information of the face and the collected depth image can represent the corresponding depth information, so liveness detection can be performed based on the depth image and the infrared image. For example, if the photographed face is a face in a photo, it can be judged from the depth image that the collected face is not three-dimensional, and the collected face can be regarded as a non-live face.
Specifically, performing liveness detection according to the corrected depth image includes: searching the corrected depth image for the face depth information corresponding to the face image; if such face depth information exists in the depth image and conforms to the stereoscopic rules for a face, the face image is a live face image. The face stereoscopic rules are rules carrying three-dimensional depth information of the face. Optionally, the second processing unit can also use an artificial intelligence model to perform AI recognition on the corrected infrared image and the corrected depth image, obtain the liveness attribute features corresponding to the face image, and determine from the obtained liveness attribute features whether the face image is a live face image. The liveness attribute features may include skin texture characteristics, texture direction, texture density, texture width, and the like corresponding to the face image; if the liveness attribute features conform to the liveness rules for faces, the face image is considered biologically active, i.e., a live face image. It can be understood that when the second processing unit 22 performs face detection, face authentication, liveness detection, and other processing, the processing order can be swapped as needed: for example, the face can be authenticated first and then checked for liveness, or checked for liveness first and then authenticated.
The method by which the second processing unit 22 performs liveness detection from the infrared image and the depth image may specifically include: obtaining several consecutive frames of infrared and depth images; detecting from them whether the face has corresponding depth information; and, if it does, further detecting through the consecutive frames whether the face changes, for example whether it blinks, sways, or opens the mouth. If corresponding depth information is detected for the face and the face changes, the face is judged to be a live face. During the face recognition processing, if face authentication fails, liveness detection is no longer performed; if liveness detection fails, face authentication is no longer performed.
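A sketch of this multi-frame check follows, under simplifying assumptions: each frame exposes a depth-validity flag and landmark positions, and "change" is taken to be a landmark displacement above a tolerance. Both helpers are hypothetical, not APIs defined by the embodiments.

```java
// A sketch of the multi-frame liveness check described above: the face must
// have depth information in every frame and must change between frames
// (blink, sway, mouth opening), here approximated by landmark motion.
public final class LivenessCheck {

    interface Frame {
        boolean faceHasDepth();        // face region has valid depth values
        double[] landmarkPositions();  // e.g. eye and mouth landmarks
    }

    // Live if depth exists in every frame and the landmarks move more than
    // the tolerance between at least one pair of consecutive frames.
    static boolean isLive(Frame[] frames, double tolerance) {
        boolean changed = false;
        for (int i = 0; i < frames.length; i++) {
            if (!frames[i].faceHasDepth()) return false;
            if (i > 0) {
                double[] a = frames[i - 1].landmarkPositions();
                double[] b = frames[i].landmarkPositions();
                for (int k = 0; k < a.length; k++) {
                    if (Math.abs(a[k] - b[k]) > tolerance) changed = true;
                }
            }
        }
        return changed;
    }
}
```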
Step 0141: encrypt the face recognition result, and send the encrypted face recognition result to the target application that initiated the image acquisition instruction.
The face recognition result is encrypted; the specific encryption algorithm is not limited. For example, it may be based on DES (Data Encryption Standard), MD5 (Message-Digest Algorithm 5), HAVAL, or Diffie-Hellman (a key exchange algorithm).
Referring to FIG. 1 and FIG. 6, in one embodiment, the method of encrypting the face recognition result in step 0141 may specifically include:
Step 01411: obtain the network security level of the network environment in which the electronic device 100 is currently located.
When an application obtains images for an operation, it generally needs to connect to a network. For example, when performing payment authentication on a face, the face recognition result can be sent to the application, which then sends it to the corresponding server to complete the payment operation. When sending the face recognition result, the application needs to connect to a network and then transmit the result to the corresponding server over that network. Therefore, when sending the face recognition result, the result can first be encrypted: the network security level of the network environment in which the electronic device 100 is currently located is detected, and encryption is performed according to the network security level.
Step 01412: obtain an encryption level according to the network security level, and encrypt the face recognition result with the encryption corresponding to that encryption level.
The lower the network security level, the less secure the network environment is considered, and the higher the corresponding encryption level. The electronic device 100 establishes in advance the correspondence between network security levels and encryption levels; the corresponding encryption level can be obtained from the network security level, and the face recognition result encrypted according to the encryption level. The face recognition result can be encrypted using the obtained reference image, and may include one or more of the face authentication result, the liveness detection result, the infrared image, the speckle image, and the depth image.
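This correspondence might be realized as a simple lookup, as sketched below; the concrete level values and cut-offs are assumptions, since the embodiments only fix the direction of the mapping (lower network security, higher encryption level).

```java
// A sketch of the network-security-level to encryption-level mapping; the
// numeric levels and cut-offs are invented for illustration.
public final class EncryptionPolicy {

    // Lower network security level -> higher encryption level.
    static int encryptionLevelFor(int networkSecurityLevel) {
        if (networkSecurityLevel <= 1) return 3; // strongest encryption
        if (networkSecurityLevel == 2) return 2;
        if (networkSecurityLevel == 3) return 1;
        return 0;                                // trusted network: lightest
    }
}
```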
The reference image is a speckle image collected by the electronic device 100 when calibrating the camera module. Because the reference image is highly unique, the reference images collected by different electronic devices 100 differ, so the reference image itself can serve as an encryption key used to encrypt data. The electronic device 100 can store the reference image in a secure environment, which prevents data leakage. Specifically, the obtained reference image consists of a two-dimensional pixel matrix, and each pixel has a corresponding pixel value. The face recognition result can be encrypted with all or some of the pixels of the reference image: for example, the reference image can be superimposed directly on the target image to obtain an encrypted image; the pixel matrix corresponding to the target image can be multiplied by the pixel matrix corresponding to the reference image to obtain an encrypted image; or the pixel values corresponding to one or more pixels of the reference image can be taken as the encryption key to encrypt the target image. The specific encryption algorithm is not limited in this embodiment.
Since the reference image is generated when the electronic device 100 is calibrated, the electronic device 100 can store the reference image in advance in a secure environment; when the face recognition result needs to be encrypted, the reference image can be read in the secure environment and used to encrypt the face recognition result. Meanwhile, an identical reference image is stored on the server corresponding to the target application; after the electronic device 100 sends the encrypted face recognition result to that server, the server of the target application obtains the reference image and uses it to decrypt the encrypted face recognition result.
It can be understood that the server of the target application may store reference images collected by many different electronic devices, and each electronic device 100 corresponds to a different reference image. The server can therefore define a reference image identifier for each reference image, store the device identifier of the electronic device 100, and establish the correspondence between reference image identifiers and device identifiers. When the server receives a face recognition result, the received result carries the device identifier of the electronic device 100; the server can look up the corresponding reference image identifier from the device identifier, find the corresponding reference image from the reference image identifier, and then decrypt the face recognition result with the found reference image.
In other embodiments provided by this application, for the image processing method shown in FIG. 3 or FIG. 4, the method of encrypting with the reference image may specifically include: obtaining the pixel matrix corresponding to the reference image and deriving an encryption key from that pixel matrix; and encrypting the face recognition result with the encryption key.
Specifically, the reference image consists of a two-dimensional pixel matrix; since the obtained reference image is unique, the pixel matrix corresponding to it is also unique. The pixel matrix itself can be used as an encryption key to encrypt the face recognition result, or the pixel matrix can undergo some transformation to obtain the encryption key, which is then used for the encryption. For example, the pixel matrix is a two-dimensional matrix of pixel values, and the position of each pixel value in the matrix can be represented by two-dimensional coordinates; the pixel values at one or more position coordinates can be retrieved, and the retrieved pixel values combined into an encryption key. After the encryption key is obtained, the face recognition result can be encrypted with it; the specific encryption algorithm is not limited in this embodiment. For example, the key can be directly superimposed on or multiplied with the data, or inserted into the data as a value, to obtain the final encrypted data.
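One way to realize this idea is sketched below under explicit assumptions: pixel values at chosen coordinates are hashed into key material and the data is encrypted with AES. The coordinate choice, SHA-256, and AES are all substitutions for the unspecified algorithm, not the method mandated by the embodiments.

```java
import java.security.MessageDigest;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

// A sketch of deriving a key from reference-image pixels and encrypting
// with it; every concrete algorithm choice here is an assumption.
public final class ReferenceImageKey {

    // Build key material from the pixel values at the given (x, y) positions.
    static byte[] deriveKey(int[][] pixelMatrix, int[][] positions) throws Exception {
        StringBuilder material = new StringBuilder();
        for (int[] pos : positions) {
            material.append(pixelMatrix[pos[1]][pos[0]]).append(',');
        }
        // Hash the combined pixel values down to a 128-bit AES key.
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(material.toString().getBytes("UTF-8"));
        byte[] key = new byte[16];
        System.arraycopy(digest, 0, key, 0, 16);
        return key;
    }

    // ECB is kept only to stay minimal; a real design would prefer an
    // authenticated mode such as AES/GCM.
    static byte[] encrypt(byte[] data, byte[] key) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"));
        return cipher.doFinal(data);
    }
}
```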
For the encryption of the face recognition result in step 0141, the electronic device 100 can also use different encryption algorithms for different applications. Specifically, the electronic device 100 can establish in advance the correspondence between application identifiers and encryption algorithms, and the image acquisition instruction can contain the target application identifier of the target application. After receiving the image acquisition instruction, the electronic device can obtain the target application identifier contained in it, obtain the corresponding encryption algorithm from the target application identifier, and encrypt the face recognition result with the obtained algorithm.
Before sending the infrared image, the speckle image, and the depth image to the target application, the precision of these images can also be adjusted. Specifically, the image processing method shown in FIG. 3 or FIG. 4 may further include: taking one or more of the infrared image, the speckle image 900, and the depth image as the image to be sent; obtaining the application level of the target application that initiated the image acquisition instruction, and the corresponding precision level from the application level; and adjusting the precision of the image to be sent according to the precision level and sending the adjusted image to the target application.
The application level can represent the importance level corresponding to the target application. Generally, the higher the application level of the target application, the higher the precision of the sent image. The electronic device 100 can preset the application levels of applications and establish the correspondence between application levels and precision levels, from which the corresponding precision level can be obtained. For example, applications can be divided into four application levels, system secure applications, system non-secure applications, third-party secure applications, and third-party non-secure applications, with the corresponding precision level decreasing in that order.
The precision of the image to be sent can be expressed as the resolution of the image, or as the number of speckle points contained in the speckle image 900, so the precision of the depth image obtained from the speckle image 900 varies accordingly. Specifically, adjusting the image precision may include: adjusting the resolution of the image to be sent according to the precision level; or adjusting the number of speckle points contained in the collected speckle image 900 according to the precision level. The number of speckle points in the speckle image can be adjusted in software or in hardware. In software, the speckle points in the collected speckle image 900 can be detected directly and some merged or eliminated, so the number of speckle points in the adjusted speckle image 900 is reduced. In hardware, the number of laser speckle points generated by the laser lamp can be adjusted; for example, 30,000 laser speckle points are generated for high precision and 20,000 for lower precision, so the precision of the correspondingly computed depth image drops accordingly.
Specifically, different diffractive optical elements (DOEs) can be preset in the laser lamp 118, with different DOEs forming different numbers of speckle points by diffraction. Different DOEs are switched according to the precision level to generate the speckle image 900 by diffraction, and depth maps of different precision are obtained from the resulting speckle image 900. When the application level of the application is high, the corresponding precision level is also high, and the laser lamp can drive a DOE with many speckle points to emit the laser speckle, obtaining a speckle image with many speckle points; when the application level is low, the corresponding precision level is also low, and the laser lamp 118 can drive a DOE with fewer speckle points to emit the laser speckle, obtaining a speckle image 900 with fewer speckle points.
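The software side of this precision adjustment can be sketched as follows, assuming the four application levels above map to scale factors invented for the example and that precision is reduced by nearest-neighbor downsampling.

```java
// A sketch of software-side precision adjustment by application level; the
// scale factors are illustrative placeholders.
public final class PrecisionAdjuster {

    // 0: system secure, 1: system non-secure, 2: third-party secure,
    // 3: third-party non-secure — precision decreases with the level.
    static double scaleFor(int applicationLevel) {
        switch (applicationLevel) {
            case 0: return 1.0;
            case 1: return 0.75;
            case 2: return 0.5;
            default: return 0.25;
        }
    }

    // Reduce resolution by nearest-neighbor downsampling.
    static int[][] downsample(int[][] image, double scale) {
        int h = Math.max(1, (int) (image.length * scale));
        int w = Math.max(1, (int) (image[0].length * scale));
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                out[y][x] = image[(int) (y / scale)][(int) (x / scale)];
            }
        }
        return out;
    }
}
```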
Referring to FIG. 1, FIG. 7, and FIG. 12, in the image processing method shown in FIG. 3 or FIG. 4, the face recognition process of step 013 may further include:
Step 0136: obtain the current operating environment of the electronic device 100.
Step 0137: if the electronic device 100 is currently in a secure operating environment, perform face recognition processing on the target image in that secure operating environment.
The operating environments of the electronic device 100 include a secure operating environment and an ordinary operating environment. For example, the operating environment of the CPU can be divided into TEE and REE: the TEE is a secure operating environment, and the REE is a non-secure operating environment. Application operations with high security requirements need to be completed in the secure operating environment, while application operations with low security requirements can be performed in the non-secure operating environment.
Step 0138: if the electronic device 100 is currently in a non-secure operating environment, switch the electronic device 100 from the non-secure operating environment to the secure operating environment, and perform face recognition processing on the target image in the secure operating environment.
In one embodiment, the electronic device 100 may include a first processing unit 30 and a second processing unit 22; the first processing unit 30 may be an MCU processor, and the second processing unit 22 may be a CPU core. Because the MCU processor is external to the CPU processor, the MCU itself is in a secure environment. Specifically, if the application operation corresponding to the image acquisition instruction is judged to be a secure operation, it can be determined whether the first processing unit 30 is connected to a second processing unit in a secure operating environment. If so, the obtained image is sent directly to the second processing unit 22 for processing; if not, the first processing unit 30 is connected to a second processing unit 22 in a secure operating environment, and the obtained image is then sent to the second processing unit 22 for processing.
With the image processing methods provided by the embodiments shown in FIG. 3, FIG. 4, FIG. 6, and FIG. 7, when an image acquisition instruction is detected and the application operation corresponding to it is judged to be a secure operation, whether the response to the instruction has timed out can be determined from the timestamp contained in the instruction. If the response has not timed out, images are collected according to the instruction, face recognition processing can be performed on the collected images in a secure operating environment, the face recognition result is then encrypted, and the encrypted result is sent to the target application. This ensures that when the target application performs a secure operation, the images are processed in a high-security environment and the data is protected by encryption during transmission, thereby improving the security of image processing.
Referring to FIG. 1, FIG. 8, and FIG. 12, in yet another example, step 001 includes step 021 and step 002 includes step 022. The image processing method includes steps 021 to 024, as follows:
Step 021: if an image acquisition instruction is detected, determine whether the application operation corresponding to the image acquisition instruction is a non-secure operation.
In one embodiment, a camera module 10 may be installed on the electronic device 100, and images are obtained through the cameras in the installed camera module 10. The cameras may be classified, according to the images they obtain, into types such as a laser camera 112 and a visible light camera: the laser camera 112 captures the image formed when laser light illuminates an object, and the visible light camera captures the image formed when visible light illuminates an object. Several cameras may be installed on the electronic device 100, and their positions are not limited. For example, one camera may be installed on the front panel of the electronic device 100 and two on the back panel; a camera may also be embedded inside the electronic device 100 and opened by rotating or sliding. Specifically, a front camera and a rear camera may be installed on the electronic device 100 to obtain images from different viewing angles: generally the front camera obtains images from the front of the electronic device 100, and the rear camera obtains images from the back.
An image acquisition instruction is an instruction used to trigger an image acquisition operation. For example, when a user unlocks a smartphone, a face image can be captured for verification and unlocking; when a user pays through the smartphone, a face image can be captured for authentication. An application operation is an operation that an application needs to complete; after opening an application, the user can perform different application operations through it, for example a payment operation, a photographing operation, an unlocking operation, or a game operation. Application operations with high security requirements are regarded as secure operations, and those with low security requirements are regarded as non-secure operations.
Step 022: if the application operation corresponding to the image acquisition instruction is a non-secure operation, control the camera module 10 to collect a speckle image according to the image acquisition instruction.
The processing unit of the electronic device 100 can receive instructions from upper-layer applications; when the processing unit receives an image acquisition instruction, it can control the camera module 10 to work and collect the speckle image through the camera. The processing unit is connected to the camera, so the images obtained by the camera can be transmitted to the processing unit for processing such as cropping, brightness adjustment, face detection, and face recognition. Specifically, the camera module 10 may include, but is not limited to, a laser camera 112 and a laser lamp 118. When the processing unit receives the image acquisition instruction, it controls the laser lamp 118 to turn on, and when the laser lamp 118 is on, the speckle image 900 is collected through the laser camera 112. The camera module 10 may also include a laser camera 112, a laser lamp 118, and a floodlight 114; in that case, when the processing unit receives the image acquisition instruction, it controls the laser lamp 118 and the floodlight 114 to work in a time-shared manner: when the laser lamp 118 is on, the speckle image 900 is collected through the laser camera 112; when the floodlight 114 is on, the infrared image is collected through the laser camera 112.
It can be understood that when laser light illuminates an optically rough surface whose average relief is on the order of the wavelength or greater, the wavelets scattered by the randomly distributed surface elements superpose, giving the reflected light field a random spatial intensity distribution with a granular structure; this is laser speckle. The laser speckle thus formed is highly random, so the speckle generated by the laser light emitted by different laser emitters (i.e., laser lamps 118) differs, and the speckle images 900 generated when the speckle illuminates objects of different depths and shapes are also different. The laser speckle formed by a given laser emitter is unique, so the resulting speckle image 900 is unique as well. The laser speckle formed by the laser lamp 118 can illuminate an object, and the laser camera 112 then collects the speckle image 900 formed when the laser speckle illuminates the object.
Specifically, the electronic device 100 may include a first processing unit 30 and a second processing unit 22, with the first processing unit 30 running in a secure operating environment. The second processing unit 22 can run in either a secure or a non-secure operating environment. The first processing unit 30 and the second processing unit 22 are processing units distributed on different processors and located in different secure environments: the first processing unit 30 runs in a first secure environment, and the second processing unit 22 may run in a second secure environment. For example, the first processing unit 30 may be an external MCU (Microcontroller Unit) module or a secure processing module in a DSP (Digital Signal Processor), and the second processing unit 22 may be a CPU (Central Processing Unit) core, which may be in the TEE (Trusted Execution Environment) or in the REE (Rich Execution Environment).
Specifically, when the electronic device 100 needs to obtain data with a higher security level, for example face data for recognition and verification, the CPU can switch from the REE to the TEE. When the CPU of the electronic device 100 is single-core, that core is switched directly from the REE to the TEE; when it is multi-core, one core is switched from the REE to the TEE while the other cores remain running in the REE. The second processing unit 22 can receive the image acquisition instruction sent by the application and forward it to the first processing unit 30, which then controls the camera module to collect the speckle image.
Step 023: calculate a depth image from the speckle image.
It can be understood that the laser lamp 118 emits many laser speckle points, and when the points illuminate objects at different distances, the positions of the spots presented in the image differ. The electronic device 100 can collect a standard reference image in advance: the image formed when the laser speckle illuminates a plane. The speckle points in the reference image are therefore generally evenly distributed, and the correspondence between each speckle point in the reference image and a reference depth is established. When a speckle image 900 needs to be collected, the laser lamp 118 is controlled to emit laser speckle; after the speckle illuminates the object, the speckle image 900 is collected through the laser camera 112. Each speckle point in the speckle image 900 is then compared with the speckle points in the reference image to obtain the positional offset of the speckle point in the speckle image 900 relative to the corresponding speckle point in the reference image, and the actual depth information of the speckle point is obtained from the positional offset and the reference depth.
Specifically, in the process of calculating depth information from the speckle image 900, the relative depth is first calculated from the positional offsets of the speckle points relative to the reference image; the relative depth represents the depth of the photographed object relative to the reference plane. The actual depth of the object is then calculated from the obtained relative depth and the reference depth. The depth image represents the depth information of the photographed object, which may be the relative depth of the object to the reference plane or the absolute depth of the object to the camera.
Step 024: send the depth image to the target application that initiated the image acquisition instruction; the depth image is used to instruct the target application to perform the application operation.
The obtained depth image is sent to the target application, which can obtain the depth information of the photographed object from the depth image and then perform the corresponding application operation. For example, the electronic device 100 can simultaneously collect an RGB (Red Green Blue) image and a speckle image 900; the collected RGB image and speckle image 900 correspond, so the depth image calculated from the speckle image 900 also corresponds to the RGB image. After obtaining the RGB image and the depth image, the target application can obtain the depth value corresponding to every pixel of the RGB image from the depth image, and then perform three-dimensional modeling, AR (Augmented Reality), beautification, and other processing on the RGB image based on the obtained depth values.
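Because the RGB image and the depth image are assumed here to be aligned pixel-for-pixel, the per-pixel lookup is direct, as the sketch below shows; the millimeter unit and the foreground cutoff are assumptions added only for the usage example.

```java
// A sketch of per-pixel depth lookup for an RGB image whose depth map is
// aligned pixel-for-pixel, a basic building block for 3D modeling, AR, or
// beautification-style segmentation.
public final class DepthLookup {

    // depthMap values in millimeters (assumed unit).
    static int depthOfPixel(int[][] depthMap, int x, int y) {
        return depthMap[y][x];
    }

    // Usage example: keep only pixels closer than a cutoff as "foreground".
    static boolean isForeground(int[][] depthMap, int x, int y, int cutoffMm) {
        return depthOfPixel(depthMap, x, y) < cutoffMm;
    }
}
```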
With the image processing method provided by the embodiment shown in FIG. 8, when the electronic device 100 detects that the application operation corresponding to an image acquisition instruction is a non-secure operation, it controls the camera module 10 to collect a speckle image 900 according to the instruction, calculates a depth image from the speckle image 900, and sends the depth image to the target application to perform the corresponding application operation. In this way, the application operations of image acquisition instructions can be classified and handled differently according to the instruction; when the collected image is used for a non-secure operation, it can be processed directly, which improves the efficiency of image processing.
Referring to FIG. 1, FIG. 9, and FIG. 12, in still another example, step 022 includes steps 0221 and 0222, and step 023 includes steps 0231 to 0233, as follows:
Step 021: if an image acquisition instruction is detected, determine whether the application operation corresponding to the image acquisition instruction is a non-secure operation.
Step 0221: if the application operation corresponding to the image acquisition instruction is a non-secure operation, acquire the timestamp included in the image acquisition instruction; the timestamp indicates the time at which the image acquisition instruction was initiated.
Specifically, when sending the image acquisition instruction, the application can write a timestamp into it, the timestamp recording the time at which the application initiated the instruction. When the first processing unit 30 receives the image acquisition instruction, it can extract the timestamp from the instruction and determine from it when the instruction was generated. For example, when initiating an image acquisition instruction, the application can read the time recorded by the clock of the electronic device 100 as the timestamp and write it into the instruction; on the Android system, for instance, the system time can be obtained through the System.currentTimeMillis() function.
Step 0222: if the interval between the timestamp and a target time is less than a duration threshold, control the camera module 10 to collect the speckle image 900 according to the image acquisition instruction; the target time indicates the time at which the image acquisition instruction was detected.
The target time is the time at which the electronic device 100, specifically the first processing unit 30, detects the image acquisition instruction. The interval between the timestamp and the target time is the time elapsed from initiating the image acquisition instruction to its detection by the electronic device 100. If this interval exceeds the duration threshold, the response to the instruction is considered abnormal; image acquisition can be stopped and an exception message returned to the application. If the interval is less than the duration threshold, the camera is then controlled to collect the speckle image 900.
In one embodiment, the camera module 10 consists of a first camera module for collecting RGB images and a second camera module for collecting the speckle image 900. When an application operation is performed based on the RGB image and the speckle image 900, their correspondence must be guaranteed, so the camera module 10 must collect the RGB image and the speckle image at essentially the same time. Specifically, the first camera module is controlled to collect the RGB image and the second camera module to collect the speckle image 900 according to the image acquisition instruction, with the interval between the first moment at which the RGB image is collected and the second moment at which the speckle image 900 is collected being less than a first threshold.
It can be understood that since the collected RGB image and speckle image 900 correspond, their consistency must be guaranteed. If the first and second camera modules work in a time-shared manner, the interval between collecting the RGB image and the speckle image 900 must be very short: the interval between the first moment of collecting the RGB image and the second moment of collecting the speckle image is less than the first threshold. The first threshold is generally a small value; when the interval is below it, the subject is considered not to have changed, and the collected RGB image and speckle image 900 correspond. It can be understood that the threshold can also be adjusted according to how the photographed object changes: the faster the object changes, the smaller the corresponding first threshold. If the object remains stationary for a long time, the first threshold can be set to a larger value. Specifically, the speed of change of the photographed object is obtained, and the corresponding first threshold is obtained from that speed.
For example, when the phone needs to take a beautified selfie, the user can tap the shutter button to initiate a photographing instruction and point the front camera at the face. The phone sends the photographing instruction to the first processing unit 30, which then controls the camera module 10 to work: the RGB image is first collected through the first camera module, and after an interval of 1 millisecond the second camera module is controlled to collect the speckle image 900. The depth image is then calculated from the speckle image 900, and beautification processing is performed with the collected RGB image and the depth image.
Further, the camera module 10 is controlled to collect the RGB image at the first moment and the speckle image at the second moment, where the interval between the first moment and the target time is less than a second threshold and the interval between the second moment and the target time is less than a third threshold. If the interval between the first moment and the target time is less than the second threshold, the camera module 10 is controlled to collect the RGB image; if it is greater, a response-timeout prompt can be returned to the application, and the application is waited on to re-initiate the image acquisition instruction.
After the camera module 10 collects the RGB image, the first processing unit 30 can control the camera module 10 to collect the speckle image 900; the interval between the second moment at which the speckle image 900 is collected and the first moment is less than the first threshold, and the interval between the second moment and the target time is less than the third threshold. If the interval between the second moment and the first moment exceeds the first threshold, or the interval between the second moment and the target time exceeds the third threshold, a response-timeout prompt can be returned to the application, and the application is waited on to re-initiate the image acquisition instruction. It can be understood that the second moment of collecting the speckle image 900 may be either later or earlier than the first moment of collecting the RGB image, which is not limited here.
Step 0231: obtain a reference image, the reference image being a calibrated image carrying reference depth information.
The electronic device 100 calibrates the laser speckle in advance to obtain a reference image and stores the reference image in the electronic device 100. Generally, the reference image is formed by illuminating a reference plane with the laser speckle; it is also an image with many speckle points, and each speckle point has corresponding reference depth information. When the depth information of a photographed object is needed, the actually collected speckle image 900 can be compared with the reference image, and the actual depth information calculated from the offsets of the speckle points in the actually collected speckle image 900.
FIG. 5 is a schematic diagram of calculating depth information in one embodiment. As shown in FIG. 5, the laser lamp 118 can generate laser speckle; after the speckle is reflected by an object, the formed image is captured by the laser camera 112. During camera calibration, the laser speckle emitted by the laser lamp 118 is reflected by the reference plane 910, the reflected light is collected by the laser camera 112, and the reference image is formed through the imaging plane 920. The reference depth from the reference plane 910 to the laser lamp 118 is L, which is known. In the actual depth calculation, the laser speckle emitted by the laser lamp 118 is reflected by the object 930, the reflected light is collected by the laser camera 112, and the actual speckle image is formed through the imaging plane 920. The formula for the actual depth information is:
Dis = (CD × L × f) / (L × AB + CD × f)    (1)
where L is the distance from the laser lamp 118 to the reference plane 910, f is the focal length of the lens in the laser camera 112, CD is the distance from the laser lamp 118 to the laser camera 112, and AB is the offset distance between the image of the object 930 and the image of the reference plane 910. AB is the product of the pixel offset n and the physical distance p per pixel. When the distance Dis from the object 930 to the laser lamp 118 is greater than the distance L from the reference plane 910 to the laser lamp 118, AB is negative; when Dis is less than L, AB is positive.
Step 0232: compare the reference image with the speckle image 900 to obtain offset information, the offset information representing the horizontal offset of the speckle points in the speckle image 900 relative to the corresponding speckle points in the reference image.
Specifically, for each pixel (x, y) in the speckle image 900, a pixel block of preset size, for example 31 pixel × 31 pixel, is selected centered on that pixel. A matching pixel block is then searched for in the reference image, and the horizontal offset between the coordinates of the matched pixel in the reference image and the coordinates of (x, y) is calculated, with rightward offsets counted as positive and leftward offsets as negative. Substituting the calculated horizontal offset into formula (1) gives the depth information of pixel (x, y). Calculating the depth information of each pixel of the speckle image in this way yields the depth information corresponding to every pixel of the speckle image 900.
Step 0233: calculate the depth image from the offset information and the reference depth information.
The depth image can represent the depth information of the photographed scene, and each pixel contained in the depth image represents one item of depth information. Specifically, each speckle point in the reference image corresponds to one item of reference depth information. After the horizontal offset between a speckle point in the reference image and the corresponding speckle point in the speckle image 900 is obtained, the relative depth of the object in the speckle image 900 to the reference plane can be calculated from that horizontal offset, and the actual depth of the object to the camera can then be calculated from the relative depth and the reference depth, giving the final depth image.
Step 024: send the depth image to the target application that initiated the image acquisition instruction; the depth image is used to instruct the target application to perform the application operation.
In one embodiment, after the depth image is obtained, a depth parallax image can be calculated from it; correction can be performed according to the depth parallax image to obtain a corrected depth image, and the application operation is then performed based on the corrected depth image. Correcting the depth image means correcting its internal and external parameters. For example, if the laser camera 112 is deflected, the obtained depth image needs to be corrected for the error caused by the deflection parallax to obtain a standard depth image. Specifically, the depth image is corrected to obtain the corrected depth image, and the corrected depth image is sent to the target application that initiated the image acquisition instruction; the depth parallax image can be calculated from the depth image, and internal/external parameter correction performed on it to obtain the corrected depth image.
Referring to FIG. 1, FIG. 10, and FIG. 12, in one embodiment, the image processing method shown in FIG. 8 or FIG. 9 can also encrypt the depth image before sending it; that is, the method further includes steps 025 and 026, and step 024 further includes step 0241, as follows:
Step 025: obtain the network security level of the network environment in which the electronic device 100 is currently located.
When an application obtains images for an operation, it may need to connect to a network. For example, when performing three-dimensional modeling on images, the RGB image and the depth image can be sent to the application's server, and the three-dimensional modeling is performed on the server. The application then needs to connect to a network when sending the RGB image and the depth image, and transmit them to the corresponding server over that network. To prevent malicious programs from obtaining the depth image for malicious operations, the depth image to be sent can be encrypted before sending.
Step 026: if the network security level is below a level threshold, encrypt the depth image.
When the network security level is below the level threshold, the currently connected network is considered to have low security, and in that case the depth image is encrypted. Specifically, an encryption level is obtained from the network security level, and the depth image is encrypted according to the encryption level. The electronic device 100 establishes in advance the correspondence between network security levels and encryption levels, so the corresponding encryption level can be obtained from the network security level and the depth image encrypted accordingly. The depth image can be encrypted using the obtained reference image. The specific encryption algorithm for the depth image is not limited; for example, it may be based on DES (Data Encryption Standard), MD5 (Message-Digest Algorithm 5), HAVAL, or Diffie-Hellman (a key exchange algorithm).
The reference image is a speckle image collected by the electronic device 100 when calibrating the camera module 10. Because the reference image is highly unique, the reference images collected by different electronic devices 100 differ, so the reference image itself can serve as an encryption key used to encrypt data. The electronic device 100 can store the reference image in a secure environment, which prevents data leakage. Specifically, the obtained reference image consists of a two-dimensional pixel matrix, and each pixel has a corresponding pixel value. The depth image can be encrypted with all or some of the pixels of the reference image: for example, the reference image can be superimposed directly on the depth image to obtain an encrypted image; the pixel matrix corresponding to the depth image can be multiplied by the pixel matrix corresponding to the reference image to obtain an encrypted image; or the pixel values corresponding to one or more pixels of the reference image can be taken as the encryption key to encrypt the depth image. The specific encryption algorithm is not limited in this embodiment.
Since the reference image is generated when the electronic device 100 is calibrated, the electronic device 100 can store the reference image in advance in a secure environment; when the depth image needs to be encrypted, the reference image can be read in the secure environment and used to encrypt the depth image. Meanwhile, an identical reference image is stored on the server corresponding to the target application; after the electronic device 100 sends the encrypted depth image to the server corresponding to the target application, the server of the target application obtains the reference image and uses it to decrypt the received encrypted data.
It can be understood that the server of the target application may store reference images collected by many different electronic devices 100, and each electronic device 100 corresponds to a different reference image. The server can therefore define a reference image identifier for each reference image, store the device identifier of the electronic device 100, and establish the correspondence between reference image identifiers and device identifiers. When the server receives a depth image, the received depth image carries the device identifier of the electronic device 100; the server can look up the corresponding reference image identifier from the device identifier, find the corresponding reference image from the reference image identifier, and then decrypt the depth image with the found reference image.
In other embodiments provided by this application, for the image processing method shown in FIG. 8 or FIG. 9, the method of encrypting with the reference image may specifically include: obtaining the pixel matrix corresponding to the reference image and deriving an encryption key from that pixel matrix; and encrypting the depth image with the encryption key.
Specifically, the reference image consists of a two-dimensional pixel matrix; since the obtained reference image is unique, the pixel matrix corresponding to it is also unique. The pixel matrix itself can be used as an encryption key to encrypt the depth image, or the pixel matrix can undergo some transformation to obtain the encryption key, which is then used to encrypt the depth image. For example, the pixel matrix is a two-dimensional matrix of pixel values, and the position of each pixel value in the matrix can be represented by two-dimensional coordinates; the pixel values at one or more position coordinates can be retrieved, and the retrieved pixel values combined into an encryption key. After the encryption key is obtained, the depth image can be encrypted with it; the specific encryption algorithm is not limited in this embodiment. For example, the key can be directly superimposed on or multiplied with the data, or inserted into the data as a value, to obtain the final encrypted data.
For the encryption of the depth image in step 026 when the network security level is below the level threshold, the electronic device 100 can also use different encryption algorithms for different applications. Specifically, the electronic device 100 can establish in advance the correspondence between application identifiers and encryption algorithms, and the image acquisition instruction can contain the target application identifier of the target application. After receiving the image acquisition instruction, the electronic device can obtain the target application identifier contained in it, obtain the corresponding encryption algorithm from the target application identifier, and encrypt the depth image with the obtained algorithm.
Before sending the depth image to the target application, the precision of the depth image can also be adjusted. Specifically, the image processing method shown in FIG. 8 or FIG. 9 may further include: obtaining the application level of the target application that initiated the image acquisition instruction, and the corresponding precision level from the application level; adjusting the precision of the depth image according to the precision level; and sending the adjusted depth image to the application. The application level can represent the importance level corresponding to the target application; generally, the higher the application level of the target application, the higher the precision of the sent image. The electronic device 100 can preset the application levels of applications and establish the correspondence between application levels and precision levels, from which the corresponding precision level can be obtained. For example, applications can be divided into four application levels, system secure applications, system non-secure applications, third-party secure applications, and third-party non-secure applications, with the corresponding precision level decreasing in that order.
The precision of the depth image can be expressed as the resolution of the depth image, or as the number of speckle points contained in the speckle image, so the precision of the depth image obtained from the speckle image varies accordingly. Specifically, adjusting the image precision may include: adjusting the resolution of the image to be sent according to the precision level; or adjusting the number of speckle points contained in the collected speckle image 900 according to the precision level. The number of speckle points in the speckle image 900 can be adjusted in software or in hardware. In software, the speckle points in the collected speckle image 900 can be detected directly and some merged or eliminated, so the number of speckle points in the adjusted speckle image 900 is reduced. In hardware, the number of laser speckle points generated by the laser lamp 118 can be adjusted; for example, 30,000 laser speckle points are generated for high precision and 20,000 for lower precision, so the precision of the correspondingly computed depth image drops accordingly.
Specifically, different diffractive optical elements (DOEs) can be preset in the laser lamp 118, with different DOEs forming different numbers of speckle points by diffraction. Different DOEs are switched according to the precision level to generate the speckle image by diffraction, and depth maps of different precision are obtained from the resulting speckle image 900. When the application level of the application is high, the corresponding precision level is also high, and the laser lamp 118 can drive a DOE with many speckle points to emit the laser speckle, obtaining a speckle image 900 with many speckle points; when the application level is low, the corresponding precision level is also low, and the laser lamp 118 can drive a DOE with fewer speckle points to emit the laser speckle, obtaining a speckle image 900 with fewer speckle points.
Step 0241: send the encrypted depth image to the target application that initiated the image acquisition instruction.
Referring to FIG. 1, FIG. 11, and FIG. 12, in the image processing method shown in FIG. 8 or FIG. 9, the process of sending the depth image in step 024 may further include:
Step 0242: obtain the current operating environment of the electronic device 100.
Step 0243: if the electronic device 100 is currently in a non-secure operating environment, send the depth image to the target application that initiated the image acquisition instruction in the non-secure operating environment.
The operating environments of the electronic device 100 include a secure operating environment and a non-secure operating environment. For example, the operating environment of the CPU can be divided into TEE and REE: the TEE is a secure operating environment, and the REE is a non-secure operating environment. Application operations with high security requirements need to be completed in the secure operating environment, while application operations with low security requirements can be performed in the non-secure operating environment. When the application operation corresponding to the image acquisition instruction is a non-secure operation, the collected depth image can be sent to the target application through the non-secure operating environment.
Step 0244: if the electronic device 100 is currently in a secure operating environment, switch the electronic device 100 from the secure operating environment to the non-secure operating environment, and send the depth image to the target application that initiated the image acquisition instruction in the non-secure operating environment.
In one embodiment, the electronic device 100 may include a first processing unit 30, which may be an MCU processor. Because the MCU processor is external to the CPU processor, the MCU itself is in a secure operating environment. Specifically, the first processing unit 30 can connect to a secure transmission channel and a non-secure transmission channel; when the first processing unit 30 detects the image instruction and judges that the application operation corresponding to the image acquisition instruction is a non-secure operation, it can connect to the non-secure transmission channel and send the depth image through it. The secure transmission channel is in the secure operating environment, with higher security for image processing; the non-secure transmission channel is in the non-secure operating environment, with lower security for image processing.
With the image processing methods provided by the embodiments shown in FIG. 8 to FIG. 11, when the electronic device 100 detects that the application operation corresponding to an image acquisition instruction is a non-secure operation, it can determine from the timestamp contained in the instruction whether the response to the instruction has timed out. If the response has not timed out, the camera module 10 is controlled to collect the speckle image according to the instruction, the depth image is then calculated from the speckle image, and the depth image is sent to the target application to perform the corresponding application operation. In this way, the application operations of image acquisition instructions can be classified and handled differently according to the instruction; when the collected image is used for a non-secure operation, it can be processed directly, which improves the efficiency of image processing.
It should be understood that although the steps in the flowcharts of FIG. 2 to FIG. 4 and FIG. 6 to FIG. 11 are displayed in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering restriction on the execution of these steps, and they can be executed in other orders. Moreover, at least some of the steps in FIG. 2 to FIG. 4 and FIG. 6 to FIG. 11 may include several sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments; their execution order is not necessarily sequential either, and they may be executed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
FIG. 12 is a hardware structure diagram for implementing the image processing method of any of the above embodiments in one embodiment. As shown in FIG. 12, the electronic device 100 (shown in FIG. 1) may include a camera module 10, a central processing unit (CPU) 20, and a first processing unit 30; the camera module 10 includes a laser camera 112, a floodlight 114, an RGB (Red/Green/Blue color model) camera 116, and a laser lamp 118. The first processing unit 30 includes a PWM (Pulse Width Modulation) module 32, an SPI/I2C (Serial Peripheral Interface/Inter-Integrated Circuit) module 34, a RAM (Random Access Memory) module 36, and a Depth Engine module 38. The second processing unit 22 may be a CPU core in a TEE (Trusted Execution Environment), and the first processing unit 30 is an MCU (Microcontroller Unit) processor. It can be understood that the central processing unit 20 can operate in multi-core mode, and the CPU cores in the central processing unit 20 can run in the TEE or the REE (Rich Execution Environment). The TEE and the REE are both operating modes of ARM modules (Advanced RISC Machines). Normally, operations of higher security in the electronic device 100 need to be executed in the TEE, and other operations can be executed in the REE. In the embodiments of this application, when the central processing unit 20 receives an image acquisition instruction initiated by the target application, the CPU core running in the TEE, i.e., the second processing unit 22, sends the image acquisition instruction through SECURE SPI/I2C to the SPI/I2C module 34 in the MCU, i.e., to the first processing unit 30. After receiving the image acquisition instruction, the first processing unit 30 determines the security of the application operation corresponding to it and controls the camera module 10 to collect the image corresponding to the determination result.
In one example, after receiving the image acquisition instruction, if the first processing unit 30 judges the application operation corresponding to it to be a secure operation, it transmits pulse waves through the PWM module 32 to turn on the floodlight 114 in the camera module 10 to collect the infrared image and to turn on the laser lamp 118 in the camera module 10 to collect the speckle image 900 (shown in FIG. 1). The camera module 10 can transmit the collected infrared image and speckle image 900 to the Depth Engine module 38 in the first processing unit 30, which can calculate an infrared parallax image from the infrared image, calculate the depth image from the speckle image 900, and obtain a depth parallax image from the depth image. The infrared parallax image and the depth parallax image are then sent to the second processing unit 22 running in the TEE. The second processing unit 22 performs correction according to the infrared parallax image to obtain the corrected infrared image, and according to the depth parallax image to obtain the corrected depth image. Face recognition is then performed with the corrected infrared image, detecting whether a face exists in the corrected infrared image and whether the detected face matches a stored face; if face recognition passes, liveness detection is performed with the corrected infrared image and the corrected depth image to detect whether the face is a live face. In one embodiment, after the corrected infrared image and corrected depth image are obtained, liveness detection can be performed before face recognition, or face recognition and liveness detection can be performed simultaneously. After face recognition passes and the detected face is a live face, the second processing unit 22 can send one or more of the corrected infrared image, the corrected depth image, and the face recognition result to the target application.
In another example, after receiving the image acquisition instruction, if the first processing unit 30 judges the application operation corresponding to it to be a non-secure operation, it transmits pulse waves through the PWM module 32 to turn on the laser lamp 118 in the camera module 10 to collect the speckle image 900. The camera module 10 can transmit the collected speckle image 900 to the Depth Engine module 38 in the first processing unit 30, which can calculate the depth image from the speckle image 900 and obtain the depth parallax image from the depth image. Correction is then performed according to the depth parallax image in the non-secure operating environment to obtain the corrected depth image, and the corrected depth image is sent to the target application.
FIG. 13 is a hardware structure diagram for implementing the image processing method shown in FIG. 2, FIG. 3, FIG. 6, or FIG. 7 in another embodiment. As shown in FIG. 13, the hardware structure includes a first processing unit 41, a camera module 10, and a second processing unit 42. The camera module 10 includes a laser camera 112, a floodlight 114, an RGB camera 116, and a laser lamp 118. The central processing unit 40 may include a CPU core in the TEE and a CPU core in the REE; the first processing unit 41 is a DSP processing module opened up in the central processing unit 40, and the second processing unit 42 is the CPU core in the TEE. The second processing unit 42 and the first processing unit 41 can be connected through a secure buffer, which guarantees security during image transmission. Normally, when handling operations of higher security, the central processing unit 40 needs to switch the processor core to the TEE for execution, while operations of lower security can be executed in the REE. In the embodiments of this application, the image acquisition instruction sent by the upper-layer application can be received through the second processing unit 42. When the application operation corresponding to the image acquisition instruction received by the second processing unit 42 is a secure operation, pulse waves can be transmitted through the PWM module to turn on the floodlight 114 in the camera module 10 to collect the infrared image, and the laser lamp 118 in the camera module 10 is then turned on to collect the speckle image 900 (shown in FIG. 1). The camera module 10 can transmit the collected infrared image and speckle image 900 to the first processing unit 41, which can calculate the depth image from the speckle image 900, calculate the depth parallax image from the depth image, and calculate the infrared parallax image from the infrared image. The infrared parallax image and the depth parallax image are then sent to the second processing unit 42, which can perform correction according to the infrared parallax image to obtain the corrected infrared image and according to the depth parallax image to obtain the corrected depth image. The second processing unit 42 performs face authentication with the corrected infrared image, detecting whether a face exists in the corrected infrared image and whether the detected face matches a stored face; if face authentication passes, liveness detection is performed with the corrected infrared image and the corrected depth image to determine whether the face is a live face. After the second processing unit 42 completes the face authentication and liveness detection processing, it sends the processing result to the target application, which then performs application operations such as unlocking and payment according to the detection result.
FIG. 14 is a schematic diagram of the software architecture for implementing the image processing method of any of the above embodiments in one embodiment. As shown in FIG. 14, the software architecture includes an application layer 910, an operating system 920, and a secure operating environment 930. The modules in the secure operating environment 930 include a first processing unit 931, a camera module 932, a second processing unit 933, an encryption module 934, and so on; the operating system 920 contains a security management module 921, a face management module 922, a camera driver 923, and a camera framework 924; the application layer 910 contains an application 911. The application 911 can initiate an image acquisition instruction and send it to the first processing unit 931 for processing. For example, for operations such as payment, unlocking, beautification, and augmented reality (AR) performed by capturing a face, the application initiates an image acquisition instruction to collect a face image. It can be understood that the image instruction initiated by the application 911 can first be sent to the second processing unit 933 and then forwarded by the second processing unit 933 to the first processing unit 931.
After the first processing unit 931 receives the image acquisition instruction, if the application operation corresponding to it is judged to be a secure operation (such as a payment or unlocking operation), the first processing unit 931 controls the camera module 932 according to the instruction to collect the infrared image and the speckle image 900 (shown in FIG. 1), which the camera module 932 transmits to the first processing unit 931. The first processing unit 931 calculates the depth image containing depth information from the speckle image 900, calculates the depth parallax image from the depth image, and calculates the infrared parallax image from the infrared image. The depth parallax image and the infrared parallax image are then sent to the second processing unit 933 through a secure transmission channel. The second processing unit 933 performs correction according to the infrared parallax image to obtain the corrected infrared image, and according to the depth parallax image to obtain the corrected depth image. Face authentication is then performed with the corrected infrared image, detecting whether a face exists in the corrected infrared image and whether the detected face matches a stored face; if face authentication passes, liveness detection is performed with the corrected infrared image and the corrected depth image to determine whether the face is a live face. The face recognition result obtained by the second processing unit 933 can be sent to the encryption module 934; after encryption by the encryption module 934, the encrypted face recognition result is sent to the security management module 921. Generally, different applications 911 each have a corresponding security management module 921, which decrypts the encrypted face recognition result and sends the decrypted result to the corresponding face management module 922. The face management module 922 sends the face recognition result to the upper-layer application 911, which then performs the corresponding operation according to the result.
If the application operation corresponding to the image acquisition instruction received by the first processing unit 931 is a non-secure operation (such as a beautification or AR operation), the first processing unit 931 can control the camera module 932 to collect the speckle image 900, calculate the depth image from the speckle image 900, and then obtain the depth parallax image from the depth image. The first processing unit 931 sends the depth parallax image to the camera driver 923 through a non-secure transmission channel; the camera driver 923 performs correction processing according to the depth parallax image to obtain the corrected depth image, which is then sent to the camera framework 924 and forwarded by the camera framework 924 to the face management module 922 or the application 911.
FIG. 15 is a schematic structural diagram of the image processing apparatus 50 in one embodiment. As shown in FIG. 15, the image processing apparatus 50 includes a detection module 501 and an acquisition module 502. The detection module 501 is configured to determine, if an image acquisition instruction is detected, the security of the application operation corresponding to the image acquisition instruction. The acquisition module 502 is configured to acquire an image corresponding to the determination result according to the determination result.
Referring to FIG. 16, in one embodiment, the detection module 501 includes an instruction detection module 511, and the acquisition module 502 includes an image acquisition module 512. The image processing apparatus 50 further includes a face recognition module 513 and a result sending module 514. The instruction detection module 511 is configured to determine, if an image acquisition instruction is detected, whether the application operation corresponding to the instruction is a secure operation. The image acquisition module 512 is configured to control the camera module 10 to collect the infrared image and the speckle image 900 according to the image acquisition instruction if the application operation corresponding to the instruction is a secure operation. The face recognition module 513 is configured to obtain the target image from the infrared image and the speckle image 900 and perform face recognition processing on the target image in a secure operating environment. The result sending module 514 is configured to send the face recognition result to the target application that initiated the image acquisition instruction; the face recognition result is used to instruct the target application to perform the application operation.
When the image processing apparatus 50 provided by the embodiment shown in FIG. 16 detects an image acquisition instruction, it determines whether the application operation corresponding to the instruction is a secure operation. If it is, the infrared image and the speckle image 900 are collected according to the instruction, face recognition processing is then performed on the collected images in a secure operating environment, and the face recognition result is sent to the target application. This ensures that when the target application performs a secure operation, the images are processed in a high-security environment, thereby improving the security of image processing.
In one embodiment, the image acquisition module 512 is further configured to acquire the timestamp included in the image acquisition instruction, the timestamp indicating the time at which the image acquisition instruction was initiated; if the interval between the timestamp and a target time is less than a duration threshold, the camera module 10 is controlled to collect the infrared image and the speckle image 900 according to the image acquisition instruction, the target time indicating the time at which the image acquisition instruction was detected.
In one embodiment, the face recognition module 513 is further configured to: obtain a reference image, the reference image being a calibrated image carrying reference depth information; compare the reference image with the speckle image 900 to obtain offset information, the offset information representing the horizontal offset of speckle points in the speckle image 900 relative to the corresponding speckle points in the reference image; and calculate the depth image from the offset information and the reference depth information, taking the depth image and the infrared image as the target image.
In one embodiment, the face recognition module 513 is further configured to: obtain the current operating environment of the electronic device 100; if the electronic device 100 is currently in a secure operating environment, perform face recognition processing on the target image in the secure operating environment; if the electronic device 100 is currently in a non-secure operating environment, switch the electronic device 100 from the non-secure operating environment to the secure operating environment and perform face recognition processing on the target image in the secure operating environment.
In one embodiment, the face recognition module 513 is further configured to: correct the target image in the secure operating environment to obtain a corrected target image; and perform face recognition processing according to the corrected target image.
In one embodiment, the result sending module 514 is further configured to encrypt the face recognition result and send the encrypted face recognition result to the target application that initiated the image acquisition instruction.
In one embodiment, the result sending module 514 is further configured to: obtain the network security level of the network environment in which the electronic device 100 is currently located; and obtain the encryption level according to the network security level, encrypting the face recognition result with the encryption corresponding to that encryption level.
Referring to FIG. 17, in one embodiment, the detection module 501 includes an instruction detection module 521, and the acquisition module 502 includes a speckle image acquisition module 522. The image processing apparatus 50 further includes a depth image acquisition module 523 and an image sending module 524. The instruction detection module 521 is configured to determine, if an image acquisition instruction is detected, whether the application operation corresponding to the instruction is a non-secure operation. The speckle image acquisition module 522 is configured to control the camera module 10 to collect the speckle image according to the image acquisition instruction if the application operation corresponding to the instruction is a non-secure operation. The depth image acquisition module 523 is configured to calculate the depth image from the speckle image. The image sending module 524 is configured to send the depth image to the target application that initiated the image acquisition instruction; the depth image is used to instruct the target application to perform the application operation.
With the image processing apparatus 50 provided by the embodiment shown in FIG. 17, when the electronic device 100 detects that the application operation corresponding to an image acquisition instruction is a non-secure operation, it controls the camera module 10 to collect the speckle image according to the instruction, calculates the depth image from the speckle image 900, and sends the depth image to the target application to perform the corresponding application operation. In this way, the application operations of image acquisition instructions can be classified and handled differently according to the instruction; when the collected image is used for a non-secure operation, it can be processed directly, which improves the efficiency of image processing.
In one embodiment, the speckle image acquisition module 522 is further configured to: acquire the timestamp included in the image acquisition instruction, the timestamp indicating the time at which the image acquisition instruction was initiated; and, if the interval between the timestamp and a target time is less than a duration threshold, control the camera module 10 to collect the speckle image according to the image acquisition instruction, the target time indicating the time at which the image acquisition instruction was detected.
In one embodiment, the depth image acquisition module 523 is further configured to: obtain a reference image, the reference image being a calibrated image carrying reference depth information; compare the reference image with the speckle image 900 to obtain offset information, the offset information representing the horizontal offset of the speckle points in the speckle image 900 relative to the corresponding speckle points in the reference image; and calculate the depth image from the offset information and the reference depth information.
In one embodiment, the image sending module 524 is further configured to correct the depth image to obtain a corrected depth image and send the corrected depth image to the target application that initiated the image acquisition instruction.
In one embodiment, the image sending module 524 is further configured to obtain the current operating environment of the electronic device 100; if the electronic device 100 is currently in a non-secure operating environment, the depth image is sent to the target application that initiated the image acquisition instruction in the non-secure operating environment; if the electronic device 100 is currently in a secure operating environment, the electronic device 100 is switched from the secure operating environment to the non-secure operating environment, and the depth image is sent to the target application that initiated the image acquisition instruction in the non-secure operating environment.
In one embodiment, the image sending module 524 is further configured to obtain the network security level of the network environment in which the electronic device 100 is currently located; if the network security level is below a level threshold, the depth image is encrypted; and the encrypted depth image is sent to the target application that initiated the image acquisition instruction.
In one embodiment, the image sending module 524 is further configured to obtain the encryption level according to the network security level and encrypt the depth image according to the encryption level.
The division of the image processing apparatus 50 into the modules described above is for illustration only; in other embodiments, the image processing apparatus may be divided into different modules as needed to complete all or part of the functions of the image processing apparatus.
Embodiments of this application further provide a computer readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the image processing method of any of the above embodiments is implemented.
Embodiments of this application further provide an electronic device (which may be the electronic device 100 described in FIG. 1). The electronic device includes a memory and a processor, the memory storing computer readable instructions which, when executed by the processor, cause the processor to perform the image processing method of any of the above embodiments.
Embodiments of this application further provide a computer program product containing instructions which, when run on a computer, cause the computer to execute the image processing method provided by any of the above embodiments.
Any reference to a memory, storage, database, or other medium used in this application may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM), which serves as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), memory bus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
The embodiments described above express only several implementations of this application; their descriptions are specific and detailed, but they should not therefore be construed as limiting the scope of this patent. It should be pointed out that those of ordinary skill in the art can make several variations and improvements without departing from the concept of this application, all of which fall within the scope of protection of this application. Therefore, the scope of protection of this patent shall be subject to the appended claims.

Claims (20)

  1. An image processing method, comprising:
    if an image acquisition instruction is detected, determining the security of the application operation corresponding to the image acquisition instruction; and
    acquiring an image corresponding to a determination result according to the determination result.
  2. The image processing method according to claim 1, wherein the determining, if an image acquisition instruction is detected, the security of the application operation corresponding to the image acquisition instruction comprises:
    if the image acquisition instruction is detected, determining whether the application operation corresponding to the image acquisition instruction is a secure operation;
    the acquiring an image corresponding to a determination result according to the determination result comprises:
    if the application operation corresponding to the image acquisition instruction is the secure operation, controlling a camera module to collect an infrared image and a speckle image according to the image acquisition instruction;
    the image processing method further comprises:
    obtaining a target image from the infrared image and the speckle image, and performing face recognition processing on the target image in a secure operating environment; and
    sending a face recognition result to a target application that initiated the image acquisition instruction, the face recognition result being used to instruct the target application to perform the application operation.
  3. The image processing method according to claim 2, wherein the controlling a camera module to collect an infrared image and a speckle image according to the image acquisition instruction comprises:
    acquiring a timestamp included in the image acquisition instruction, the timestamp indicating a time at which the image acquisition instruction was initiated; and
    if an interval between the timestamp and a target time is less than a duration threshold, controlling the camera module to collect the infrared image and the speckle image according to the image acquisition instruction, the target time indicating a time at which the image acquisition instruction was detected.
  4. The image processing method according to claim 2, wherein the obtaining a target image from the infrared image and the speckle image comprises:
    obtaining a reference image, the reference image being a calibrated image carrying reference depth information;
    comparing the reference image with the speckle image to obtain offset information, the offset information representing a horizontal offset of speckle points in the speckle image relative to corresponding speckle points in the reference image; and
    calculating a depth image from the offset information and the reference depth information, and taking the depth image and the infrared image as the target image.
  5. The image processing method according to claim 2, wherein the performing face recognition processing on the target image in a secure operating environment comprises:
    obtaining a current operating environment of an electronic device;
    if the electronic device is currently in the secure operating environment, performing the face recognition processing on the target image in the secure operating environment;
    if the electronic device is currently in a non-secure operating environment, switching the electronic device from the non-secure operating environment to the secure operating environment, and performing the face recognition processing on the target image in the secure operating environment.
  6. The image processing method according to claim 2, wherein the performing face recognition processing on the target image in a secure operating environment comprises:
    correcting the target image in the secure operating environment to obtain a corrected target image; and
    performing the face recognition processing according to the corrected target image.
  7. The image processing method according to any one of claims 2 to 6, wherein the sending the face recognition result to a target application that initiated the image acquisition instruction comprises:
    encrypting the face recognition result, and sending the encrypted face recognition result to the target application that initiated the image acquisition instruction.
  8. The image processing method according to claim 7, wherein the encrypting the face recognition result comprises:
    obtaining a network security level of a network environment in which an electronic device is currently located; and
    obtaining an encryption level according to the network security level, and encrypting the face recognition result with the encryption corresponding to the encryption level.
  9. The image processing method according to claim 1, wherein the determining, if an image acquisition instruction is detected, the security of the application operation corresponding to the image acquisition instruction comprises:
    if the image acquisition instruction is detected, determining whether the application operation corresponding to the image acquisition instruction is a non-secure operation;
    the acquiring an image corresponding to a determination result according to the determination result comprises:
    if the application operation corresponding to the image acquisition instruction is the non-secure operation, controlling a camera module to collect a speckle image according to the image acquisition instruction;
    the image processing method further comprises:
    calculating a depth image from the speckle image; and
    sending the depth image to a target application that initiated the image acquisition instruction, the depth image being used to instruct the target application to perform the application operation.
  10. The image processing method according to claim 9, wherein the controlling a camera module to collect a speckle image according to the image acquisition instruction comprises:
    acquiring a timestamp included in the image acquisition instruction, the timestamp indicating a time at which the image acquisition instruction was initiated;
    if an interval between the timestamp and a target time is less than a duration threshold, controlling the camera module to collect the speckle image according to the image acquisition instruction, the target time indicating a time at which the image acquisition instruction was detected.
  11. The image processing method according to claim 9, wherein the calculating a depth image from the speckle image comprises:
    obtaining a reference image, the reference image being a calibrated image carrying reference depth information;
    comparing the reference image with the speckle image to obtain offset information, the offset information representing a horizontal offset of speckle points in the speckle image relative to corresponding speckle points in the reference image; and
    calculating the depth image from the offset information and the reference depth information.
  12. The image processing method according to claim 9, wherein the sending the depth image to a target application that initiated the image acquisition instruction comprises:
    correcting the depth image to obtain a corrected depth image, and sending the corrected depth image to the target application that initiated the image acquisition instruction.
  13. The image processing method according to claim 9, wherein the sending the depth image to a target application that initiated the image acquisition instruction comprises:
    obtaining a current operating environment of an electronic device;
    if the electronic device is currently in a non-secure operating environment, sending the depth image to the target application that initiated the image acquisition instruction in the non-secure operating environment;
    if the electronic device is currently in a secure operating environment, switching the electronic device from the secure operating environment to the non-secure operating environment, and sending the depth image to the target application that initiated the image acquisition instruction in the non-secure operating environment.
  14. The image processing method according to any one of claims 9 to 13, wherein before the sending the depth image to a target application that initiated the image acquisition instruction, the image processing method further comprises:
    obtaining a network security level of a network environment in which an electronic device is currently located; and
    if the network security level is less than a level threshold, encrypting the depth image;
    the sending the depth image to a target application that initiated the image acquisition instruction comprises:
    sending the encrypted depth image to the target application that initiated the image acquisition instruction.
  15. The image processing method according to claim 14, wherein the encrypting the depth image comprises:
    obtaining an encryption level according to the network security level, and encrypting the depth image according to the encryption level.
  16. An image processing apparatus, comprising:
    a detection module, configured to determine, if an image acquisition instruction is detected, the security of the application operation corresponding to the image acquisition instruction; and
    an acquisition module, configured to acquire an image corresponding to a determination result according to the determination result.
  17. The image processing apparatus according to claim 16, wherein the detection module comprises an instruction detection module configured to determine, if the image acquisition instruction is detected, whether the application operation corresponding to the image acquisition instruction is a secure operation;
    the acquisition module comprises an image acquisition module configured to control a camera module to collect an infrared image and a speckle image according to the image acquisition instruction if the application operation corresponding to the image acquisition instruction is the secure operation;
    the image processing apparatus further comprises:
    a face recognition module, configured to obtain a target image from the infrared image and the speckle image and perform face recognition processing on the target image in a secure operating environment; and
    a result sending module, configured to send a face recognition result to a target application that initiated the image acquisition instruction, the face recognition result being used to instruct the target application to perform the application operation.
  18. The image processing apparatus according to claim 16, wherein the detection module comprises an instruction detection module configured to determine, if the image acquisition instruction is detected, whether the application operation corresponding to the image acquisition instruction is a non-secure operation;
    the acquisition module comprises a speckle image acquisition module configured to control a camera module to collect a speckle image according to the image acquisition instruction if the application operation corresponding to the image acquisition instruction is the non-secure operation;
    the image processing apparatus further comprises:
    a depth image acquisition module, configured to calculate a depth image from the speckle image; and
    an image sending module, configured to send the depth image to a target application that initiated the image acquisition instruction, the depth image being used to instruct the target application to perform the application operation.
  19. A computer readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the image processing method according to any one of claims 1 to 15.
  20. An electronic device comprising a memory and a processor, the memory storing computer readable instructions which, when executed by the processor, cause the processor to perform the image processing method according to any one of claims 1 to 15.
PCT/CN2019/083260 2018-04-28 2019-04-18 Image processing method and apparatus, computer readable storage medium, and electronic device WO2019206020A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19791784.2A EP3624006A4 (en) 2018-04-28 2019-04-18 IMAGE PROCESSING, DEVICE, COMPUTER-READABLE STORAGE MEDIA AND ELECTRONIC DEVICE
US16/671,856 US11275927B2 (en) 2018-04-28 2019-11-01 Method and device for processing image, computer readable storage medium and electronic device

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201810404509.0 2018-04-28
CN201810404509.0A 2018-04-28 Image processing method and apparatus, computer readable storage medium, and electronic device
CN201810403000.4 2018-04-28
CN201810403000.4A 2018-04-28 Image processing method and apparatus, computer readable storage medium, and electronic device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/671,856 Continuation US11275927B2 (en) 2018-04-28 2019-11-01 Method and device for processing image, computer readable storage medium and electronic device

Publications (1)

Publication Number Publication Date
WO2019206020A1 true WO2019206020A1 (zh) 2019-10-31

Family

ID=68294762

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/083260 2019-04-18 Image processing method and apparatus, computer readable storage medium, and electronic device WO2019206020A1 (zh)

Country Status (3)

Country Link
US (1) US11275927B2 (zh)
EP (1) EP3624006A4 (zh)
WO (1) WO2019206020A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111696196A (zh) * 2020-05-25 2020-09-22 Beijing Dilusense Technology Co., Ltd. Three-dimensional face model reconstruction method and apparatus
CN112861583A (zh) * 2019-11-27 2021-05-28 OnePlus Technology (Shenzhen) Co., Ltd. Face verification method, electronic device, and readable storage medium
CN112861584A (zh) * 2019-11-27 2021-05-28 OnePlus Technology (Shenzhen) Co., Ltd. Object image processing method, terminal device, and readable storage medium

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019196793A1 (zh) * 2018-04-12 2019-10-17 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, electronic device, and computer readable storage medium
CN111311661A (zh) * 2020-03-05 2020-06-19 Beijing Megvii Technology Co., Ltd. Face image processing method and apparatus, electronic device, and storage medium
CN111680672B (zh) * 2020-08-14 2020-11-13 Tencent Technology (Shenzhen) Co., Ltd. Face liveness detection method, system, apparatus, computer device, and storage medium
CN113837106A (zh) * 2021-09-26 2021-12-24 Beijing Dilusense Technology Co., Ltd. Face recognition method, system, electronic device, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103268608A (zh) * 2013-05-17 2013-08-28 Tsinghua University Depth estimation method and apparatus based on near-infrared laser speckle
CN107292283A (zh) * 2017-07-12 2017-10-24 Shenzhen Orbbec Co., Ltd. Hybrid face recognition method
CN108804895A (zh) * 2018-04-28 2018-11-13 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, computer readable storage medium, and electronic device
CN108830141A (zh) * 2018-04-28 2018-11-16 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, computer readable storage medium, and electronic device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1737820A (zh) 2004-06-17 2006-02-22 Ronald Neville Langford Verifying images identified by a software application
US9582889B2 (en) * 2009-07-30 2017-02-28 Apple Inc. Depth mapping based on pattern matching and stereoscopic information
JP6359259B2 (ja) 2012-10-23 2018-07-18 Electronics and Telecommunications Research Institute Depth image correction apparatus and method based on the relationship between a depth sensor and a camera
CN104239816A (zh) 2014-09-28 2014-12-24 Lenovo (Beijing) Co., Ltd. Electronic device with switchable working states and switching method therefor
US10764563B2 (en) 2014-11-13 2020-09-01 Intel Corporation 3D enhanced image correction
CN104506838B (zh) * 2014-12-23 2016-06-29 Ningbo Yingxin Information Technology Co., Ltd. Depth sensing method, apparatus, and system based on symbol-array planar structured light
CN106331462A (zh) 2015-06-25 2017-01-11 Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd. Method and apparatus for capturing trajectory photos, and mobile terminal
US10762335B2 (en) * 2017-05-16 2020-09-01 Apple Inc. Attention detection
CN107341481A (zh) * 2017-07-12 2017-11-10 Shenzhen Orbbec Co., Ltd. Recognition using structured light images

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103268608A (zh) * 2013-05-17 2013-08-28 Tsinghua University Depth estimation method and apparatus based on near-infrared laser speckle
CN107292283A (zh) * 2017-07-12 2017-10-24 Shenzhen Orbbec Co., Ltd. Hybrid face recognition method
CN108804895A (zh) * 2018-04-28 2018-11-13 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, computer readable storage medium, and electronic device
CN108830141A (zh) * 2018-04-28 2018-11-16 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, computer readable storage medium, and electronic device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3624006A4

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112861583A (zh) * 2019-11-27 2021-05-28 OnePlus Technology (Shenzhen) Co., Ltd. Face verification method, electronic device, and readable storage medium
CN112861584A (zh) * 2019-11-27 2021-05-28 OnePlus Technology (Shenzhen) Co., Ltd. Object image processing method, terminal device, and readable storage medium
CN112861584B (zh) * 2019-11-27 2024-05-07 OnePlus Technology (Shenzhen) Co., Ltd. Object image processing method, terminal device, and readable storage medium
CN111696196A (zh) * 2020-05-25 2020-09-22 Beijing Dilusense Technology Co., Ltd. Three-dimensional face model reconstruction method and apparatus
CN111696196B (zh) * 2020-05-25 2023-12-08 Hefei Dilusense Technology Co., Ltd. Three-dimensional face model reconstruction method and apparatus

Also Published As

Publication number Publication date
EP3624006A4 (en) 2020-11-18
US11275927B2 (en) 2022-03-15
EP3624006A1 (en) 2020-03-18
US20200065562A1 (en) 2020-02-27

Similar Documents

Publication Publication Date Title
TWI736883B (zh) Image processing method and electronic device
CN111126146B (zh) Image processing method and apparatus, computer readable storage medium, and electronic device
CN108804895B (zh) Image processing method and apparatus, computer readable storage medium, and electronic device
CN108805024B (zh) Image processing method and apparatus, computer readable storage medium, and electronic device
WO2019206020A1 (zh) Image processing method and apparatus, computer readable storage medium, and electronic device
US11256903B2 (en) Image processing method, image processing device, computer readable storage medium and electronic device
CN108668078B (zh) Image processing method and apparatus, computer readable storage medium, and electronic device
WO2019205890A1 (zh) Image processing method and apparatus, computer readable storage medium, and electronic device
CN110248111B (zh) Method and apparatus for controlling photographing, electronic device, and computer readable storage medium
CN108711054B (zh) Image processing method and apparatus, computer readable storage medium, and electronic device
CN108921903B (zh) Camera calibration method and apparatus, computer readable storage medium, and electronic device
CN109213610B (zh) Data processing method and apparatus, computer readable storage medium, and electronic device
CN108573170B (zh) Information processing method and apparatus, electronic device, and computer readable storage medium
CN108985255B (zh) Data processing method and apparatus, computer readable storage medium, and electronic device
CN108830141A (zh) Image processing method and apparatus, computer readable storage medium, and electronic device
CN111523499B (zh) Image processing method and apparatus, electronic device, and computer readable storage medium
CN108650472B (zh) Method and apparatus for controlling photographing, electronic device, and computer readable storage medium
EP3621294B1 (en) Method and device for image capture, computer readable storage medium and electronic device
WO2019196669A1 (zh) Laser-based security verification method and apparatus, and terminal device
WO2020024619A1 (zh) Data processing method and apparatus, computer readable storage medium, and electronic device
WO2019205889A1 (zh) Image processing method and apparatus, computer readable storage medium, and electronic device
CN108881712B (zh) Image processing method and apparatus, computer readable storage medium, and electronic device
CN109145772B (zh) Data processing method and apparatus, computer readable storage medium, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19791784

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019791784

Country of ref document: EP

Effective date: 20191209

NENP Non-entry into the national phase

Ref country code: DE