CN108846310A - Image processing method, device, electronic equipment and computer readable storage medium - Google Patents

Image processing method, device, electronic equipment and computer readable storage medium

Info

Publication number
CN108846310A
CN108846310A (application CN201810403022.0A; granted as CN108846310B)
Authority
CN
China
Prior art keywords
image data
image
processing units
running environment
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810403022.0A
Other languages
Chinese (zh)
Other versions
CN108846310B (en)
Inventor
郭子青
周海涛
惠方方
谭筱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110045933.2A priority Critical patent/CN112668547A/en
Priority to CN201810403022.0A priority patent/CN108846310B/en
Publication of CN108846310A publication Critical patent/CN108846310A/en
Priority to PCT/CN2019/081743 priority patent/WO2019196793A1/en
Priority to EP19784964.9A priority patent/EP3633546A4/en
Priority to US16/742,378 priority patent/US11170204B2/en
Application granted granted Critical
Publication of CN108846310B publication Critical patent/CN108846310B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/166: Detection; Localisation; Normalisation using acquisition arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/60: Type of objects
    • G06V20/64: Three-dimensional objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40: Spoof detection, e.g. liveness detection
    • G06V40/45: Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

This application relates to an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium. The method includes: if image data for obtaining face depth information is received, assigning a security level to the image data; determining, according to the security level, a running environment corresponding to the image data, the running environment being a running environment of a first processing unit; and allocating the image data to the first processing unit under the corresponding running environment for processing, to obtain the face depth information. With the above method, after obtaining image data, the first processing unit can assign security levels to the image data, determine the running environment corresponding to the image data according to its security level, and allocate the image data to the first processing unit under the corresponding running environment for processing; this differentiated allocation of image data improves the efficiency of image data processing.

Description

Image processing method, device, electronic equipment and computer readable storage medium
Technical field
This application relates to the field of computer technology, and in particular to an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.
Background technique
With the development of face recognition technology and structured light technology, face unlock, face payment, and similar functions have become increasingly common on electronic devices. Using structured light, an electronic device can capture a face image together with 3D information of the face, and the captured face image and 3D face information can then be used for face payment, face unlock, and so on.
Summary of the invention
The embodiments of the present application provide an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium, which can improve the efficiency of the first processing unit in processing image data.
An image processing method, including:
If image data for obtaining face depth information is received, assigning a security level to the image data;
Determining a running environment corresponding to the image data according to the security level, the running environment being a running environment of a first processing unit;
Allocating the image data to the first processing unit under the corresponding running environment for processing, to obtain face depth information.
An image processing apparatus, including:
A receiving module, configured to assign a security level to image data if the image data for obtaining face depth information is received;
A determining module, configured to determine a running environment corresponding to the image data according to the security level, the running environment being a running environment of a first processing unit;
A processing module, configured to allocate the image data to the first processing unit under the corresponding running environment for processing, to obtain face depth information.
An electronic device, including a first processing unit, a second processing unit, and a camera module, the second processing unit being connected to the first processing unit and the camera module respectively;
The first processing unit being configured to assign a security level to image data if the image data for obtaining face depth information is received;
The first processing unit being configured to determine a running environment corresponding to the image data according to the security level, the running environment being a running environment of the first processing unit;
The first processing unit being configured to allocate the image data to the first processing unit under the corresponding running environment for processing, to obtain face depth information. A computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the method described above.
With the above image processing method, apparatus, electronic device, and computer-readable storage medium, after obtaining image data, the first processing unit can assign security levels to the image data, determine the running environment corresponding to the image data according to its security level, and allocate the image data to the first processing unit under the corresponding running environment for processing; this differentiated allocation of image data improves the efficiency of image data processing.
Detailed description of the invention
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Evidently, the drawings described below show only some embodiments of the application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is the application scenario diagram of image processing method in one embodiment;
Fig. 2 is the flow chart of image processing method in one embodiment;
Fig. 3 is the flow chart of image processing method in another embodiment;
Fig. 4 is the structural block diagram of image processing apparatus in one embodiment;
Fig. 5 is the structural block diagram of image processing apparatus in another embodiment.
Specific embodiment
To make the objects, technical solutions, and advantages of the application clearer, the application is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the application and not to limit it.
Fig. 1 is an application scenario diagram of the image processing method in one embodiment. As shown in Fig. 1, the electronic device 10 may include a camera module 110, a first processing unit 120, and a second processing unit 130. The first processing unit 120 may be a CPU (Central Processing Unit). The second processing unit 130 may be an MCU (Microcontroller Unit) or the like. The second processing unit 130 is connected between the first processing unit 120 and the camera module 110; it can control the laser camera 112, the floodlight 114, and the laser lamp 118 in the camera module 110, while the first processing unit 120 can control the RGB (Red/Green/Blue color mode) camera 116 in the camera module 110.
The camera module 110 includes a laser camera 112, a floodlight 114, an RGB camera 116, and a laser lamp 118. The laser camera 112 is an infrared camera used to obtain infrared images. The floodlight 114 is a point light source that can emit infrared light; the laser lamp 118 is a point light source that emits patterned laser light. When the floodlight 114 emits its point light source, the laser camera 112 can obtain an infrared image from the reflected light. When the laser lamp 118 emits its point light source, the laser camera 112 can obtain a speckle image from the reflected light. The speckle image is the image of the patterned point light source emitted by the laser lamp 118 after the pattern has been deformed by reflection.
The first processing unit 120 may include CPU cores running under a TEE (Trusted Execution Environment) and CPU cores running under a REE (Rich Execution Environment). Both the TEE and the REE are operating modes of an ARM module (Advanced RISC Machines processor). The TEE has the higher security level, and one and only one CPU core in the first processing unit 120 can run under the TEE at a time. Generally, operations with a higher security level on the electronic device 10 need to be executed in the CPU core under the TEE, while operations with a lower security level can be executed in CPU cores under the REE.
The second processing unit 130 includes a PWM (Pulse Width Modulation) module 132, an SPI/I2C (Serial Peripheral Interface / Inter-Integrated Circuit two-wire synchronous serial interface) interface 134, a RAM (Random Access Memory) module 136, and a depth engine 138. The PWM module 132 can emit pulses to the camera module to turn on the floodlight 114 or the laser lamp 118, so that the laser camera 112 can collect infrared images or speckle images. The SPI/I2C interface 134 is used to receive image capture instructions sent by the first processing unit 120. The depth engine 138 can process speckle images to obtain depth disparity maps.
When the first processing unit 120 receives a data acquisition request from an application, for example when the application needs to perform face unlock or face payment, the CPU core running under the TEE can send an image capture instruction to the second processing unit 130. After receiving the image capture instruction, the second processing unit 130 can emit pulse waves through the PWM module 132 to turn on the floodlight 114 in the camera module 110 and collect an infrared image through the laser camera 112, and to turn on the laser lamp 118 in the camera module 110 and collect a speckle image through the laser camera 112. The camera module 110 can send the collected infrared image and speckle image to the second processing unit 130. The second processing unit 130 can process the received infrared image to obtain an infrared disparity map, and process the received speckle image to obtain a speckle disparity map or a depth disparity map. Here, the second processing unit 130 processing the infrared image and speckle image means correcting them to remove the influence of the internal and external parameters of the camera module 110 on the images. The second processing unit 130 can be set to different modes, and the images output in different modes differ. When the second processing unit 130 is set to speckle map mode, it processes the speckle image to obtain a speckle disparity map, from which a target speckle image can be obtained; when it is set to depth map mode, it processes the speckle image to obtain a depth disparity map, from which a depth image, that is, an image with depth information, can be obtained. The second processing unit 130 can send the infrared disparity map and speckle disparity map to the first processing unit 120, or send the infrared disparity map and depth disparity map to the first processing unit 120. The first processing unit 120 can obtain a target infrared image from the infrared disparity map and a depth image from the depth disparity map. Further, the first processing unit 120 can perform face recognition, face matching, and liveness detection according to the target infrared image and the depth image, and obtain the depth information of the detected face.
The second processing unit 130 and the first processing unit 120 communicate through fixed secure interfaces to ensure the security of the transmitted data. As shown in Fig. 1, data sent by the first processing unit 120 to the second processing unit 130 passes through SECURE SPI/I2C 140, and data sent by the second processing unit 130 to the first processing unit 120 passes through SECURE MIPI (Mobile Industry Processor Interface) 150.
In one embodiment, the second processing unit 130 can also obtain the target infrared image from the infrared disparity map and compute the depth image from the depth disparity map, and then send the target infrared image and the depth image to the first processing unit 120.
In one embodiment, the second processing unit 130 can perform face recognition, face matching, and liveness detection according to the target infrared image and the depth image, and obtain the depth information of the detected face. Here, the second processing unit 130 sending an image to the first processing unit 120 means sending it to the CPU core of the first processing unit 120 that is under the TEE.
In the embodiments of the present application, the electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
Fig. 2 is a flow chart of the image processing method in one embodiment. As shown in Fig. 2, an image processing method includes:
Step 202: if image data for obtaining face depth information is received, assign a security level to the image data.
After the first processing unit in the electronic device receives an instruction from an application to obtain face data, it can send the instruction to the second processing unit connected to it, making the second processing unit control the camera module to collect an infrared image and a speckle image; alternatively, the first processing unit can directly control the camera module in the electronic device according to the face data instruction, making the camera module collect the infrared image and speckle image. Optionally, if the instruction also requests visible light images, the first processing unit can also control the camera module to collect a visible light image, i.e. an RGB image. The first processing unit is an integrated circuit that processes data in the electronic device, such as a CPU; the second processing unit is connected to the first processing unit and the camera module respectively, can preprocess the face images collected by the camera module, and then sends the intermediate images obtained by preprocessing to the first processing unit. Optionally, the second processing unit may be an MCU.
After collecting images according to the instruction, the camera module can send the images to the second processing unit or to the first processing unit. Optionally, the camera module can send the infrared image and speckle image to the second processing unit and the RGB image to the first processing unit; or it can send the infrared image, speckle image, and RGB image all to the first processing unit. When the camera module sends the infrared image and speckle image to the second processing unit, the second processing unit can process the collected images to obtain an infrared disparity map and a depth disparity map, and then send the obtained infrared disparity map and depth disparity map to the first processing unit.
When the first processing unit receives image data sent directly by the camera module, or intermediate images processed by the second processing unit, it can assign security levels to the received image data. The security level corresponding to each kind of image data can be preset in the first processing unit. Optionally, the image data received by the first processing unit may include infrared images, speckle images, infrared disparity maps, depth disparity maps, and RGB images. Three security levels can be preset in the first processing unit, namely a first grade, a second grade, and a third grade, with the security level decreasing from the first grade to the third grade. Since face depth information can be obtained from the speckle image and depth disparity map, the speckle image and depth disparity map can be set to the first grade; since face recognition can be performed from the infrared image and infrared disparity map, the infrared image and infrared disparity map can be set to the second grade; the RGB image can be set to the third grade.
Step 204: determine the running environment corresponding to the image data according to the security level; the running environment is a running environment of the first processing unit.
The first processing unit can run under different environments, such as a TEE and a REE. Taking a CPU as the first processing unit: when the CPU in the electronic device includes multiple CPU cores, one and only one CPU core can run under the TEE, while the other cores run under the REE. A CPU core running under the TEE has a higher security level; a CPU core running under the REE has a lower security level. Optionally, the electronic device can determine that image data of the first grade corresponds to the TEE, image data of the third grade corresponds to the REE, and image data of the second grade corresponds to either the TEE or the REE.
Step 206: allocate the image data to the first processing unit under the corresponding running environment for processing, to obtain face depth information.
After obtaining the security level of each piece of image data and the running environment corresponding to each security level, the electronic device can allocate the obtained image data to the first processing unit under the corresponding running environment for processing. Optionally, the speckle image and depth disparity map can be allocated to the first processing unit under the TEE, the RGB image can be allocated to the first processing unit under the REE, and the infrared image and infrared disparity map can be allocated to the first processing unit under either the TEE or the REE. The first processing unit can perform face recognition according to the infrared image or infrared disparity map, detecting whether the collected infrared image or infrared disparity map contains a face; if it does, the electronic device can match the face contained in the infrared image or infrared disparity map against the faces stored on the electronic device, to detect whether that face is a stored face. The first processing unit can obtain the depth information of the face, that is, the three-dimensional stereo information of the face, according to the speckle image or depth disparity map. The first processing unit can also perform face recognition according to the RGB image, detecting whether the RGB image contains a face and whether that face matches a stored face.
Generally, when the first processing unit is a CPU, one and only one CPU core in the CPU can run in the TEE; if all image data were processed by the CPU in the TEE, the CPU's processing efficiency would be rather low.
With the method in the embodiments of the present application, after obtaining image data, the first processing unit can assign security levels to the image data, determine the running environment corresponding to the image data according to its security level, and allocate the image data to the first processing unit under the corresponding running environment for processing; this differentiated allocation of image data improves the efficiency of image data processing.
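The three steps above (Step 202 through Step 206) can be sketched as follows. This is an illustrative sketch under the grade assignments of the embodiment, not the patent's implementation; the type names, the `GRADE_BY_TYPE` table, and the string labels for the environments are all assumptions made for illustration.

```python
# Hypothetical sketch of steps 202-206: assign a preset security grade to
# each kind of image data, map the grade to a running environment, and
# dispatch accordingly. All names here are illustrative assumptions.

GRADE_BY_TYPE = {              # Step 202: preset grade per image-data type
    "speckle_image": 1,        # yields face depth info -> first grade
    "depth_disparity_map": 1,
    "infrared_image": 2,       # used for face recognition -> second grade
    "infrared_disparity_map": 2,
    "rgb_image": 3,            # third grade
}

def environment_for(grade):
    """Step 204: map a security grade to a running environment."""
    if grade == 1:
        return "TEE"           # first grade runs in the trusted environment
    if grade == 3:
        return "REE"           # third grade runs in the rich environment
    return "TEE_or_REE"        # second grade may run in either

def dispatch(image_type):
    """Step 206: pick the environment whose CPU core should process the data."""
    return environment_for(GRADE_BY_TYPE[image_type])
```

Under this division, only first-grade data is forced onto the single TEE core, which is what keeps that core from becoming the bottleneck the previous paragraph describes.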
In one embodiment, the image data includes the face images collected by the camera module and/or the intermediate images obtained by the second processing unit processing those face images.
The camera module in the electronic device can collect infrared images and speckle images; it can also collect infrared images, speckle images, and RGB images. The camera module can send the collected infrared image and speckle image directly to the first processing unit, or send the collected infrared image, speckle image, and RGB image directly to the first processing unit; alternatively, it can send the infrared image and speckle image to the second processing unit and the RGB image to the first processing unit, and the second processing unit then sends the intermediate images obtained by processing the infrared image and speckle image to the first processing unit.
In one embodiment, the image data includes the infrared image and speckle image collected by the camera module, where the time interval between the first moment at which the infrared image is collected and the second moment at which the speckle image is collected is less than a first threshold.
The first processing unit can control the infrared lamp in the camera module to turn on and collect the infrared image through the laser camera; it can likewise control the laser lamp in the camera module to turn on and collect the speckle image through the laser camera. To ensure that the content of the infrared image is consistent with that of the speckle image, the time interval between the first moment at which the camera module collects the infrared image and the second moment at which it collects the speckle image should be less than the first threshold; for example, the interval between the first moment and the second moment may be less than 5 milliseconds. A floodlight controller and a laser lamp controller can be provided in the camera module, and the first processing unit can control the interval between the first moment and the second moment by controlling the time interval between the pulse waves it emits to the floodlight controller and the laser lamp controller.
With the method in the embodiments of the present application, the time interval between the collected infrared image and speckle image is below the first threshold, which ensures that the collected infrared image and speckle image are consistent, avoids large errors between the infrared image and the speckle image, and improves the accuracy of data processing.
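As a minimal sketch of the pairing constraint above, assuming each frame carries a millisecond capture timestamp (the function and parameter names are illustrative, not from the patent):

```python
# A frame pair is accepted only when the infrared capture moment and the
# speckle capture moment differ by less than the first threshold; the
# 5 ms value follows the example given in the text.

FIRST_THRESHOLD_MS = 5.0

def frames_consistent(infrared_ts_ms, speckle_ts_ms,
                      threshold_ms=FIRST_THRESHOLD_MS):
    """Return True when the two capture moments are close enough in time."""
    return abs(infrared_ts_ms - speckle_ts_ms) < threshold_ms
```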
In one embodiment, the image data includes the infrared image and RGB image collected by the camera module, where the infrared image and RGB image are collected by the camera module simultaneously.
When the image capture instruction also requests an RGB image, the second processing unit can control the RGB camera in the camera module to collect the RGB image. Here, the first processing unit controls the laser camera to collect the infrared image and speckle image, while the second processing unit controls the RGB camera to collect the RGB image. To ensure that the collected images are consistent, a timing synchronization line can be added between the laser camera and the RGB camera, so that the camera module can collect the infrared image and the RGB image simultaneously.
With the method in the embodiments of the present application, the camera module is controlled to collect the infrared image and RGB image simultaneously, so that the collected infrared image is consistent with the RGB image, improving the accuracy of image processing.
In one embodiment, allocating the image data to the first processing unit under the corresponding running environment for processing includes: extracting a feature set from the image data; and allocating the feature set to the first processing unit under the running environment corresponding to the image data for processing.
After obtaining the image data, the first processing unit can extract a feature set from the image data and then allocate that feature set to the first processing unit under the running environment corresponding to the image data for processing. Optionally, the first processing unit can identify the face region in each image of the received image data, extract the face regions, and allocate them to the first processing unit under the running environment corresponding to each piece of image data. Further, the first processing unit can also extract the information of the facial feature points in each piece of image data and allocate that information to the first processing unit under the running environment corresponding to the image data. When allocating a feature set to the first processing unit under the running environment corresponding to the image data, the first processing unit first looks up the image data from which the feature set was extracted, then obtains the running environment corresponding to that image data, and then allocates the feature set extracted from that image data to the first processing unit in the corresponding running environment for processing.
With the method in the embodiments of the present application, after receiving image data, the first processing unit can extract a feature set from the image data and allocate only the feature set to the first processing unit for processing, reducing the processing load of the first processing unit and improving processing efficiency.
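A minimal sketch of the feature-set variant above, assuming each frame arrives as a small dictionary with a known face bounding box. The frame layout (`pixels`, `face_bbox`) is an assumption for illustration; in a real system the box would come from a face detector, and the feature set could equally be facial landmark points rather than a pixel crop.

```python
# Illustrative sketch: rather than forwarding a whole frame, only the face
# region cropped from it is allocated to the processing unit in that
# frame's environment, reducing the amount of data to process.

def extract_feature_set(frame):
    """Crop the face region from a frame; 'face_bbox' is (x, y, w, h)."""
    x, y, w, h = frame["face_bbox"]
    rows = frame["pixels"]
    region = [row[x:x + w] for row in rows[y:y + h]]
    # Keep the type so the dispatcher can look up the running environment
    # that corresponds to the image data the feature set came from.
    return {"type": frame["type"], "region": region}
```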
In one embodiment, before obtaining the face depth information, the method further includes:
Performing face recognition and liveness detection according to the image data;
Determining that the face recognition on the image data has passed and that the detected face has biological activity.
On receiving image data, the first processing unit can perform face recognition and liveness detection according to it. The first processing unit can detect whether a face is present in the infrared image or the infrared disparity map. When a face is present in the infrared image or infrared disparity map, the first processing unit can match that face against the stored faces, detecting whether it matches a stored face successfully. If the match succeeds, the first processing unit can obtain a face depth image according to the speckle image or depth disparity map and perform liveness detection according to that face depth image. Performing liveness detection according to the face depth image includes: finding the face region in the face depth image, detecting whether the face region has depth information, and detecting whether that depth information conforms to the stereo rule of a face. If the face region in the face depth image has depth information, and that depth information conforms to the face stereo rule (a rule describing three-dimensional face depth information), then the face has biological activity. Optionally, the first processing unit can also use an artificial intelligence model to perform recognition on the image data, obtain the texture of the face surface, and detect whether the direction, density, and width of the texture conform to a face rule; if they do, it determines that the face has biological activity.
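The depth-based liveness check above can be sketched roughly as follows. The variance test is an assumed stand-in for the patent's "face stereo rule": a flat photograph reflects near-uniform depth, while a live face does not. The threshold value is arbitrary and purely illustrative.

```python
# Hedged sketch of liveness detection from a face depth region: fail when
# there is no depth information at all, or when the depth values are too
# flat to satisfy a (stand-in) face stereo rule.

FLATNESS_THRESHOLD = 4.0   # assumed minimum depth variance for a live face

def is_live_face(face_depth_region):
    """face_depth_region: depth values sampled inside the face region."""
    if not face_depth_region or all(d == 0 for d in face_depth_region):
        return False                        # no depth information at all
    mean = sum(face_depth_region) / len(face_depth_region)
    variance = sum((d - mean) ** 2
                   for d in face_depth_region) / len(face_depth_region)
    return variance >= FLATNESS_THRESHOLD   # flat, photo-like regions fail
```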
In one embodiment, the above method further includes:
Step 208, obtaining the type of the application program that receives the face depth information.
Step 210, determining the data channel corresponding to the application program according to the type.
Step 212, sending the face depth information to the application program through the corresponding data channel.
The first processing unit may send the acquired face depth information to the application program, for the application program to perform operations such as face unlocking and face payment. Optionally, the first processing unit may transmit the depth image to the application program through a secure channel or a normal channel, the two channels differing in security level: the secure channel has a higher security level, and the normal channel has a lower one. When data is transmitted through the secure channel, the data may be encrypted to prevent leakage or theft. The electronic device may configure the corresponding data channel according to the type of the application program. Optionally, applications with high security requirements may correspond to the secure channel, and applications with low security requirements may correspond to the normal channel. For example, payment applications correspond to the secure channel, and image applications correspond to the normal channel. The type of each application program and the data channel corresponding to each type may be preset in the first processing unit. After obtaining the data channel corresponding to the type of the application program, the first processing unit may send the face depth information to the application program through the corresponding data channel, so that the application program performs subsequent operations according to the depth image.
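The channel selection described in this embodiment can be sketched as follows. The application-type names, channel names, and the XOR obfuscation standing in for encryption are all illustrative assumptions; a real secure channel would use a proper cipher and OS-level isolation:

```python
# Application types assumed (for illustration) to require the secure channel.
SECURE_TYPES = {"payment", "unlock"}

def channel_for(app_type: str) -> str:
    """Map an application type to its preset data channel."""
    return "secure" if app_type in SECURE_TYPES else "normal"

def send_depth_info(app_type: str, payload: bytes, key: int = 0x5A) -> tuple[str, bytes]:
    """Route depth data over the channel matching the app type.

    On the secure channel the payload is transformed so it cannot be read
    if leaked; XOR here is a placeholder for real encryption.
    """
    channel = channel_for(app_type)
    if channel == "secure":
        payload = bytes(b ^ key for b in payload)
    return channel, payload

print(send_depth_info("payment", b"depth"))  # secure channel, obfuscated bytes
print(send_depth_info("camera", b"depth"))   # normal channel, plaintext bytes
```

The trade-off the next paragraph describes falls out directly: the normal channel skips the encryption step, so low-security applications get faster delivery, while high-security applications pay for protection.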
The method in the embodiments of the present application selects the corresponding data channel according to the type of the application program to transmit data, which both guarantees the security of data transmission for applications with high security requirements and improves the speed of data transmission for applications with low security requirements.
In one embodiment, an image processing method includes:
(1) if image data for obtaining face depth information is received, assigning a security level to the image data;
(2) determining the running environment corresponding to the image data according to the security level; the running environment being a running environment of the first processing unit;
(3) distributing the image data to the first processing unit in the corresponding running environment for processing, to obtain the face depth information.
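The three steps above can be sketched as a small dispatch pipeline. The security-level names, the level-to-environment mapping, and the handlers are assumptions for illustration; one plausible reading of the "running environments" is a trusted execution environment versus a normal environment, but the patent text itself only distinguishes them by security level:

```python
# Step (2)'s mapping from security level to running environment (assumed names).
LEVEL_TO_ENV = {"high": "trusted", "low": "normal"}

def assign_security_level(image_data: dict) -> str:
    """Step (1): classify the incoming image data by its purpose."""
    return "high" if image_data.get("purpose") == "face_depth" else "low"

def process(image_data: dict) -> str:
    """Run the three-step method end to end."""
    level = assign_security_level(image_data)          # step (1)
    env = LEVEL_TO_ENV[level]                          # step (2)
    handlers = {                                       # step (3): dispatch
        "trusted": lambda d: "face depth info (computed in trusted env)",
        "normal": lambda d: "result (computed in normal env)",
    }
    return handlers[env](image_data)

print(process({"purpose": "face_depth"}))
```

The value of the indirection is that sensitive frames never reach code running in the lower-security environment; only the classification step has to be trusted to route correctly.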
In one embodiment, the image data includes a face image acquired by the camera module and/or an intermediate image obtained by the second processing unit processing the face image.
In one embodiment, the image data includes an infrared image and a speckle image acquired by the camera module; wherein the time interval between the first moment of acquiring the infrared image and the second moment of acquiring the speckle image is less than a first threshold.
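The pairing constraint in this embodiment amounts to a timestamp comparison, sketched below; the 5 ms threshold is an assumed value, since the text only requires the interval to be below an unspecified first threshold:

```python
def frames_paired(t_infrared_ms: float, t_speckle_ms: float,
                  first_threshold_ms: float = 5.0) -> bool:
    """Accept an infrared/speckle pair only if captured close enough in time
    that both frames show the same scene (the subject has not moved)."""
    return abs(t_infrared_ms - t_speckle_ms) < first_threshold_ms

print(frames_paired(1000.0, 1003.0))  # -> True
print(frames_paired(1000.0, 1050.0))  # -> False
```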
In one embodiment, the image data includes an infrared image and an RGB image acquired by the camera module; wherein the infrared image and the RGB image are images acquired simultaneously by the camera module.
In one embodiment, distributing the image data to the first processing unit in the corresponding running environment for processing includes: extracting a feature set from the image data; and distributing the feature set to the first processing unit in the running environment corresponding to the image data for processing.
In one embodiment, before obtaining the face depth information, the above method further includes: performing face recognition and liveness detection according to the image data; and determining that the image data passes face recognition and that the detected face is a live face.
In one embodiment, the above method further includes: obtaining the type of the application program that receives the face depth information; determining the data channel corresponding to the application program according to the type; and sending the face depth information to the application program through the corresponding data channel.
It should be understood that although the steps in the above flowchart are shown sequentially as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, there is no strict order limiting the execution of these steps, and they may be executed in other orders. Moreover, at least a part of the steps in the above flowchart may include multiple sub-steps or multiple stages; these sub-steps or stages are not necessarily completed at the same moment, but may be executed at different times, and their execution order is not necessarily sequential: they may be executed in turn or alternately with at least a part of other steps or of the sub-steps or stages of other steps.
Fig. 4 is a structural block diagram of an image processing apparatus in one embodiment. As shown in Fig. 4, an image processing apparatus includes:
a receiving module 402, configured to assign a security level to the image data if image data for obtaining face depth information is received;
a determining module 404, configured to determine the running environment corresponding to the image data according to the security level, the running environment being a running environment of the first processing unit; and
a processing module 406, configured to distribute the image data to the first processing unit in the corresponding running environment for processing, to obtain the face depth information.
In one embodiment, the image data includes a face image acquired by the camera module and/or an intermediate image obtained by the second processing unit processing the face image.
In one embodiment, the image data includes an infrared image and a speckle image acquired by the camera module; wherein the time interval between the first moment of acquiring the infrared image and the second moment of acquiring the speckle image is less than a first threshold.
In one embodiment, the image data includes an infrared image and an RGB image acquired by the camera module; wherein the infrared image and the RGB image are images acquired simultaneously by the camera module.
In one embodiment, the processing module 406 distributing the image data to the first processing unit in the corresponding running environment for processing includes: extracting a feature set from the image data; and distributing the feature set to the first processing unit in the running environment corresponding to the image data for processing.
In one embodiment, the determining module 404 is further configured to, before the face depth information is obtained, perform face recognition and liveness detection according to the image data, and determine that the image data passes face recognition and that the detected face is a live face.
Fig. 5 is a structural block diagram of an image processing apparatus in another embodiment. As shown in Fig. 5, an image processing apparatus includes: a receiving module 502, a determining module 504, a processing module 506, an obtaining module 508, and a sending module 510. The receiving module 502, the determining module 504, and the processing module 506 have the same functions as the corresponding modules in Fig. 4.
The obtaining module 508 is configured to obtain the type of the application program that receives the face depth information.
The determining module 504 is configured to determine the data channel corresponding to the application program according to the type.
The sending module 510 is configured to send the face depth information to the application program through the corresponding data channel.
The above division of modules in the image processing apparatus is for illustration only; in other embodiments, the image processing apparatus may be divided into different modules as required, to complete all or part of the functions of the image processing apparatus.
Each module in the image processing apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on a terminal or a server, and the program modules it constitutes may be stored on the memory of the terminal or server. When the computer program is executed by a processor, the steps of the methods described in the embodiments of the present application are realized.
The embodiments of the present application also provide a computer-readable storage medium: one or more non-volatile computer-readable storage media containing computer-executable instructions which, when executed by one or more processors, cause the processors to execute the steps of the image processing method in the embodiments of the present application.
The embodiments of the present application also provide a computer program product containing instructions which, when run on a computer, cause the computer to execute the steps of the image processing method in the embodiments of the present application.
The embodiments of the present application also provide an electronic device, including: a first processing unit, a second processing unit, and a camera module, wherein the second processing unit is connected to the first processing unit and the camera module respectively.
The first processing unit is configured to assign a security level to the image data if image data for obtaining face depth information is received;
the first processing unit is configured to determine the running environment corresponding to the image data according to the security level, the running environment being a running environment of the first processing unit; and
the first processing unit is configured to distribute the image data to the first processing unit in the corresponding running environment for processing, to obtain the face depth information.
In one embodiment, the image data includes a face image acquired by the camera module and/or an intermediate image obtained by the second processing unit processing the face image.
In one embodiment, the image data includes an infrared image and a speckle image acquired by the camera module; wherein the time interval between the first moment of acquiring the infrared image and the second moment of acquiring the speckle image is less than a first threshold.
In one embodiment, the image data includes an infrared image and an RGB image acquired by the camera module; wherein the infrared image and the RGB image are images acquired simultaneously by the camera module.
In one embodiment, the first processing unit distributing the image data to the first processing unit in the corresponding running environment for processing includes: extracting a feature set from the image data; and distributing the feature set to the first processing unit in the running environment corresponding to the image data for processing.
In one embodiment, before the face depth information is obtained, the first processing unit is further configured to perform face recognition and liveness detection according to the image data, and determine that the image data passes face recognition and that the detected face is a live face.
In one embodiment, the first processing unit is further configured to obtain the type of the application program that receives the face depth information; determine the data channel corresponding to the application program according to the type; and send the face depth information to the application program through the corresponding data channel.
Any reference to memory, storage, a database, or other media used in this application may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM), which serves as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they shall not therefore be construed as limiting the patent scope of the present application. It should be noted that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent application shall be subject to the appended claims.

Claims (16)

1. An image processing method, characterized by comprising:
if image data for obtaining face depth information is received, assigning a security level to the image data;
determining a running environment corresponding to the image data according to the security level, the running environment being a running environment of a first processing unit; and
distributing the image data to the first processing unit in the corresponding running environment for processing, to obtain face depth information.
2. The method according to claim 1, characterized in that:
the image data comprises a face image acquired by a camera module and/or an intermediate image obtained by a second processing unit processing the face image.
3. The method according to claim 1, characterized in that:
the image data comprises an infrared image and a speckle image acquired by the camera module;
wherein a time interval between a first moment of acquiring the infrared image and a second moment of acquiring the speckle image is less than a first threshold.
4. The method according to claim 1, characterized in that:
the image data comprises an infrared image and an RGB image acquired by the camera module;
wherein the infrared image and the RGB image are images acquired simultaneously by the camera module.
5. The method according to claim 1, characterized in that distributing the image data to the first processing unit in the corresponding running environment for processing comprises:
extracting a feature set from the image data; and
distributing the feature set to the first processing unit in the running environment corresponding to the image data for processing.
6. The method according to any one of claims 1 to 5, characterized in that before obtaining the face depth information, the method further comprises:
performing face recognition and liveness detection according to the image data; and
determining that the image data passes face recognition and that the detected face is a live face.
7. The method according to any one of claims 1 to 5, characterized in that the method further comprises:
obtaining a type of an application program that receives the face depth information;
determining a data channel corresponding to the application program according to the type; and
sending the face depth information to the application program through the corresponding data channel.
8. An image processing apparatus, characterized by comprising:
a receiving module, configured to assign a security level to the image data if image data for obtaining face depth information is received;
a determining module, configured to determine a running environment corresponding to the image data according to the security level, the running environment being a running environment of a first processing unit; and
a processing module, configured to distribute the image data to the first processing unit in the corresponding running environment for processing, to obtain face depth information.
9. An electronic device, characterized by comprising: a first processing unit, a second processing unit, and a camera module, wherein the second processing unit is connected to the first processing unit and the camera module respectively;
the first processing unit is configured to assign a security level to the image data if image data for obtaining face depth information is received;
the first processing unit is configured to determine a running environment corresponding to the image data according to the security level, the running environment being a running environment of the first processing unit; and
the first processing unit is configured to distribute the image data to the first processing unit in the corresponding running environment for processing, to obtain face depth information.
10. The electronic device according to claim 9, characterized in that:
the image data comprises a face image acquired by the camera module and/or an intermediate image obtained by the second processing unit processing the face image.
11. The electronic device according to claim 9, characterized in that:
the image data comprises an infrared image and a speckle image acquired by the camera module; wherein a time interval between a first moment of acquiring the infrared image and a second moment of acquiring the speckle image is less than a first threshold.
12. The electronic device according to claim 9, characterized in that:
the image data comprises an infrared image and an RGB image acquired by the camera module; wherein the infrared image and the RGB image are images acquired simultaneously by the camera module.
13. The electronic device according to claim 9, characterized in that:
the first processing unit distributing the image data to the first processing unit in the corresponding running environment for processing comprises: extracting a feature set from the image data; and distributing the feature set to the first processing unit in the running environment corresponding to the image data for processing.
14. The electronic device according to any one of claims 9 to 13, characterized in that:
the first processing unit is further configured to, before the face depth information is obtained, perform face recognition and liveness detection according to the image data, and determine that the image data passes face recognition and that the detected face is a live face.
15. The electronic device according to any one of claims 9 to 13, characterized in that:
the first processing unit is further configured to obtain a type of an application program that receives the face depth information; determine a data channel corresponding to the application program according to the type; and send the face depth information to the application program through the corresponding data channel.
16. A computer-readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, realizes the steps of the method according to any one of claims 1 to 7.
CN201810403022.0A 2018-04-12 2018-04-28 Image processing method, image processing device, electronic equipment and computer readable storage medium Expired - Fee Related CN108846310B (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN202110045933.2A CN112668547A (en) 2018-04-28 2018-04-28 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN201810403022.0A CN108846310B (en) 2018-04-28 2018-04-28 Image processing method, image processing device, electronic equipment and computer readable storage medium
PCT/CN2019/081743 WO2019196793A1 (en) 2018-04-12 2019-04-08 Image processing method and apparatus, and electronic device and computer-readable storage medium
EP19784964.9A EP3633546A4 (en) 2018-04-12 2019-04-08 Image processing method and apparatus, and electronic device and computer-readable storage medium
US16/742,378 US11170204B2 (en) 2018-04-12 2020-01-14 Data processing method, electronic device and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810403022.0A CN108846310B (en) 2018-04-28 2018-04-28 Image processing method, image processing device, electronic equipment and computer readable storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202110045933.2A Division CN112668547A (en) 2018-04-28 2018-04-28 Image processing method, image processing device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN108846310A true CN108846310A (en) 2018-11-20
CN108846310B CN108846310B (en) 2021-02-02

Family

ID=64212362

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110045933.2A Pending CN112668547A (en) 2018-04-28 2018-04-28 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN201810403022.0A Expired - Fee Related CN108846310B (en) 2018-04-12 2018-04-28 Image processing method, image processing device, electronic equipment and computer readable storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202110045933.2A Pending CN112668547A (en) 2018-04-28 2018-04-28 Image processing method, image processing device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (2) CN112668547A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019196793A1 (en) * 2018-04-12 2019-10-17 Oppo广东移动通信有限公司 Image processing method and apparatus, and electronic device and computer-readable storage medium
CN111292488A (en) * 2020-02-13 2020-06-16 展讯通信(上海)有限公司 Image data processing method, device and storage medium
EP3979202A4 (en) * 2019-06-24 2022-07-27 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and device, and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104702577A (en) * 2013-12-09 2015-06-10 华为技术有限公司 Method and device for security processing of data stream
CN105120257A (en) * 2015-08-18 2015-12-02 宁波盈芯信息科技有限公司 Vertical depth sensing device based on structured light coding
CN106685997A (en) * 2017-02-24 2017-05-17 深圳市金立通信设备有限公司 Method and terminal for transmitting data
CN106991377A (en) * 2017-03-09 2017-07-28 广东欧珀移动通信有限公司 With reference to the face identification method, face identification device and electronic installation of depth information
CN107169483A (en) * 2017-07-12 2017-09-15 深圳奥比中光科技有限公司 Tasks carrying based on recognition of face
CN107169343A (en) * 2017-04-25 2017-09-15 深圳市金立通信设备有限公司 A kind of method and terminal of control application program
CN107292183A (en) * 2017-06-29 2017-10-24 国信优易数据有限公司 A kind of data processing method and equipment
CN107480613A (en) * 2017-07-31 2017-12-15 广东欧珀移动通信有限公司 Face identification method, device, mobile terminal and computer-readable recording medium
CN107832677A (en) * 2017-10-19 2018-03-23 深圳奥比中光科技有限公司 Face identification method and system based on In vivo detection

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106487775B (en) * 2015-09-01 2020-01-21 阿里巴巴集团控股有限公司 Service data processing method and device based on cloud platform
CN106815494B (en) * 2016-12-28 2020-02-07 中软信息系统工程有限公司 Method for realizing application program safety certification based on CPU time-space isolation mechanism
CN107105217B (en) * 2017-04-17 2018-11-30 深圳奥比中光科技有限公司 Multi-mode depth calculation processor and 3D rendering equipment
CN107292283A (en) * 2017-07-12 2017-10-24 深圳奥比中光科技有限公司 Mix face identification method

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104702577A (en) * 2013-12-09 2015-06-10 华为技术有限公司 Method and device for security processing of data stream
CN105120257A (en) * 2015-08-18 2015-12-02 宁波盈芯信息科技有限公司 Vertical depth sensing device based on structured light coding
CN106685997A (en) * 2017-02-24 2017-05-17 深圳市金立通信设备有限公司 Method and terminal for transmitting data
CN106991377A (en) * 2017-03-09 2017-07-28 广东欧珀移动通信有限公司 With reference to the face identification method, face identification device and electronic installation of depth information
CN107169343A (en) * 2017-04-25 2017-09-15 深圳市金立通信设备有限公司 A kind of method and terminal of control application program
CN107292183A (en) * 2017-06-29 2017-10-24 国信优易数据有限公司 A kind of data processing method and equipment
CN107169483A (en) * 2017-07-12 2017-09-15 深圳奥比中光科技有限公司 Tasks carrying based on recognition of face
CN107480613A (en) * 2017-07-31 2017-12-15 广东欧珀移动通信有限公司 Face identification method, device, mobile terminal and computer-readable recording medium
CN107832677A (en) * 2017-10-19 2018-03-23 深圳奥比中光科技有限公司 Face identification method and system based on In vivo detection

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019196793A1 (en) * 2018-04-12 2019-10-17 Oppo广东移动通信有限公司 Image processing method and apparatus, and electronic device and computer-readable storage medium
US11170204B2 (en) 2018-04-12 2021-11-09 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Data processing method, electronic device and computer-readable storage medium
EP3979202A4 (en) * 2019-06-24 2022-07-27 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and device, and storage medium
CN111292488A (en) * 2020-02-13 2020-06-16 展讯通信(上海)有限公司 Image data processing method, device and storage medium

Also Published As

Publication number Publication date
CN108846310B (en) 2021-02-02
CN112668547A (en) 2021-04-16

Similar Documents

Publication Publication Date Title
US10621454B2 (en) Living body detection method, living body detection system, and computer program product
CN108549867A (en) Image processing method, device, computer readable storage medium and electronic equipment
CN110191266B (en) Data processing method and device, electronic equipment and computer readable storage medium
CN110248111B (en) Method and device for controlling shooting, electronic equipment and computer-readable storage medium
CN108764052A (en) Image processing method, device, computer readable storage medium and electronic equipment
CN108804895A (en) Image processing method, device, computer readable storage medium and electronic equipment
CN108564032A (en) Image processing method, device, electronic equipment and computer readable storage medium
CN108805024A (en) Image processing method, device, computer readable storage medium and electronic equipment
US20200151425A1 (en) Image Processing Method, Image Processing Device, Computer Readable Storage Medium and Electronic Device
CN108573170A (en) Information processing method and device, electronic equipment, computer readable storage medium
CN108846310A (en) Image processing method, device, electronic equipment and computer readable storage medium
CN109213610B (en) Data processing method and device, computer readable storage medium and electronic equipment
CN108985255B (en) Data processing method and device, computer readable storage medium and electronic equipment
EP3905104B1 (en) Living body detection method and device
WO2016172923A1 (en) Video detection method, video detection system, and computer program product
CN110971836B (en) Method and device for controlling shooting, electronic equipment and computer-readable storage medium
CN109145653A (en) Data processing method and device, electronic equipment, computer readable storage medium
CN108764053A (en) Image processing method, device, computer readable storage medium and electronic equipment
CN113298158B (en) Data detection method, device, equipment and storage medium
CN110532746B (en) Face checking method, device, server and readable storage medium
CN111429476B (en) Method and device for determining action track of target person
CN108833887B (en) Data processing method and device, electronic equipment and computer readable storage medium
CN112613471A (en) Face living body detection method and device and computer readable storage medium
CN110399833B (en) Identity recognition method, modeling method and equipment
US11170204B2 (en) Data processing method, electronic device and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210202

CF01 Termination of patent right due to non-payment of annual fee