CN108805024A - Image processing method, device, computer readable storage medium and electronic equipment - Google Patents
- Publication number: CN108805024A
- Application number: CN201810403815.2A
- Authority: CN (China)
- Prior art keywords: image, face, target, depth, infrared
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data › G06V40/16—Human faces, e.g. facial parts, sketches or expressions › G06V40/161—Detection; Localisation; Normalisation
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data › G06V40/16—Human faces, e.g. facial parts, sketches or expressions › G06V40/172—Classification, e.g. identification
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data › G06V40/40—Spoof detection, e.g. liveness detection › G06V40/45—Detection of the body part being alive
Abstract
This application relates to an image processing method, an apparatus, a computer-readable storage medium, and an electronic device. The method includes: obtaining a target infrared image and a target depth image, and performing face detection according to the target infrared image to determine a target face region, where the target depth image represents the depth information corresponding to the target infrared image; performing liveness detection on the target face region according to the target depth image; if the liveness detection succeeds, obtaining target face attribute parameters corresponding to the target face region, and performing face matching on the target face region according to the target face attribute parameters to obtain a face matching result; and obtaining a face verification result according to the face matching result. The above image processing method, apparatus, computer-readable storage medium, and electronic device can improve the accuracy of image processing.
Description
Technical field
This application relates to the field of computer technology, and in particular to an image processing method, an apparatus, a computer-readable storage medium, and an electronic device.
Background
Because a face has unique characteristics, face recognition technology is applied more and more widely in intelligent terminals. Many applications on an intelligent terminal perform authentication by face, for example unlocking the terminal or authorizing a payment through the face. An intelligent terminal can also process images that contain faces, for example recognizing facial features, making stickers from facial expressions, or performing beautification based on facial features.
Summary of the invention
Embodiments of the present application provide an image processing method, an apparatus, a computer-readable storage medium, and an electronic device, which can improve the accuracy of image processing.
An image processing method, including:
obtaining a target infrared image and a target depth image, and performing face detection according to the target infrared image to determine a target face region, where the target depth image represents the depth information corresponding to the target infrared image;
performing liveness detection on the target face region according to the target depth image;
if the liveness detection succeeds, obtaining target face attribute parameters corresponding to the target face region, and performing face matching on the target face region according to the target face attribute parameters to obtain a face matching result;
obtaining a face verification result according to the face matching result.
An image processing apparatus, including:
a face detection module, configured to obtain a target infrared image and a target depth image, and to perform face detection according to the target infrared image to determine a target face region, where the target depth image represents the depth information corresponding to the target infrared image;
a liveness detection module, configured to perform liveness detection on the target face region according to the target depth image;
a face matching module, configured to, if the liveness detection succeeds, obtain target face attribute parameters corresponding to the target face region, and perform face matching on the target face region according to the target face attribute parameters to obtain a face matching result;
a face verification module, configured to obtain a face verification result according to the face matching result.
A computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the following steps:
obtaining a target infrared image and a target depth image, and performing face detection according to the target infrared image to determine a target face region, where the target depth image represents the depth information corresponding to the target infrared image;
performing liveness detection on the target face region according to the target depth image;
if the liveness detection succeeds, obtaining target face attribute parameters corresponding to the target face region, and performing face matching on the target face region according to the target face attribute parameters to obtain a face matching result;
obtaining a face verification result according to the face matching result.
An electronic device, including a memory and a processor, where the memory stores computer-readable instructions that, when executed by the processor, cause the processor to execute the following steps:
obtaining a target infrared image and a target depth image, and performing face detection according to the target infrared image to determine a target face region, where the target depth image represents the depth information corresponding to the target infrared image;
performing liveness detection on the target face region according to the target depth image;
if the liveness detection succeeds, obtaining target face attribute parameters corresponding to the target face region, and performing face matching on the target face region according to the target face attribute parameters to obtain a face matching result;
obtaining a face verification result according to the face matching result.
With the above image processing method, apparatus, computer-readable storage medium, and electronic device, a target infrared image and a target depth image can be obtained, and face detection is performed according to the target infrared image to obtain a target face region. Liveness detection is then performed according to the target depth image; after the liveness detection succeeds, the target face attribute parameters of the target face region are obtained, and face matching is performed according to those parameters. The final face verification result is obtained from the face matching result. In this way, during face verification, liveness detection can be performed according to the depth image and face matching according to the infrared image, which improves the accuracy of face verification.
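The two-stage flow described above (face detection on the infrared image, liveness detection on the depth image, then attribute matching) can be sketched as follows; the function and its callback names are illustrative assumptions, not part of the patent:

```python
def verify_face(infrared_image, depth_image,
                detect_face, liveness_check, match_face):
    """Sketch of the described flow. detect_face returns a face
    region (or None); liveness_check and match_face return bools."""
    region = detect_face(infrared_image)         # face detection on infrared
    if region is None:
        return "no_face"
    if not liveness_check(depth_image, region):  # liveness on depth
        return "liveness_failed"
    if not match_face(infrared_image, region):   # attribute matching
        return "match_failed"
    return "verified"                            # both stages succeeded
```

Calling it with stubs shows that matching is attempted only after liveness detection succeeds, mirroring the order of steps 202 to 208.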
Description of the drawings
In order to illustrate the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a diagram of an application scenario of the image processing method in one embodiment;
Fig. 2 is a flowchart of the image processing method in one embodiment;
Fig. 3 is a flowchart of the image processing method in another embodiment;
Fig. 4 is a schematic diagram of calculating depth information in one embodiment;
Fig. 5 is a flowchart of the image processing method in yet another embodiment;
Fig. 6 is a flowchart of the image processing method in still another embodiment;
Fig. 7 is a diagram of a hardware structure for implementing the image processing method in one embodiment;
Fig. 8 is a diagram of a hardware structure for implementing the image processing method in another embodiment;
Fig. 9 is a schematic diagram of a software architecture for implementing the image processing method in one embodiment;
Fig. 10 is a schematic structural diagram of an image processing apparatus in one embodiment.
Detailed description of the embodiments
In order to make the objectives, technical solutions, and advantages of the present application clearer, the application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the application and are not intended to limit it.
It can be understood that the terms "first", "second", and the like used in this application may be used here to describe various elements, but these elements are not limited by the terms; the terms are only used to distinguish one element from another. For example, without departing from the scope of the application, a first client may be called a second client and, similarly, a second client may be called a first client. The first client and the second client are both clients, but they are not the same client.
Fig. 1 is a diagram of an application scenario of the image processing method in one embodiment. As shown in Fig. 1, the scenario includes a user 102 and an electronic device 104. A camera module can be installed in the electronic device 104 to obtain a target infrared image and a target depth image corresponding to the user 102, and face detection is performed according to the target infrared image to determine a target face region, where the target depth image represents the depth information corresponding to the target infrared image. Liveness detection is performed on the target face region according to the target depth image; if the liveness detection succeeds, the target face attribute parameters corresponding to the target face region are obtained, and face matching is performed on the target face region according to the target face attribute parameters to obtain a face matching result. A face verification result is obtained according to the face matching result. The electronic device 104 may be a smartphone, a tablet computer, a personal digital assistant, a wearable device, or the like.
Fig. 2 is a flowchart of the image processing method in one embodiment. As shown in Fig. 2, the image processing method includes steps 202 to 208.
Step 202: obtain a target infrared image and a target depth image, and perform face detection according to the target infrared image to determine a target face region, where the target depth image represents the depth information corresponding to the target infrared image.
In one embodiment, cameras can be installed on the electronic device, and images are obtained through the installed cameras. Cameras can be divided into types such as laser cameras and visible-light cameras according to the images they obtain: a laser camera obtains the image formed when laser light irradiates an object, and a visible-light camera obtains the image formed when visible light irradiates an object. Several cameras can be installed on the electronic device, and the installation positions are not limited. For example, one camera can be installed on the front panel of the electronic device and two on the back; cameras can also be embedded inside the electronic device and opened by rotating or sliding. Specifically, a front camera and a rear camera can be installed on the electronic device; they obtain images from different views, the front camera generally from the front of the electronic device and the rear camera from the back.
The target infrared image and the target depth image obtained by the electronic device correspond to each other, and the target depth image represents the depth information corresponding to the target infrared image. The target infrared image can show the detail of the photographed object, and the target depth image can represent the depth information of the photographed scene. After the target infrared image is obtained, face detection can be performed on it to detect whether the target infrared image contains a face. If it does, the target face region where the face is located is extracted from the target infrared image. Since the target infrared image and the target depth image correspond, after the target face region is extracted, the depth information corresponding to each pixel in the target face region can be obtained from the corresponding region of the target depth image.
Step 204: perform liveness detection on the target face region according to the target depth image.
The target infrared image and the target depth image correspond; after the target face region is extracted from the target infrared image, the region of the target depth image where the target face is located can be found from the position of the target face region. Specifically, an image is a two-dimensional pixel matrix, and the position of each pixel in the image can be represented by a two-dimensional coordinate. For example, taking the pixel at the bottom-left corner of the image as the origin of a coordinate system, moving one pixel to the right moves one unit in the positive X direction, and moving one pixel up moves one unit in the positive Y direction, so the position of each pixel in the image can be represented by a two-dimensional coordinate.
After the target face region is detected in the target infrared image, the position of any pixel of the target face region in the target infrared image can be represented by a face coordinate; the position of the target face in the target depth image is then located according to that face coordinate, so the face depth information corresponding to the target face region is obtained. Generally, a live face is three-dimensional, while a face displayed in a picture or on a screen is planar. In addition, different skin textures may yield different collected depth information. Therefore, it can be judged from the collected face depth information whether the target face region is three-dimensional or planar, and the skin texture features of the face can also be obtained from the collected face depth information, so that liveness detection is performed on the target face region.
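A minimal illustration of the planarity test this paragraph describes: a live face shows depth relief across the face region, while a photo or a screen is nearly flat. The relief threshold below is an assumed placeholder value, not taken from the patent:

```python
def is_live_face(face_depths_mm, min_relief_mm=10.0):
    """Judge 3D vs planar from depth values (in mm) sampled inside
    the face region: a printed photo or a screen yields an almost
    constant depth, while a real face (nose vs cheeks) does not."""
    relief = max(face_depths_mm) - min(face_depths_mm)
    return relief >= min_relief_mm

# a nearly constant depth profile (photo) vs one with facial relief
photo_depths = [400.0, 401.0, 400.5, 400.2]
face_depths = [400.0, 385.0, 402.0, 395.0]
```

In practice the patent also considers skin texture features derived from the depth data; this sketch covers only the three-dimensional-versus-planar judgment.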
Step 206: if the liveness detection succeeds, obtain target face attribute parameters corresponding to the target face region, and perform face matching on the target face region according to the target face attribute parameters to obtain a face matching result.
Target face attribute parameters are parameters that can represent attributes of the target face; the target face can be recognized and matched according to them. The target face attribute parameters may include, but are not limited to, a face deflection angle, a face brightness parameter, facial-feature parameters, skin texture parameters, geometric feature parameters, and so on. The electronic device can store a preset face region for matching in advance and obtain the face attribute parameters of the preset face region. After the target face attribute parameters are obtained, they can be compared with the stored face attribute parameters. If the target face attribute parameters match the stored face attribute parameters, the preset face region corresponding to the matching face attribute parameters is the preset face region corresponding to the target face region.
The preset face region stored in the electronic device is considered a face region with operating rights. If the target face region matches the preset face region, the user corresponding to the target face region is judged to have operating rights. That is, when the target face region matches the preset face region, the face matching is considered successful; when the target face region does not match the preset face region, the face matching is considered failed.
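The comparison of target face attribute parameters against the stored ones can be pictured as a distance test over numeric parameter vectors; the vector representation and the threshold value here are illustrative assumptions:

```python
def attribute_distance(params_a, params_b):
    """Mean absolute difference between two equal-length vectors of
    face attribute parameters (deflection angle, brightness, ...)."""
    assert len(params_a) == len(params_b)
    return sum(abs(a - b) for a, b in zip(params_a, params_b)) / len(params_a)

def faces_match(target_params, stored_params, threshold=0.1):
    """Matching succeeds when the target parameters are close enough
    to the stored (preset) parameters."""
    return attribute_distance(target_params, stored_params) <= threshold
```

A terminal would call `faces_match` once per stored preset face region and grant operating rights only on a successful match.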
Step 208: obtain a face verification result according to the face matching result.
In one embodiment, liveness detection is performed according to the target depth image; if the liveness detection succeeds, face matching is then performed according to the target infrared image. Only after both the liveness detection and the face matching succeed is the face verification considered successful. The processing unit of the electronic device can receive a face verification instruction initiated by an upper-layer application; after the processing unit detects the instruction, face verification is performed according to the target infrared image and the target depth image, the face verification result is finally returned to the upper-layer application, and the application performs subsequent processing according to the result.
With the image processing method provided by the above embodiment, a target infrared image and a target depth image can be obtained, and face detection is performed according to the target infrared image to obtain a target face region. Liveness detection is then performed according to the target depth image; after it succeeds, the target face attribute parameters of the target face region are obtained, and face matching is performed according to those parameters. The final face verification result is obtained from the face matching result. In this way, during face verification, liveness detection can be performed according to the depth image and face matching according to the infrared image, which improves the accuracy of face verification.
Fig. 3 is a flowchart of the image processing method in another embodiment. As shown in Fig. 3, the image processing method includes steps 302 to 316.
Step 302: when the first processing unit detects a face verification instruction, control the camera module to collect an infrared image and a depth image, where the time interval between the first moment at which the infrared image is collected and the second moment at which the depth image is collected is less than a first threshold.
In one embodiment, the processing unit of the electronic device can receive instructions from upper-layer applications. When the processing unit receives a face verification instruction, it can control the camera module to work and collect an infrared image and a depth image through the camera. The processing unit is connected to the camera; the images obtained by the camera can be transmitted to the processing unit for processing such as cropping, brightness adjustment, face detection, and face recognition. The camera module may include, but is not limited to, a laser camera, a laser light, and a floodlight. When the processing unit receives the face verification instruction, it can directly obtain the infrared image and the depth image, or it can obtain an infrared image and a speckle image and calculate the depth image from the speckle image. Specifically, the processing unit can control the laser light and the floodlight to work in a time-shared manner: when the laser light is on, the speckle image is collected through the laser camera; when the floodlight is on, the infrared image is collected through the laser camera.
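The time-shared control described here (floodlight on for the infrared frame, laser light on for the speckle frame, both captured within the first threshold of step 302) might be sketched as below; the function name and the threshold value are hypothetical:

```python
import time

def capture_pair(capture_infrared, capture_speckle, max_interval_s=0.005):
    """Capture an infrared frame (floodlight on) and a speckle frame
    (laser light on) back to back, enforcing that the two capture
    moments differ by less than the first threshold."""
    t_infrared = time.monotonic()
    infrared = capture_infrared()   # floodlight on, laser light off
    t_speckle = time.monotonic()
    speckle = capture_speckle()     # laser light on, floodlight off
    if t_speckle - t_infrared >= max_interval_s:
        raise RuntimeError("frames captured too far apart")
    return infrared, speckle
```

Keeping the two capture moments close ensures the infrared image and the speckle image depict the same scene, so the depth image computed from the speckle image stays aligned with the infrared image.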
It can be understood that when laser light irradiates an optically rough surface whose average undulation is greater than the order of the wavelength, the wavelets scattered by the randomly distributed surface elements superpose on one another, giving the reflected light field a random spatial intensity distribution with a granular structure; this is laser speckle. The laser speckle formed is highly random, so the laser speckle generated by lasers from different laser emitters is different, and the speckle images generated when the formed laser speckle irradiates objects of different depths and shapes are different. The laser speckle formed by a given laser emitter is unique, and the resulting speckle image is therefore also unique. The laser speckle formed by the laser light can irradiate an object, and the speckle image formed by the laser speckle on the object is then collected through the laser camera.
The laser light can emit several laser speckle points; when the speckle points irradiate objects at different distances, the positions of the spots presented in the image are different. The electronic device can collect a standard reference image in advance, which is the image formed when the laser speckle irradiates a plane. The speckle points in the reference image are therefore usually evenly distributed, and a correspondence between each speckle point in the reference image and a reference depth is established. When a speckle image needs to be collected, the laser light is controlled to emit laser speckle; after the laser speckle irradiates the object, the speckle image is collected through the laser camera. Each speckle point in the speckle image is then compared with the speckle points in the reference image to obtain the position offset of the speckle point in the speckle image relative to the corresponding speckle point in the reference image, and the actual depth information corresponding to the speckle point is obtained from the position offset of the speckle point and the reference depth.
The infrared image collected by the camera corresponds to the speckle image, and the speckle image can be used to calculate the depth information corresponding to each pixel in the infrared image. In this way, the face can be detected and recognized through the infrared image, and the depth information corresponding to the face can be calculated from the speckle image. Specifically, in the process of calculating depth information from the speckle image, a relative depth is first calculated from the position offset of the speckle image relative to the speckle points of the reference image; the relative depth can represent the depth from the actually photographed object to the reference plane. The actual depth information of the object is then calculated from the relative depth and the reference depth. The depth image is used to represent the depth information corresponding to the infrared image, which can be the relative depth of the object to the reference plane or the absolute depth of the object to the camera.
The step of calculating the depth image from the speckle image can specifically include: obtaining a reference image, where the reference image is an image with reference depth information obtained by calibration; comparing the reference image with the speckle image to obtain offset information, where the offset information represents the horizontal offset of a speckle point in the speckle image relative to the corresponding speckle point in the reference image; and calculating the depth image from the offset information and the reference depth information.
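The comparison step (finding, for each speckle point, its horizontal offset against the reference image) is essentially block matching. A deliberately simplified one-dimensional version, not the patent's actual algorithm, can illustrate the idea:

```python
def horizontal_offset(speckle_row, reference_row, search=3):
    """Find the shift d that best aligns speckle_row with
    reference_row, i.e. speckle_row[i] ~ reference_row[i - d];
    the cost is the mean absolute difference over the overlap."""
    n = len(reference_row)
    best_d, best_cost = 0, float("inf")
    for d in range(-search, search + 1):
        cost, valid = 0, 0
        for i, value in enumerate(speckle_row):
            j = i - d
            if 0 <= j < n:
                cost += abs(value - reference_row[j])
                valid += 1
        if valid == 0:
            continue
        avg = cost / valid
        if avg < best_cost:
            best_cost, best_d = avg, d
    return best_d
```

A speckle pattern that appears one pixel to the right of its reference position yields an offset of +1; the real method does the same search in two dimensions over a preset pixel block.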
Fig. 4 is a schematic diagram of calculating depth information in one embodiment. As shown in Fig. 4, the laser light 402 can generate laser speckle, and after the laser speckle is reflected by an object, the image it forms is obtained through the laser camera 404. During the calibration of the camera, the laser speckle emitted by the laser light 402 is reflected by the reference plane 408, the reflected light is collected through the laser camera 404, and the reference image is obtained by imaging on the imaging plane 410. The reference depth from the reference plane 408 to the laser light 402 is L, and this reference depth is known. In the process of actually calculating depth information, the laser speckle emitted by the laser light 402 is reflected by the object 406, the reflected light is collected by the laser camera 404, and the actual speckle image is obtained by imaging on the imaging plane 410. The formula for obtaining the actual depth information is then:

Dis = (L × f × CD) / (f × CD + L × AB)  (1)

where L is the distance between the laser light 402 and the reference plane 408, f is the focal length of the lens in the laser camera 404, CD is the distance between the laser light 402 and the laser camera 404, and AB is the offset distance between the imaging of the object 406 and the imaging of the reference plane 408. AB can be the product of the pixel offset n and the actual pixel size p. When the distance Dis between the object 406 and the laser light 402 is greater than the distance L between the reference plane 408 and the laser light 402, AB is negative; when Dis is less than L, AB is positive.
Specifically, each pixel (x, y) in the speckle image is traversed, and a pixel block of preset size is selected centered on that pixel; for example, a block of 31×31 pixels can be chosen. A matching pixel block is then searched for in the reference image, and the horizontal offset between the coordinates of the matched pixel in the reference image and the coordinates of pixel (x, y) is calculated, with an offset to the right counted as positive and an offset to the left counted as negative. The calculated horizontal offset is then substituted into formula (1) to obtain the depth information of pixel (x, y). By calculating the depth information of each pixel in the speckle image in turn, the depth information corresponding to each pixel in the speckle image can be obtained.
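Substituting the pixel offset into the depth relation can be sketched as follows, assuming the disparity formula Dis = (L × f × CD) / (f × CD + L × AB) with AB = n × p, which is consistent with the stated sign convention for AB (all example numbers are arbitrary, not from the patent):

```python
def depth_from_offset(n_pixels, pixel_size, ref_depth, focal_len, baseline):
    """Depth of pixel (x, y) from its horizontal pixel offset.
    ref_depth: L, reference-plane depth; focal_len: f, lens focal
    length; baseline: CD, distance between laser light and laser
    camera; AB = n_pixels * pixel_size (signed)."""
    ab = n_pixels * pixel_size
    return (ref_depth * focal_len * baseline) / (
        focal_len * baseline + ref_depth * ab)
```

A zero offset places the pixel on the reference plane (Dis = L), and a negative AB gives Dis > L, matching the convention stated above.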
The depth image can be used to represent the depth information corresponding to the infrared image, and each pixel contained in the depth image represents one depth value. Specifically, each speckle point in the reference image corresponds to a reference depth; after the horizontal offset between a speckle point in the reference image and the corresponding speckle point in the speckle image is obtained, the relative depth from the object in the speckle image to the reference plane can be calculated from the horizontal offset, and the actual depth from the object to the camera, that is, the final depth image, can then be calculated from the relative depth information and the reference depth information.
Step 304: obtain the target infrared image according to the infrared image, and obtain the target depth image according to the depth image.
In the embodiments provided by this application, after the infrared image and the speckle image are obtained, the depth image can be calculated from the speckle image. The infrared image and the depth image can also each be corrected, which means correcting internal and external parameters in the infrared image and the depth image. For example, if the laser camera is deflected, the obtained infrared image and depth image need to be corrected for the error produced by the deflection parallax, yielding a standard infrared image and depth image. Correcting the infrared image yields the target infrared image, and correcting the depth image yields the target depth image. Specifically, an infrared parallax image can be calculated from the infrared image, and internal and external parameter correction is performed on it to obtain the target infrared image; a depth parallax image is calculated from the depth image, and internal and external parameter correction is performed on it to obtain the target depth image.
Step 306: detect face regions in the target infrared image.
Step 308: if there are two or more face regions in the target infrared image, take the face region with the largest region area as the target face region.
It can be understood that the target infrared image may contain no face region, one face region, or two or more face regions. When there is no face region in the target infrared image, face verification processing is unnecessary. When there is exactly one face region, face verification can be performed directly on that region. When there are two or more face regions, one of them is taken as the target face region for face verification. Specifically, if there are two or more face regions in the target infrared image, the region area corresponding to each face region can be calculated; the region area can be expressed by the number of pixels contained in the face region, and the face region with the largest area can be used as the target face region for verification.
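Selecting the target face region by pixel count, as just described, reduces to a maximum over the candidate regions; representing a region as the list of its pixel coordinates is an assumption made for illustration:

```python
def pick_target_face(face_regions):
    """Return the detected face region containing the most pixels,
    or None when the infrared image contains no face region."""
    if not face_regions:
        return None
    return max(face_regions, key=len)

# two candidate regions as lists of (x, y) pixel coordinates
small_region = [(0, 0), (0, 1)]
large_region = [(5, 5), (5, 6), (6, 5), (6, 6)]
```

Here `pick_target_face([small_region, large_region])` selects the larger region, mirroring step 308.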
Step 310: extract the target face depth region corresponding to the target face region from the target depth image, and obtain target liveness attribute parameters according to the target face depth region.
Generally, in the process of authenticating a face, whether the face region matches a preset face region can be verified according to the collected target infrared image. But if the photographed subject is, say, a photo or a sculpture of a face, the matching might also succeed. Therefore, the face verification process includes a liveness detection stage and a face matching stage: the face matching stage is the process of identifying the face, and the liveness detection stage is the process of detecting whether the photographed face is live. By performing liveness detection according to the collected target depth image, it is ensured that verification can succeed only if the collected face is live. It can be understood that the collected target infrared image can represent the detail of the face, while the collected target depth image can represent the depth information corresponding to the target infrared image, so liveness detection can be performed according to the target depth image. For example, if the photographed face is a face in a photo, it can be judged from the target depth image that the collected face is not three-dimensional, and the collected face can be considered a non-live face.
Step 312: perform liveness detection processing according to the target liveness attribute parameters.
Specifically, performing liveness detection from the target depth image includes: searching the target depth image for the target face depth region corresponding to the target face region, extracting the target liveness attribute parameters from the target face depth region, and performing liveness detection processing according to those parameters. Optionally, the target liveness attribute parameters may include the depth information of the face, skin texture features, texture direction, texture density, texture width, and the like. For example, the target liveness attribute parameter may be the face depth information; if the face depth information conforms to the rules for a live face, the target face region is considered to have biological activity, i.e., it is a live face region.
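One crude illustration of such a depth-based rule in step 312 — assuming (the patent leaves the concrete liveness rules open) that a real face shows depth relief while a flat photo does not; the threshold and function name are hypothetical:

```python
def is_live_face(face_depth_values, min_relief_mm=10.0):
    """Toy liveness rule: a photographed photo is nearly planar, so the
    depth values across its face region vary little, while a real,
    three-dimensional face shows noticeable relief."""
    relief = max(face_depth_values) - min(face_depth_values)
    return relief >= min_relief_mm
```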
Step 314: if liveness detection succeeds, obtain the target face attribute parameters corresponding to the target face region, perform face matching on the target face region according to the target face attribute parameters, and obtain a face matching result.
In one embodiment, in the face matching stage, the second processing unit can match the extracted target face region against the preset face region. When matching the target face image, the target face attribute parameters of the target face image can be extracted and matched against the face attribute parameters of the preset face image stored in the electronic device; if the matching value exceeds a matching threshold, the faces are considered to match. For example, features such as the deflection angle, luminance information, and facial features of the face in the image can be extracted as face attribute parameters; if the matching degree between the extracted target face attribute parameters and the stored face attribute parameters exceeds 90%, the faces are considered to match. Specifically, it is judged whether the target face attribute parameters of the target face region match the face attribute parameters of the preset face region; if so, the face matching of the target face region succeeds; if not, the face matching of the target face region fails.
Step 316: obtain a face verification result according to the face matching result.
In the embodiments provided in this application, after liveness detection is performed on the face, face matching is performed only if liveness detection succeeds. Face verification is considered successful only when both liveness detection and face matching succeed. Specifically, step 316 may include: if the face matching succeeds, obtaining a result that face verification succeeded; if the face matching fails, obtaining a result that face verification failed. The above image processing method may also include: if liveness detection fails, obtaining a result that face verification failed. After obtaining the face verification result, the processing unit can send it to the upper-layer application, and the application can process it accordingly.
For example, in payment verification based on the face, after the processing unit sends the face verification result to the application, the application can carry out payment processing according to that result. If face verification succeeds, the application can continue the payment operation and display a payment-success message to the user; if face verification fails, the application can stop the payment operation and display a payment-failure message to the user.
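The gating just described — matching attempted only after liveness succeeds, and verification succeeding only when both stages succeed — can be sketched as follows (function and result strings are illustrative):

```python
def face_verification_result(liveness_ok, match_ok=None):
    """Step 316: a liveness failure short-circuits before matching is
    attempted; verification succeeds only when liveness detection
    succeeds AND the subsequent face matching succeeds."""
    if not liveness_ok:
        return "verification failed"
    return "verification succeeded" if match_ok else "verification failed"
```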
In one embodiment, the step of obtaining the target infrared image and the target depth image may include:
Step 502: the first processing unit calculates an infrared parallax image from the infrared image and a depth parallax image from the depth image.
Specifically, the electronic device may include a first processing unit and a second processing unit, both of which operate in a secure operating environment. The secure operating environment may include a first secure environment and a second secure environment; the first processing unit operates in the first secure environment and the second processing unit operates in the second secure environment. The first processing unit and the second processing unit are distributed on different processors and run in different secure environments. For example, the first processing unit may be an external MCU (Microcontroller Unit) module or a secure processing module in a DSP (Digital Signal Processor), and the second processing unit may be a CPU (Central Processing Unit) core under a TEE (Trusted Execution Environment).
The CPU in the electronic device has two operating modes: TEE and REE (Rich Execution Environment). Normally the CPU runs under REE, but when the electronic device needs to obtain data with a higher security level, for example when it needs to acquire face data for identity verification, the CPU can switch from REE to TEE. When the CPU in the electronic device is single-core, that core can be switched directly from REE to TEE; when the CPU is multi-core, the electronic device switches one core from REE to TEE while the other cores remain running in REE.
Step 504: the first processing unit sends the infrared parallax image and the depth parallax image to the second processing unit.
Specifically, the first processing unit can connect to two data-transmission channels: a secure transmission channel and a non-secure transmission channel. Face verification processing usually needs to be carried out in a secure operating environment, and the second processing unit is the processing unit in the secure operating environment; therefore, when the first processing unit is connected to the second processing unit, it is currently connected to the secure transmission channel. When the first processing unit is connected to a processing unit in a non-secure running environment, it is currently connected to the non-secure transmission channel. On detecting a face verification instruction, the first processing unit can switch to the secure transmission channel to transmit data. Step 504 may then include: judging whether the first processing unit is connected to the second processing unit; if so, sending the infrared parallax image and the depth parallax image to the second processing unit; if not, controlling the first processing unit to connect to the second processing unit, and then having the first processing unit send the infrared parallax image and the depth parallax image to the second processing unit.
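The connect-then-send logic of step 504 can be modeled with a toy class, in which the secure channel is represented simply as being connected to the second processing unit (class and attribute names are hypothetical):

```python
class FirstProcessingUnit:
    """Toy model of step 504: check the connection to the second
    processing unit, connect first if necessary, then transmit the
    two parallax images over the (now secure) channel."""

    def __init__(self):
        self.connected_to_second_unit = False
        self.sent = []

    def send_parallax_images(self, infrared_parallax, depth_parallax):
        if not self.connected_to_second_unit:
            # switch to the secure transmission channel
            self.connected_to_second_unit = True
        self.sent.append((infrared_parallax, depth_parallax))
```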
Step 506: the second processing unit corrects the infrared parallax image to obtain the target infrared image, and corrects the depth parallax image to obtain the target depth image.
In one embodiment, the step of performing liveness detection may include:
Step 602: extract the target face depth region corresponding to the target face region from the target depth image, and obtain first target liveness attribute parameters from the target face depth region.
In one embodiment, liveness detection can be performed from the target depth image alone, or from the target depth image together with a speckle image. Specifically, first target liveness attribute parameters are obtained from the target depth image, second target liveness attribute parameters are obtained from the speckle image, and liveness detection is then performed according to the first target liveness attribute parameters and the second target liveness attribute parameters.
Step 604: obtain a speckle image, the speckle image being an image, captured by the laser camera, formed by laser speckle irradiated onto an object; the target depth image is calculated from the speckle image.
Step 606: extract the target face speckle region corresponding to the target face region from the speckle image, and obtain second target liveness attribute parameters from the target face speckle region.
The speckle image and the infrared image correspond to each other, so the target face speckle region can be located in the speckle image according to the target face region, and the second target liveness attribute parameters are then obtained from the target face speckle region. The electronic device can control the laser lamp to turn on and capture the speckle image through the laser camera. Usually, the electronic device may have two or more cameras installed; if more than two cameras are installed, the fields of view they capture will differ. To ensure that the different cameras capture images corresponding to the same scene, the images captured by the different cameras need to be aligned so that they correspond. Therefore, after the camera collects the original speckle image, the original speckle image can generally also be corrected to obtain a corrected speckle image. The speckle image used for liveness detection can thus be either the original speckle image or the corrected speckle image.
Specifically, if the speckle image obtained is the original speckle image captured by the camera, the following may also be performed before step 606: calculating a speckle parallax image from the speckle image, and correcting the speckle parallax image to obtain a target speckle image. Step 606 may then include: extracting the target face speckle region corresponding to the target face region from the target speckle image, and obtaining the second target liveness attribute parameters from the target face speckle region.
Step 608: perform liveness detection processing according to the first target liveness attribute parameters and the second target liveness attribute parameters.
It can be understood that the first target liveness attribute parameters and the second target liveness attribute parameters can be obtained by a network learning algorithm; after both are obtained, liveness detection processing is performed according to the first and second target liveness attribute parameters together. For example, the first target liveness attribute parameter may be the face depth information, and the second target liveness attribute parameter may be a skin texture feature parameter. A network learning algorithm can be trained on speckle images to obtain the skin texture feature parameter corresponding to the captured speckle image, and whether the target face is a living body is then judged from the face depth information and the skin texture feature parameter.
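The combination in step 608 can be sketched as a conservative AND over the two parameters — here reduced to hypothetical scores with thresholds, since the patent leaves the exact combination rule open:

```python
def liveness_decision(depth_score, texture_score,
                      depth_threshold=0.5, texture_threshold=0.5):
    """Combine the first (depth-derived) and second (speckle/skin-texture)
    liveness attribute parameters: both must pass their thresholds for
    the target face to be judged a living body."""
    return depth_score >= depth_threshold and texture_score >= texture_threshold
```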
In the embodiments provided in this application, face verification processing can be carried out in the second processing unit; after obtaining the face verification result, the second processing unit can send it to the target application that initiated the face verification instruction. Specifically, the face verification result can be encrypted, and the encrypted face verification result sent to the target application that initiated the face verification instruction. The specific algorithm used to encrypt the face verification result is not limited; for example, it may be based on DES (Data Encryption Standard), MD5 (Message-Digest Algorithm 5), or HAVAL.
Specifically, encryption can be performed according to the network environment of the electronic device: obtaining the network security level of the network environment the electronic device is currently in; obtaining an encryption level according to the network security level; and encrypting the face verification result with the encryption corresponding to that level. It can be understood that applications generally require network access when performing operations based on captured images. For example, during payment authentication against a face, the face verification result can be sent to the application, which then forwards it to the corresponding server to complete the payment operation. The application needs to connect to the network when sending the face verification result, and the result is then sent to the corresponding server over the network. Therefore, when sending the face verification result, it can first be encrypted: the network security level of the current network environment of the electronic device is detected, and encryption is performed according to that level.
The lower the network security level, the less secure the network environment is considered, and the higher the corresponding encryption level. The electronic device can pre-establish a correspondence between network security levels and encryption levels, obtain the corresponding encryption level from the network security level, and encrypt the face verification result according to that encryption level. The face verification result can also be encrypted according to a captured reference image.
In one embodiment, the reference image is a speckle image captured by the electronic device when calibrating the camera module. Because the reference image is highly unique, the reference images captured by different electronic devices differ; the reference image itself can therefore serve as an encryption key for encrypting data. The electronic device can store the reference image in a secure environment, which prevents data leakage. Specifically, the captured reference image consists of a two-dimensional pixel matrix in which each pixel has a corresponding pixel value. The face verification result can be encrypted using all or part of the pixels of the reference image. For example, if the face verification result includes a depth image, the reference image can be superimposed directly on the depth image to obtain an encrypted image, or the pixel matrix of the depth image can be multiplied element-wise by the pixel matrix of the reference image to obtain an encrypted image. Alternatively, the pixel values of one or more pixels in the reference image can be taken as an encryption key with which the depth image is encrypted; the specific encryption algorithm is not limited in this embodiment.
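The superposition idea can be sketched as a symmetric operation over hypothetical 8-bit pixel values — here XOR, which is one assumed instance of the unspecified algorithm; applying the same operation again with the same reference pixels recovers the original data:

```python
def superpose_with_reference(data, ref_pixels):
    """Encrypt (or, applied a second time, decrypt) a byte sequence by
    XOR-superposing it with the reference image's pixel values,
    repeating the pixel sequence as needed to cover the data."""
    return bytes(b ^ ref_pixels[i % len(ref_pixels)]
                 for i, b in enumerate(data))
```

Because XOR is its own inverse, the server holding the identical reference image can run the same function to decrypt.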
The reference image is generated when the electronic device is calibrated, so the electronic device can store the reference image in the secure environment in advance; when the face verification result needs to be encrypted, the reference image can be read in the secure environment and the face verification result encrypted according to it. Meanwhile, an identical reference image can be stored on the server corresponding to the target application; after the electronic device sends the encrypted face verification result to that server, the server obtains the reference image and uses it to decrypt the encrypted face verification result.
It can be understood that the server of the target application may store the reference images captured by multiple different electronic devices, each electronic device's reference image being different. Therefore, the server can define a reference image identifier for each reference image, store the device identifier of the electronic device, and establish a correspondence between reference image identifiers and device identifiers. When the server receives a face verification result, the result can carry the device identifier of the electronic device. The server can look up the corresponding reference image identifier from the device identifier, find the corresponding reference image from the reference image identifier, and then decrypt the face verification result with the reference image it found.
In other embodiments provided in this application, the method of encrypting according to the reference image may specifically include: obtaining the pixel matrix corresponding to the reference image and deriving an encryption key from that pixel matrix; and encrypting the face verification result with the encryption key. The reference image consists of a two-dimensional pixel matrix; since the captured reference image is unique, its pixel matrix is also unique. The pixel matrix itself can serve as an encryption key for encrypting the face verification result, or the pixel matrix can undergo some conversion to obtain an encryption key with which the face verification result is then encrypted. For example, the pixel matrix is a two-dimensional matrix composed of multiple pixel values, and the position of each pixel value in the matrix can be expressed by a two-dimensional coordinate; the corresponding pixel values can therefore be obtained from one or more position coordinates, and the one or more obtained pixel values combined into an encryption key. After the encryption key is obtained, the face verification result can be encrypted with it; the specific encryption algorithm is not limited in this embodiment. For example, the key can be superimposed on or multiplied with the data directly, or the key can be inserted as a numerical value into the data to obtain the final encrypted data.
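The coordinate-based key derivation just described can be sketched directly — reading the pixel values at a few (row, column) positions of the reference image's pixel matrix and combining them into a key (the matrix values and coordinates below are illustrative):

```python
def key_from_coordinates(pixel_matrix, coordinates):
    """Build an encryption key from the pixel values found at the
    given (row, col) positions of the reference image's pixel matrix,
    combined here into a byte string."""
    return bytes(pixel_matrix[r][c] for r, c in coordinates)
```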
The electronic device can also use different encryption algorithms for different applications. Specifically, the electronic device can pre-establish a correspondence between application identifiers and encryption algorithms; the face verification instruction may include the target application identifier of the target application. After the face verification instruction is received, the target application identifier contained in it can be obtained, the corresponding encryption algorithm obtained from that identifier, and the face verification result encrypted with the obtained algorithm.
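The pre-established correspondence can be sketched as a lookup table keyed by application identifier; the identifiers and algorithm names below are invented for illustration only:

```python
# hypothetical pre-established correspondence: app identifier -> algorithm
ENCRYPTION_BY_APP = {
    "com.example.payment": "AES-256",
    "com.example.unlock": "DES",
}

def algorithm_for(target_app_id, default="DES"):
    """Look up the encryption algorithm registered for the target
    application identifier carried in the face verification
    instruction; fall back to a default when unregistered."""
    return ENCRYPTION_BY_APP.get(target_app_id, default)
```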
Specifically, the above image processing method may also include: taking one or more of the target infrared image, the target speckle image, and the target depth image as the image to be sent; obtaining the application level of the target application that initiated the face verification instruction, and obtaining the corresponding accuracy level according to the application level; and adjusting the accuracy of the image to be sent according to the accuracy level, then sending the adjusted image to the target application. The application level can indicate the importance level of the target application: in general, the higher the application level, the higher the accuracy of the transmitted image. The electronic device can preset application levels for applications and establish a correspondence between application levels and accuracy levels, so that the corresponding accuracy level can be obtained from the application level. For example, applications can be divided into four application levels — system security applications, system non-security applications, third-party security applications, and third-party non-security applications — whose corresponding accuracy levels decrease in turn.
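The four-level correspondence can be written out directly; the level names and numeric accuracy grades are illustrative encodings of the decreasing order stated above:

```python
# application level -> accuracy level, decreasing in the stated order
ACCURACY_BY_LEVEL = {
    "system_security": 4,
    "system_non_security": 3,
    "third_party_security": 2,
    "third_party_non_security": 1,
}

def accuracy_for(application_level):
    """Obtain the accuracy level corresponding to an application level."""
    return ACCURACY_BY_LEVEL[application_level]
```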
The accuracy of the image to be sent can be expressed as the resolution of the image or as the number of speckle points contained in the speckle image; the target depth image and target speckle image obtained from the speckle image then also differ accordingly. Specifically, adjusting the image accuracy may include: adjusting the resolution of the image to be sent according to the accuracy level; or adjusting the number of speckle points contained in the captured speckle image according to the accuracy level. The number of speckle points in the speckle image can be adjusted either in software or in hardware. For a software adjustment, the speckle points in the captured speckle image can be detected directly and some of them merged or removed, so that the adjusted speckle image contains fewer speckle points. For a hardware adjustment, the number of laser speckle points produced by the diffraction of the laser lamp can be adjusted. For example, when the accuracy is high, 30,000 laser speckle points are generated; when the accuracy is lower, 20,000 are generated, and the accuracy of the correspondingly calculated depth image decreases accordingly.
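The software-side adjustment can be sketched as simple thinning of the detected speckle points; the keep-every-n rule is one assumed way to "remove part of the points", not the patent's prescribed method:

```python
def thin_speckle_points(points, keep_every=2):
    """Software accuracy reduction: drop part of the detected speckle
    points (here, keep every n-th point) so that the adjusted speckle
    image contains fewer points."""
    return points[::keep_every]
```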
Specifically, different diffractive optical elements (Diffractive Optical Elements, DOE) can be preset in the laser lamp, where the numbers of speckle points formed by the diffraction of different DOEs differ. Different DOEs are switched according to the accuracy level to generate the speckle image by diffraction, and depth maps of different accuracies are obtained from the resulting speckle images. When the application level of the application is high, the corresponding accuracy level is also high, and the laser lamp can control a DOE with more speckle points to emit the laser speckle, obtaining a speckle image with more speckle points; when the application level of the application is lower, the corresponding accuracy level is also lower, and the laser lamp can control a DOE with fewer speckle points to emit the laser speckle, obtaining a speckle image with fewer speckle points.
With the image processing method provided by the above embodiments, a target infrared image and a target depth image can be obtained; face detection is performed on the target infrared image to obtain a target face region; liveness detection processing is then performed according to the target depth image; after liveness detection succeeds, the target face attribute parameters of the target face region are obtained, and face matching is performed according to those parameters; the final face verification result is obtained from the face matching result. In this way, during face verification, liveness detection is performed from the depth image and face matching from the infrared image, which improves the accuracy of face verification.
It should be understood that although the steps in the flowcharts of Fig. 2, Fig. 3, Fig. 5, and Fig. 6 are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated. Unless explicitly stated herein, there is no strict ordering constraint on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in Fig. 2, Fig. 3, Fig. 5, and Fig. 6 may include multiple sub-steps or multiple stages, which are not necessarily completed at the same moment but may be executed at different times; nor is their execution order necessarily sequential — they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Fig. 7 is a hardware architecture diagram for implementing the image processing method in one embodiment. As shown in Fig. 7, the electronic device may include a camera module 710, a central processing unit (CPU) 720, and a first processing unit 730. The camera module 710 includes a laser camera 712, a floodlight 714, an RGB (Red/Green/Blue) camera 716, and a laser lamp 718. The first processing unit 730 includes a PWM (Pulse Width Modulation) module 732, an SPI/I2C (Serial Peripheral Interface/Inter-Integrated Circuit) module 734, a RAM (Random Access Memory) module 736, and a Depth Engine module 738. The second processing unit 722 can be a CPU core under a TEE (Trusted Execution Environment), and the first processing unit 730 is an MCU (Microcontroller Unit) processor. It can be understood that the central processing unit 720 can operate in multi-core mode, and the CPU cores in the central processing unit 720 can run under TEE or REE (Rich Execution Environment). TEE and REE are operating modes of an ARM (Advanced RISC Machines) module. Normally, operation behaviors with higher security requirements in the electronic device are executed under TEE, while other operation behaviors can be executed under REE. In the embodiments of this application, when the central processing unit 720 receives a face verification instruction initiated by the target application, the CPU core running under TEE — i.e., the second processing unit 722 — can send the face verification instruction to the first processing unit 730 over SECURE SPI/I2C via the SPI/I2C module 734 in the MCU 730. After receiving the face verification instruction, the first processing unit 730 emits pulse waves through the PWM module 732 to control the floodlight 714 in the camera module 710 to turn on and capture an infrared image, and controls the laser lamp 718 in the camera module 710 to turn on and capture a speckle image. The camera module 710 can transmit the captured infrared image and speckle image to the Depth Engine module 738 in the first processing unit 730; the Depth Engine module 738 can calculate an infrared parallax image from the infrared image, calculate a depth image from the speckle image, and obtain a depth parallax image from the depth image. The infrared parallax image and the depth parallax image are then sent to the second processing unit 722 running under TEE. The second processing unit 722 can correct the infrared parallax image to obtain the target infrared image, and correct the depth parallax image to obtain the target depth image. Face detection is then performed on the target infrared image to detect whether the target infrared image contains a target face region, and liveness detection is performed on the target face region according to the target depth image; if liveness detection succeeds, whether the target face region matches the preset face region is detected. The final face verification result is obtained from the face matching result: if the face matching succeeds, a result that face verification succeeded is obtained; if the face matching fails, a result that face verification failed is obtained. It can be understood that, if liveness detection fails, a result that face verification failed is obtained and no further face matching is performed. After obtaining the face verification result, the second processing unit 722 can send it to the target application.
Fig. 8 is the hardware structure diagram that image processing method is realized in another embodiment.As shown in figure 8, the hardware configuration
Include first processing units 80, camera module 82 and second processing unit 84.Camera module 82 includes Laser video camera
First 820, floodlight 822, RGB cameras 824 and color-changing lamp 826.Wherein, the CPU under TEE is may include in central processing unit
kernel and the CPU core under the REE. The first processing unit 80 is a DSP processing module opened up in the central processing unit, and the second processing unit 84 is the CPU core under the TEE. The second processing unit 84 and the first processing unit 80 can be connected through a secure buffer, which ensures security during image transmission. Under normal conditions, the central processing unit needs to switch the processor core to the TEE when handling operations with higher security requirements, while operations with lower security requirements can be executed in the REE. In the embodiment of the present application, the second processing unit 84 can receive a face verification instruction sent by an upper-layer application, then emit pulse waves through a PWM module to control the floodlight 822 in the camera module 82 to turn on and collect an infrared image, and control the laser light 826 in the camera module 82 to turn on and collect a speckle image. The camera module 82 can send the collected infrared image and speckle image to the first processing unit 80. The first processing unit 80 can calculate a depth image from the speckle image, then calculate a depth parallax image from the depth image, and calculate an infrared parallax image from the infrared image. The infrared parallax image and the depth parallax image are then sent to the second processing unit 84. The second processing unit 84 can perform correction according to the infrared parallax image to obtain a target infrared image, and perform correction according to the depth parallax image to obtain a target depth image. The second processing unit 84 can then perform face detection according to the target infrared image to detect whether a target face region exists in the target infrared image, and then perform liveness detection on the target face region according to the target depth image. If the liveness detection succeeds, it then detects whether the target face region matches a preset face region, and a final face verification result is obtained from the face matching result: if the face matching succeeds, a result of successful face verification is obtained; if the face matching fails, a result of failed face verification is obtained. It can be understood that if the liveness detection fails, a result of failed face verification is obtained and face matching processing is not performed. After obtaining the face verification result, the second processing unit 84 can send the face verification result to the target application.
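The flow above — face detection on the infrared image, liveness detection on the depth image, then matching — can be sketched as follows. This is an illustrative toy, not the patent's implementation: the brightness-threshold face detector and the depth-spread liveness rule are assumptions standing in for unspecified algorithms.

```python
# Hypothetical sketch of the verification flow; function names and
# thresholds are illustrative, not the patent's API.

def detect_face_region(infrared_image):
    # Stand-in for face detection on the target infrared image: treat
    # any sufficiently bright pixel as part of the face region.
    return [(r, c) for r, row in enumerate(infrared_image)
            for c, v in enumerate(row) if v > 0]

def liveness_check(depth_image, face_region, min_spread=2):
    # A live face is not planar: require some spread in depth values
    # over the detected face region (a photo or screen is nearly flat).
    depths = [depth_image[r][c] for r, c in face_region]
    return max(depths) - min(depths) >= min_spread

def verify_face(infrared_image, depth_image, matches_enrolled):
    face_region = detect_face_region(infrared_image)
    if not face_region:
        return "fail"                     # no target face region found
    if not liveness_check(depth_image, face_region):
        return "fail"                     # liveness failed: skip matching
    return "pass" if matches_enrolled(face_region) else "fail"

ir = [[0, 9, 9], [0, 9, 9], [0, 0, 0]]     # toy infrared image
depth = [[0, 5, 7], [0, 6, 9], [0, 0, 0]]  # varied depth -> live
flat = [[0, 5, 5], [0, 5, 5], [0, 0, 0]]   # flat depth -> photo/screen
print(verify_face(ir, depth, lambda region: True))   # pass
print(verify_face(ir, flat, lambda region: True))    # fail
```

Note how liveness failure short-circuits the flow, mirroring the text: matching is only attempted on a region that passed the depth-based check.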
Fig. 9 is a schematic diagram of the software architecture for implementing the image processing method in one embodiment. As shown in Fig. 9, the software architecture includes an application layer 910, an operating system 920, and a secure operating environment 930. The modules in the secure operating environment 930 include a first processing unit 931, a camera module 932, a second processing unit 933, an encryption module 934, and the like. The operating system 920 includes a security management module 921, a face management module 922, a camera driver 923, and a camera framework 924. The application layer 910 includes an application 911. The application 911 can initiate an image capture instruction, which is sent to the first processing unit 931 for processing. For example, when performing operations such as payment, unlocking, beautification, or augmented reality (AR) by collecting a face, the application can initiate an image capture instruction to collect a face image. It can be understood that the image capture instruction initiated by the application 911 can first be sent to the second processing unit 933, which then sends it to the first processing unit 931.
After the first processing unit 931 receives the image capture instruction, if it determines that the image capture instruction is a face verification instruction for verifying a face, it can control the camera module 932 according to the face verification instruction to collect an infrared image and a speckle image, which the camera module 932 transfers to the first processing unit 931. The first processing unit 931 calculates a depth image containing depth information from the speckle image, calculates a depth parallax image from the depth image, and calculates an infrared parallax image from the infrared image. The depth parallax image and the infrared parallax image are then sent to the second processing unit 933 through a secure transmission channel. The second processing unit 933 can perform correction according to the infrared parallax image to obtain a target infrared image, and perform correction according to the depth parallax image to obtain a target depth image. Face detection is then performed according to the target infrared image to detect whether a target face region exists in the target infrared image, after which liveness detection is performed on the target face region according to the target depth image. If the liveness detection succeeds, it then detects whether the target face region matches a preset face region, and a final face verification result is obtained from the face matching result: if the face matching succeeds, a result of successful face verification is obtained; if the face matching fails, a result of failed face verification is obtained. It can be understood that if the liveness detection fails, a result of failed face verification is obtained and face matching processing is not performed. The second processing unit 933 can send the face verification result it obtains to the encryption module 934; after encryption by the encryption module 934, the encrypted face verification result is sent to the security management module 921. Generally, each application 911 has a corresponding security management module 921. The security management module 921 decrypts the encrypted face verification result and sends the decrypted face verification result to the corresponding face management module 922. The face management module 922 sends the face verification result to the upper-layer application 911, which then performs the corresponding operation according to the face verification result.
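The encrypted hand-off from the encryption module to the security management module can be sketched as below. The XOR "cipher" and the shared key are placeholders for whatever scheme the two modules actually share; the patent does not specify one.

```python
# Illustrative sketch of the encrypted result hand-off. XOR with a
# shared key is a stand-in cipher, NOT a recommendation.

SHARED_KEY = b"demo-key"  # hypothetical key shared per application

def encrypt(plaintext: bytes, key: bytes = SHARED_KEY) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

decrypt = encrypt  # XOR with the same key is its own inverse

def send_verification_result(passed: bool) -> bytes:
    # Encryption module: encrypt the result before it leaves the
    # secure operating environment.
    return encrypt(b"pass" if passed else b"fail")

def security_management_module(ciphertext: bytes) -> str:
    # The application's security management module decrypts the result
    # and forwards it toward the face management module.
    return decrypt(ciphertext).decode()

wire = send_verification_result(True)
print(wire != b"pass")                      # True: not sent in the clear
print(security_management_module(wire))     # pass
```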
If the image capture instruction received by the first processing unit 931 is not a face verification instruction, the first processing unit 931 can control the camera module 932 to collect a speckle image, calculate a depth image from the speckle image, and then obtain a depth parallax image from the depth image. The first processing unit 931 can send the depth parallax image to the camera driver 923 through a non-secure transmission channel; the camera driver 923 performs correction processing on the depth parallax image to obtain a target depth image, which is then sent to the camera framework 924 and from there to the face management module 922 or the application 911.
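The correction step that turns a parallax (disparity) image into a depth image can be illustrated with the classic stereo relation depth = focal_length × baseline / disparity. The calibration values below are made up; the patent does not disclose its actual correction procedure.

```python
# Minimal sketch of disparity-to-depth conversion. The calibration
# constants are hypothetical.

FOCAL_LENGTH_PX = 500.0   # assumed focal length in pixels
BASELINE_MM = 40.0        # assumed projector-camera baseline in mm

def disparity_to_depth(disparity_image):
    # Each valid disparity d maps to depth f * B / d; zero marks
    # pixels with no measurement and stays zero.
    return [[FOCAL_LENGTH_PX * BASELINE_MM / d if d > 0 else 0.0
             for d in row]
            for row in disparity_image]

disparity = [[20.0, 25.0], [0.0, 50.0]]
print(disparity_to_depth(disparity))  # [[1000.0, 800.0], [0.0, 400.0]]
```

Larger disparities map to nearer (smaller-depth) points, which is why the parallax image alone, before correction, is not directly usable as a depth map.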
Figure 10 is a schematic structural diagram of an image processing apparatus in one embodiment. As shown in Figure 10, the image processing apparatus 1000 includes a face detection module 1002, a liveness detection module 1004, a face matching module 1006, and a face verification module 1008. Wherein:

The face detection module 1002 is configured to obtain a target infrared image and a target depth image, and perform face detection according to the target infrared image to determine a target face region, wherein the target depth image represents depth information corresponding to the target infrared image.

The liveness detection module 1004 is configured to perform liveness detection processing on the target face region according to the target depth image.

The face matching module 1006 is configured to, if the liveness detection succeeds, obtain target face attribute parameters corresponding to the target face region, and perform face matching processing on the target face region according to the target face attribute parameters to obtain a face matching result.

The face verification module 1008 is configured to obtain a face verification result according to the face matching result.
The image processing apparatus provided by the above embodiment can obtain a target infrared image and a target depth image, and perform face detection according to the target infrared image to obtain a target face region. Liveness detection processing is then performed according to the target depth image; after the liveness detection succeeds, the target face attribute parameters of the target face region are obtained, and face matching processing is performed according to those parameters. A final face verification result is obtained according to the face matching result. In this way, during face verification, liveness detection can be performed according to the depth image and face matching according to the infrared image, which improves the accuracy of face verification.
In one embodiment, the face detection module 1002 is further configured to control the camera module to collect an infrared image and a depth image when the first processing unit detects a face verification instruction, wherein the time interval between the first moment at which the infrared image is collected and the second moment at which the depth image is collected is less than a first threshold; obtain the target infrared image according to the infrared image; and obtain the target depth image according to the depth image.
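The timing constraint above can be expressed as a simple check: the infrared and depth frames are only used together if captured close enough in time that they show the face in essentially the same pose. The threshold value is illustrative; the patent does not fix a number.

```python
# Sketch of the capture-interval constraint; the threshold is assumed.

FIRST_THRESHOLD_MS = 100  # hypothetical "first threshold"

def frames_usable(t_infrared_ms: int, t_depth_ms: int) -> bool:
    # The interval between the first moment (infrared capture) and the
    # second moment (depth capture) must be below the first threshold.
    return abs(t_infrared_ms - t_depth_ms) < FIRST_THRESHOLD_MS

print(frames_usable(1000, 1050))  # True
print(frames_usable(1000, 1200))  # False
```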
In one embodiment, the face detection module 1002 is further configured such that the first processing unit calculates an infrared parallax image according to the infrared image and calculates a depth parallax image according to the depth image; the first processing unit sends the infrared parallax image and the depth parallax image to the second processing unit; and the second processing unit performs correction according to the infrared parallax image to obtain the target infrared image and performs correction according to the depth parallax image to obtain the target depth image.
In one embodiment, the face detection module 1002 is further configured to detect face regions in the target infrared image and, if there are two or more face regions in the target infrared image, take the face region with the largest region area as the target face region.
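The selection rule above — when detection returns several face regions, keep the largest — can be sketched in a few lines. The bounding-box representation is an assumption for illustration.

```python
# Pick the face region with the largest area. Regions are assumed to
# be (x, y, width, height) boxes for this sketch.

def pick_target_face(face_regions):
    if not face_regions:
        return None  # no target face region exists
    return max(face_regions, key=lambda box: box[2] * box[3])

faces = [(10, 10, 40, 50), (100, 20, 80, 90), (200, 5, 30, 30)]
print(pick_target_face(faces))  # (100, 20, 80, 90)
```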
In one embodiment, the liveness detection module 1004 is further configured to extract the target face depth region corresponding to the target face region in the target depth image, obtain target liveness attribute parameters according to the target face depth region, and perform liveness detection processing according to the target liveness attribute parameters.
In one embodiment, the liveness detection module 1004 is further configured to extract the target face depth region corresponding to the target face region in the target depth image and obtain first target liveness attribute parameters according to the target face depth region; obtain a speckle image, the speckle image being an image, collected by a laser camera, of laser speckle irradiated onto an object, with the target depth image calculated according to the speckle image; extract the target face speckle region corresponding to the target face region in the speckle image and obtain second target liveness attribute parameters according to the target face speckle region; and perform liveness detection processing according to the first target liveness attribute parameters and the second target liveness attribute parameters.
In one embodiment, the face verification module 1008 is further configured to obtain a result of successful face recognition if the face matching succeeds; obtain a result of failed face recognition if the face matching fails; and obtain a result of failed face recognition if the liveness detection fails.
The division of the modules in the above image processing apparatus is for illustration only. In other embodiments, the image processing apparatus may be divided into different modules as required to complete all or part of the functions of the above image processing apparatus.
The embodiments of the present application also provide a computer-readable storage medium: one or more non-volatile computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to execute the image processing method provided by the above embodiments.
A computer program product containing instructions, when run on a computer, causes the computer to execute the image processing method provided by the above embodiments.
Any reference to memory, storage, a database, or other media used in this application may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM), which serves as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the claims of the present application. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application patent shall be determined by the appended claims.
Claims (10)
1. An image processing method, characterized by comprising:
obtaining a target infrared image and a target depth image, and performing face detection according to the target infrared image to determine a target face region, wherein the target depth image represents depth information corresponding to the target infrared image;
performing liveness detection processing on the target face region according to the target depth image;
if the liveness detection succeeds, obtaining target face attribute parameters corresponding to the target face region, and performing face matching processing on the target face region according to the target face attribute parameters to obtain a face matching result; and
obtaining a face verification result according to the face matching result.
2. The method according to claim 1, characterized in that obtaining the target infrared image and the target depth image comprises:
when a first processing unit detects a face verification instruction, controlling a camera module to collect an infrared image and a depth image, wherein the time interval between a first moment at which the infrared image is collected and a second moment at which the depth image is collected is less than a first threshold; and
obtaining the target infrared image according to the infrared image, and obtaining the target depth image according to the depth image.
3. The method according to claim 2, characterized in that obtaining the target infrared image according to the infrared image and obtaining the target depth image according to the depth image comprises:
the first processing unit calculating an infrared parallax image according to the infrared image, and calculating a depth parallax image according to the depth image;
the first processing unit sending the infrared parallax image and the depth parallax image to a second processing unit; and
the second processing unit performing correction according to the infrared parallax image to obtain the target infrared image, and performing correction according to the depth parallax image to obtain the target depth image.
4. The method according to claim 1, characterized in that performing face detection according to the target infrared image to determine the target face region comprises:
detecting face regions in the target infrared image; and
if there are two or more face regions in the target infrared image, taking the face region with the largest region area as the target face region.
5. The method according to claim 1, characterized in that performing liveness detection processing on the target face region according to the target depth image comprises:
extracting a target face depth region corresponding to the target face region in the target depth image, and obtaining target liveness attribute parameters according to the target face depth region; and
performing liveness detection processing according to the target liveness attribute parameters.
6. The method according to claim 1, characterized in that performing liveness detection processing on the target face region according to the target depth image comprises:
extracting a target face depth region corresponding to the target face region in the target depth image, and obtaining first target liveness attribute parameters according to the target face depth region;
obtaining a speckle image, the speckle image being an image, collected by a laser camera, of laser speckle irradiated onto an object, with the target depth image calculated according to the speckle image;
extracting a target face speckle region corresponding to the target face region in the speckle image, and obtaining second target liveness attribute parameters according to the target face speckle region; and
performing liveness detection processing according to the first target liveness attribute parameters and the second target liveness attribute parameters.
7. The method according to any one of claims 1 to 6, characterized in that obtaining the face verification result according to the face matching result comprises:
if the face matching succeeds, obtaining a result of successful face recognition; and
if the face matching fails, obtaining a result of failed face recognition;
the method further comprising:
if the liveness detection fails, obtaining a result of failed face recognition.
8. An image processing apparatus, characterized by comprising:
a face detection module configured to obtain a target infrared image and a target depth image, and perform face detection according to the target infrared image to determine a target face region, wherein the target depth image represents depth information corresponding to the target infrared image;
a liveness detection module configured to perform liveness detection processing on the target face region according to the target depth image;
a face matching module configured to, if the liveness detection succeeds, obtain target face attribute parameters corresponding to the target face region, and perform face matching processing on the target face region according to the target face attribute parameters to obtain a face matching result; and
a face verification module configured to obtain a face verification result according to the face matching result.
9. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the image processing method according to any one of claims 1 to 7.
10. An electronic device comprising a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to execute the image processing method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810403815.2A CN108805024B (en) | 2018-04-28 | 2018-04-28 | Image processing method, image processing device, computer-readable storage medium and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108805024A true CN108805024A (en) | 2018-11-13 |
CN108805024B CN108805024B (en) | 2020-11-24 |
Family
ID=64093671
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810403815.2A Active CN108805024B (en) | 2018-04-28 | 2018-04-28 | Image processing method, image processing device, computer-readable storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108805024B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1708135A4 (en) * | 2004-01-13 | 2009-04-08 | Fujitsu Ltd | Authenticator using organism information |
CN105516613A (en) * | 2015-12-07 | 2016-04-20 | 凌云光技术集团有限责任公司 | Intelligent exposure method and system based on face recognition |
CN106533667A (en) * | 2016-11-08 | 2017-03-22 | 深圳大学 | Multi-level key generating method based on dual-beam interference and user hierarchical authentication method |
CN107358157A (en) * | 2017-06-07 | 2017-11-17 | 阿里巴巴集团控股有限公司 | A kind of human face in-vivo detection method, device and electronic equipment |
CN107451510A (en) * | 2016-05-30 | 2017-12-08 | 北京旷视科技有限公司 | Biopsy method and In vivo detection system |
CN107832677A (en) * | 2017-10-19 | 2018-03-23 | 深圳奥比中光科技有限公司 | Face identification method and system based on In vivo detection |
Non-Patent Citations (1)
Title |
---|
Gou Yu et al.: "Proceedings of the Information Confidentiality Professional Committee of the China Computer Federation", 30 September 2006 * |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109284597A (en) * | 2018-11-22 | 2019-01-29 | 北京旷视科技有限公司 | A kind of face unlocking method, device, electronic equipment and computer-readable medium |
CN109614910A (en) * | 2018-12-04 | 2019-04-12 | 青岛小鸟看看科技有限公司 | A kind of face identification method and device |
CN111310528A (en) * | 2018-12-12 | 2020-06-19 | 马上消费金融股份有限公司 | Image detection method, identity verification method, payment method and payment device |
CN111310528B (en) * | 2018-12-12 | 2022-08-12 | 马上消费金融股份有限公司 | Image detection method, identity verification method, payment method and payment device |
CN109683698A (en) * | 2018-12-25 | 2019-04-26 | Oppo广东移动通信有限公司 | Payment verification method, apparatus, electronic equipment and computer readable storage medium |
CN111382596A (en) * | 2018-12-27 | 2020-07-07 | 鸿富锦精密工业(武汉)有限公司 | Face recognition method and device and computer storage medium |
CN110163097A (en) * | 2019-04-16 | 2019-08-23 | 深圳壹账通智能科技有限公司 | Discrimination method, device, electronic equipment and the storage medium of three-dimensional head portrait true or false |
CN110462633A (en) * | 2019-06-27 | 2019-11-15 | 深圳市汇顶科技股份有限公司 | A kind of method, apparatus and electronic equipment of recognition of face |
CN110287672A (en) * | 2019-06-27 | 2019-09-27 | 深圳市商汤科技有限公司 | Verification method and device, electronic equipment and storage medium |
CN110287900A (en) * | 2019-06-27 | 2019-09-27 | 深圳市商汤科技有限公司 | Verification method and verifying device |
WO2020258121A1 (en) * | 2019-06-27 | 2020-12-30 | 深圳市汇顶科技股份有限公司 | Face recognition method and apparatus, and electronic device |
CN110462633B (en) * | 2019-06-27 | 2023-05-26 | 深圳市汇顶科技股份有限公司 | Face recognition method and device and electronic equipment |
CN110659617A (en) * | 2019-09-26 | 2020-01-07 | 杭州艾芯智能科技有限公司 | Living body detection method, living body detection device, computer equipment and storage medium |
CN112711968A (en) * | 2019-10-24 | 2021-04-27 | 浙江舜宇智能光学技术有限公司 | Face living body detection method and system |
CN112861568A (en) * | 2019-11-12 | 2021-05-28 | Oppo广东移动通信有限公司 | Authentication method and device, electronic equipment and computer readable storage medium |
CN111882324A (en) * | 2020-07-24 | 2020-11-03 | 南京华捷艾米软件科技有限公司 | Face authentication method and system |
CN113327348A (en) * | 2021-05-08 | 2021-08-31 | 宁波盈芯信息科技有限公司 | Networking type 3D people face intelligence lock |
CN113469036A (en) * | 2021-06-30 | 2021-10-01 | 北京市商汤科技开发有限公司 | Living body detection method and apparatus, electronic device, and storage medium |
CN115797995A (en) * | 2022-11-18 | 2023-03-14 | 北京的卢铭视科技有限公司 | Face living body detection method, electronic equipment and storage medium |
CN115797995B (en) * | 2022-11-18 | 2023-09-01 | 北京的卢铭视科技有限公司 | Face living body detection method, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN108805024B (en) | 2020-11-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108764052A (en) | Image processing method, device, computer readable storage medium and electronic equipment | |
CN108805024A (en) | Image processing method, device, computer readable storage medium and electronic equipment | |
CN108668078B (en) | Image processing method, device, computer readable storage medium and electronic equipment | |
CN108804895A (en) | Image processing method, device, computer readable storage medium and electronic equipment | |
CN108549867A (en) | Image processing method, device, computer readable storage medium and electronic equipment | |
CN108419017B (en) | Control method, apparatus, electronic equipment and the computer readable storage medium of shooting | |
CN108711054A (en) | Image processing method, device, computer readable storage medium and electronic equipment | |
US11256903B2 (en) | Image processing method, image processing device, computer readable storage medium and electronic device | |
CN109767467A (en) | Image processing method, device, electronic equipment and computer readable storage medium | |
CN108764053A (en) | Image processing method, device, computer readable storage medium and electronic equipment | |
CN108830141A (en) | Image processing method, device, computer readable storage medium and electronic equipment | |
CN109040746B (en) | Camera calibration method and apparatus, electronic equipment, computer readable storage medium | |
TWI709110B (en) | Camera calibration method and apparatus, electronic device | |
CN108573170A (en) | Information processing method and device, electronic equipment, computer readable storage medium | |
WO2019206020A1 (en) | Image processing method, apparatus, computer-readable storage medium, and electronic device | |
CN108734676A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
CN109118581A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
CN108921903A (en) | Camera calibration method, device, computer readable storage medium and electronic equipment | |
CN109213610A (en) | Data processing method, device, computer readable storage medium and electronic equipment | |
CN109040745A (en) | Camera method for self-calibrating and device, electronic equipment, computer storage medium | |
CN109327626A (en) | Image-pickup method, device, electronic equipment and computer readable storage medium | |
CN108564032A (en) | Image processing method, device, electronic equipment and computer readable storage medium | |
CN108769665A (en) | Data transmission method, device, electronic equipment and computer readable storage medium | |
CN108985255A (en) | Data processing method, device, computer readable storage medium and electronic equipment | |
CN108650472A (en) | Control method, apparatus, electronic equipment and the computer readable storage medium of shooting |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |