CN105512637A - Image processing method and electronic device - Google Patents

Image processing method and electronic device

Info

Publication number
CN105512637A
CN105512637A (application CN201510971623.8A)
Authority
CN
China
Prior art keywords
image
depth
view information
subject
acquisition unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510971623.8A
Other languages
Chinese (zh)
Inventor
余庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201510971623.8A
Publication of CN105512637A
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G06V40/40: Spoof detection, e.g. liveness detection
    • G06V40/45: Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an image processing method and an electronic device. The method is applied to an electronic device that includes a first image acquisition unit and a second image acquisition unit, and comprises the steps of: obtaining a first image of a subject through the first image acquisition unit while obtaining a second image of the same subject through the second image acquisition unit, the first image and the second image having the same shooting range; performing predetermined image processing on the first image and the second image to obtain depth information of the subject; comparing the obtained depth information of the subject with a pre-stored depth information model to obtain a comparison result; and, when the comparison result satisfies a predetermined condition, determining that the subject is a living body.

Description

Image processing method and electronic device
Technical field
The present application relates to an image processing method and an electronic device.
Background
At present, face recognition technology is widely used for identity authentication. For example, access control systems, electronic devices and the like can apply face recognition to judge whether a user is an authorized user.
However, prior-art face recognition is easily deceived by a photograph and therefore cannot provide sufficiently secure authentication. On the other hand, techniques that do support living-face recognition usually rely on video, so a comparatively long time is needed to capture video of the user and perform the image processing. This greatly reduces the practicality of applying face recognition to identity authentication.
It is therefore desirable to provide an image processing method and an electronic device that can complete living-face recognition quickly and perform identity authentication, thereby improving the user experience.
Summary of the invention
According to an embodiment of the present invention, an image processing method is provided, applied to an electronic device comprising a first image acquisition unit and a second image acquisition unit, the method comprising:
obtaining a first image of a subject through the first image acquisition unit while obtaining a second image of the same subject through the second image acquisition unit, the first image and the second image having the same shooting range;
performing predetermined image processing on the first image and the second image to obtain depth information of the subject;
comparing the obtained depth information of the subject with a pre-stored depth information model to obtain a comparison result; and
when the comparison result satisfies a predetermined condition, determining that the subject is a living body.
Optionally, the first image acquisition unit and the second image acquisition unit have the same resolution and field of view.
Optionally, performing predetermined image processing on the first image and the second image to obtain the depth information of the subject further comprises:
calculating the depth information of the subject from information including the parallax of identical feature points contained in the first image and the second image.
Optionally, calculating the depth information of the subject from information including the parallax of identical feature points contained in the first image and the second image further comprises:
stitching the first image and the second image side by side;
aligning the identical feature points in the first image and the second image so that each pair of identical feature points lies on the same horizontal line;
calculating a depth value for each feature point from the distance between the first image acquisition unit and the second image acquisition unit, the focal length of the first or second image acquisition unit, and the distance between the identical feature points, which serves as the parallax of the feature point;
converting the calculated depth values of the feature points into grayscale values, thereby generating a depth information image displayed in grayscale; and
obtaining the depth information of the subject from the depth information image.
Optionally, the subject is a face, and obtaining the depth information of the subject from the depth information image further comprises:
performing face detection on the first image to obtain the coordinates of the face image;
mapping the coordinates of the obtained face image onto the depth information image to obtain the position of the face image in the depth information image; and
obtaining the grayscale values at the corresponding positions to generate a face grayscale-value matrix as the depth information of the face image.
Optionally, the pre-stored depth information model is generated as follows:
obtaining face depth information images of a plurality of typical features and scaling them to a prescribed size;
averaging the plurality of depth information images to obtain one average face depth information image;
calculating the statistical histogram feature vector of the average face depth information image; and
storing the statistical histogram feature vector as the depth information model.
Optionally, after the depth information model has been generated, the image processing method further comprises:
choosing any existing face depth information image and scaling it to the prescribed size;
calculating the statistical histogram feature vector of the chosen face depth information image;
calculating a matching value between the calculated statistical histogram feature vector and the depth information model; and
repeating the above calculation a number of times to obtain a matching value range.
Optionally, comparing the obtained depth information of the subject with the pre-stored depth information model to obtain a comparison result, and determining that the subject is a living body when the comparison result satisfies the predetermined condition, further comprise:
calculating the statistical histogram feature vector of the face image from the depth information of the face image in the first image;
calculating a matching value between the calculated statistical histogram feature vector of the face image and the depth information model; and
when the calculated matching value falls within the matching value range, determining that the subject is a living body.
According to another embodiment of the present invention, an electronic device is provided, comprising:
a first image acquisition unit configured to obtain a first image of a subject;
a second image acquisition unit configured to obtain a second image of the same subject, the first image and the second image having the same shooting range; and
a processing unit configured to perform predetermined image processing on the first image and the second image to obtain depth information of the subject, to compare the obtained depth information of the subject with a pre-stored depth information model to obtain a comparison result, and, when the comparison result satisfies a predetermined condition, to determine that the subject is a living body.
Optionally, the first image acquisition unit and the second image acquisition unit have the same resolution and field of view.
Optionally, the processing unit is further configured to:
calculate the depth information of the subject from information including the parallax of identical feature points contained in the first image and the second image.
Optionally, the processing unit is further configured to:
stitch the first image and the second image side by side;
align the identical feature points in the first image and the second image so that each pair of identical feature points lies on the same horizontal line;
calculate a depth value for each feature point from the distance between the first image acquisition unit and the second image acquisition unit, the focal length of the first or second image acquisition unit, and the distance between the identical feature points, which serves as the parallax of the feature point;
convert the calculated depth values of the feature points into grayscale values, thereby generating a depth information image displayed in grayscale; and
obtain the depth information of the subject from the depth information image.
Optionally, the subject is a face, and the processing unit is further configured to:
perform face detection on the first image to obtain the coordinates of the face image;
map the coordinates of the obtained face image onto the depth information image to obtain the position of the face image in the depth information image; and
obtain the grayscale values at the corresponding positions to generate a face grayscale-value matrix as the depth information of the face image.
Optionally, the pre-stored depth information model is generated as follows:
obtaining face depth information images of a plurality of typical features and scaling them to a prescribed size;
averaging the plurality of depth information images to obtain one average face depth information image;
calculating the statistical histogram feature vector of the average face depth information image; and
storing the statistical histogram feature vector as the depth information model.
Optionally, the processing unit is further configured to:
choose any existing face depth information image and scale it to the prescribed size;
calculate the statistical histogram feature vector of the chosen face depth information image;
calculate a matching value between the calculated statistical histogram feature vector and the depth information model; and
repeat the above calculation a number of times to obtain a matching value range.
Optionally, the processing unit is further configured to:
calculate the statistical histogram feature vector of the face image from the depth information of the face image in the first image;
calculate a matching value between the calculated statistical histogram feature vector of the face image and the depth information model; and
when the calculated matching value falls within the matching value range, determine that the subject is a living body.
Therefore, the image processing method and the electronic device according to the embodiments of the present invention can complete living-face recognition quickly and perform identity authentication, thereby improving the user experience.
Brief description of the drawings
Fig. 1 is a flowchart illustrating the image processing method according to the first embodiment of the present invention.
Fig. 2 is a flowchart illustrating the calculation of the depth information of the subject in the image processing method according to the first embodiment of the present invention.
Fig. 3 is a schematic diagram illustrating the calculation of the depth information of an image in the image processing method according to the first embodiment of the present invention.
Fig. 4 is a flowchart illustrating the calculation of the depth information of the subject in the image processing method according to the first embodiment of the present invention.
Fig. 5 is a flowchart illustrating the generation of the depth information model in the image processing method according to the first embodiment of the present invention.
Fig. 6 is a flowchart illustrating the calculation of the matching value range in the image processing method according to the first embodiment of the present invention.
Fig. 7 is a flowchart illustrating the determination that the subject is a living body in the image processing method according to the first embodiment of the present invention.
Fig. 8 is a block diagram illustrating the electronic device according to the second embodiment of the present invention.
Fig. 9 is a block diagram illustrating the electronic device according to the third embodiment of the present invention.
Detailed description
The technical solutions in the embodiments of the present disclosure are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present disclosure.
< First embodiment >
The image processing method according to the first embodiment of the present invention is described below with reference to Figs. 1-6. The method is applied to an electronic device comprising a first image acquisition unit and a second image acquisition unit. Such an electronic device can be any device that has two image acquisition units, for example two cameras; it can be, for example, a smartphone, a tablet computer, or a notebook computer.
In this first embodiment, it is assumed that the two cameras have the same resolution and field of view.
As shown in Fig. 1, the image processing method 100 according to the first embodiment of the present invention comprises:
Step S101: obtaining a first image of a subject through the first image acquisition unit while obtaining a second image of the same subject through the second image acquisition unit, the first image and the second image having the same shooting range;
Step S102: performing predetermined image processing on the first image and the second image to obtain depth information of the subject;
Step S103: comparing the obtained depth information of the subject with a pre-stored depth information model to obtain a comparison result; and
Step S104: when the comparison result satisfies a predetermined condition, determining that the subject is a living body.
Specifically, in step S101, the first image acquisition unit (for example, a first camera) and the second image acquisition unit (for example, a second camera) can be set up in advance so that they have the same shooting range. In addition, the shooting parameters of the two units can be set so that they have the same image resolution, field of view, and so on. In this electronic device, it is assumed that both image acquisition units are front cameras. After the two units have been set up, the first image of the subject can be obtained through the first image acquisition unit while the second image of the same subject is obtained through the second image acquisition unit, the two acquired images having the same shooting range.
Then, in step S102, predetermined image processing can be performed on the first image and the second image to obtain the depth information of the subject. That is, by performing predetermined image processing on the acquired first and second images, the depth information of the subject in the images can be obtained. The process of obtaining the depth information of the subject is described in detail later with reference to Figs. 2-4.
Then, in step S103, the obtained depth information of the subject can be compared with the pre-stored depth information model to obtain a comparison result. That is, by comparing the depth information of the subject with a depth information model obtained in advance, it can be determined whether the depth information of the subject satisfies a predetermined condition.
Finally, in step S104, when the comparison result satisfies the predetermined condition, it can be determined that the subject is a living body.
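The overall flow of steps S101-S104 can be sketched as follows. This is a minimal illustration only: `extract_depth_feature` and `match_fn` are hypothetical stand-ins for the depth computation of Figs. 2-4 and the histogram matching of formula (3), not part of the patent's disclosure.

```python
# Minimal sketch of steps S101-S104. The two helper callables are
# hypothetical placeholders for the later-described depth-feature
# extraction and model matching.

def liveness_check(first_image, second_image, extract_depth_feature,
                   model_feature, match_fn, match_range):
    feature = extract_depth_feature(first_image, second_image)  # step S102
    score = match_fn(feature, model_feature)                    # step S103
    lo, hi = match_range
    return lo <= score <= hi                                    # step S104
```

With a histogram-intersection `match_fn`, a flat photograph produces a near-constant depth map whose score falls outside the living-body range, so the check fails as intended.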
The process of obtaining the depth information of the subject is described in detail below with reference to Figs. 2-3.
For example, the depth information of the subject is calculated from information including the parallax of identical feature points contained in the first image and the second image.
As shown in Fig. 2, calculating the depth information of the subject from information including the parallax of identical feature points contained in the first image and the second image comprises:
Step S201: stitching the first image and the second image side by side;
Step S202: aligning the identical feature points in the first image and the second image so that each pair of identical feature points lies on the same horizontal line;
Step S203: calculating a depth value for each feature point from the distance between the first image acquisition unit and the second image acquisition unit, the focal length of the first or second image acquisition unit, and the distance between the identical feature points, which serves as the parallax of the feature point;
Step S204: converting the calculated depth values of the feature points into grayscale values, thereby generating a depth information image displayed in grayscale; and
Step S205: obtaining the depth information of the subject from the depth information image.
Specifically, in step S201, the first image and the second image are first stitched side by side. Referring to Fig. 3, for example, the left edge of the second image is stitched to the right edge of the first image.
Then, in step S202, the identical feature points in the first image and the second image are aligned so that each pair lies on the same horizontal line. As shown in Fig. 3, by aligning the tops of the first image and the second image, the identical feature points in the two images are brought onto the same horizontal line, such as the feature-point pairs x1 and x2, x3 and x4, and x5 and x6. Note that although only three pairs of identical feature points are shown in Fig. 3, those skilled in the art will appreciate that, because the first camera and the second camera are set up with the same shooting range, most feature points in the first image correspond one-to-one with feature points in the second image.
Then, in step S203, the depth value of each feature point can be calculated from the distance between the first image acquisition unit and the second image acquisition unit, the focal length of the first or second image acquisition unit, and the distance between the identical feature points, which serves as the parallax.
Specifically, the depth value of a feature point can be calculated by the following formula (1):
depth = focal length × baseline / parallax     (1)
where the baseline is the distance between the first image acquisition unit and the second image acquisition unit; the parallax is the distance between corresponding feature points, for example the distance between x1 and x2, x3 and x4, or x5 and x6 in Fig. 3, i.e. parallax = |x2 - x1|; and the focal length is that of the image acquisition unit whose image serves as the main image. In the present embodiment the first image is assumed to be the main image, so the focal length is that of the first image acquisition unit.
In addition, in the present embodiment, although the first image acquisition unit and the second image acquisition unit are set up to have the same shooting range, the first and second images actually acquired may not be completely identical, as shown in Fig. 3. In this case, after the feature points have been aligned, an image region of a suitable preset range can be chosen as the object of calculation, such as the range marked by the dashed box in Fig. 3.
According to formula (1), by performing the calculation for multiple feature points of the first image and the second image, the depth values of multiple feature points are obtained.
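Formula (1) can be sketched as follows, under the assumption that the focal length and the feature-point coordinates are expressed in pixels and the baseline in millimeters; the numeric values in the comment are illustrative, not taken from the patent.

```python
def depth_from_parallax(focal_length_px, baseline_mm, x1_px, x2_px):
    """Formula (1): depth = focal length * baseline / parallax,
    where parallax = |x2 - x1| for a pair of identical feature points."""
    parallax = abs(x2_px - x1_px)
    if parallax == 0:
        raise ValueError("zero parallax: the point is effectively at infinity")
    return focal_length_px * baseline_mm / parallax

# e.g. a 700 px focal length, 60 mm baseline, and 20 px parallax
# give a depth of 700 * 60 / 20 = 2100 mm.
```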
Then, in step S204, the calculated depth values of the feature points can be converted into grayscale values, thereby generating a depth information image displayed in grayscale.
For example, suppose the grayscale range is 0-255; the depth values of the obtained feature points can then be mapped to grayscale values. Specifically, suppose the grayscale value at distance 0 from the camera is 0 and the grayscale value at a distance of 1 meter from the camera is 255; each feature point's depth value is then mapped to a grayscale value accordingly. It should be noted that the mapped distance range and grayscale values can also be set as required.
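The mapping just described can be sketched as a simple linear clamp; the 0-1 m endpoints are the example values from the text, not fixed by the method.

```python
def depth_to_gray(depth_mm, near_mm=0.0, far_mm=1000.0):
    """Linearly map a depth in [near_mm, far_mm] to a gray level in [0, 255],
    clamping depths outside the range to the nearest endpoint."""
    clamped = min(max(depth_mm, near_mm), far_mm)
    return round((clamped - near_mm) / (far_mm - near_mm) * 255)
```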
Unlike a flat photograph, different regions of a living subject have different depth values. Therefore, if a user attempts authentication with a photograph, every region of the recognized subject in that photograph has the same depth value, and the image is of a photograph rather than a living body.
Conversely, if the regions of the recognized subject have different depth values, the image is of a living body.
Finally, in step S205, the depth information of the subject can be obtained from the depth information image generated in step S204.
In one embodiment, in order to perform liveness detection, the subject is assumed to be a face.
In this embodiment, as shown in Fig. 4, the method 400 of obtaining the depth information of the subject from the depth information image comprises:
Step S401: performing face detection on the first image to obtain the coordinates of the face image;
Step S402: mapping the coordinates of the obtained face image onto the depth information image to obtain the position of the face image in the depth information image; and
Step S403: obtaining the grayscale values at the corresponding positions to generate a face grayscale-value matrix as the depth information of the face image.
Specifically, in step S401, face detection is first performed on the first image to recognize the face, and the coordinates of the face image in the first image are then determined.
Then, in step S402, because the obtained depth information image and the first image are not completely identical, the coordinates of the obtained face image need to be mapped onto the depth information image to obtain the position of the face image in the depth information image.
Finally, in step S403, the grayscale values at the corresponding positions can be obtained to generate a face grayscale-value matrix as the depth information of the face image.
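Step S402 amounts to rescaling the detected face rectangle from the first image's coordinate system into the (possibly differently sized) depth information image. A sketch, under the simplifying assumption that the two images share an origin and differ only in size:

```python
def map_face_to_depth_image(face_box, first_size, depth_size):
    """Scale a face bounding box (x, y, w, h) detected in the first image
    into the coordinate system of the depth information image.
    first_size and depth_size are (width, height) pairs."""
    x, y, w, h = face_box
    sx = depth_size[0] / first_size[0]
    sy = depth_size[1] / first_size[1]
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))
```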
The process of obtaining the depth information of a face image has been described above with reference to Figs. 2-4. How to establish the depth information model used to judge whether a face image is of a living body is described below with reference to Fig. 5.
As shown in Fig. 5, the method 500 of generating the depth information model comprises:
Step S501: obtaining face depth information images of a plurality of typical features and scaling them to a prescribed size;
Step S502: averaging the plurality of depth information images to obtain one average face depth information image;
Step S503: calculating the statistical histogram feature vector of the average face depth information image; and
Step S504: storing the statistical histogram feature vector as the depth information model.
Specifically, in step S501, photographs of faces of various typical features can be collected, thereby obtaining face depth information images corresponding to the respective typical features. In addition, the obtained face depth information images can be scaled to a prescribed size, for example 24*24.
In step S502, all the face depth information images obtained in step S501 can be averaged to obtain one average face depth information image.
Then, in step S503, the statistical histogram feature vector of the average face depth information image obtained in step S502 can be calculated.
For example, the calculation can use the following formula (2):

H(k) = n_k / N,  k = 0, 1, ..., L-1     (2)

where k represents a feature value of the image, L is the number of possible feature values, n_k is the number of pixels in the image whose feature value is k, and N is the total number of image pixels.
Finally, in step S504, the statistical histogram feature vector calculated in step S503 can be stored as the depth information model.
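Formula (2) is a normalized gray-level histogram. A sketch over a flat list of pixel gray values (e.g. the 24*24 scaled image flattened), assuming L = 256 gray levels:

```python
def histogram_feature(gray_pixels, levels=256):
    """Formula (2): H(k) = n_k / N for k = 0..L-1, where n_k is the number
    of pixels with gray value k and N is the total number of pixels."""
    total = len(gray_pixels)
    counts = [0] * levels
    for value in gray_pixels:
        counts[value] += 1
    return [count / total for count in counts]
```

For step S502, the averaging of the model images can be done pixel-wise before this feature is computed from the resulting average image.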
For the depth information model above, a certain matching range needs to be set: an image whose matching value falls within the preset range of the model can be judged to be an image of a living body.
As shown in Fig. 6, the method 600 of calculating the matching value range comprises:
Step S601: choosing any existing face depth information image and scaling it to the prescribed size;
Step S602: calculating the statistical histogram feature vector of the chosen face depth information image;
Step S603: calculating a matching value between the calculated statistical histogram feature vector and the depth information model; and
Step S604: repeating the above calculation a number of times to obtain a matching value range.
Specifically, in step S601, any one of the obtained face depth information images can be selected and scaled to the prescribed size, for example 24*24.
Then, in step S602, the statistical histogram feature vector of the selected face depth information image is calculated, for example in the manner described above.
Then, in step S603, the matching value between the statistical histogram feature vector of the face depth information image obtained in step S602 and the stored depth information model is calculated, for example by the following formula (3):

P(Q, D) = ( Σ_{k=0}^{L-1} min[H_Q(k), H_D(k)] ) / ( Σ_{k=0}^{L-1} H_Q(k) )     (3)

where H_Q(k) and H_D(k) are, respectively, the statistical histogram feature vector of the currently selected face depth image and that of the average face depth image serving as the model, and P(Q, D) is the calculated matching value.
Finally, in step S604, the operations of steps S601-S603 are repeated, calculating matching values between a group of existing face depth images and the average face depth image serving as the model; an empirical matching value range can thereby be obtained.
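Formula (3) is a histogram-intersection match normalized by the mass of the query histogram; a minimal sketch:

```python
def matching_value(h_q, h_d):
    """Formula (3): P(Q, D) = sum_k min[H_Q(k), H_D(k)] / sum_k H_Q(k),
    where h_q is the query histogram and h_d the model histogram."""
    numerator = sum(min(q, d) for q, d in zip(h_q, h_d))
    denominator = sum(h_q)
    return numerator / denominator
```

Since both histograms are normalized to unit mass, P(Q, D) is 1.0 for identical histograms and approaches 0 for disjoint ones, which is what makes a fixed matching range workable.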
After the matching value range has been obtained, as shown in Fig. 7, the method 700 of determining that the subject is a living body according to the embodiment of the present invention comprises:
Step S701: calculating the statistical histogram feature vector of the face image from the depth information of the face image in the first image;
Step S702: calculating a matching value between the calculated statistical histogram feature vector of the face image and the depth information model; and
Step S703: when the calculated matching value falls within the matching value range, determining that the subject is a living body.
Specifically, in step S701, the statistical histogram feature vector of the face image can be calculated from the depth information of the face image of the face (that is, the subject) recognized in the first image.
Then, in step S702, the matching value between the calculated statistical histogram feature vector of the face image and the depth information model can be calculated, for example using formula (3) described above.
Finally, in step S703, when the calculated matching value falls within the matching value range, it is determined that the subject is a living body.
The embodiment described above assumes that the first camera and the second camera are identical. In another embodiment, the first camera and the second camera can instead have different resolutions and fields of view.
In that embodiment, the device can automatically decide, according to the zoom level in use, which camera's information to rely on: the image obtained from the main camera is used as the first image described above, and the focal length of that main camera is used as the focal length in formula (1) above.
Therefore, the image processing method according to the embodiment of the present invention does not need video to perform liveness discrimination; a single pair of images of the subject, taken by the two cameras provided on the electronic device, is enough to complete living-face recognition immediately and perform identity authentication. The user experience is thereby greatly improved.
<Second Embodiment>
Fig. 8 is a block diagram schematically illustrating an electronic device 800 according to the second embodiment of the present disclosure. As shown in Fig. 8, the electronic device 800 comprises one or more processors 810, a storage device 820, an input device 830, an output device 840, a communication device 850, and two cameras 860a and 860b, these components being interconnected by a bus system 870 and/or another form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 800 shown in Fig. 8 are illustrative rather than restrictive; the electronic device 800 may have other components and structures as required.
The processor 810 may be a central processing unit (CPU) or another form of processing unit having data processing capability and/or instruction execution capability, and may control the other components in the electronic device 800 to perform desired functions.
The storage device 820 may comprise one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage media, and the processor 810 may run the program instructions to realize the image processing method described above in conjunction with Figs. 1 to 7. Various application programs and various data, such as image data and the data used and/or produced by the application programs, may also be stored on the computer-readable storage media.
The input device 830 may be a device used by a user to input instructions, and may comprise one or more of a keyboard, a mouse, a microphone, a touch screen, and the like. The instructions are, for example, instructions to capture images using the cameras 860 described below. The output device 840 may output various information (such as images or sounds) to the outside (for example, to the user), and may comprise one or more of a display, a speaker, and the like. The communication device 850 may communicate with other devices (such as personal computers, servers, mobile stations, base stations, and the like) via a network or another technology; the network may be the Internet, a wireless local area network, a mobile communication network, and the like, and the other technology may include, for example, Bluetooth communication and infrared communication. The cameras 860a and 860b can capture images to be processed (such as photos and videos) and store the captured images in the storage device 820 for use by other components.
As mentioned above, the cameras 860a and 860b may be identical cameras, or may be cameras with different resolutions and field angles, for example a wide-angle low-pixel camera with its focus fixed at infinity paired with a conventional focusing high-pixel camera. When a face image is captured for face recognition, the two cameras shoot simultaneously and, after focusing, have approximately the same coverage and image quality.
Therefore, the electronic device according to the embodiment of the present invention can quickly complete live-face recognition and carry out identity authentication, thereby improving the user experience.
<Third Embodiment>
An electronic device 900 according to the third embodiment of the present invention comprises:
a first image acquisition unit 901, configured to obtain a first image of a subject;
a second image acquisition unit 902, configured to obtain a second image of the same subject, the first image and the second image having the same coverage; and
a processing unit 903, configured to perform predetermined image processing on the first image and the second image to obtain depth of view information of the subject, to compare the obtained depth of view information of the subject with a pre-stored depth of view information model to obtain a comparison result, and, when the comparison result meets a predetermined condition, to determine that the subject is a live body.
Optionally, the first image acquisition unit 901 and the second image acquisition unit 902 have the same resolution and field angle.
Optionally, the processing unit 903 is further configured to:
calculate the depth of view information of the subject according to the parallax information of the same feature points contained in the first image and the second image.
Optionally, the processing unit 903 is further configured to:
splice the first image and the second image side by side;
align the same feature points in the first image and the second image so that each pair of matching feature points lies on the same horizontal line;
calculate the depth of view value of each feature point according to the distance between the first image acquisition unit 901 and the second image acquisition unit 902, the focal length of the first image acquisition unit 901 or the second image acquisition unit 902, and the distance between the matching feature points, which serves as the parallax information of the feature point;
convert the calculated depth of view values of the feature points into gray values, thereby generating a depth of view information image displayed in gray scale; and
obtain the depth of view information of the subject from the depth of view information image.
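The per-feature-point computation above can be sketched with the classical stereo relation depth = focal length × baseline / disparity, which is assumed here to correspond to the patent's Formula (2); all numbers and the linear gray mapping are illustrative:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a matched feature point from its horizontal disparity
    (focal length in pixels, camera baseline in metres)."""
    return focal_px * baseline_m / disparity_px

def depth_to_gray(depth, d_min, d_max):
    """Clamp a depth value and map it linearly into an 8-bit gray level."""
    depth = max(d_min, min(depth, d_max))
    return round(255 * (depth - d_min) / (d_max - d_min))

# Hypothetical numbers: 800 px focal length, 6 cm baseline, 96 px disparity.
z = depth_from_disparity(800, 0.06, 96)
print(z)                                   # ~0.5 (metres)
print(depth_to_gray(z, d_min=0.2, d_max=1.2))
```

Applying `depth_to_gray` to every matched feature point yields the gray-scale depth of view information image described above.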
Optionally, the subject is a face, and the processing unit 903 is further configured to:
perform face detection on the first image to obtain the coordinates of the face image;
map the obtained coordinates of the face image onto the depth of view information image to obtain the position of the face image in the depth of view information image; and
obtain the gray values at the corresponding positions to generate a gray-value matrix of the corresponding face as the depth of view information of the face image.
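A sketch of reading out the face's gray-value matrix once the detected face rectangle has been mapped into depth-image coordinates (the image layout as a row-major list of rows and the box format are illustrative assumptions):

```python
def face_depth_matrix(depth_image, face_box):
    """Extract the gray-value sub-matrix covering the detected face.

    depth_image: 2-D list of gray values; face_box: (x, y, w, h) from
    face detection, already mapped into depth-image coordinates.
    """
    x, y, w, h = face_box
    return [row[x:x + w] for row in depth_image[y:y + h]]

# Toy 3x4 depth image (hypothetical gray values).
depth_image = [[0, 10, 20, 30],
               [40, 50, 60, 70],
               [80, 90, 100, 110]]
print(face_depth_matrix(depth_image, (1, 1, 2, 2)))  # [[50, 60], [90, 100]]
```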
Optionally, the pre-stored depth of view information model is generated as follows:
obtaining face depth of view information images of multiple typical faces and scaling them to a prescribed size;
performing averaging on the multiple depth of view information images to obtain one average face depth of view information image;
calculating the statistical histogram feature vector of the average face depth of view information image; and
storing the statistical histogram feature vector as the depth of view information model.
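The model-generation steps (average, then histogram) can be sketched as follows; images are plain 2-D gray matrices already scaled to one size, and the bin count is a hypothetical choice:

```python
def average_image(images):
    """Pixel-wise mean of equally sized gray images."""
    n = len(images)
    h, w = len(images[0]), len(images[0][0])
    return [[sum(img[i][j] for img in images) / n for j in range(w)]
            for i in range(h)]

def histogram_feature(image, bins=8, max_val=256):
    """Statistical histogram feature vector of a gray image."""
    hist = [0] * bins
    for row in image:
        for v in row:
            hist[min(int(v * bins // max_val), bins - 1)] += 1
    return hist

# Two identical 2x2 toy "face depth images" (hypothetical data).
imgs = [[[0, 64], [128, 192]], [[0, 64], [128, 192]]]
model = histogram_feature(average_image(imgs), bins=4)
print(model)  # one pixel per quarter of the gray range: [1, 1, 1, 1]
```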
Optionally, the processing unit 903 is further configured to:
choose any existing face depth of view information image and scale it to the prescribed size;
calculate the statistical histogram feature vector of the chosen face depth of view information image;
calculate a matching value between the computed statistical histogram feature vector and the depth of view information model; and
repeat the above calculation multiple times to obtain the matching-value range.
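The calibration loop above — repeating the matching over a set of known face depth images and keeping the observed extremes — can be sketched as follows (function names and the toy histograms are illustrative assumptions):

```python
def intersection(h_q, h_d):
    """Histogram-intersection matching value, as in Formula (3)."""
    return sum(min(q, d) for q, d in zip(h_q, h_d)) / sum(h_q)

def calibrate_matching_range(sample_features, model_feature, match_fn):
    """Match each known live-face sample against the model and keep the
    observed min/max as the empirical matching-value range."""
    values = [match_fn(f, model_feature) for f in sample_features]
    return min(values), max(values)

# Hypothetical 2-bin histogram features of existing face depth images.
samples = [[4, 6], [5, 5], [2, 8]]
model = [5, 5]
print(calibrate_matching_range(samples, model, intersection))  # (0.7, 1.0)
```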
Optionally, the processing unit 903 is further configured to:
calculate the statistical histogram feature vector of the face image according to the depth of view information of the face image in the first image;
calculate a matching value between the computed statistical histogram feature vector of the face image and the depth of view information model; and
when the calculated matching value falls within the matching-value range, determine that the subject is a live body.
Therefore, the electronic device according to the embodiment of the present invention can quickly complete live-face recognition and carry out identity authentication, thereby improving the user experience.
It should be noted that the embodiments above are merely examples; the present invention is not limited to such examples, and various changes may be made.
It should be noted that, in this specification, the terms "comprise" and "include", or any other variant thereof, are intended to cover non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or device that comprises the element.
Finally, it should also be noted that the series of processes above includes not only processes performed chronologically in the order described here, but also processes performed in parallel or individually rather than in chronological order.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present invention can be realized by software plus the necessary hardware platform, or entirely by hardware. Based on such an understanding, all or part of the contribution of the technical solution of the present invention over the background art can be embodied in the form of a software product. The computer software product can be stored on a storage medium, such as a ROM (read-only memory)/RAM (random access memory), a magnetic disk, or an optical disc, and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the method described in some parts of the embodiments of the present invention.
The present invention has been described in detail above; specific examples are used herein to set forth its principles and embodiments, and the description of the embodiments above is only intended to help understand the method of the present invention and its core idea. Meanwhile, those of ordinary skill in the art may, according to the idea of the present invention, make changes to the specific embodiments and the scope of application. In summary, the contents of this description should not be construed as limiting the present invention.

Claims (16)

1. An image processing method applied to an electronic device comprising a first image acquisition unit and a second image acquisition unit, the method comprising:
obtaining, by the first image acquisition unit, a first image of a subject, and simultaneously obtaining, by the second image acquisition unit, a second image of the same subject, the first image and the second image having the same coverage;
performing predetermined image processing on the first image and the second image to obtain depth of view information of the subject;
comparing the obtained depth of view information of the subject with a pre-stored depth of view information model to obtain a comparison result; and
when the comparison result meets a predetermined condition, determining that the subject is a live body.
2. The image processing method of claim 1, wherein the first image acquisition unit and the second image acquisition unit have the same resolution and field angle.
3. The image processing method of claim 1, wherein performing predetermined image processing on the first image and the second image to obtain the depth of view information of the subject further comprises:
calculating the depth of view information of the subject according to the parallax information of the same feature points in the first image and the second image.
4. The image processing method of claim 3, wherein calculating the depth of view information of the subject according to the parallax information of the same feature points contained in the first image and the second image further comprises:
splicing the first image and the second image side by side;
aligning the same feature points in the first image and the second image so that each pair of matching feature points lies on the same horizontal line;
calculating the depth of view value of each feature point according to the distance between the first image acquisition unit and the second image acquisition unit, the focal length of the first image acquisition unit or the second image acquisition unit, and the distance between the matching feature points, which serves as the parallax information of the feature point;
converting the calculated depth of view values of the feature points into gray values, thereby generating a depth of view information image displayed in gray scale; and
obtaining the depth of view information of the subject from the depth of view information image.
5. The image processing method of claim 4, wherein the subject is a face, and
obtaining the depth of view information of the subject from the depth of view information image further comprises:
performing face detection on the first image to obtain the coordinates of the face image;
mapping the obtained coordinates of the face image onto the depth of view information image to obtain the position of the face image in the depth of view information image; and
obtaining the gray values at the corresponding positions to generate a gray-value matrix of the corresponding face as the depth of view information of the face image.
6. The image processing method of claim 5, wherein the pre-stored depth of view information model is generated by:
obtaining face depth of view information images of multiple typical faces and scaling them to a prescribed size;
performing averaging on the multiple depth of view information images to obtain one average face depth of view information image;
calculating the statistical histogram feature vector of the average face depth of view information image; and
storing the statistical histogram feature vector as the depth of view information model.
7. The image processing method of claim 6, wherein, after the depth of view information model is generated, the method further comprises:
choosing any existing face depth of view information image and scaling it to the prescribed size;
calculating the statistical histogram feature vector of the chosen face depth of view information image;
calculating a matching value between the computed statistical histogram feature vector and the depth of view information model; and
repeating the above calculation multiple times to obtain a matching-value range.
8. The image processing method of claim 7, wherein comparing the obtained depth of view information of the subject with the pre-stored depth of view information model to obtain the comparison result, and, when the comparison result meets the predetermined condition, determining that the subject is a live body further comprises:
calculating the statistical histogram feature vector of the face image according to the depth of view information of the face image in the first image;
calculating a matching value between the computed statistical histogram feature vector of the face image and the depth of view information model; and
when the calculated matching value falls within the matching-value range, determining that the subject is a live body.
9. An electronic device, comprising:
a first image acquisition unit, configured to obtain a first image of a subject;
a second image acquisition unit, configured to obtain a second image of the same subject, the first image and the second image having the same coverage; and
a processing unit, configured to perform predetermined image processing on the first image and the second image to obtain depth of view information of the subject, to compare the obtained depth of view information of the subject with a pre-stored depth of view information model to obtain a comparison result, and, when the comparison result meets a predetermined condition, to determine that the subject is a live body.
10. The electronic device of claim 9, wherein the first image acquisition unit and the second image acquisition unit have the same resolution and field angle.
11. The electronic device of claim 9, wherein the processing unit is further configured to:
calculate the depth of view information of the subject according to the parallax information of the same feature points contained in the first image and the second image.
12. The electronic device of claim 11, wherein the processing unit is further configured to:
splice the first image and the second image side by side;
align the same feature points in the first image and the second image so that each pair of matching feature points lies on the same horizontal line;
calculate the depth of view value of each feature point according to the distance between the first image acquisition unit and the second image acquisition unit, the focal length of the first image acquisition unit or the second image acquisition unit, and the distance between the matching feature points, which serves as the parallax information of the feature point;
convert the calculated depth of view values of the feature points into gray values, thereby generating a depth of view information image displayed in gray scale; and
obtain the depth of view information of the subject from the depth of view information image.
13. The electronic device of claim 12, wherein the subject is a face, and
the processing unit is further configured to:
perform face detection on the first image to obtain the coordinates of the face image;
map the obtained coordinates of the face image onto the depth of view information image to obtain the position of the face image in the depth of view information image; and
obtain the gray values at the corresponding positions to generate a gray-value matrix of the corresponding face as the depth of view information of the face image.
14. The electronic device of claim 13, wherein the pre-stored depth of view information model is generated by:
obtaining face depth of view information images of multiple typical faces and scaling them to a prescribed size;
performing averaging on the multiple depth of view information images to obtain one average face depth of view information image;
calculating the statistical histogram feature vector of the average face depth of view information image; and
storing the statistical histogram feature vector as the depth of view information model.
15. The electronic device of claim 14, wherein the processing unit is further configured to:
choose any existing face depth of view information image and scale it to the prescribed size;
calculate the statistical histogram feature vector of the chosen face depth of view information image;
calculate a matching value between the computed statistical histogram feature vector and the depth of view information model; and
repeat the above calculation multiple times to obtain a matching-value range.
16. The electronic device of claim 15, wherein the processing unit is further configured to:
calculate the statistical histogram feature vector of the face image according to the depth of view information of the face image in the first image;
calculate a matching value between the computed statistical histogram feature vector of the face image and the depth of view information model; and
when the calculated matching value falls within the matching-value range, determine that the subject is a live body.
CN201510971623.8A 2015-12-22 2015-12-22 Image processing method and electric device Pending CN105512637A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510971623.8A CN105512637A (en) 2015-12-22 2015-12-22 Image processing method and electric device

Publications (1)

Publication Number Publication Date
CN105512637A true CN105512637A (en) 2016-04-20

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778518A (en) * 2016-11-24 2017-05-31 汉王科技股份有限公司 A kind of human face in-vivo detection method and device
CN106991377A (en) * 2017-03-09 2017-07-28 广东欧珀移动通信有限公司 With reference to the face identification method, face identification device and electronic installation of depth information
CN107341894A (en) * 2017-08-22 2017-11-10 无锡北斗星通信息科技有限公司 Intelligent System of Vehicle unseals system
CN107368817A (en) * 2017-07-26 2017-11-21 湖南云迪生物识别科技有限公司 Face identification method and device
CN107527410A (en) * 2017-08-22 2017-12-29 无锡北斗星通信息科技有限公司 A kind of Intelligent System of Vehicle solution encapsulation method
CN107563329A (en) * 2017-09-01 2018-01-09 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and mobile terminal
CN107609463A (en) * 2017-07-20 2018-01-19 百度在线网络技术(北京)有限公司 Biopsy method, device, equipment and storage medium
CN107657219A (en) * 2017-09-12 2018-02-02 广东欧珀移动通信有限公司 Method for detecting human face and Related product
CN108182412A (en) * 2017-12-29 2018-06-19 百度在线网络技术(北京)有限公司 For the method and device of detection image type
CN109241832A (en) * 2018-07-26 2019-01-18 维沃移动通信有限公司 A kind of method and terminal device of face In vivo detection
CN109376643A (en) * 2018-10-16 2019-02-22 杭州悉住信息科技有限公司 Face identification method and system
CN111275879A (en) * 2020-02-20 2020-06-12 Oppo广东移动通信有限公司 Currency identification method and device and mobile terminal

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040037450A1 (en) * 2002-08-22 2004-02-26 Bradski Gary R. Method, apparatus and system for using computer vision to identify facial characteristics
CN103353935A (en) * 2013-07-19 2013-10-16 电子科技大学 3D dynamic gesture identification method for intelligent home system
CN103530599A (en) * 2013-04-17 2014-01-22 Tcl集团股份有限公司 Method and system for distinguishing real face and picture face
US8989455B2 (en) * 2012-02-05 2015-03-24 Apple Inc. Enhanced face detection using depth information
CN104834901A (en) * 2015-04-17 2015-08-12 北京海鑫科金高科技股份有限公司 Binocular stereo vision-based human face detection method, device and system
CN105023010A (en) * 2015-08-17 2015-11-04 中国科学院半导体研究所 Face living body detection method and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160420