CN105447483A - Living body detection method and device - Google Patents

Living body detection method and device

Info

Publication number
CN105447483A
Authority
CN
China
Prior art keywords
image
images
frequency response
strength ratio
intensity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201511030874.2A
Other languages
Chinese (zh)
Other versions
CN105447483B (en)
Inventor
范浩强
印奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xuzhou Kuang Shi Data Technology Co., Ltd.
Beijing Megvii Technology Co Ltd
Beijing Maigewei Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Beijing Aperture Science and Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd, Beijing Aperture Science and Technology Ltd filed Critical Beijing Megvii Technology Co Ltd
Priority to CN201511030874.2A priority Critical patent/CN105447483B/en
Publication of CN105447483A publication Critical patent/CN105447483A/en
Application granted granted Critical
Publication of CN105447483B publication Critical patent/CN105447483B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Processing (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The embodiment of the invention provides a living body detection method and device. The method comprises the steps of: receiving at least two groups of object images, which are respectively acquired of an object while structured light of at least two different spatial frequencies illuminates the object; calculating at least two frequency response intensity images corresponding to the groups of object images; and determining whether the object is a living body based on the frequency response intensity images. According to the embodiment of the invention, the method and device judge whether the object is a living body from the frequency response of the detected object to structured light of multiple spatial frequencies, thereby achieving high-security living body detection without requiring cooperation from the detected object.

Description

Living body detection method and device
Technical field
The present invention relates to the technical field of face recognition, and more specifically to a living body detection method and device.
Background
At present, face recognition systems are increasingly applied in security, finance and other fields that require identity authentication, such as remote bank account opening, access control systems, and remote transaction verification. In these high-security applications, besides ensuring that the facial similarity of the person being authenticated matches the records stored in the database, it must first be verified that the person being authenticated is a legitimate living body. That is, the face recognition system needs to be able to prevent an attacker from mounting an attack with photos, videos, masks or three-dimensional face models (made of materials such as paper, plaster or rubber).
Some existing living body detection methods require cooperation from the person being detected. When such cooperation is not given, existing living body detection methods can usually only defend against planar objects such as photos and videos, and their protection against face models with a realistic three-dimensional head shape is not good enough.
Therefore, in order to solve the problem of living body detection in face recognition without requiring cooperation, a new living body detection method needs to be provided.
Summary of the invention
The present invention is proposed in view of the above problems. The present invention provides a living body detection method and device.
According to an aspect of the present invention, a living body detection method is provided, comprising: receiving at least two groups of object images, the at least two groups of object images being respectively acquired of an object while structured light of at least two different spatial frequencies illuminates the object; calculating at least two frequency response intensity images respectively corresponding to the at least two groups of object images; and determining whether the object is a living body based on the at least two frequency response intensity images.
Exemplarily, determining whether the object is a living body based on the at least two frequency response intensity images comprises: obtaining one or more intensity ratio images based on the intensity relationship between the at least two frequency response intensity images; performing face detection on one of the at least two frequency response intensity images to determine a face region; for each of the one or more intensity ratio images, calculating the average intensity value of the pixels in the region of that intensity ratio image corresponding to the face region; and judging whether, among the one or more intensity ratio images, there is a predetermined number of intensity ratio images whose average intensity values fall within their respective preset ranges; if so, it is determined that the object is a living body, and if not, it is determined that the object is not a living body.
Exemplarily, obtaining one or more intensity ratio images based on the intensity relationship between the at least two frequency response intensity images comprises: selecting a specific frequency response intensity image from the at least two frequency response intensity images; and, for each of the remaining frequency response intensity images among the at least two frequency response intensity images, calculating the ratio between the intensity values of the pixels of that remaining frequency response intensity image and the intensity values of the corresponding pixels of the specific frequency response intensity image, and obtaining the one or more intensity ratio images from the calculated intensity ratios.
Exemplarily, for each of the one or more intensity ratio images, calculating the average intensity value of the pixels in the region of that intensity ratio image corresponding to the face region comprises: calculating the total intensity value of the pixels in the region of that intensity ratio image corresponding to the face region; and dividing the total intensity value by the area of the region of that intensity ratio image corresponding to the face region, to obtain the average intensity value.
Exemplarily, the predetermined number equals the total number of intensity ratio images in the one or more intensity ratio images.
Exemplarily, the preset ranges are obtained by training on real faces.
Exemplarily, the structured light comprises infrared light.
Exemplarily, each group of the at least two groups of object images comprises at least two object images respectively acquired of the object while structured light of the same spatial frequency but different phases illuminates the object.
Exemplarily, before receiving the at least two groups of object images, the living body detection method further comprises: illuminating the object with the at least two kinds of structured light of different spatial frequencies; and acquiring object images of the object during each illumination, to obtain the at least two groups of object images.
Exemplarily, calculating the at least two frequency response intensity images respectively corresponding to the at least two groups of object images comprises: calculating, for each group of object images, the frequency response intensity image corresponding to that group according to the relationship between the pixels at corresponding positions in the images of that group.
According to a further aspect of the present invention, a living body detection device is provided, comprising: a receiving module for receiving at least two groups of object images, the at least two groups of object images being respectively acquired of an object while structured light of at least two different spatial frequencies illuminates the object; a calculation module for calculating at least two frequency response intensity images respectively corresponding to the at least two groups of object images; and a living body determination module for determining whether the object is a living body based on the at least two frequency response intensity images.
Exemplarily, the living body determination module comprises: an intensity ratio image obtaining submodule for obtaining one or more intensity ratio images based on the intensity relationship between the at least two frequency response intensity images; a face detection submodule for performing face detection on one of the at least two frequency response intensity images to determine a face region; an average intensity calculation submodule for calculating, for each of the one or more intensity ratio images, the average intensity value of the pixels in the region of that intensity ratio image corresponding to the face region; and a judgement submodule for judging whether, among the one or more intensity ratio images, there is a predetermined number of intensity ratio images whose average intensity values fall within their respective preset ranges, and, if so, determining that the object is a living body, otherwise determining that the object is not a living body.
Exemplarily, the intensity ratio image obtaining submodule comprises: a selection unit for selecting a specific frequency response intensity image from the at least two frequency response intensity images; and an intensity ratio calculation unit for calculating, for each of the remaining frequency response intensity images among the at least two frequency response intensity images, the ratio between the intensity values of the pixels of that remaining frequency response intensity image and the intensity values of the corresponding pixels of the specific frequency response intensity image, and obtaining the one or more intensity ratio images from the calculated intensity ratios.
Exemplarily, the average intensity calculation submodule comprises: a first calculation unit for calculating the total intensity value of the pixels in the region of the intensity ratio image corresponding to the face region; and a second calculation unit for dividing the total intensity value by the area of the region of the intensity ratio image corresponding to the face region, to obtain the average intensity value.
Exemplarily, the predetermined number equals the total number of intensity ratio images in the one or more intensity ratio images.
Exemplarily, the preset ranges are obtained by training on real faces.
Exemplarily, the structured light comprises infrared light.
Exemplarily, each group of the at least two groups of object images comprises at least two object images respectively acquired of the object while structured light of the same spatial frequency but different phases illuminates the object.
Exemplarily, the living body detection device further comprises: a light emission module for illuminating the object with the at least two kinds of structured light of different spatial frequencies; and an image acquisition module for acquiring object images of the object during each illumination, to obtain the at least two groups of object images.
Exemplarily, the calculation module is specifically configured to calculate, for each group of object images, the frequency response intensity image corresponding to that group according to the relationship between the pixels at corresponding positions in the images of that group.
According to the living body detection method and device of the embodiments of the present invention, whether an object is a living body is judged from the frequency response of the detected object to structured light of multiple spatial frequencies. This approach achieves high-security living body detection without requiring cooperation from the detected object.
Brief description of the drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following detailed description of the embodiments of the present invention taken in conjunction with the accompanying drawings. The accompanying drawings are provided for a further understanding of the embodiments of the present invention and form a part of the specification; together with the embodiments of the present invention they serve to explain the present invention and are not to be construed as limiting the present invention. In the drawings, the same reference numerals generally denote the same components or steps.
Fig. 1 illustrates a schematic block diagram of an exemplary electronic device for implementing the living body detection method and device according to embodiments of the present invention;
Fig. 2 illustrates a schematic flowchart of a living body detection method according to an embodiment of the present invention;
Fig. 3 illustrates a schematic diagram of illuminating an object with structured light and acquiring object images according to an embodiment of the present invention;
Fig. 4 illustrates schematic grating fringe patterns of structured light according to an embodiment of the present invention;
Fig. 5 illustrates a schematic flowchart of the step of determining whether an object is a living body according to an embodiment of the present invention;
Fig. 6 illustrates a schematic block diagram of a living body detection device according to an embodiment of the present invention;
Fig. 7 illustrates a schematic block diagram of the living body determination module in the living body detection device according to an embodiment of the present invention; and
Fig. 8 illustrates a schematic block diagram of a living body detection system according to an embodiment of the present invention.
Detailed description of embodiments
In order to make the objects, technical solutions and advantages of the present invention more apparent, example embodiments according to the present invention are described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention, and it should be understood that the present invention is not limited by the example embodiments described herein. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention described herein without creative effort shall fall within the protection scope of the present invention.
First, an exemplary electronic device 100 for implementing the living body detection method and device according to embodiments of the present invention is described with reference to Fig. 1.
As shown in Fig. 1, the electronic device 100 comprises one or more processors 102, one or more storage devices 104, an input device 106, an output device 108, an image acquisition device 110 and a light emitting device 114, which are interconnected by a bus system 112 and/or another form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in Fig. 1 are illustrative rather than restrictive; the electronic device may have other components and structures as needed.
The processor 102 may be a central processing unit (CPU) or another form of processing unit with data processing capability and/or instruction execution capability, and may control other components in the electronic device 100 to perform desired functions.
The storage device 104 may comprise one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 102 may run the program instructions to realize the client functions (implemented by the processor) and/or other desired functions of the embodiments of the present invention described below. Various applications and various data, such as the data used and/or produced by the applications, may also be stored on the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions, and may comprise one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (such as images and/or sounds) to the outside (for example, to a user), and may comprise one or more of a display, a loudspeaker, and the like.
The image acquisition device 110 may capture desired images (such as photos, video frames, and the like) and store the captured images in the storage device 104 for use by other components. The image acquisition device 110 may be implemented by any suitable camera device, such as the camera of a webcam, a video camera or a mobile terminal.
The light emitting device 114 may emit structured light. Exemplarily, the light source used by the light emitting device 114 is an infrared light source, and the structured light it emits is infrared light. It will be appreciated that if the light emitting device 114 adopts an infrared light source, the image acquisition device 110 should also be capable of infrared imaging. The light emitting device 114 may, independently or under the control of the processor 102, emit structured light having a desired spatial frequency and a desired phase at a desired time. The light emitting device 114 may be implemented by a device such as a laser or a projector.
Exemplarily, the exemplary electronic device for implementing the living body detection method and device according to embodiments of the present invention may be implemented as, for example, a smartphone, a tablet computer, the image acquisition end of an access control system, a personal computer, and the like.
Hereinafter, a living body detection method according to an embodiment of the present invention is described with reference to Fig. 2. Fig. 2 illustrates a schematic flowchart of a living body detection method 200 according to an embodiment of the present invention. As shown in Fig. 2, the living body detection method 200 comprises the following steps.
In step S210, at least two groups of object images are received, the at least two groups of object images being respectively acquired of an object while structured light of at least two different spatial frequencies illuminates the object.
The "object" described herein is the object undergoing living body detection; it may be a real human face, or a photo, video, mask or face model used by an attacker.
According to an embodiment, the structured light may comprise infrared light (as described above). The wavelength of the infrared light may be, for example, 808 nm or 850 nm. Since people are insensitive to infrared light, using infrared light avoids disturbing the person being detected and thus improves the user experience. In addition, optionally, line structured light may be used to perform living body detection on the object. The image data recorded with line structured light contains a relatively large amount of information while being simple to process, which helps to perform living body detection quickly and accurately.
Exemplarily, a light emitting device may be used to successively emit at least two kinds of structured light with different spatial frequencies to illuminate the object. While the light emitting device illuminates the object with structured light, an image acquisition device may be used to acquire images of the object, thereby obtaining object images. The acquired object images may then be sent by the image acquisition device to a memory for later processing by a processor, or sent directly to the processor. In the following, the light emitting device is a projector and the image acquisition device is a camera by way of example. Fig. 3 illustrates a schematic diagram of illuminating an object with structured light and acquiring object images according to an embodiment of the present invention. In order to collect richer and more reliable information, the projector should be spatially close to the camera, as shown in Fig. 3.
The form of the structured light is described below with reference to Fig. 4, which shows schematic grating fringe patterns of structured light according to an embodiment of the present invention. As shown in Fig. 4, the projector emits two kinds of structured light, which are divided into two groups, namely an L group and an R group. The left side of Fig. 4 shows the L group, which comprises the structured light corresponding to three grating fringe patterns having a first spatial frequency; the right side of Fig. 4 shows the R group, which comprises the structured light corresponding to three grating fringe patterns having a second spatial frequency. The six grating fringe patterns present sinusoidal stripes as shown in Fig. 4. The first spatial frequency is different from the second spatial frequency. Exemplarily, the first spatial frequency may be 1/24 of the second spatial frequency; that is, the spatial period of the L-group structured light is 24 times the spatial period of the R-group structured light. It will of course be appreciated that this ratio is only exemplary and may be any suitable value; the present invention is not limited in this respect. The phases of the structured light corresponding to the three grating fringe patterns in the L group differ from one another, with a phase difference of 120 degrees between them. The phases of the structured light corresponding to the three grating fringe patterns in the R group also differ from one another, likewise with a phase difference of 120 degrees between them. Of course, this phase difference is only exemplary and may be any suitable value; the present invention is not limited in this respect.
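As an illustration of the fringe patterns just described, the following Python sketch generates two groups of sinusoidal grating patterns, three phases each, with the L-group period 24 times the R-group period. The projector resolution and the R-group period are assumed values chosen only for this example; the patent does not prescribe them.

```python
import numpy as np

def make_fringe_pattern(width, height, period_px, phase):
    """Horizontal sinusoidal fringe pattern with values in [0, 1]."""
    x = np.arange(width)
    row = 0.5 + 0.5 * np.sin(2 * np.pi * x / period_px + phase)
    return np.tile(row, (height, 1))

W, H = 1280, 800                      # assumed projector resolution
phases = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]   # 120-degree phase steps
period_R = 8                          # assumed R-group period, in pixels
period_L = 24 * period_R              # L period is 24x the R period, as in the example

L_patterns = [make_fringe_pattern(W, H, period_L, p) for p in phases]
R_patterns = [make_fringe_pattern(W, H, period_R, p) for p in phases]
```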
The projector may be used to illuminate the object successively with the structured light corresponding to each grating fringe pattern, and the camera may be used to acquire an object image during each illumination. Fig. 4 shows six grating fringe patterns; correspondingly, six object images should be acquired. These six object images can be divided into two groups according to the spatial frequency of the structured light corresponding to each of them.
Although Fig. 4 takes two kinds of structured light with different spatial frequencies as an example, it will be appreciated that the structured light may have more than two spatial frequencies; correspondingly, more than two groups of object images may be obtained. In addition, the structured light of a given spatial frequency may have fewer or more than three phases; correspondingly, each group of object images may contain fewer or more than three object images.
In step S220, at least two frequency response intensity images respectively corresponding to the at least two groups of object images are calculated.
Taking each group of object images as a unit, one frequency response intensity image can be calculated for that group. In one embodiment, the frequency response intensity image corresponding to a group of object images can be calculated according to the relationship between the pixels at corresponding positions in the images of that group. The calculation of the frequency response intensity image is described below.
For the i-th object image in a given group of object images, let I[i,x,y] denote the intensity value of the pixel at position (x,y) in this i-th object image, and let a_i denote the offset phase of the structured light corresponding to the i-th object image. The frequency response of the object to the structured light can be expressed by the following formula:
q[x,y] * sin(p + a_i) + r = I[i,x,y]    (1)
Here, p denotes the initial phase for this group of object images, r denotes the intensity value of the background (ambient) light component and is an auxiliary variable, and q[x,y] denotes the intensity value, at pixel (x,y), of the frequency response intensity image corresponding to this group of object images. p, q[x,y] and r are unknowns.
For a given group of object images, formula (1) can be written for each object image in the group, forming a system of equations. Solving the resulting system for q[x,y] then yields the frequency response intensity image.
The following takes a group of object images containing four object images as an example. Suppose the offset phases of the structured light corresponding to the four object images are a_0, a_1, a_2 and a_3 respectively; then the following equations can be written:
q[x,y] * sin(p + a_0) + r = I[1,x,y]
q[x,y] * sin(p + a_1) + r = I[2,x,y]
q[x,y] * sin(p + a_2) + r = I[3,x,y]
q[x,y] * sin(p + a_3) + r = I[4,x,y]    (2)
It will be appreciated that, since the frequency response formula contains three unknowns, p, q[x,y] and r, the system of equations in formula (2) is over-determined. In this case, the least-squares solution of equation (2) for q[x,y] can be calculated to obtain the frequency response intensity image.
From the above description it can be seen that, when a group of object images contains more than three object images corresponding to structured light with different phases, the resulting system of equations is over-determined and a least-squares solution is sought. When a group contains exactly three object images corresponding to structured light with different phases, the resulting system of equations is well-determined and an exact solution can be found. When a group contains only two object images corresponding to structured light with different phases, the resulting system of equations is under-determined and an elementary solution is sought. When a group contains only two object images, the following formula can be used to obtain the elementary solution:
q[x,y] = max(I[1,x,y], I[2,x,y]) - min(I[1,x,y], I[2,x,y])    (3)
Optionally, for each spatial frequency, structured light with three or more different phases may be used to illuminate the object and three or more object images may be acquired, which improves the detection accuracy.
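The per-pixel fit described above can be sketched as follows. The sketch rewrites q*sin(p + a_i) + r as c1*cos(a_i) + c2*sin(a_i) + r, with c1 = q*sin(p) and c2 = q*cos(p); this linearization is a standard phase-shift trick that is mathematically equivalent to formula (1), though the patent does not spell it out. The unknowns (c1, c2, r) are then found by linear least squares, and the two-image case falls back to the elementary solution of formula (3). Function and variable names are illustrative only.

```python
import numpy as np

def frequency_response_image(images, offset_phases):
    """images: list of 2-D arrays I[i] captured under one spatial frequency
    with offset phases a_i (in radians).  Returns q[x, y] for the model
    I[i] = q * sin(p + a_i) + r."""
    if len(images) == 2:
        # under-determined case: elementary solution of formula (3)
        i1, i2 = (np.asarray(im, dtype=float) for im in images)
        return np.maximum(i1, i2) - np.minimum(i1, i2)
    a = np.asarray(offset_phases, dtype=float)
    # design matrix of the linearized model c1*cos(a_i) + c2*sin(a_i) + r
    M = np.stack([np.cos(a), np.sin(a), np.ones_like(a)], axis=1)   # (n, 3)
    I = np.stack([np.asarray(im, dtype=float).ravel() for im in images])  # (n, H*W)
    coeffs, *_ = np.linalg.lstsq(M, I, rcond=None)                  # (3, H*W)
    q = np.hypot(coeffs[0], coeffs[1])                              # q = sqrt(c1^2 + c2^2)
    return q.reshape(np.asarray(images[0]).shape)
```

With exactly three phases the least-squares fit reduces to the exact solution; with four or more it is the least-squares solution described for formula (2).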
The calculation of the frequency response intensity images is further described below in connection with the structured light shown in Fig. 4.
For the structured light corresponding to the six grating fringe patterns shown in Fig. 4, suppose that the three object images obtained under the structured light corresponding to the three grating fringe patterns of the L group are the 1st, 2nd and 3rd object images, and that the three object images obtained under the structured light corresponding to the three grating fringe patterns of the R group are the 4th, 5th and 6th object images.
For the object images corresponding to the L-group structured light, let A[x,y] denote the intensity value of the frequency response intensity image at pixel (x,y):
A[x,y] * sin(p) + r = I[1,x,y]
A[x,y] * sin(p + 2π/3) + r = I[2,x,y]
A[x,y] * sin(p + 4π/3) + r = I[3,x,y]    (4)
From formula (4) it can be calculated that: A[x,y] = ((2*I[1,x,y] - I[2,x,y] - I[3,x,y])^2 + 3*(I[2,x,y] - I[3,x,y])^2) / 9.
Similarly, for the object images corresponding to the R-group structured light, let B[x,y] denote the intensity value of the frequency response intensity image at pixel (x,y); it can be obtained that: B[x,y] = ((2*I[4,x,y] - I[5,x,y] - I[6,x,y])^2 + 3*(I[5,x,y] - I[6,x,y])^2) / 9.
In the embodiment described below, in which intensity ratio images are obtained based on the intensity relationship between the frequency response intensity images, the constant factor 1/9 cancels out, so one may simply take A[x,y] = (2*I[1,x,y] - I[2,x,y] - I[3,x,y])^2 + 3*(I[2,x,y] - I[3,x,y])^2 and B[x,y] = (2*I[4,x,y] - I[5,x,y] - I[6,x,y])^2 + 3*(I[5,x,y] - I[6,x,y])^2.
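For the six-image example of Fig. 4, the closed-form expressions above can be evaluated directly. The sketch below assumes NumPy arrays I1..I6 holding the six object images and, as in the simplified expressions, drops the constant factor 1/9.

```python
import numpy as np

def three_phase_response(i1, i2, i3):
    """Frequency response intensity for three captures with 0, 120 and 240 degree
    phase offsets; the constant 1/9 is dropped since it cancels in the ratio."""
    i1, i2, i3 = (np.asarray(v, dtype=float) for v in (i1, i2, i3))
    return (2 * i1 - i2 - i3) ** 2 + 3 * (i2 - i3) ** 2

# I1..I3 captured under the L-group patterns, I4..I6 under the R-group patterns:
# A = three_phase_response(I1, I2, I3)
# B = three_phase_response(I4, I5, I6)
```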
In step S230, whether the object is a living body is determined based on the at least two frequency response intensity images.
After the at least two frequency response intensity images have been calculated, whether the object is a living body can be determined based on these at least two frequency response intensity images.
Because objects of different materials respond differently to structured light of multiple spatial frequencies (for example, the frequency response intensity images of human skin differ greatly from those of screens, paper, plaster or rubber), living human faces can be distinguished from non-living attack media by means of the frequency response intensity images. This approach requires no cooperation from the detected object and can distinguish a living face from a three-dimensional face model with a realistic head shape, making it a detection method with both high security and good usability.
Exemplarily, the living body detection method according to embodiments of the present invention may be implemented in a unit or system having a memory and a processor.
The living body detection method according to embodiments of the present invention may be deployed at a face image acquisition end; for example, in security applications it may be deployed at the image acquisition end of an access control system, and in financial applications it may be deployed at a personal terminal such as a smartphone, a tablet computer or a personal computer.
Alternatively, the living body detection method according to embodiments of the present invention may also be deployed in a distributed manner across a server end (or cloud) and a personal terminal. For example, in financial applications, object images may be acquired at the personal terminal, which sends them to the server end (or cloud), and the server end (or cloud) performs living body detection based on the object images.
According to the living body detection method of the embodiments of the present invention, whether an object is a living body is judged from the frequency response of the detected object to structured light of multiple spatial frequencies. This approach achieves high-security living body detection without requiring cooperation from the detected object.
Fig. 5 illustrates a schematic flowchart of the step of determining whether the object is a living body (i.e., step S230 shown in Fig. 2) according to an embodiment of the present invention. As shown in Fig. 5, step S230 may comprise the following steps.
In step S231, one or more intensity ratio images are obtained based on the intensity relationship between the at least two frequency response intensity images.
Exemplarily, step S231 may specifically comprise: selecting a specific frequency response intensity image from the at least two frequency response intensity images; and, for each of the remaining frequency response intensity images among the at least two frequency response intensity images, calculating the ratio between the intensity values of the pixels of that remaining frequency response intensity image and the intensity values of the corresponding pixels of the specific frequency response intensity image, and obtaining the one or more intensity ratio images from the calculated intensity ratios. In this way, the intensity ratio images can be obtained quickly and conveniently. An example is given below.
Taking the embodiment shown in Fig. 4 as an example again, there are two frequency response intensity images, A[x,y] and B[x,y]. Let R[x,y] denote the intensity ratio image; it may be calculated as R[x,y] = B[x,y]/A[x,y] or R[x,y] = A[x,y]/B[x,y]. The intensity ratio image may also be calculated as R[x,y] = (A[x,y] - B[x,y])/A[x,y] = 1 - B[x,y]/A[x,y], or as R[x,y] = (B[x,y] - A[x,y])/B[x,y] = 1 - A[x,y]/B[x,y].
When there are more than two frequency response intensity images, they can be combined flexibly to calculate intensity ratio images. For example, suppose there are four frequency response intensity images A[x,y], B[x,y], C[x,y] and D[x,y]. One frequency response intensity image, say A[x,y], can be selected from the four and combined in turn with each of the remaining frequency response intensity images, forming three combinations; the intensity ratio image of each combination can then be calculated in the same way as for two frequency response intensity images described above. In this way, three intensity ratio images are obtained, for example R1[x,y] = B[x,y]/A[x,y], R2[x,y] = C[x,y]/A[x,y] and R3[x,y] = D[x,y]/A[x,y]. When judging whether the average intensity values of the intensity ratio images fall within the preset ranges (i.e., in step S234), the judgement can then be made for each intensity ratio image in turn. The above way of forming intensity ratio images is only an example; any other suitable combination may be used, as described below. In one example, the above frequency response intensity images may be divided into two combinations, A[x,y] with B[x,y] and C[x,y] with D[x,y], and two intensity ratio images calculated accordingly, R1[x,y] = B[x,y]/A[x,y] and R2[x,y] = C[x,y]/D[x,y]. In another example, a single intensity ratio image may be calculated from the four frequency response intensity images, for example R[x,y] = (B[x,y] + C[x,y] + D[x,y])/A[x,y] = B[x,y]/A[x,y] + C[x,y]/A[x,y] + D[x,y]/A[x,y].
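A minimal sketch of step S231 for the ratio-to-a-reference scheme described above. The small eps guard against division by zero is an implementation detail added here, not part of the patent.

```python
import numpy as np

def intensity_ratio_images(responses, reference_index=0, eps=1e-6):
    """responses: list of frequency response intensity images (e.g. [A, B, C, D]).
    Returns one ratio image per remaining response, each divided pixel-wise
    by the selected reference image."""
    responses = [np.asarray(r, dtype=float) for r in responses]
    ref = responses[reference_index]
    return [r / (ref + eps)
            for i, r in enumerate(responses) if i != reference_index]

# two-frequency case of the example above: a single ratio image R = B / A
# R, = intensity_ratio_images([A, B])
```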
From the above description, those skilled in the art will understand how to calculate intensity ratio images for any number of frequency response intensity images, which is not repeated here. In addition, it should be understood that the above description is only an example; any suitable implementation that obtains intensity ratio images based on the intensity relationship between frequency response intensity images shall fall within the protection scope of the present invention.
In step S232, face detection is performed on one of the at least two frequency response intensity images to determine a face region.
In this step, any one of the at least two frequency response intensity images may be selected; it is determined whether the selected frequency response intensity image contains a face, and when it does, the face region is located in the selected frequency response intensity image. For the embodiment shown in Fig. 4, the frequency response intensity image corresponding to the L-group structured light is preferably selected, that is, the frequency response intensity image corresponding to the group of structured light with the lower spatial frequency. Compared with the other frequency response intensity images, performing face detection on the frequency response intensity image corresponding to the structured light with the lower spatial frequency is simpler and more accurate, since the face region is easier to identify in that image.
A pre-trained face detector may be used to locate the face region in the frequency response intensity image. For example, a face detector may be trained in advance on a large number of images using face detection and tracking algorithms such as the Haar algorithm or the Adaboost algorithm; for a single input image, this pre-trained face detector can locate the face region rapidly.
It should be understood that the present invention is not limited by the specific face detection method adopted; whether an existing face detection method or a face detection method developed in the future, it can be applied to the living body detection method according to embodiments of the present invention and shall also be included within the protection scope of the present invention.
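The pre-trained detector mentioned above can be any face detector. As one concrete, assumed choice (the patent does not name a library), the sketch below uses OpenCV's bundled Haar cascade and returns the largest detected face rectangle.

```python
import cv2
import numpy as np

def detect_face_region(response_image):
    """Locate a face region (x0, y0, x1, y1) in a frequency response intensity image
    using a pre-trained Haar cascade; returns None if no face is found."""
    img = np.asarray(response_image, dtype=float)
    # scale the float response image to 8-bit for the detector
    img8 = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(img8, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # keep the largest detection
    return x, y, x + w, y + h
```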
In step S233, for each of the one or more intensity ratio images, the average intensity value of the pixels in the region of that intensity ratio image corresponding to the face region is calculated.
Exemplarily, step S233 may specifically comprise: calculating the total intensity value of the pixels in the region of that intensity ratio image corresponding to the face region; and dividing the total intensity value by the area of the region of that intensity ratio image corresponding to the face region, to obtain the average intensity value. In this way, the average intensity value can be calculated accurately.
The calculation of the average intensity value is illustrated below. Suppose the face region determined in step S232 is the rectangle (x0, y0) to (x1, y1). The average intensity value u of the pixels in the region of the intensity ratio image R[x,y] corresponding to this face region can then be calculated by the following formula:
u = ( Σ_{x0 ≤ x < x1} Σ_{y0 ≤ y < y1} R[x,y] ) / ( (x1 - x0) * (y1 - y0) )    (5)
When multiple intensity ratio images are obtained in step S231, an average intensity value can be calculated for each intensity ratio image, so that multiple average intensity values are obtained.
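Equation (5) amounts to a mean over the face rectangle. A sketch, assuming row-major (y, x) array indexing:

```python
import numpy as np

def face_region_mean(ratio_image, face_region):
    """Average intensity of the ratio image over the detected face rectangle,
    i.e. equation (5): the sum of R[x, y] over the rectangle divided by its area."""
    x0, y0, x1, y1 = face_region
    patch = np.asarray(ratio_image, dtype=float)[y0:y1, x0:x1]
    return patch.sum() / patch.size

# one average value per ratio image
# means = [face_region_mean(R, face_region) for R in ratio_images]
```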
In step S234, it is judged whether, among the one or more intensity ratio images, there is a predetermined number of intensity ratio images whose average intensity values fall within their respective preset ranges; if so, the method proceeds to step S235, and if not, it proceeds to step S236.
In step S235, it is determined that the object is a living body.
In step S236, it is determined that the object is not a living body.
The predetermined number can be set as required. If only one intensity ratio image is obtained in step S231, the predetermined number equals 1; that is, it can be judged directly whether the average intensity value of this intensity ratio image falls within the preset range, and if so the object is determined to be a living body, otherwise it is determined not to be a living body. If multiple intensity ratio images are obtained in step S231, the predetermined number may be equal to or less than the total number of intensity ratio images. For example, if three intensity ratio images are obtained in step S231, the predetermined number may equal 3. Each of the three intensity ratio images has its own corresponding preset range, and it can be judged directly whether the average intensity values of the three intensity ratio images all fall within their respective preset ranges; if so, the three intensity ratio images are considered to meet the requirement and the object is determined to be a living body, otherwise the object is determined not to be a living body. As another example, if three intensity ratio images are obtained in step S231, the predetermined number may equal 2. It can then be judged whether there are two intensity ratio images among the three whose average intensity values fall within their respective preset ranges; if so, the three intensity ratio images are considered to meet the requirement and the object is determined to be a living body, and if not (that is, only one or none of the intensity ratio images has an average intensity value within its preset range), the object is determined not to be a living body.
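The decision rule just described can be sketched as follows; defaulting to requiring all ratio images to fall within their ranges matches the case where the predetermined number equals the total number of intensity ratio images.

```python
def is_live(mean_values, preset_ranges, predetermined_number=None):
    """mean_values: one average intensity per ratio image.
    preset_ranges: matching list of (low, high) ranges learned from real faces.
    predetermined_number: how many ratio images must fall inside their own range;
    defaults to all of them."""
    if predetermined_number is None:
        predetermined_number = len(mean_values)
    hits = sum(low <= m <= high
               for m, (low, high) in zip(mean_values, preset_ranges))
    return hits >= predetermined_number
```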
The preset ranges may be defined by one or more thresholds, which can be obtained by training on real faces. For example, real faces may be illuminated in advance with the various kinds of structured light having different spatial frequencies, and a large number of object images of real faces may be acquired. The frequency response intensity images corresponding to the structured light of the various spatial frequencies are then calculated, and the characteristics of the intensity ratio images obtained from the intensity relationship between these frequency response intensity images are summarized, so as to obtain suitable thresholds, i.e., suitable preset ranges.
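One possible way to derive such a preset range from real-face samples is to take percentile bounds of the collected average ratio values. The patent only states that the range is obtained by training, so the percentile choice below is an assumption made for illustration.

```python
import numpy as np

def learn_preset_range(samples, lower_pct=1.0, upper_pct=99.0):
    """samples: face-region average ratio values collected from many real faces
    under the same pair of spatial frequencies.  Returns a (low, high) range
    covering the chosen percentiles (the cut-offs are assumed, not from the patent)."""
    samples = np.asarray(samples, dtype=float)
    return (float(np.percentile(samples, lower_pct)),
            float(np.percentile(samples, upper_pct)))
```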
Judging whether the object is a living body from the intensity ratio images is a living body detection approach with relatively high accuracy.
Exemplarily, each group of the at least two groups of object images comprises at least two object images respectively acquired of the object while structured light of the same spatial frequency but different phases illuminates the object. As described above, for each group of object images, calculating the frequency response intensity image requires solving a system of equations such as formula (2). Therefore, for structured light of a given spatial frequency, at least two different phases should be set, and correspondingly at least two object images are acquired.
Exemplarily, before step S210, the living body detection method 200 may further comprise: illuminating the object with the at least two kinds of structured light of different spatial frequencies; and acquiring object images of the object during each illumination, to obtain the at least two groups of object images. The way of illuminating the object with structured light and acquiring object images has been described above in connection with Fig. 3 and Fig. 4. As described above, a light emitting device (such as a projector) may be used to emit the structured light towards the object, and an image acquisition device (such as a camera) may be used to acquire the object images.
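A sketch of this capture procedure; project_pattern and capture_frame are hypothetical hooks into the projector and camera, since the patent does not define a hardware interface.

```python
def capture_object_images(project_pattern, capture_frame, pattern_groups):
    """pattern_groups: e.g. [L_patterns, R_patterns], one list per spatial frequency.
    Returns one group of object images per spatial frequency."""
    groups = []
    for patterns in pattern_groups:
        group = []
        for pattern in patterns:
            project_pattern(pattern)        # illuminate the object with this fringe pattern
            group.append(capture_frame())   # grab one object image under that illumination
        groups.append(group)
    return groups
```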
Fig. 6 shows a schematic block diagram of a living body detection device 600 according to an embodiment of the present invention.
As shown in Fig. 6, the living body detection device 600 according to the embodiment of the present invention comprises a receiving module 610, a calculation module 620 and a living body determination module 630.
The receiving module 610 is configured to receive at least two groups of object images, the at least two groups of object images being respectively acquired of an object while structured light of at least two different spatial frequencies illuminates the object. The receiving module 610 may be implemented by the processor 102 of the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104.
The calculation module 620 is configured to calculate at least two frequency response intensity images respectively corresponding to the at least two groups of object images. The calculation module 620 may be implemented by the processor 102 of the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104.
The living body determination module 630 is configured to determine whether the object is a living body based on the at least two frequency response intensity images. The living body determination module 630 may be implemented by the processor 102 of the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104, and may perform steps S231 to S236 of the living body detection method according to the embodiments of the present invention.
Fig. 7 shows a schematic block diagram of the living body determination module 630 in the living body detection device 600 according to an embodiment of the present invention.
As shown in Fig. 7, the living body determination module 630 may comprise an intensity ratio image obtaining submodule 6310, a face detection submodule 6320, an average intensity calculation submodule 6330 and a judgement submodule 6340.
The intensity ratio image obtaining submodule 6310 is configured to obtain one or more intensity ratio images based on the intensity relationship between the at least two frequency response intensity images. The intensity ratio image obtaining submodule 6310 may be implemented by the processor 102 of the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104.
The face detection submodule 6320 is configured to perform face detection on one of the at least two frequency response intensity images to determine a face region. The face detection submodule 6320 may be a face detector, and may be implemented by the processor 102 of the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104.
The average intensity calculation submodule 6330 is configured to calculate, for each of the one or more intensity ratio images, the average intensity value of the pixels in the region of that intensity ratio image corresponding to the face region. The average intensity calculation submodule 6330 may be implemented by the processor 102 of the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104.
The judgement submodule 6340 is configured to judge whether, among the one or more intensity ratio images, there is a predetermined number of intensity ratio images whose average intensity values fall within their respective preset ranges, and, if so, to determine that the object is a living body, otherwise to determine that the object is not a living body. The judgement submodule 6340 may be implemented by the processor 102 of the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104.
According to an embodiment of the present invention, the intensity ratio image obtaining submodule 6310 may comprise a selection unit and an intensity ratio calculation unit. The selection unit is configured to select a specific frequency response intensity image from the at least two frequency response intensity images. The intensity ratio calculation unit is configured to calculate, for each of the remaining frequency response intensity images among the at least two frequency response intensity images, the ratio between the intensity values of the pixels of that remaining frequency response intensity image and the intensity values of the corresponding pixels of the specific frequency response intensity image, and to obtain the one or more intensity ratio images from the calculated intensity ratios.
According to an embodiment of the present invention, the average intensity calculation submodule 6330 comprises a first calculation unit and a second calculation unit. The first calculation unit is configured to calculate the total intensity value of the pixels in the region of the intensity ratio image corresponding to the face region. The second calculation unit is configured to divide the total intensity value by the area of the region of the intensity ratio image corresponding to the face region, to obtain the average intensity value.
According to an embodiment of the present invention, the predetermined number equals the total number of intensity ratio images in the one or more intensity ratio images.
According to an embodiment of the present invention, the preset ranges are obtained by training on real faces.
According to an embodiment of the present invention, the structured light comprises infrared light.
According to an embodiment of the present invention, each group of the at least two groups of object images comprises at least two object images respectively acquired of the object while structured light of the same spatial frequency but different phases illuminates the object.
According to an embodiment of the present invention, the living body detection device 600 further comprises a light emission module and an image acquisition module. The light emission module is configured to illuminate the object with the at least two kinds of structured light of different spatial frequencies. The image acquisition module is configured to acquire object images of the object during each illumination, to obtain the at least two groups of object images. The light emission module may be implemented by the light emitting device 114 shown in Fig. 1, and the image acquisition module may be implemented by the image acquisition device 110 shown in Fig. 1.
According to an embodiment of the present invention, the calculation module 620 in the living body detection device 600 is specifically configured to calculate, for each group of object images, the frequency response intensity image corresponding to that group according to the relationship between the pixels at corresponding positions in the images of that group.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered to go beyond the scope of the present invention.
Fig. 8 shows a schematic block diagram of a living body detection system 800 according to an embodiment of the present invention. The living body detection system 800 comprises an image acquisition device 810, a light emitting device 820, a storage device 830 and a processor 840.
The image acquisition device 810 is configured to acquire object images. The light emitting device 820 is configured to emit structured light.
The storage device 830 stores program code for implementing the corresponding steps of the living body detection method according to the embodiments of the present invention.
The processor 840 is configured to run the program code stored in the storage device 830 to perform the corresponding steps of the living body detection method according to the embodiments of the present invention, and to implement the receiving module 610, the calculation module 620 and the living body determination module 630 of the living body detection device 600 according to the embodiments of the present invention.
In one embodiment, the following steps are performed when the program code is run by the processor 840: receiving at least two groups of object images, the at least two groups of object images being respectively acquired of an object while structured light of at least two different spatial frequencies illuminates the object; calculating at least two frequency response intensity images respectively corresponding to the at least two groups of object images; and determining whether the object is a living body based on the at least two frequency response intensity images.
In one embodiment, the step, performed when the program code is run by the processor 840, of determining whether the object is a living body based on the at least two frequency response intensity images comprises: obtaining one or more intensity ratio images based on the intensity relationship between the at least two frequency response intensity images; performing face detection on one of the at least two frequency response intensity images to determine a face region; for each of the one or more intensity ratio images, calculating the average intensity value of the pixels in the region of that intensity ratio image corresponding to the face region; and judging whether, among the one or more intensity ratio images, there is a predetermined number of intensity ratio images whose average intensity values fall within their respective preset ranges, and, if so, determining that the object is a living body, otherwise determining that the object is not a living body.
In one embodiment, the step, performed when the program code is run by the processor 840, of obtaining one or more intensity ratio images based on the intensity relationship between the at least two frequency response intensity images comprises: selecting a specific frequency response intensity image from the at least two frequency response intensity images; and, for each of the remaining frequency response intensity images among the at least two frequency response intensity images, calculating the ratio between the intensity values of the pixels of that remaining frequency response intensity image and the intensity values of the corresponding pixels of the specific frequency response intensity image, and obtaining the one or more intensity ratio images from the calculated intensity ratios.
In one embodiment, the step, performed when the program code is run by the processor 840, of calculating, for each of the one or more intensity ratio images, the average intensity value of the pixels in the region of that intensity ratio image corresponding to the face region comprises: calculating the total intensity value of the pixels in the region of that intensity ratio image corresponding to the face region; and dividing the total intensity value by the area of the region of that intensity ratio image corresponding to the face region, to obtain the average intensity value.
In one embodiment, the predetermined number equals the total number of intensity ratio images in the one or more intensity ratio images.
In one embodiment, the preset ranges are obtained by training on real faces.
In one embodiment, the structured light comprises infrared light.
In one embodiment, each group of the at least two groups of object images comprises at least two object images respectively acquired of the object while structured light of the same spatial frequency but different phases illuminates the object.
In one embodiment, the step, performed when the program code is run by the processor 840, of calculating the at least two frequency response intensity images respectively corresponding to the at least two groups of object images comprises: calculating, for each group of object images, the frequency response intensity image corresponding to that group according to the relationship between the pixels at corresponding positions in the images of that group.
In addition, according to the embodiment of the present invention, additionally provide a kind of storage medium, store programmed instruction on said storage, when described programmed instruction is run by computing machine or processor for performing the corresponding steps of the biopsy method of the embodiment of the present invention, and for realizing according to the corresponding module in the living body detection device of the embodiment of the present invention.Described storage medium such as can comprise the combination in any of the storage card of smart phone, the memory unit of panel computer, the hard disk of personal computer, ROM (read-only memory) (ROM), Erasable Programmable Read Only Memory EPROM (EPROM), portable compact disc ROM (read-only memory) (CD-ROM), USB storage or above-mentioned storage medium.
In one embodiment, described computer program instructions by each functional module of living body detection device that can realize during computer run according to the embodiment of the present invention, and/or can perform the biopsy method according to the embodiment of the present invention.
In one embodiment, described computer program instructions by during computer run perform following steps: receive at least two group objects images, this at least two group objects image obtains for object collection when having the structured light object of different space frequency at least two kinds respectively; Calculate at least two the frequency response intensity images corresponding respectively with at least two group objects images; And whether be live body based at least two frequency response intensity image determination objects.
In one embodiment, described computer program instructions is being comprised by the step being whether live body based at least two frequency response intensity image determination objects performed during computer run: obtain one or more strength ratio image based on the intensity relation between at least two frequency response intensity images; Face datection is carried out to one of at least two frequency response intensity images, to determine human face region; For each in one or more strength ratio image, calculate the average intensity value of the pixel in this strength ratio image, corresponding with human face region region; And judge whether to exist in one or more strength ratio image predetermined number, the strength ratio image of average intensity value in each self-corresponding preset range, if existed, then determine to as if live body, if there is no, then determine that object is not live body.
In one embodiment, the step, performed when the computer program instructions are run by a computer, of obtaining one or more strength ratio images based on the intensity relation between the at least two frequency response intensity images comprises: selecting a specific frequency response intensity image from the at least two frequency response intensity images; and, for each remaining frequency response intensity image of the at least two frequency response intensity images, calculating the ratios of the intensity values of the pixels of that remaining frequency response intensity image to the intensity values of the corresponding pixels in the specific frequency response intensity image, and obtaining the one or more strength ratio images from the calculated intensity ratios.
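A minimal numpy sketch of this construction, assuming the frequency response intensity images are equally sized arrays and the first of them is chosen as the specific (reference) image; the small epsilon guarding against division by zero is an implementation convenience, not something the patent specifies:

```python
import numpy as np

def strength_ratio_images(response_images, reference_index=0, eps=1e-6):
    """Build one strength ratio image per remaining frequency response intensity image.

    Each ratio image divides a remaining response image, pixel by pixel,
    by the selected specific (reference) response image.
    """
    response_images = [np.asarray(im, dtype=np.float64) for im in response_images]
    reference = response_images[reference_index]
    return [im / (reference + eps)
            for i, im in enumerate(response_images) if i != reference_index]
```

With exactly two frequency response intensity images this yields a single strength ratio image; with more, it yields one ratio image per remaining response image.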
In one embodiment, the step, performed when the computer program instructions are run by a computer, of calculating, for each of the one or more strength ratio images, the average intensity value of the pixels in the region of that strength ratio image corresponding to the face region comprises: calculating the total intensity value of the pixels in the region of that strength ratio image corresponding to the face region; and calculating the ratio of the total intensity value to the area of that region, to obtain the average intensity value.
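A minimal sketch of this total-over-area average, assuming the face region determined by face detection is given as an axis-aligned box (x, y, width, height) in the coordinates of the strength ratio image; the face detector that produces the box is outside this snippet:

```python
import numpy as np

def mean_intensity_in_face_region(ratio_image, face_box):
    """Average intensity of a strength ratio image over the face region.

    `face_box` is (x, y, w, h); the total intensity of the pixels inside
    the box is divided by the box area, as described above.
    """
    x, y, w, h = face_box
    region = np.asarray(ratio_image, dtype=np.float64)[y:y + h, x:x + w]
    return region.sum() / region.size
```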
In one embodiment, the predetermined number equals the total number of strength ratio images in the one or more strength ratio images.
In one embodiment, the preset range is obtained by training on real faces.
In one embodiment, the structured light comprises infrared light.
In one embodiment, each of the at least two groups of object images comprises at least two object images respectively acquired of the object while the object is illuminated by structured light having the same spatial frequency but different phases.
In one embodiment, the calculating of the at least two frequency response intensity images respectively corresponding to the at least two groups of object images, performed when the computer program instructions are run by a computer, comprises: calculating the frequency response intensity image corresponding to each group of object images according to the relation between the pixels at corresponding positions in the images of that group.
Each module in the living body detection system according to the embodiments of the present invention may be implemented by running, on a processor of an electronic device for living body detection according to the embodiments of the present invention, computer program instructions stored in a memory, or may be implemented when a computer runs the computer instructions stored in the computer-readable storage medium of the computer program product according to the embodiments of the present invention.
According to the living body detection method and device, the living body detection system and the storage medium of the embodiments of the present invention, whether an object is a living body is judged from the frequency response of the detected object to structured light of multiple spatial frequencies. This approach enables high-security living body detection without requiring cooperation from the detected object.
Although example embodiments have been described herein with reference to the accompanying drawings, it should be understood that the above example embodiments are merely exemplary and are not intended to limit the scope of the present invention thereto. Those of ordinary skill in the art may make various changes and modifications therein without departing from the scope and spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will recognize that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered as going beyond the scope of the present invention.
In the several embodiments provided in this application, it should be understood that the disclosed devices and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; for instance, the division into units is only a division by logical function, and other divisions are possible in actual implementations; for example, multiple units or components may be combined or integrated into another device, and some features may be omitted or not performed.
Numerous specific details are described in the specification provided herein. It will be understood, however, that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the present disclosure and to aid in the understanding of one or more of the various inventive aspects, in the description of exemplary embodiments of the present invention, features of the present invention are sometimes grouped together into a single embodiment, figure, or description thereof. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the corresponding claims reflect, the inventive aspect lies in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into that detailed description, with each claim standing on its own as a separate embodiment of the present invention.
Those skilled in the art will understand that, except where such features are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings), and all processes or units of any method or device so disclosed, may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, an equivalent, or a similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include certain features that are included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the present invention and to form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some modules in the living body detection device according to the embodiments of the present invention. The present invention may also be implemented as a device program (for example, a computer program and a computer program product) for performing part or all of the methods described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such a signal may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the present invention, and that those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third and the like does not indicate any ordering; these words may be interpreted as names.
The above is only the specific embodiments of the present invention or a description thereof, and the protection scope of the present invention is not limited thereto. Any change or substitution that would readily occur to a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (20)

1. A living body detection method, comprising:
receiving at least two groups of object images, the at least two groups of object images being respectively acquired of an object while the object is illuminated by structured light having at least two different spatial frequencies;
calculating at least two frequency response intensity images respectively corresponding to the at least two groups of object images; and
determining whether the object is a living body based on the at least two frequency response intensity images.
2. The living body detection method according to claim 1, wherein determining whether the object is a living body based on the at least two frequency response intensity images comprises:
obtaining one or more strength ratio images based on an intensity relation between the at least two frequency response intensity images;
performing face detection on one of the at least two frequency response intensity images to determine a face region;
for each of the one or more strength ratio images, calculating an average intensity value of pixels in a region of that strength ratio image corresponding to the face region; and
judging whether there is a predetermined number of strength ratio images, among the one or more strength ratio images, whose average intensity values are within their respective preset ranges; if so, determining that the object is a living body, and if not, determining that the object is not a living body.
3. The living body detection method according to claim 2, wherein obtaining one or more strength ratio images based on the intensity relation between the at least two frequency response intensity images comprises:
selecting a specific frequency response intensity image from the at least two frequency response intensity images; and
for each remaining frequency response intensity image of the at least two frequency response intensity images, calculating ratios of intensity values of pixels of that remaining frequency response intensity image to intensity values of corresponding pixels in the specific frequency response intensity image, and obtaining the one or more strength ratio images from the calculated intensity ratios.
4. The living body detection method according to claim 2, wherein calculating, for each of the one or more strength ratio images, the average intensity value of the pixels in the region of that strength ratio image corresponding to the face region comprises:
calculating a total intensity value of the pixels in the region of that strength ratio image corresponding to the face region; and
calculating a ratio of the total intensity value to an area of the region of that strength ratio image corresponding to the face region, to obtain the average intensity value.
5. The living body detection method according to claim 2, wherein the predetermined number equals the total number of strength ratio images in the one or more strength ratio images.
6. The living body detection method according to claim 2, wherein the preset range is obtained by training on real faces.
7. The living body detection method according to claim 1, wherein the structured light comprises infrared light.
8. The living body detection method according to claim 1, wherein each of the at least two groups of object images comprises at least two object images respectively acquired of the object while the object is illuminated by structured light having the same spatial frequency but different phases.
9. The living body detection method according to claim 1, wherein, before receiving the at least two groups of object images, the living body detection method further comprises:
illuminating the object with the structured light having the at least two different spatial frequencies, respectively; and
acquiring images of the object during each illumination, to obtain the at least two groups of object images.
10. The living body detection method according to claim 1, wherein calculating the at least two frequency response intensity images respectively corresponding to the at least two groups of object images comprises: calculating the frequency response intensity image corresponding to each group of object images according to a relation between pixels at corresponding positions in the images of that group.
11. A living body detection device, comprising:
a receiving module, configured to receive at least two groups of object images, the at least two groups of object images being respectively acquired of an object while the object is illuminated by structured light having at least two different spatial frequencies;
a computing module, configured to calculate at least two frequency response intensity images respectively corresponding to the at least two groups of object images; and
a living body determination module, configured to determine whether the object is a living body based on the at least two frequency response intensity images.
12. The living body detection device according to claim 11, wherein the living body determination module comprises:
a strength ratio image obtaining submodule, configured to obtain one or more strength ratio images based on an intensity relation between the at least two frequency response intensity images;
a face detection submodule, configured to perform face detection on one of the at least two frequency response intensity images to determine a face region;
an average intensity calculating submodule, configured to calculate, for each of the one or more strength ratio images, an average intensity value of pixels in a region of that strength ratio image corresponding to the face region; and
a judging submodule, configured to judge whether there is a predetermined number of strength ratio images, among the one or more strength ratio images, whose average intensity values are within their respective preset ranges; if so, determine that the object is a living body, and if not, determine that the object is not a living body.
13. The living body detection device according to claim 12, wherein the strength ratio image obtaining submodule comprises:
a selecting unit, configured to select a specific frequency response intensity image from the at least two frequency response intensity images; and
a strength ratio calculating unit, configured to, for each remaining frequency response intensity image of the at least two frequency response intensity images, calculate ratios of intensity values of pixels of that remaining frequency response intensity image to intensity values of corresponding pixels in the specific frequency response intensity image, and obtain the one or more strength ratio images from the calculated intensity ratios.
14. The living body detection device according to claim 12, wherein the average intensity calculating submodule comprises:
a first calculating unit, configured to calculate a total intensity value of the pixels in the region of that strength ratio image corresponding to the face region; and
a second calculating unit, configured to calculate a ratio of the total intensity value to an area of the region of that strength ratio image corresponding to the face region, to obtain the average intensity value.
15. The living body detection device according to claim 12, wherein the predetermined number equals the total number of strength ratio images in the one or more strength ratio images.
16. The living body detection device according to claim 12, wherein the preset range is obtained by training on real faces.
17. The living body detection device according to claim 11, wherein the structured light comprises infrared light.
18. The living body detection device according to claim 11, wherein each of the at least two groups of object images comprises at least two object images respectively acquired of the object while the object is illuminated by structured light having the same spatial frequency but different phases.
19. The living body detection device according to claim 11, wherein the living body detection device further comprises:
a light emitting module, configured to illuminate the object with the structured light having the at least two different spatial frequencies, respectively; and
an image acquisition module, configured to acquire images of the object during each illumination, to obtain the at least two groups of object images.
20. The living body detection device according to claim 11, wherein the computing module is specifically configured to calculate the frequency response intensity image corresponding to each group of object images according to a relation between pixels at corresponding positions in the images of that group.
CN201511030874.2A 2015-12-31 2015-12-31 Biopsy method and device Active CN105447483B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201511030874.2A CN105447483B (en) 2015-12-31 2015-12-31 Biopsy method and device

Publications (2)

Publication Number Publication Date
CN105447483A true CN105447483A (en) 2016-03-30
CN105447483B CN105447483B (en) 2019-03-22

Family

ID=55557643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201511030874.2A Active CN105447483B (en) 2015-12-31 2015-12-31 Biopsy method and device

Country Status (1)

Country Link
CN (1) CN105447483B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100053362A1 (en) * 2003-08-05 2010-03-04 Fotonation Ireland Limited Partial face detector red-eye filter method and apparatus
CN1924892A (en) * 2006-09-21 2007-03-07 杭州电子科技大学 Method and device for vivi-detection in iris recognition
CN102622588A (en) * 2012-03-08 2012-08-01 无锡数字奥森科技有限公司 Dual-certification face anti-counterfeit method and device
CN104881632A (en) * 2015-04-28 2015-09-02 南京邮电大学 Hyperspectral face recognition method

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107451510A (en) * 2016-05-30 2017-12-08 北京旷视科技有限公司 Biopsy method and In vivo detection system
US11030437B2 (en) 2016-05-30 2021-06-08 Beijing Kuangshi Technology Co., Ltd. Liveness detection method and liveness detection system
CN108881674A (en) * 2017-06-05 2018-11-23 北京旷视科技有限公司 image collecting device and image processing method
CN108875508A (en) * 2017-11-23 2018-11-23 北京旷视科技有限公司 In vivo detection algorithm update method, device, client, server and system
CN108875508B (en) * 2017-11-23 2021-06-29 北京旷视科技有限公司 Living body detection algorithm updating method, device, client, server and system
CN108875519A (en) * 2017-12-19 2018-11-23 北京旷视科技有限公司 Method for checking object, device and system and storage medium
CN108875519B (en) * 2017-12-19 2023-05-26 北京旷视科技有限公司 Object detection method, device and system and storage medium
CN108509888A (en) * 2018-03-27 2018-09-07 百度在线网络技术(北京)有限公司 Method and apparatus for generating information
CN108509888B (en) * 2018-03-27 2022-01-28 百度在线网络技术(北京)有限公司 Method and apparatus for generating information
CN110598571A (en) * 2019-08-15 2019-12-20 中国平安人寿保险股份有限公司 Living body detection method, living body detection device and computer-readable storage medium

Also Published As

Publication number Publication date
CN105447483B (en) 2019-03-22

Similar Documents

Publication Publication Date Title
CN105447483A (en) Living body detection method and device
CN110807385B (en) Target detection method, target detection device, electronic equipment and storage medium
CN109978893B (en) Training method, device, equipment and storage medium of image semantic segmentation network
CN106599772B (en) Living body verification method and device and identity authentication method and device
JP6452186B2 (en) Insurance compensation fraud prevention method, system, apparatus and readable recording medium based on coincidence of multiple photos
US8798325B2 (en) Efficient and fault tolerant license plate matching method
CN105938552A (en) Face recognition method capable of realizing base image automatic update and face recognition device
CN106033601B (en) The method and apparatus for detecting abnormal case
US9754192B2 (en) Object detection utilizing geometric information fused with image data
CN108256404B (en) Pedestrian detection method and device
US20180144496A1 (en) A method of detecting objects within a 3d environment
CN108573268A (en) Image-recognizing method and device, image processing method and device and storage medium
CN109214366A (en) Localized target recognition methods, apparatus and system again
CN108875522A (en) Face cluster methods, devices and systems and storage medium
CN106203305A (en) Human face in-vivo detection method and device
CN109241888B (en) Neural network training and object recognition method, device and system and storage medium
CA3115188C (en) Apparatus and method for providing application service using satellite image
CN108009466B (en) Pedestrian detection method and device
CN107944382B (en) Method for tracking target, device and electronic equipment
CN105404886A (en) Feature model generating method and feature model generating device
CN103208008A (en) Fast adaptation method for traffic video monitoring target detection based on machine vision
CN105517680A (en) Device, system and method for recognizing human face, and computer program product
US10175768B2 (en) Gesture recognition devices, gesture recognition methods, and computer readable media
Giyenko et al. Application of convolutional neural networks for visibility estimation of CCTV images
JP2010287065A (en) Biometric authentication device, authentication accuracy evaluation device and biometric authentication method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100190 Beijing, Haidian District Academy of Sciences, South Road, No. 2, block A, No. 313

Applicant after: MEGVII INC.

Applicant after: Beijing maigewei Technology Co., Ltd.

Address before: 100190 Beijing, Haidian District Academy of Sciences, South Road, No. 2, block A, No. 313

Applicant before: MEGVII INC.

Applicant before: Beijing aperture Science and Technology Ltd.

CB02 Change of applicant information
TA01 Transfer of patent application right

Effective date of registration: 20180921

Address after: 221007 Fuxing North Road, Gulou District, Xuzhou, Jiangsu 219

Applicant after: Xuzhou Kuang Shi Data Technology Co., Ltd.

Applicant after: MEGVII INC.

Applicant after: Beijing maigewei Technology Co., Ltd.

Address before: 100190 A block 2, South Road, Haidian District Academy of Sciences, Beijing 313

Applicant before: MEGVII INC.

Applicant before: Beijing maigewei Technology Co., Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant