Detailed Description of the Embodiments
To make the objects, technical solutions and advantages of the present invention more apparent, example embodiments of the present invention are described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention, and it should be understood that the present invention is not limited by the example embodiments described herein. All other embodiments obtained by those skilled in the art based on the embodiments described herein without creative effort shall fall within the protection scope of the present invention.
First, an example electronic device 100 for implementing a liveness detection method and apparatus according to embodiments of the present invention is described with reference to Fig. 1.
As shown in Fig. 1, the electronic device 100 includes one or more processors 102, one or more storage devices 104, an input device 106, an output device 108, an image acquisition device 110 and a light emitting device 114, which are interconnected by a bus system 112 and/or another form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in Fig. 1 are illustrative rather than restrictive; the electronic device may have other components and structures as needed.
The processor 102 may be a central processing unit (CPU) or a processing unit of another form having data processing capability and/or instruction execution capability, and may control other components in the electronic device 100 to perform desired functions.
The storage device 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 102 may execute the program instructions to implement the client functions (implemented by the processor) of the embodiments of the present invention described below and/or other desired functions. Various application programs and various data, such as data used and/or generated by the application programs, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions, and may include one or more of a keyboard, a mouse, a microphone, a touch screen, etc.
The output device 108 may output various information (such as images and/or sounds) to the outside (such as a user), and may include one or more of a display, a speaker, etc.
The image acquisition device 110 may capture desired images (such as photos, video frames, etc.) and store the captured images in the storage device 104 for use by other components. The image acquisition device 110 may be implemented using any suitable camera equipment, such as a camera, a video camera, or the camera of a mobile terminal.
The light emitting device 114 may emit structured light. Illustratively, the light source used by the light emitting device 114 is an infrared light source, and the structured light emitted is infrared light. It is understood that if the light emitting device 114 uses an infrared light source, the image acquisition device 110 should also have a corresponding infrared imaging capability. The light emitting device 114 may, independently or under the control of the processor 102, emit structured light having a desired spatial frequency and a desired phase at a desired time. The light emitting device 114 may be implemented using equipment such as a laser or a projector.
Illustratively, the example electronic device for implementing the liveness detection method and apparatus according to embodiments of the present invention may be implemented as a smart phone, a tablet computer, a personal computer, the image acquisition end of an access control system, etc.
Hereinafter, a liveness detection method according to an embodiment of the present invention will be described with reference to Fig. 2. Fig. 2 shows a schematic flowchart of a liveness detection method 200 according to an embodiment of the present invention. As shown in Fig. 2, the liveness detection method 200 includes the following steps.
In step S210, at least two groups of object images are received, the at least two groups of object images being acquired of an object while the object is respectively irradiated by at least two kinds of structured light having different spatial frequencies.
The "object" described herein is the object participating in liveness detection, and may be a real human face, or a photo, video, mask or face model used by an attacker.
According to one embodiment, the structured light may include infrared light (as described above). The wavelength of the infrared light may be, for example, 808 nm or 850 nm. Since people are insensitive to infrared light, using infrared light can avoid disturbing the person being detected, thereby improving the user experience. Further optionally, line structured light may be used to perform liveness detection on the object. The image data measured with line structured light contains a relatively large amount of information yet is simple to process, which facilitates accurate and fast liveness detection.
As an example, a light emitting device may be used to successively emit at least two kinds of structured light with different spatial frequencies to irradiate the object. While the light emitting device irradiates the object with structured light, an image acquisition device may acquire images of the object to obtain the object images. The image acquisition device may then send the acquired object images to a memory for later processing by a processor, or transmit them directly to the processor. In the following, the light emitting device is described as a projector and the image acquisition device as a camera by way of example. Fig. 3 shows a schematic diagram of irradiating an object with structured light and acquiring object images according to an embodiment of the present invention. In order to collect relatively rich and reliable information, the projector should be spatially close to the camera, as shown in Fig. 3.
The form of the structured light is described below. Referring to Fig. 4, a schematic grating fringe pattern of structured light according to an embodiment of the present invention is shown. As shown in Fig. 4, the projector emits two kinds of structured light, which are divided into two groups, namely the L group and the R group. The left side of Fig. 4 shows the L group, which includes structured light corresponding to three grating fringe patterns having a first spatial frequency; the right side of Fig. 4 shows the R group, which includes structured light corresponding to three grating fringe patterns having a second spatial frequency. The six grating fringe patterns present sinusoidal stripes as shown in Fig. 4. The first spatial frequency is different from the second spatial frequency. Illustratively, the first spatial frequency may be 1/24 of the second spatial frequency, that is, the spatial period of the L-group structured light is 24 times the spatial period of the R-group structured light. It will be appreciated, of course, that this ratio is merely exemplary and may be any suitable value; the present invention is not limited thereto. The phases of the structured light respectively corresponding to the three grating fringe patterns in the L group differ from one another, with a phase difference of 120 degrees between them. The phases of the structured light respectively corresponding to the three grating fringe patterns in the R group likewise differ from one another, also with a phase difference of 120 degrees between them. Of course, the above phase differences are merely exemplary and may be any suitable values; the present invention is not limited thereto.
The projector may be used to successively irradiate the object with the structured light corresponding to each grating fringe pattern, and each time the object is irradiated, the camera acquires an object image. Fig. 4 shows six grating fringe patterns; correspondingly, six object images should be acquired. These six object images can be divided into two groups according to the spatial frequency of the corresponding structured light.
Although Fig. 4 takes two kinds of structured light with different frequencies as an example, it should be understood that there may be more than two spatial frequencies of structured light. Correspondingly, more than two groups of object images may be obtained. In addition, there may be fewer or more than three phases of structured light under the same spatial frequency; correspondingly, the number of object images in each group of object images may be fewer or more than three.
In step S220, at least two frequency response intensity images corresponding to the at least two groups of object images are calculated.
Taking each group of object images as a unit, one frequency response intensity image can be calculated for that group of object images. In one embodiment, the frequency response intensity image corresponding to a group of object images can be calculated according to the relationship between the pixels at corresponding positions in the images of that group. The calculation process of the frequency response intensity image is described below.
For the i-th object image in a certain group of object images, let I[i, x, y] denote the intensity value of the pixel at position (x, y) of the i-th object image, and let ai denote the offset phase of the structured light corresponding to the i-th object image. The frequency response process of the structured light on the object can be expressed by the following formula:

q[x, y] * sin(p + ai) + r = I[i, x, y]   (1)

where p represents the initial phase of the group of object images, r represents the intensity value of the background (ambient) light component and is an auxiliary variable, and q[x, y] represents the intensity value of the pixel at position (x, y) of the frequency response intensity image corresponding to the group of object images. p, q[x, y] and r are unknowns.
For a certain group of object images, the above formula (1) can be written for each object image in the group, thereby forming a system of equations. Solving this system of equations for q[x, y] then yields the frequency response intensity image.
The following takes a group containing four object images as an example. Assuming that the offset phases of the structured light corresponding to the four object images are a0, a1, a2 and a3 respectively, the following system of equations can be written:

q[x, y] * sin(p + a0) + r = I[0, x, y]
q[x, y] * sin(p + a1) + r = I[1, x, y]
q[x, y] * sin(p + a2) + r = I[2, x, y]
q[x, y] * sin(p + a3) + r = I[3, x, y]   (2)
It is understood that, since the frequency response formula has only three unknowns p, q[x, y] and r, the system of equations in formula (2) is overdetermined. In this case, q[x, y] can be calculated as the least-squares solution of the system of equations (2), thereby obtaining the frequency response intensity image.
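The least-squares solution described above can be sketched as follows. This is an illustrative NumPy sketch, not part of the patent; the substitution u = q*sin(p), v = q*cos(p), which turns formula (1) into a linear system per pixel, is a standard phase-shifting technique assumed here, and all names are hypothetical.

```python
import numpy as np

def frequency_response_intensity(images, phases):
    """Least-squares solve q*sin(p + a_i) + r = I_i per pixel.

    Expanding sin(p + a_i) = sin(p)cos(a_i) + cos(p)sin(a_i) and
    substituting u = q*sin(p), v = q*cos(p) gives the linear system
    u*cos(a_i) + v*sin(a_i) + r = I_i for each pixel, so a single
    lstsq over the stacked images recovers q = sqrt(u^2 + v^2).
    """
    a = np.asarray(phases, dtype=float)                            # (n,)
    I = np.stack([np.asarray(im, dtype=float) for im in images])   # (n, H, W)
    n, H, W = I.shape
    # The design matrix is the same for every pixel: columns for u, v, r.
    M = np.column_stack([np.cos(a), np.sin(a), np.ones(n)])        # (n, 3)
    # Solve all pixels at once; unknowns have shape (3, H*W).
    sol, *_ = np.linalg.lstsq(M, I.reshape(n, -1), rcond=None)
    u, v = sol[0], sol[1]
    return np.sqrt(u ** 2 + v ** 2).reshape(H, W)  # frequency response intensity
```

With four or more phases the system is overdetermined and lstsq returns the least-squares fit; with exactly three distinct phases the same call returns the exact solution.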
As can be seen from the above description, when a group of object images contains four or more object images corresponding to structured light with different phases, the system of equations formed is overdetermined and a least-squares solution is required. When a group contains three object images corresponding to structured light with different phases, the system is exactly determined and can be solved exactly. When a group contains only two object images corresponding to structured light with different phases, the system is underdetermined and an approximate solution is needed. In the case where a group of object images contains only two object images, an approximate solution can be obtained using the following formula:

Q[x, y] = max(I[1, x, y], I[2, x, y]) - min(I[1, x, y], I[2, x, y])   (3)
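The two-image approximation of formula (3) can be sketched as follows (an illustrative NumPy snippet, not part of the patent; the function name is hypothetical):

```python
import numpy as np

def response_intensity_2phase(i1, i2):
    """Approximate response intensity for the underdetermined two-image
    case, per formula (3): the per-pixel spread between the two images."""
    i1 = np.asarray(i1, dtype=float)
    i2 = np.asarray(i2, dtype=float)
    # max(I1, I2) - min(I1, I2) reduces to the absolute difference.
    return np.maximum(i1, i2) - np.minimum(i1, i2)
```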
Optionally, for each group of structured light with the same spatial frequency, three or more kinds of structured light with different phases may be used to irradiate the object and three or more object images may be acquired; this can improve detection accuracy.
The calculation process of the frequency response intensity image is further described below with reference to the structured light shown in Fig. 4.
For the structured light corresponding to the six grating fringe patterns shown in Fig. 4, assume that the three object images obtained under irradiation by the structured light corresponding to the three L-group grating fringe patterns are the 1st, 2nd and 3rd object images, and the three object images obtained under irradiation by the structured light corresponding to the three R-group grating fringe patterns are the 4th, 5th and 6th object images.
For the object images corresponding to the L-group structured light, let A[x, y] denote the intensity value of the pixel at position (x, y) of the frequency response intensity image. Writing formula (1) for the three images, whose phases differ by 120 degrees:

q[x, y] * sin(p) + r = I[1, x, y]
q[x, y] * sin(p + 2π/3) + r = I[2, x, y]
q[x, y] * sin(p + 4π/3) + r = I[3, x, y]   (4)

Based on formula (4), it can be calculated that: A[x, y] = [(2*I[1, x, y] - I[2, x, y] - I[3, x, y])² + 3*(I[2, x, y] - I[3, x, y])²] / 9.

Similarly, for the object images corresponding to the R-group structured light, let B[x, y] denote the intensity value of the pixel at position (x, y) of the frequency response intensity image; it can be obtained that: B[x, y] = [(2*I[4, x, y] - I[5, x, y] - I[6, x, y])² + 3*(I[5, x, y] - I[6, x, y])²] / 9.

In the embodiment described below, in which an intensity ratio image is obtained based on the intensity relationship between the frequency response intensity images, one may simply let A[x, y] = (2*I[1, x, y] - I[2, x, y] - I[3, x, y])² + 3*(I[2, x, y] - I[3, x, y])² and B[x, y] = (2*I[4, x, y] - I[5, x, y] - I[6, x, y])² + 3*(I[5, x, y] - I[6, x, y])², since the common factor of 1/9 cancels in the ratio.
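The closed-form three-phase expressions for A[x, y] and B[x, y] above can be sketched as follows (an illustrative NumPy snippet, not part of the patent; the function name and the `normalize` flag are assumptions):

```python
import numpy as np

def response_intensity_3phase(i1, i2, i3, normalize=True):
    """Closed-form frequency response intensity for one group of three
    object images captured under 120-degree phase-shifted structured light.

    With normalize=True this is the /9 form; with normalize=False it is
    the scaled form whose constant factor cancels in the ratio image.
    """
    i1, i2, i3 = (np.asarray(i, dtype=float) for i in (i1, i2, i3))
    a = (2 * i1 - i2 - i3) ** 2 + 3 * (i2 - i3) ** 2
    return a / 9.0 if normalize else a
```

For images generated by formula (1) with 120-degree phase steps, this expression eliminates both the initial phase p and the background term r, leaving the squared response amplitude.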
In step S230, whether the object is a living body is determined based on the at least two frequency response intensity images.
After the at least two frequency response intensity images are calculated, whether the object is a living body can be determined based on them.
Since objects of different materials respond differently under irradiation by structured light of multiple spatial frequencies, the frequency response intensity images corresponding to human skin differ greatly from those corresponding to objects such as screens, paper, plaster or rubber; therefore, live human faces can be distinguished from non-living attackers by means of the frequency response intensity images. This approach requires no cooperation from the object being detected, and can distinguish a live human face even from a three-dimensional face model shaped like a human head; it is a detection method with high security and ease of use.
Illustratively, the liveness detection method according to embodiments of the present invention may be implemented in a device, apparatus or system having a memory and a processor.
The liveness detection method according to embodiments of the present invention may be deployed at a face image acquisition end; for example, in the security field it may be deployed at the image acquisition end of an access control system, and in the financial field it may be deployed at a personal terminal such as a smart phone, a tablet computer or a personal computer.
Alternatively, the liveness detection method according to embodiments of the present invention may be deployed in a distributed manner at a server end (or cloud) and a personal terminal. For example, in the financial field, the personal terminal may acquire object images and transmit them to the server end (or cloud), and the server end (or cloud) performs liveness detection according to the object images.
According to the liveness detection method of the embodiments of the present invention, whether the object is a living body is judged by the frequency response of the detected object to structured light of multiple spatial frequencies; this approach can achieve highly secure liveness detection without requiring cooperation from the object being detected.
Fig. 5 shows a schematic flowchart of the step of determining whether the object is a living body (i.e., step S230 shown in Fig. 2) according to an embodiment of the present invention. As shown in Fig. 5, step S230 may include the following steps.
In step S231, one or more intensity ratio images are obtained based on the intensity relationship between the at least two frequency response intensity images.
Illustratively, step S231 may specifically include: selecting a specific frequency response intensity image from the at least two frequency response intensity images; and, for each of the remaining frequency response intensity images among the at least two frequency response intensity images, calculating the ratio between the intensity values of the pixels of that remaining frequency response intensity image and the intensity values of the corresponding pixels in the specific frequency response intensity image, and obtaining one or more intensity ratio images according to the calculated ratios. In this way, the intensity ratio images can be obtained quickly and easily. Examples are given below.
Still taking the embodiment shown in Fig. 4 as an example, there are two frequency response intensity images A[x, y] and B[x, y]. Let R[x, y] denote the intensity ratio image; it can be calculated as R[x, y] = B[x, y] / A[x, y] or R[x, y] = A[x, y] / B[x, y]. The intensity ratio image may also be calculated as R[x, y] = (A[x, y] - B[x, y]) / A[x, y] = 1 - B[x, y] / A[x, y], or R[x, y] = (B[x, y] - A[x, y]) / B[x, y] = 1 - A[x, y] / B[x, y].
For the case where there are more than two frequency response intensity images, the frequency response intensity images can be combined flexibly to calculate the intensity ratio images. For example, assume there are four frequency response intensity images A[x, y], B[x, y], C[x, y] and D[x, y]. One frequency response intensity image A[x, y] may be selected from the four and combined in turn with each of the remaining frequency response intensity images to form three combinations; the intensity ratio image of each combination can then be calculated in the manner described above for two frequency response intensity images. In this way, three intensity ratio images are obtained, such as R1[x, y] = B[x, y] / A[x, y], R2[x, y] = C[x, y] / A[x, y] and R3[x, y] = D[x, y] / A[x, y]. Later, when judging whether the average intensity values of the intensity ratio images are within predetermined ranges (i.e., step S234), each intensity ratio image can be judged in turn. The above way of calculating intensity ratio images is only an example; any other suitable combination may also be used, as described below. In one example, the above frequency response intensity images may be divided into two combinations, A[x, y] with B[x, y] and C[x, y] with D[x, y], and two intensity ratio images calculated accordingly: R1[x, y] = B[x, y] / A[x, y] and R2[x, y] = C[x, y] / D[x, y]. In another example, a single intensity ratio image may be calculated for the four frequency response intensity images, such as R[x, y] = (B[x, y] + C[x, y] + D[x, y]) / A[x, y] = B[x, y] / A[x, y] + C[x, y] / A[x, y] + D[x, y] / A[x, y].
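The per-combination ratio computation above can be sketched as follows (an illustrative NumPy snippet, not part of the patent; the small epsilon guarding against division by zero is an added assumption not mentioned in the text):

```python
import numpy as np

def intensity_ratio_images(reference, others, eps=1e-12):
    """Divide each remaining frequency response intensity image by the
    selected reference image, yielding one intensity ratio image per
    combination, e.g. R1 = B/A, R2 = C/A, R3 = D/A."""
    ref = np.asarray(reference, dtype=float)
    return [np.asarray(o, dtype=float) / (ref + eps) for o in others]
```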
Those skilled in the art will understand from the above the ways of calculating intensity ratio images for any number of frequency response intensity images, which will not be repeated here. In addition, it should be understood that the foregoing description is only an example; any suitable implementation that obtains intensity ratio images based on the intensity relationship between frequency response intensity images shall fall within the protection scope of the present invention.
In step S232, face detection is performed on one of the at least two frequency response intensity images to determine a face region.
In this step, any one of the at least two frequency response intensity images may be selected, whether the selected frequency response intensity image includes a face may be determined, and, in the case where the selected frequency response intensity image includes a face, the face region is located in the selected frequency response intensity image. For the embodiment shown in Fig. 4, the frequency response intensity image corresponding to the L-group structured light is preferably selected; that is, the frequency response intensity image corresponding to the group of structured light with the lower spatial frequency is preferably selected. Compared with the other frequency response intensity images, performing face detection on the frequency response intensity image corresponding to structured light of lower spatial frequency is simpler and more accurate, and it is easier to recognize the face region in the image.
A pre-trained face detector may be used to locate the face region in the frequency response intensity image. For example, face detection and recognition algorithms such as the Haar algorithm and the Adaboost algorithm may be trained in advance on a large number of pictures to obtain a face detector; for an input single-frame image, the pre-trained face detector can rapidly locate the face region.
It should be appreciated that the present invention is not limited by the specific face detection method used; both existing face detection methods and face detection methods developed in the future can be applied to the liveness detection method according to embodiments of the present invention, and shall also be included within the protection scope of the present invention.
In step S233, for each of the one or more intensity ratio images, the average intensity value of the pixels in the region of that intensity ratio image corresponding to the face region is calculated.
Illustratively, step S233 may specifically include: calculating the total intensity value of the pixels in the region of the intensity ratio image corresponding to the face region; and calculating the ratio of the total intensity value to the area of the region of the intensity ratio image corresponding to the face region, to obtain the average intensity value. In this way, the average intensity value can be calculated accurately.
An example of the calculation of the average intensity value is given below. Assuming that the face region determined in step S232 is a rectangle from (x0, y0) to (x1, y1), the average intensity value u of the pixels in the region of the intensity ratio image R[x, y] corresponding to the face region can be calculated by the following formula:

u = ( Σ_{x = x0..x1} Σ_{y = y0..y1} R[x, y] ) / ( (x1 - x0 + 1) * (y1 - y0 + 1) )
In the case where multiple intensity ratio images are obtained in step S231, one average intensity value can be calculated for each intensity ratio image, to obtain multiple average intensity values.
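The average over the face region can be sketched as follows (an illustrative NumPy snippet, not part of the patent; treating the rectangle boundaries as inclusive and indexing images as [row, column], i.e. [y, x], are assumptions):

```python
import numpy as np

def face_region_mean(ratio_image, box):
    """Average intensity value of an intensity ratio image over a
    rectangular face region box = (x0, y0, x1, y1): total intensity
    of the region divided by its pixel area."""
    x0, y0, x1, y1 = box
    # Assumes row-major images indexed as [y, x]; boundaries inclusive.
    region = np.asarray(ratio_image, dtype=float)[y0:y1 + 1, x0:x1 + 1]
    return region.sum() / region.size
```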
In step S234, it is judged whether, among the one or more intensity ratio images, there are a predetermined number of intensity ratio images whose average intensity values are within the corresponding predetermined ranges; if so, step S235 is performed, and if not, step S236 is performed.
In step S235, it is determined that the object is a living body.
In step S236, it is determined that the object is not a living body.
The predetermined number can be set as needed. If only one intensity ratio image is obtained in step S231, the predetermined number is equal to 1. That is, it can be directly judged whether the average intensity value of this intensity ratio image is within the predetermined range; if so, the object is determined to be a living body, and otherwise the object is determined not to be a living body. If multiple intensity ratio images are obtained in step S231, the predetermined number may be equal to or less than the total number of intensity ratio images. For example, if three intensity ratio images are obtained in step S231, the predetermined number may be equal to 3. Each of the three intensity ratio images has its own predetermined range. It can be directly judged whether the average intensity values of the three intensity ratio images are all within the corresponding predetermined ranges. If so, the three intensity ratio images are considered to meet the requirements and the object is determined to be a living body; otherwise, the object is determined not to be a living body. As another example, if three intensity ratio images are obtained in step S231, the predetermined number may be equal to 2. It can be judged whether there are two intensity ratio images among the three whose average intensity values are within the corresponding predetermined ranges. If so, the three intensity ratio images are considered to meet the requirements and the object is determined to be a living body; if not (that is, only one or none of the average intensity values is within the corresponding predetermined range), the object is determined not to be a living body.
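The decision rule of steps S234 to S236 can be sketched as follows (an illustrative snippet, not part of the patent; the function name and the representation of each predetermined range as a (low, high) pair are assumptions):

```python
def is_live(average_values, predetermined_ranges, predetermined_number):
    """Return True (living body) when at least the predetermined number of
    intensity ratio images have their average intensity value inside the
    corresponding predetermined range."""
    hits = sum(
        low <= u <= high
        for u, (low, high) in zip(average_values, predetermined_ranges)
    )
    return hits >= predetermined_number
```

For example, with three ratio images and a predetermined number of 2, liveness is declared as soon as any two average values fall inside their ranges.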
The predetermined range may be defined by one or more thresholds, which can be obtained by training based on actual human faces. For example, actual human faces may be irradiated in advance by various kinds of structured light having different spatial frequencies, and a large number of object images of actual human faces acquired. Then, the frequency response intensity images corresponding to the structured light of the various spatial frequencies are calculated, and the characteristics of the intensity ratio images obtained based on the intensity relationships between the frequency response intensity images corresponding to the structured light of different spatial frequencies are summarized, so as to obtain suitable thresholds, i.e., suitable predetermined ranges.
Judging whether the object is a living body according to the intensity ratio images is a liveness detection approach with relatively high accuracy.
Illustratively, each of the at least two groups of object images includes at least two object images acquired of the object while the object is irradiated respectively by at least two kinds of structured light having the same spatial frequency and different phases. As described above, for each group of object images, a system of equations such as formula (2) needs to be solved when calculating the frequency response intensity image. Therefore, for the structured light having the same spatial frequency, the phases of the structured light under that spatial frequency may be set to at least two; correspondingly, at least two object images are acquired.
Illustratively, before step S210, the liveness detection method 200 may further include: irradiating the object with at least two kinds of structured light having different spatial frequencies; and, at each irradiation of the object, acquiring an object image of the object, to obtain the at least two groups of object images. The method of irradiating the object with structured light and acquiring object images has been described above in connection with Fig. 3 and Fig. 4. As described above, a light emitting device (such as a projector) may be used to emit structured light toward the object, and an image acquisition device (such as a camera) may be used to acquire the object images.
Fig. 6 shows a schematic block diagram of a liveness detection apparatus 600 according to an embodiment of the present invention.
As shown in Fig. 6, the liveness detection apparatus 600 according to the embodiment of the present invention includes a receiving module 610, a calculation module 620 and a liveness determination module 630.
The receiving module 610 is configured to receive at least two groups of object images, the at least two groups of object images being acquired of the object while the object is respectively irradiated by at least two kinds of structured light having different spatial frequencies. The receiving module 610 may be implemented by the processor 102 in the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104.
The calculation module 620 is configured to calculate at least two frequency response intensity images corresponding to the at least two groups of object images. The calculation module 620 may be implemented by the processor 102 in the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104.
The liveness determination module 630 is configured to determine whether the object is a living body based on the at least two frequency response intensity images. The liveness determination module 630 may be implemented by the processor 102 in the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104, and may perform steps S231 to S236 of the liveness detection method according to embodiments of the present invention.
Fig. 7 shows a schematic block diagram of the liveness determination module 630 in the liveness detection apparatus 600 according to an embodiment of the present invention.
As shown in Fig. 7, the liveness determination module 630 may include an intensity ratio image acquisition submodule 6310, a face detection submodule 6320, an average intensity calculation submodule 6330 and a judgment submodule 6340.
The intensity ratio image acquisition submodule 6310 is configured to obtain one or more intensity ratio images based on the intensity ratio relationship between the at least two frequency response intensity images. The intensity ratio image acquisition submodule 6310 may be implemented by the processor 102 in the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104.
The face detection submodule 6320 is configured to perform face detection on one of the at least two frequency response intensity images to determine the face region. The face detection submodule 6320 may be a face detector, and may be implemented by the processor 102 in the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104.
The average intensity calculation submodule 6330 is configured to calculate, for each of the one or more intensity ratio images, the average intensity value of the pixels in the region of that intensity ratio image corresponding to the face region. The average intensity calculation submodule 6330 may be implemented by the processor 102 in the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104.
The judgment submodule 6340 is configured to judge whether, among the one or more intensity ratio images, there are a predetermined number of intensity ratio images whose average intensity values are within the corresponding predetermined ranges; if so, the object is determined to be a living body, and if not, the object is determined not to be a living body. The judgment submodule 6340 may be implemented by the processor 102 in the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104.
According to an embodiment of the present invention, the intensity ratio image acquisition submodule 6310 may include a selection unit and an intensity ratio calculation unit. The selection unit is configured to select a specific frequency response intensity image from the at least two frequency response intensity images. The intensity ratio calculation unit is configured to, for each of the remaining frequency response intensity images among the at least two frequency response intensity images, calculate the ratio between the intensity values of the pixels of that remaining frequency response intensity image and the intensity values of the corresponding pixels in the specific frequency response intensity image, and obtain the one or more intensity ratio images according to the calculated ratios.
According to an embodiment of the present invention, the average intensity calculation submodule 6330 includes a first calculating unit and a second calculating unit. The first calculating unit is configured to calculate the total intensity value of the pixels in the region of the intensity ratio image corresponding to the face region. The second calculating unit is configured to calculate the ratio of the total intensity value to the area of that region, so as to obtain the average intensity value.
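The two calculating units map directly onto a total-then-divide computation. A minimal sketch, with an assumed (top, bottom, left, right) face-box format:

```python
import numpy as np

def face_region_mean(ratio_image, face_box):
    """Average intensity over the face region: the first calculating unit
    sums the pixel intensities, the second divides by the region's area
    (its pixel count)."""
    top, bottom, left, right = face_box
    region = ratio_image[top:bottom, left:right]
    total = region.sum()   # first calculating unit: total intensity value
    area = region.size     # area of the face-corresponding region in pixels
    return total / area    # second calculating unit: average intensity value
```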
According to an embodiment of the present invention, the predetermined number is equal to the total number of all the one or more intensity ratio images.
According to an embodiment of the present invention, the preset ranges are obtained by training on real human faces.
According to an embodiment of the present invention, the structured light includes infrared light.
According to an embodiment of the present invention, each group of the at least two groups of object images includes at least two object images acquired of the object while the object is irradiated with structured light having the same spatial frequency but different phases.
According to an embodiment of the present invention, the living body detection device 600 further includes a light emitting module and an image acquisition module. The light emitting module is configured to irradiate the object with the at least two kinds of structured light having different spatial frequencies. The image acquisition module is configured to acquire object images of the object during each irradiation, so as to obtain the at least two groups of object images. The light emitting module may be realized by the light emitting device 114 shown in Fig. 1, and the image acquisition module may be realized by the image acquisition device 110 shown in Fig. 1.
According to an embodiment of the present invention, the calculation module 620 in the living body detection device 600 is specifically configured to calculate the frequency response intensity image corresponding to each group of object images according to the relationship between the pixels at corresponding positions in the images of that group.
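The text does not specify the pixel-level formula, but one common way to obtain a per-pixel frequency response intensity from a group of images captured under structured light of one spatial frequency with equally spaced phase shifts is the N-step phase-shifting modulation amplitude. The sketch below is offered under that assumption only:

```python
import numpy as np

def frequency_response_intensity(group):
    """Per-pixel modulation amplitude for one group of N phase-shifted
    images I_k = A + B*cos(phi + 2*pi*k/N). Recovers B, used here as the
    frequency response intensity image for that spatial frequency."""
    n = len(group)
    deltas = 2 * np.pi * np.arange(n) / n
    stack = np.stack([img.astype(float) for img in group])  # shape (N, H, W)
    s = np.tensordot(np.sin(deltas), stack, axes=1)  # sum_k I_k * sin(delta_k)
    c = np.tensordot(np.cos(deltas), stack, axes=1)  # sum_k I_k * cos(delta_k)
    return (2.0 / n) * np.sqrt(s**2 + c**2)
```

The ambient term A cancels in the sums, so only the response to the projected pattern survives, which is consistent with using the result to compare responses across spatial frequencies.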
Those of ordinary skill in the art may appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each specific application, but such implementations should not be considered to go beyond the scope of the present invention.
Fig. 8 shows a schematic block diagram of a living body detection system 800 according to an embodiment of the present invention. The living body detection system 800 includes an image acquisition device 810, a light emitting device 820, a storage device 830, and a processor 840.
The image acquisition device 810 is configured to acquire object images. The light emitting device 820 is configured to emit structured light. The storage device 830 stores program code for realizing the corresponding steps of the living body detection method according to an embodiment of the present invention.
The processor 840 is configured to run the program code stored in the storage device 830 so as to perform the corresponding steps of the living body detection method according to an embodiment of the present invention, and to realize the receiving module 610, the calculation module 620, and the living body determination module 630 in the living body detection device 600 according to an embodiment of the present invention.
In one embodiment, the following steps are performed when the program code is run by the processor 840: receiving at least two groups of object images, the at least two groups of object images being acquired of the object while the object is irradiated respectively with at least two kinds of structured light having different spatial frequencies; calculating at least two frequency response intensity images corresponding to the at least two groups of object images; and determining whether the object is a living body based on the at least two frequency response intensity images.
In one embodiment, the step, performed when the program code is run by the processor 840, of determining whether the object is a living body based on the at least two frequency response intensity images includes: obtaining one or more intensity ratio images based on the intensity relationship between the at least two frequency response intensity images; performing face detection on one of the at least two frequency response intensity images to determine a face region; for each of the one or more intensity ratio images, calculating the average intensity value of the pixels in the region of that intensity ratio image corresponding to the face region; and judging whether, among the one or more intensity ratio images, there are a predetermined number of intensity ratio images whose average intensity values fall within the corresponding preset ranges; if so, the object is determined to be a living body, and if not, the object is determined not to be a living body.
In one embodiment, the step, performed when the program code is run by the processor 840, of obtaining one or more intensity ratio images based on the intensity relationship between the at least two frequency response intensity images includes: selecting a specific frequency response intensity image from the at least two frequency response intensity images; and, for each remaining frequency response intensity image of the at least two frequency response intensity images, calculating the ratios between the intensity values of the pixels of that remaining frequency response intensity image and the intensity values of the corresponding pixels of the specific frequency response intensity image, and obtaining the one or more intensity ratio images from the calculated intensity value ratios.
In one embodiment, the step, performed when the program code is run by the processor 840, of calculating, for each of the one or more intensity ratio images, the average intensity value of the pixels in the region of that intensity ratio image corresponding to the face region includes: calculating the total intensity value of the pixels in the region of the intensity ratio image corresponding to the face region; and calculating the ratio of the total intensity value to the area of that region, so as to obtain the average intensity value.
In one embodiment, the predetermined number is equal to the total number of all the one or more intensity ratio images.
In one embodiment, the preset ranges are obtained by training on real human faces.
In one embodiment, the structured light includes infrared light.
In one embodiment, each group of the at least two groups of object images includes at least two object images acquired of the object while the object is irradiated with structured light having the same spatial frequency but different phases.
In one embodiment, the step, performed when the program code is run by the processor 840, of calculating at least two frequency response intensity images corresponding to the at least two groups of object images includes: calculating the frequency response intensity image corresponding to each group of object images according to the relationship between the pixels at corresponding positions in the images of that group.
In addition, according to an embodiment of the present invention, there is also provided a storage medium on which program instructions are stored; when run by a computer or a processor, the program instructions are used to execute the corresponding steps of the living body detection method of the embodiments of the present invention, and to realize the corresponding modules in the living body detection device according to the embodiments of the present invention. The storage medium may include, for example, a memory card of a smart phone, a storage unit of a tablet computer, a hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, or any combination of the above storage media.
In one embodiment, the computer program instructions, when run by a computer, may realize the functional modules of the living body detection device according to the embodiments of the present invention, and/or may execute the living body detection method according to the embodiments of the present invention.
In one embodiment, the computer program instructions, when run by a computer, execute the following steps: receiving at least two groups of object images, the at least two groups of object images being acquired of the object while the object is irradiated respectively with at least two kinds of structured light having different spatial frequencies; calculating at least two frequency response intensity images corresponding to the at least two groups of object images; and determining whether the object is a living body based on the at least two frequency response intensity images.
In one embodiment, the step, performed when the computer program instructions are run by a computer, of determining whether the object is a living body based on the at least two frequency response intensity images includes: obtaining one or more intensity ratio images based on the intensity relationship between the at least two frequency response intensity images; performing face detection on one of the at least two frequency response intensity images to determine a face region; for each of the one or more intensity ratio images, calculating the average intensity value of the pixels in the region of that intensity ratio image corresponding to the face region; and judging whether, among the one or more intensity ratio images, there are a predetermined number of intensity ratio images whose average intensity values fall within the corresponding preset ranges; if so, the object is determined to be a living body, and if not, the object is determined not to be a living body.
In one embodiment, the step, performed when the computer program instructions are run by a computer, of obtaining one or more intensity ratio images based on the intensity relationship between the at least two frequency response intensity images includes: selecting a specific frequency response intensity image from the at least two frequency response intensity images; and, for each remaining frequency response intensity image of the at least two frequency response intensity images, calculating the ratios between the intensity values of the pixels of that remaining frequency response intensity image and the intensity values of the corresponding pixels of the specific frequency response intensity image, and obtaining the one or more intensity ratio images from the calculated intensity value ratios.
In one embodiment, the step, performed when the computer program instructions are run by a computer, of calculating, for each of the one or more intensity ratio images, the average intensity value of the pixels in the region of that intensity ratio image corresponding to the face region includes: calculating the total intensity value of the pixels in the region of the intensity ratio image corresponding to the face region; and calculating the ratio of the total intensity value to the area of that region, so as to obtain the average intensity value.
In one embodiment, the predetermined number is equal to the total number of all the one or more intensity ratio images.
In one embodiment, the preset ranges are obtained by training on real human faces.
In one embodiment, the structured light includes infrared light.
In one embodiment, each group of the at least two groups of object images includes at least two object images acquired of the object while the object is irradiated with structured light having the same spatial frequency but different phases.
In one embodiment, the step, performed when the computer program instructions are run by a computer, of calculating at least two frequency response intensity images corresponding to the at least two groups of object images includes: calculating the frequency response intensity image corresponding to each group of object images according to the relationship between the pixels at corresponding positions in the images of that group.
Each module in the living body detection system according to the embodiments of the present invention may be realized by the processor of the electronic device for living body detection according to an embodiment of the present invention running computer program instructions stored in a memory, or may be realized when the computer instructions stored in the computer-readable storage medium of a computer program product according to an embodiment of the present invention are run by a computer.
With the living body detection method and device, the living body detection system, and the storage medium according to the embodiments of the present invention, whether an object is a living body is judged from the object's frequency response to structured light of multiple spatial frequencies. In this way, living body detection with high security can be realized without requiring the cooperation of the detected object.
Although the example embodiments have been described here with reference to the accompanying drawings, it should be understood that the above example embodiments are merely exemplary and are not intended to limit the scope of the present invention thereto. Those of ordinary skill in the art may make various changes and modifications therein without departing from the scope and spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as claimed in the appended claims.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be realized in other ways. For example, the device embodiments described above are merely illustrative; the division of the units is only a division by logical function, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another device, or some features may be omitted or not executed.
In the specification provided here, numerous specific details are set forth. It should be understood, however, that the embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this specification.
Similarly, it should be understood that, in order to streamline the present disclosure and to aid in understanding one or more of the various inventive aspects, in the description of the exemplary embodiments of the present invention, various features of the present invention are sometimes grouped together into a single embodiment, figure, or description thereof. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the corresponding claims reflect, the inventive point lies in that the corresponding technical problem can be solved with fewer than all the features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the present invention.
Those skilled in the art will understand that, except where such features are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings), and all processes or units of any method or device so disclosed, may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, an equivalent, or a similar purpose.
In addition, those skilled in the art will appreciate that, although some embodiments described herein include certain features included in other embodiments but not other features, combinations of features of different embodiments are meant to be within the scope of the present invention and to form different embodiments. For example, in the claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to realize some or all of the functions of some modules of the living body detection device according to the embodiments of the present invention. The present invention may also be implemented as a device program (for example, a computer program and a computer program product) for executing a part or all of the method described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the present invention, and those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claims. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, and third does not indicate any ordering; these words may be interpreted as names.
The above is merely the specific embodiments of the present invention or descriptions of the specific embodiments, and the protection scope of the present invention is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and such changes or substitutions shall all be covered by the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.