CN107633198A - Liveness detection method, device, equipment and storage medium - Google Patents
Liveness detection method, device, equipment and storage medium
- Publication number
- CN107633198A (application CN201710613143.3A)
- Authority
- CN
- China
- Prior art keywords
- picture
- user
- photographed
- image capturing
- visible image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Processing (AREA)
Abstract
The invention discloses a liveness detection method, device, equipment and storage medium. The method includes: for a user to be detected, separately obtaining a first picture and a second picture captured with a visible-light camera, where the first picture is captured with the light on and the second picture is captured with the light off; and determining whether the user is a live subject according to the first picture and the second picture. The disclosed scheme can improve the accuracy of the detection result.
Description
【Technical field】
The present invention relates to computer application technology, and in particular to a liveness detection method, device, equipment and storage medium.
【Background technology】
Compared with other biometric identification technologies, face recognition has natural advantages in practical applications: a face can be captured directly by a camera, and the identification process can be completed in a non-contact manner, which is convenient and fast. Face recognition has been applied in fields such as finance, education, scenic-spot access, transportation and social security, but this convenience also brings problems: a face is easy to obtain and can therefore be copied with photos, videos and the like, for example to steal personal information. In emerging financial services in particular, face recognition is gradually being applied to remote account opening, withdrawal, payment and other scenarios that concern the vital interests of users.
Liveness detection technology has therefore been proposed in the prior art. So-called liveness detection is to prove, during face recognition, that the face belongs to a "living person".
The sources of non-live faces are fairly broad, for example photos and videos displayed on a mobile phone or tablet, and printed photos of various materials (including photos that are bent, folded, cropped or cut out).
Liveness detection has applications in important scenarios such as social security and online account opening. For example, a pension can be collected only after it is verified that an elderly user's identity is genuine and the user is alive, and when opening an account online, liveness detection ensures that the user information is genuine, valid and secure.
In existing liveness detection approaches, a user picture is usually collected with a camera, features are extracted from the picture, and whether the user is a live subject is then determined from the extracted features. However, the accuracy of this approach is relatively low, and a non-live subject is easily misjudged as live.
【Summary of the invention】
In view of this, the present invention provides a liveness detection method, device, equipment and storage medium, which can improve the accuracy of the detection result.
The specific technical scheme is as follows:
A liveness detection method, including:
For a user to be detected, separately obtaining a first picture and a second picture captured with a visible-light camera, where the first picture is a picture captured with the light on, and the second picture is a picture captured with the light off;
Determining whether the user is a live subject according to the first picture and the second picture.
According to a preferred embodiment of the present invention, separately obtaining the first picture and the second picture captured with a visible-light camera includes:
Obtaining the first picture and the second picture captured by a visible-light camera set up uniformly for different users.
According to a preferred embodiment of the present invention, separately obtaining the first picture and the second picture captured with a visible-light camera includes:
Obtaining the first picture and the second picture, sent by an intelligent terminal used by the user and captured by a visible-light camera on the intelligent terminal.
According to a preferred embodiment of the present invention, determining whether the user is a live subject according to the first picture and the second picture includes:
Inputting the first picture and the second picture into a classification model obtained by pre-training, and obtaining an output detection result of whether the user is a live subject.
According to a preferred embodiment of the present invention, the first picture and the second picture are RGB pictures;
Inputting the first picture and the second picture into the pre-trained classification model includes:
Inputting the RGB three-channel information of the first picture and the RGB three-channel information of the second picture into the classification model together.
A liveness detection device, including: an acquiring unit and a detection unit;
The acquiring unit is configured to, for a user to be detected, separately obtain a first picture and a second picture captured with a visible-light camera, where the first picture is a picture captured with the light on and the second picture is a picture captured with the light off, and to send the first picture and the second picture to the detection unit;
The detection unit is configured to determine whether the user is a live subject according to the first picture and the second picture.
According to a preferred embodiment of the present invention, the acquiring unit obtains the first picture and the second picture captured by a visible-light camera set up uniformly for different users.
According to a preferred embodiment of the present invention, the acquiring unit obtains the first picture and the second picture, sent by an intelligent terminal used by the user and captured by a visible-light camera on the intelligent terminal.
According to a preferred embodiment of the present invention, the detection unit inputs the first picture and the second picture into a classification model obtained by pre-training, and obtains an output detection result of whether the user is a live subject.
According to a preferred embodiment of the present invention, the first picture and the second picture are RGB pictures;
The detection unit inputs the RGB three-channel information of the first picture and the RGB three-channel information of the second picture into the classification model together.
A computer device, including a memory, a processor and a computer program stored on the memory and runnable on the processor, where the processor implements the method described above when executing the program.
A computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the method described above.
It can be seen from the above that with the disclosed scheme, for a user to be detected, a first picture and a second picture captured with a visible-light camera are obtained separately, where the first picture is captured with the light on and the second picture with the light off, and whether the user is a live subject is then determined according to the first picture and the second picture. Compared with the prior art, the disclosed scheme captures user pictures both with and without light and combines the pictures from the two cases to distinguish whether the user is a live subject, thereby improving the accuracy of the detection result.
【Brief description of the drawings】
Fig. 1 is a flowchart of a first embodiment of the liveness detection method of the present invention.
Fig. 2 is a flowchart of a second embodiment of the liveness detection method of the present invention.
Fig. 3 is a schematic structural diagram of an embodiment of the liveness detection device of the present invention.
Fig. 4 is a block diagram of an exemplary computer system/server 12 suitable for implementing embodiments of the present invention.
【Embodiment】
To make the technical scheme of the present invention clearer, the disclosed scheme is further described below with reference to the drawings and embodiments.
Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative work fall within the protection scope of the present invention.
Fig. 1 is a flowchart of the first embodiment of the liveness detection method of the present invention. As shown in Fig. 1, it includes the following implementation.
In 101, for a user to be detected, a first picture and a second picture captured with a visible-light camera are obtained separately, where the first picture is a picture captured with the light on and the second picture is a picture captured with the light off.
In 102, whether the user is a live subject is determined according to the first picture and the second picture.
When liveness detection is needed, pictures of the user can be taken with a visible-light camera, and two user pictures are captured: the first picture taken with the light on, and the second picture taken with the light off.
The light source of the visible-light camera is a visible light source, for example red or blue light.
The interval between the two shots of the visible-light camera can be very short, so the image content of the first picture and the second picture is almost unchanged; the only difference is whether the light was on during the shot.
The shooting order of the first picture and the second picture is not restricted; that is, the first picture can be taken first with the light on and the second picture taken afterwards with the light off, or the second picture can be taken first with the light off and the first picture taken afterwards with the light on.
After the first picture and the second picture are obtained, whether the user is a live subject can be determined by combining the two pictures.
Depending on how the first picture and the second picture are obtained, the above process can be realized in at least the following two ways.
1) Mode one
Obtain the first picture and the second picture captured by a visible-light camera set up uniformly for different users, and determine whether the user is a live subject according to the two pictures. That is, for every user who needs liveness detection, the pictures are taken with the same visible-light camera.
For example, a company may control the entry and exit of employees through an access control system: a visible-light camera can be installed at a certain position at the doorway and used to take the first picture and the second picture of a user who wants to enter the company. Whether the user is a live subject can then be determined according to the two pictures; if the user is live and confirmed to be a company employee, the door is opened and the user is allowed in.
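The access-control decision described above can be sketched as a small helper; the patent does not give code, so this is only an illustrative reading of the rule "open only for a live, recognized employee":

```python
def gate_should_open(is_live: bool, is_employee: bool) -> bool:
    """Open the door only when the liveness check passes AND the
    user is recognized as a company employee."""
    return is_live and is_employee

# A live employee gets in; a replay attack or a stranger does not.
print(gate_should_open(True, True))    # True
print(gate_should_open(False, True))   # False
print(gate_should_open(True, False))   # False
```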
2) Mode two
Obtain the first picture and the second picture, sent by the intelligent terminal used by the user and captured by the visible-light camera on the intelligent terminal, and determine whether the user is a live subject according to the two pictures.
Compared with mode one, in this mode the user can take the pictures himself; the first picture and the second picture are then sent to a background processing system, which, after receiving them, determines whether the user is a live subject according to the two pictures.
The intelligent terminal can be a mobile phone, a personal digital assistant (PDA, Personal Digital Assistant), a wireless handheld device, a tablet computer (Tablet Computer), a personal computer (PC, Personal Computer), etc.
Taking a mobile phone as an example, the user can install an application (App) on the phone. When liveness detection is needed, the user opens the App; the App calls the visible-light camera on the phone to take the first picture and the second picture, and then sends the two pictures to the background processing system.
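The App-to-backend hand-off could be packaged as in the following minimal sketch. The patent does not specify a wire format; the JSON field names and user-ID scheme here are assumptions for illustration only:

```python
import base64
import json

def build_detection_request(user_id: str, lit_jpeg: bytes, unlit_jpeg: bytes) -> str:
    """Package the lit and unlit camera captures as a JSON payload
    for the background processing system. Field names are hypothetical."""
    return json.dumps({
        "user_id": user_id,
        "picture_lit": base64.b64encode(lit_jpeg).decode("ascii"),
        "picture_unlit": base64.b64encode(unlit_jpeg).decode("ascii"),
    })

payload = build_detection_request("u42", b"\xff\xd8lit", b"\xff\xd8dark")
print(json.loads(payload)["user_id"])  # u42
```

Base64 keeps the binary JPEG bytes intact inside the text-based JSON payload.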
Whichever of the above modes is used, after the first picture and the second picture taken by the visible-light camera are obtained, whether the user is a live subject can be determined according to the two pictures.
Preferably, the first picture and the second picture can be input into a classification model obtained by pre-training, so as to obtain an output detection result of whether the user is a live subject.
The classification model is obtained by pre-training; to train it, a sufficient number of positive samples and negative samples need to be obtained in advance.
A positive sample is a picture sample taken of a real (living) person, and a negative sample is a picture sample taken of an attack. Attack means may include photos and videos displayed on a mobile phone or tablet, printed photos of various materials, etc.
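The positive/negative sample organization above can be sketched as labeled pairs of lit/unlit captures; the file names below are hypothetical placeholders, not from the patent:

```python
# Label 1 = real (living) person, 0 = attack (screen replay, printed photo, ...).
# Each sample is a (lit picture, unlit picture) pair, as the method requires.
samples = [
    (("live_001_lit.jpg",   "live_001_unlit.jpg"),   1),
    (("screen_001_lit.jpg", "screen_001_unlit.jpg"), 0),
    (("print_001_lit.jpg",  "print_001_unlit.jpg"),  0),
]

positives = [pair for pair, label in samples if label == 1]
negatives = [pair for pair, label in samples if label == 0]
print(len(positives), len(negatives))  # 1 2
```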
For attacks using a photo displayed on a mobile phone or tablet, the picture taken with the light on usually shows glare; even with the camera light off, glare may still appear under sunlight in a natural environment. If the pictures are taken of a real (living) person, this glare problem does not occur.
In addition, for attacks using a printed photo, the face in the printed photo is a plane, while the face of a real person is three-dimensional and non-planar; therefore the shadows change differently between the lit and unlit cases.
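The glare cue above can be illustrated with a toy heuristic: a glossy screen or print reflects the camera's light source, so the lit capture gains saturated pixels that the unlit one lacks. This is only a hand-written sketch of the cue the classification model is said to learn, not the patent's method; the threshold values are assumptions:

```python
import numpy as np

def highlight_fraction(rgb: np.ndarray, thresh: int = 240) -> float:
    """Fraction of pixels whose mean intensity is near saturation."""
    return float((rgb.mean(axis=-1) >= thresh).mean())

def glare_suggests_screen(lit: np.ndarray, unlit: np.ndarray,
                          margin: float = 0.05) -> bool:
    """Flag a capture pair when the lit picture shows many more
    saturated (specular) pixels than the unlit one."""
    return highlight_fraction(lit) - highlight_fraction(unlit) > margin

lit = np.full((10, 10, 3), 50, dtype=np.uint8)
lit[:3, :3] = 255                          # simulated specular patch
unlit = np.full((10, 10, 3), 20, dtype=np.uint8)
print(glare_suggests_screen(lit, unlit))   # True
```

In the disclosed scheme such cues are not hand-coded; the model learns them from the positive and negative samples.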
Based on these characteristics, whether the user in the pictures, i.e. the user to be detected, is a live subject can be distinguished by comparing and analyzing the first picture and the second picture.
Training the classification model is a learning process: the classification model learns these characteristics so as to effectively distinguish live from non-live subjects.
The classification model can be a neural network model or the like. With the trained classification model, the first picture and the second picture taken by the visible-light camera are input into the model, so as to obtain an output detection result of whether the user is a live subject.
As a rule, the first picture and the second picture are RGB pictures; in that case, the RGB three-channel information of the first picture and the RGB three-channel information of the second picture can be input into the classification model together.
After the liveness detection result output by the classification model is obtained, subsequent processing can be carried out according to the specific result. For the access control system described in mode one, for example, if the result is live and the user is confirmed to be an employee, the door is opened and the user is allowed in; if the result is non-live, the door is not opened, preventing unauthorized users from entering the company and ensuring security. For mode two, where the user takes the first picture and the second picture with a mobile phone, the liveness detection result can be returned to the user's phone and displayed to the user.
Based on the above, Fig. 2 is a flowchart of the second embodiment of the liveness detection method of the present invention. As shown in Fig. 2, it includes the following implementation.
In 201, a sufficient number of positive samples and negative samples are obtained.
A positive sample is a picture sample taken of a real (living) person, and a negative sample is a picture sample taken of various attack means.
In 202, a classification model is trained from the obtained positive samples and negative samples.
The classification model can be a neural network model or the like; how to train the classification model is prior art.
In 203, for a user to be detected, a first picture and a second picture captured with a visible-light camera are obtained separately, where the first picture is a picture captured with the light on and the second picture is a picture captured with the light off.
In 204, the first picture and the second picture are input into the classification model, and a detection result of whether the user is a live subject is output.
The RGB three-channel information of the first picture and the RGB three-channel information of the second picture can be input into the classification model together, so as to obtain the model's output detection result of whether the user is a live subject.
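Step 204 can be sketched end to end with a stand-in classifier. The patent only says the model is "a neural network model or the like"; the linear-plus-sigmoid scorer below is a deliberately minimal substitute used to show the six-channel input and the live/non-live thresholding, not the disclosed architecture:

```python
import numpy as np

def predict_live(first_rgb: np.ndarray, second_rgb: np.ndarray,
                 weights: np.ndarray, bias: float) -> bool:
    """Flatten the six stacked RGB channels, apply a linear score and
    a sigmoid; a score above 0.5 is reported as 'live'."""
    x = np.concatenate([first_rgb, second_rgb], axis=-1).ravel() / 255.0
    score = 1.0 / (1.0 + np.exp(-(x @ weights + bias)))
    return bool(score > 0.5)

h, w = 8, 8
first = np.zeros((h, w, 3))
second = np.zeros((h, w, 3))
weights = np.zeros(h * w * 6)          # untrained placeholder weights
print(predict_live(first, second, weights, bias=2.0))   # True
print(predict_live(first, second, weights, bias=-2.0))  # False
```

In practice the weights would come from training on the positive and negative sample pairs of steps 201-202.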
It should be noted that, for brevity, each of the foregoing method embodiments is described as a series of actions, but those skilled in the art should know that the present invention is not limited by the described order of actions, because according to the present invention some steps can be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in this specification are preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
In the above embodiments, each embodiment has its own emphasis; for a part not described in detail in one embodiment, reference can be made to the relevant description of other embodiments.
In short, with the scheme of the above method embodiments, for a user to be detected, a first picture and a second picture captured with a visible-light camera can be obtained separately, where the first picture is captured with the light on and the second picture with the light off; the two pictures can then be input into the classification model to obtain an output detection result of whether the user is a live subject. Compared with the prior art, the above method embodiments capture user pictures both with and without light and combine the pictures from the two cases to distinguish whether the user is a live subject, thereby improving the accuracy of the detection result. Moreover, because a classification model is used to distinguish live from non-live, the feature extraction and other processing of the prior art is not needed, which simplifies the processing flow and reduces implementation complexity; in addition, the deep learning approach guarantees the accuracy of the detection result.
The above describes the method embodiments; the disclosed scheme is further explained below through a device embodiment.
Fig. 3 is a schematic structural diagram of an embodiment of the liveness detection device of the present invention. As shown in Fig. 3, it includes: an acquiring unit 301 and a detection unit 302.
The acquiring unit 301 is configured to, for a user to be detected, separately obtain a first picture and a second picture captured with a visible-light camera, where the first picture is a picture captured with the light on and the second picture is a picture captured with the light off, and to send the first picture and the second picture to the detection unit 302.
The detection unit 302 is configured to determine whether the user is a live subject according to the first picture and the second picture.
When liveness detection is needed, pictures of the user can be taken with a visible-light camera, and two user pictures are captured: the first picture taken with the light on, and the second picture taken with the light off.
The light source of the visible-light camera is a visible light source, for example red or blue light.
The interval between the two shots of the visible-light camera can be very short, so the image content of the first picture and the second picture is almost unchanged; the only difference is whether the light was on during the shot.
The shooting order of the first picture and the second picture is not restricted; that is, the first picture can be taken first with the light on and the second picture taken afterwards with the light off, or the second picture can be taken first with the light off and the first picture taken afterwards with the light on.
The acquiring unit 301 can obtain the first picture and the second picture in at least the following two ways:
1) The acquiring unit 301 obtains the first picture and the second picture captured by a visible-light camera set up uniformly for different users.
That is, for every user who needs liveness detection, the pictures are taken with the same visible-light camera.
For example, a company may control the entry and exit of employees through an access control system: a visible-light camera can be installed at a certain position at the doorway and used to take the first picture and the second picture of a user who wants to enter the company. Whether the user is a live subject can then be determined according to the two pictures; if the user is live and confirmed to be a company employee, the door is opened and the user is allowed in.
2) The acquiring unit 301 obtains the first picture and the second picture, sent by the intelligent terminal used by the user and captured by the visible-light camera on the intelligent terminal.
In this mode, the user can take the pictures himself and then send the first picture and the second picture to the background processing system.
Taking a mobile phone as an example, the user can install an App on the phone; when liveness detection is needed, the user opens the App, which calls the visible-light camera on the phone to take the first picture and the second picture and then sends the two pictures to the background processing system.
Whichever of the above modes is used, after obtaining the first picture and the second picture taken by the visible-light camera, the acquiring unit 301 can send them to the detection unit 302, which then determines whether the user is a live subject according to the first picture and the second picture.
Preferably, the detection unit 302 can input the first picture and the second picture into a classification model obtained by pre-training, so as to obtain an output detection result of whether the user is a live subject.
The classification model is obtained by pre-training; to train it, a sufficient number of positive samples and negative samples need to be obtained in advance.
A positive sample is a picture sample taken of a real (living) person, and a negative sample is a picture sample taken of various attack means.
The classification model can be a neural network model or the like. With the trained classification model, the detection unit 302 inputs the first picture and the second picture taken by the visible-light camera into the model, so as to obtain an output detection result of whether the user is a live subject.
As a rule, the first picture and the second picture are RGB pictures; in that case, the detection unit 302 can input the RGB three-channel information of the first picture and the RGB three-channel information of the second picture into the classification model together.
For the specific workflow of the device embodiment shown in Fig. 3, refer to the corresponding description in the foregoing method embodiments, which is not repeated here.
It can be seen that with the scheme of the above device embodiment, for a user to be detected, a first picture and a second picture captured with a visible-light camera can be obtained separately, where the first picture is captured with the light on and the second picture with the light off; the two pictures can then be input into the classification model to obtain an output detection result of whether the user is a live subject. Compared with the prior art, the above device embodiment captures user pictures both with and without light and combines the pictures from the two cases to distinguish whether the user is a live subject, thereby improving the accuracy of the detection result. Moreover, because a classification model is used to distinguish live from non-live, the feature extraction and other processing of the prior art is not needed, which simplifies the processing flow and reduces implementation complexity; in addition, the deep learning approach guarantees the accuracy of the detection result.
Fig. 4 shows a block diagram of an exemplary computer system/server 12 suitable for implementing embodiments of the present invention. The computer system/server 12 shown in Fig. 4 is only an example and should not bring any limitation to the function or scope of use of the embodiments of the present invention.
As shown in Fig. 4, the computer system/server 12 is embodied in the form of a general-purpose computing device. Its components can include, but are not limited to: one or more processors (processing units) 16, a memory 28, and a bus 18 connecting the different system components (including the memory 28 and the processor 16).
The bus 18 represents one or more of several classes of bus structures, including a memory bus or memory controller, a peripheral bus, a graphics acceleration port, a processor, or a local bus using any of a variety of bus structures. For example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus and the Peripheral Component Interconnect (PCI) bus.
The computer system/server 12 typically comprises a variety of computer-system-readable media. These media can be any usable media that can be accessed by the computer system/server 12, including volatile and non-volatile media, and removable and non-removable media.
The memory 28 can include computer-system-readable media in the form of volatile memory, such as a random access memory (RAM) 30 and/or a cache memory 32. The computer system/server 12 may further include other removable/non-removable, volatile/non-volatile computer-system storage media. Only as an example, a storage system 34 can be used for reading and writing non-removable, non-volatile magnetic media (not shown in Fig. 4, commonly referred to as a "hard disk drive"). Although not shown in Fig. 4, a disk drive for reading and writing a removable non-volatile magnetic disk (such as a "floppy disk") and an optical disk drive for reading and writing a removable non-volatile optical disk (such as a CD-ROM, DVD-ROM or other optical media) can be provided. In these cases, each drive can be connected with the bus 18 through one or more data media interfaces. The memory 28 can include at least one program product having a group of (for example, at least one) program modules configured to perform the functions of the embodiments of the present invention.
A program/utility 40 having a group of (at least one) program modules 42 can be stored in, for example, the memory 28. Such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules and program data; each, or some combination, of these examples may include the realization of a network environment. The program modules 42 generally perform the functions and/or methods in the embodiments described in the present invention.
The computer system/server 12 can also communicate with one or more external devices 14 (such as a keyboard, a pointing device, a display 24, etc.), with one or more devices that enable a user to interact with the computer system/server 12, and/or with any device (such as a network card, a modem, etc.) that enables the computer system/server 12 to communicate with one or more other computing devices. This communication can be carried out through an input/output (I/O) interface 22. Moreover, the computer system/server 12 can also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN) and/or a public network, such as the Internet) through a network adapter 20. As shown in Fig. 4, the network adapter 20 communicates with the other modules of the computer system/server 12 through the bus 18. It should be understood that, although not shown in the drawings, other hardware and/or software modules can be used in combination with the computer system/server 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives and data backup storage systems, etc.
The processor 16 executes various function applications and data processing by running the programs stored in the memory 28, for example realizing the method in the embodiments shown in Fig. 1 or Fig. 2: for a user to be detected, separately obtaining a first picture and a second picture captured with a visible-light camera, where the first picture is captured with the light on and the second picture is captured with the light off, and determining whether the user is a live subject according to the first picture and the second picture. For the specific implementation, refer to the relevant description in the foregoing embodiments, which is not repeated here.
The present invention also discloses a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program realizes the method in the embodiments shown in Fig. 1 and Fig. 2.
Any combination of one or more computer-readable media may be used. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including, but not limited to, wireless, wireline, optical cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present invention may be written in one or more programming languages, or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus, method, and the like may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the described division of units is only a division by logical function, and other division schemes are possible in actual implementation.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disc.
The foregoing describes only preferred embodiments of the present invention and is not intended to limit the invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the scope of protection of the invention.
Claims (12)
- 1. A living body detection method, characterized by comprising: for a user to be detected, respectively obtaining a first picture and a second picture photographed with a visible-light camera, wherein the first picture is photographed with light and the second picture is photographed without light; and determining whether the user is a live body according to the first picture and the second picture.
- 2. The method according to claim 1, characterized in that obtaining the first picture and the second picture photographed with a visible-light camera comprises: obtaining the first picture and the second picture photographed by a visible-light camera set up in common for different users.
- 3. The method according to claim 1, characterized in that obtaining the first picture and the second picture photographed with a visible-light camera comprises: obtaining the first picture and the second picture sent by an intelligent terminal used by the user and photographed by the visible-light camera on that intelligent terminal.
- 4. The method according to claim 1, characterized in that determining whether the user is a live body according to the first picture and the second picture comprises: inputting the first picture and the second picture into a classification model obtained by training in advance, and obtaining an output detection result indicating whether the user is a live body.
- 5. The method according to claim 4, characterized in that the first picture and the second picture are RGB pictures, and inputting the first picture and the second picture into the classification model obtained by training in advance comprises: inputting the RGB three-channel information of the first picture together with the RGB three-channel information of the second picture into the classification model.
- 6. A living body detection apparatus, characterized by comprising an acquiring unit and a detection unit; the acquiring unit is configured to, for a user to be detected, respectively obtain a first picture and a second picture photographed with a visible-light camera, wherein the first picture is photographed with light and the second picture is photographed without light, and to send the first picture and the second picture to the detection unit; the detection unit is configured to determine whether the user is a live body according to the first picture and the second picture.
- 7. The apparatus according to claim 6, characterized in that the acquiring unit obtains the first picture and the second picture photographed by a visible-light camera set up in common for different users.
- 8. The apparatus according to claim 6, characterized in that the acquiring unit obtains the first picture and the second picture sent by an intelligent terminal used by the user and photographed by the visible-light camera on that intelligent terminal.
- 9. The apparatus according to claim 6, characterized in that the detection unit inputs the first picture and the second picture into a classification model obtained by training in advance and obtains an output detection result indicating whether the user is a live body.
- 10. The apparatus according to claim 9, characterized in that the first picture and the second picture are RGB pictures, and the detection unit inputs the RGB three-channel information of the first picture together with the RGB three-channel information of the second picture into the classification model.
- 11. A computer device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the method according to any one of claims 1 to 5.
- 12. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710613143.3A CN107633198A (en) | 2017-07-25 | 2017-07-25 | Biopsy method, device, equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710613143.3A CN107633198A (en) | 2017-07-25 | 2017-07-25 | Biopsy method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107633198A true CN107633198A (en) | 2018-01-26 |
Family
ID=61099428
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710613143.3A Pending CN107633198A (en) | 2017-07-25 | 2017-07-25 | Biopsy method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107633198A (en) |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3139996U (en) * | 2007-11-02 | 2008-03-13 | 真由美 堤 | Non-dazzling flash photography device |
CN101964056A (en) * | 2010-10-26 | 2011-02-02 | 徐勇 | Bimodal face authentication method with living body detection function and system |
CN102231205A (en) * | 2011-06-24 | 2011-11-02 | 北京戎大时代科技有限公司 | Multimode monitoring device and method |
CN102708383A (en) * | 2012-05-21 | 2012-10-03 | 广州像素数据技术开发有限公司 | System and method for detecting living face with multi-mode contrast function |
CN105320939A (en) * | 2015-09-28 | 2016-02-10 | 北京天诚盛业科技有限公司 | Iris biopsy method and apparatus |
CN105518711A (en) * | 2015-06-29 | 2016-04-20 | 北京旷视科技有限公司 | In-vivo detection method, in-vivo detection system, and computer program product |
CN105912908A (en) * | 2016-04-14 | 2016-08-31 | 苏州优化智能科技有限公司 | Infrared-based real person living body identity verification method |
CN105938546A (en) * | 2016-04-14 | 2016-09-14 | 苏州优化智能科技有限公司 | Real living identity verification terminal equipment based on infrared technology |
CN105956572A (en) * | 2016-05-15 | 2016-09-21 | 北京工业大学 | In vivo face detection method based on convolutional neural network |
CN106372601A (en) * | 2016-08-31 | 2017-02-01 | 上海依图网络科技有限公司 | In vivo detection method based on infrared visible binocular image and device |
CN106529512A (en) * | 2016-12-15 | 2017-03-22 | 北京旷视科技有限公司 | Living body face verification method and device |
CN106709477A (en) * | 2017-02-23 | 2017-05-24 | 哈尔滨工业大学深圳研究生院 | Face recognition method and system based on adaptive score fusion and deep learning |
CN106874871A (en) * | 2017-02-15 | 2017-06-20 | 广东光阵光电科技有限公司 | A kind of recognition methods of living body faces dual camera and identifying device |
- 2017-07-25: CN application CN201710613143.3A filed; published as CN107633198A (en); status: Pending
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3139996U (en) * | 2007-11-02 | 2008-03-13 | 真由美 堤 | Non-dazzling flash photography device |
CN101964056A (en) * | 2010-10-26 | 2011-02-02 | 徐勇 | Bimodal face authentication method with living body detection function and system |
CN102231205A (en) * | 2011-06-24 | 2011-11-02 | 北京戎大时代科技有限公司 | Multimode monitoring device and method |
CN102708383A (en) * | 2012-05-21 | 2012-10-03 | 广州像素数据技术开发有限公司 | System and method for detecting living face with multi-mode contrast function |
CN102708383B (en) * | 2012-05-21 | 2014-11-26 | 广州像素数据技术开发有限公司 | System and method for detecting living face with multi-mode contrast function |
CN105518711A (en) * | 2015-06-29 | 2016-04-20 | 北京旷视科技有限公司 | In-vivo detection method, in-vivo detection system, and computer program product |
CN105320939A (en) * | 2015-09-28 | 2016-02-10 | 北京天诚盛业科技有限公司 | Iris biopsy method and apparatus |
CN105912908A (en) * | 2016-04-14 | 2016-08-31 | 苏州优化智能科技有限公司 | Infrared-based real person living body identity verification method |
CN105938546A (en) * | 2016-04-14 | 2016-09-14 | 苏州优化智能科技有限公司 | Real living identity verification terminal equipment based on infrared technology |
CN105956572A (en) * | 2016-05-15 | 2016-09-21 | 北京工业大学 | In vivo face detection method based on convolutional neural network |
CN106372601A (en) * | 2016-08-31 | 2017-02-01 | 上海依图网络科技有限公司 | In vivo detection method based on infrared visible binocular image and device |
CN106529512A (en) * | 2016-12-15 | 2017-03-22 | 北京旷视科技有限公司 | Living body face verification method and device |
CN106874871A (en) * | 2017-02-15 | 2017-06-20 | 广东光阵光电科技有限公司 | A kind of recognition methods of living body faces dual camera and identifying device |
CN106709477A (en) * | 2017-02-23 | 2017-05-24 | 哈尔滨工业大学深圳研究生院 | Face recognition method and system based on adaptive score fusion and deep learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107545241A (en) | Neural network model is trained and biopsy method, device and storage medium | |
US10699103B2 (en) | Living body detecting method and apparatus, device and storage medium | |
CN107563283A (en) | Method, apparatus, equipment and the storage medium of generation attack sample | |
CN107609463A (en) | Biopsy method, device, equipment and storage medium | |
WO2021196389A1 (en) | Facial action unit recognition method and apparatus, electronic device, and storage medium | |
CN107609481A (en) | The method, apparatus and computer-readable storage medium of training data are generated for recognition of face | |
CN107679860A (en) | A kind of method, apparatus of user authentication, equipment and computer-readable storage medium | |
US20190026606A1 (en) | To-be-detected information generating method and apparatus, living body detecting method and apparatus, device and storage medium | |
WO2022188697A1 (en) | Biological feature extraction method and apparatus, device, medium, and program product | |
CN112364803B (en) | Training method, terminal, equipment and storage medium for living body identification auxiliary network | |
CN112052830B (en) | Method, device and computer storage medium for face detection | |
WO2021143216A1 (en) | Face liveness detection method and related apparatus | |
CN107563289A (en) | A kind of method, apparatus of In vivo detection, equipment and computer-readable storage medium | |
CN104965589A (en) | Human living body detection method and device based on human brain intelligence and man-machine interaction | |
CN105659243A (en) | Informed implicit enrollment and identification | |
CN105184236A (en) | Robot-based face identification system | |
Gupta et al. | Advanced security system in video surveillance for COVID-19 | |
JP6311237B2 (en) | Collation device and collation method, collation system, and computer program | |
Sedik et al. | An efficient cybersecurity framework for facial video forensics detection based on multimodal deep learning | |
CN203552331U (en) | Intelligent identification door control system | |
CN108734818A (en) | Gate inhibition's operating method, device, terminal device and server | |
CN110516094A (en) | De-weight method, device, electronic equipment and the storage medium of class interest point data | |
CN109242005A (en) | The recognition methods and device of information of vehicles, storage medium and electronic equipment | |
CN107633198A (en) | Biopsy method, device, equipment and storage medium | |
CN108230514A (en) | Personnel library update method, device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20180126 |