CN107563289A - Liveness detection method, apparatus, device, and computer-readable storage medium - Google Patents
Liveness detection method, apparatus, device, and computer-readable storage medium
- Publication number: CN107563289A
- Application number: CN201710641646.1A
- Authority: CN (China)
- Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abstract
The present invention provides a liveness detection method, apparatus, device, and computer-readable storage medium. The liveness detection method includes: acquiring an image of a user; extracting skin features of the user from the image; taking the extracted skin features as the input of a detection model, and determining the liveness detection result according to the output of the detection model, where the detection model is obtained by training in advance. The technical solution provided by the present invention performs liveness detection on a user, and thus verifies more accurately whether the current user is the genuine user.
Description
【Technical field】
The present invention relates to the field of biometric recognition, and in particular to a liveness detection method, apparatus, device, and computer-readable storage medium.
【Background】
In scenarios that require verifying whether the current operator is the genuine user, such as airport security checks, remote handling of financial business, or ATM withdrawals, the prior art typically identifies the operator with face recognition technology. Face recognition, however, has an inherent defect: it can only judge whether a face matches the genuine user. When someone else presents a photo of the genuine user, or disguises himself as the genuine user, face recognition cannot tell whether the current operator really is that user. If an impostor passes face recognition, the genuine user may suffer a heavy loss. A method for performing liveness detection on a user is therefore urgently needed.
【Summary of the invention】
In view of this, the present invention provides a liveness detection method, apparatus, device, and computer-readable storage medium for performing liveness detection on a user, so as to verify more accurately whether the current user is the genuine user.
The technical solution adopted by the present invention to solve the technical problem is a liveness detection method, the method including: acquiring an image of a user; extracting skin features of the user from the image; taking the extracted skin features as the input of a detection model, and determining the liveness detection result according to the output of the detection model, where the detection model is obtained by training in advance.
According to a preferred embodiment of the present invention, the detection model is trained in advance as follows: acquire at least one real-skin image and at least one non-real-skin image; extract the skin features from the real-skin images and the non-real-skin images; take each skin feature, labeled as real skin or non-real skin, as a training sample; and train a classification model on these samples to obtain the detection model.
According to a preferred embodiment of the present invention, the classification model is a classification model based on a deep learning model.
According to a preferred embodiment of the present invention, the acquired user image is a face image of the user.
According to a preferred embodiment of the present invention, extracting the skin features of the user from the image includes: determining the skin region of the user in the image; and extracting the skin features of the user from the skin region.
According to a preferred embodiment of the present invention, the skin features include at least one of skin texture features and skin pore features.
According to a preferred embodiment of the present invention, determining the liveness detection result according to the output of the detection model includes: if the output of the detection model indicates that the skin features correspond to real skin, determining that the user is live; otherwise, determining that the user is not live.
According to a preferred embodiment of the present invention, the method further includes: if the detection result is that the user is not live, prompting the user to repeat the detection; and if the liveness detection result is still non-live after a preset number of attempts, determining that the user is not live.
The technical solution adopted by the present invention to solve the technical problem is also a liveness detection apparatus, the apparatus including: an acquiring unit for acquiring an image of a user; an extraction unit for extracting skin features of the user from the image; and a determining unit for taking the extracted skin features as the input of a detection model and determining the liveness detection result according to the output of the detection model, where the detection model is obtained by training in advance.
According to a preferred embodiment of the present invention, the apparatus further includes a training unit for training the detection model in advance as follows: acquire at least one real-skin image and at least one non-real-skin image; extract the skin features from the real-skin images and the non-real-skin images; take each skin feature, labeled as real skin or non-real skin, as a training sample; and train a classification model on these samples to obtain the detection model.
According to a preferred embodiment of the present invention, the classification model is a classification model based on a deep learning model.
According to a preferred embodiment of the present invention, the user image obtained by the acquiring unit is a face image of the user.
According to a preferred embodiment of the present invention, when extracting the skin features of the user from the image, the extraction unit specifically: determines the skin region of the user in the image; and extracts the skin features of the user from the skin region.
According to a preferred embodiment of the present invention, the skin features include at least one of skin texture features and skin pore features.
According to a preferred embodiment of the present invention, when determining the liveness detection result according to the output of the detection model, the determining unit specifically: determines that the user is live if the output of the detection model indicates that the skin features correspond to real skin, and otherwise determines that the user is not live.
According to a preferred embodiment of the present invention, the apparatus further includes a re-detection unit for prompting the user to repeat the detection when the detection result is that the user is not live; if the liveness detection result is still non-live after a preset number of attempts, the determining unit determines that the user is not live.
As can be seen from the above technical solutions, by using a detection model to classify the skin features of a user, the present invention performs liveness detection on the user and thus verifies more accurately whether the current user is the genuine user.
【Brief description of the drawings】
Fig. 1 is a flowchart of a liveness detection method provided by an embodiment of the present invention.
Fig. 2 is a structural diagram of a liveness detection apparatus provided by an embodiment of the present invention.
Fig. 3 is a block diagram of a computer system/server provided by an embodiment of the present invention.
【Detailed description】
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
The terms used in the embodiments of the present invention are merely for the purpose of describing specific embodiments and are not intended to limit the present invention. The singular forms "a", "said", and "the" used in the embodiments of the present invention and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" used herein merely describes an association relationship between associated objects and indicates that three relationships may exist. For example, "A and/or B" may represent three cases: A alone, both A and B, and B alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
Depending on the context, the word "if" as used herein may be interpreted as "when", "while", "in response to determining", or "in response to detecting". Similarly, depending on the context, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (the stated condition or event) is detected", or "in response to detecting (the stated condition or event)".
In scenarios such as airport security checks, ATM withdrawals, remote handling of financial business, or using a face as a password on a mobile device, it is necessary to verify whether the current operator is the genuine user. Because the current operator may present a photo of the genuine user, or disguise himself as the genuine user, the prior art cannot distinguish such cases with face recognition alone, and an impostor who passes verification can cause a heavy loss to the genuine user. The present invention therefore provides a liveness detection method, apparatus, device, and computer-readable storage medium that classify the extracted skin features with a detection model, performing liveness detection on the user; this overcomes the above defect of current face recognition technology and verifies more accurately whether the current operator, i.e. the current user, is the genuine user.
The present invention performs liveness detection on the user with a detection model obtained by training in advance, thereby solving the problem that existing face recognition technology cannot verify whether a user is live.
Specifically, the detection model can be trained in advance on training samples as follows:
First, obtain the training samples: acquire at least one real-skin image and at least one non-real-skin image. A real-skin image is an image of the user's skin shot directly, while a non-real-skin image is a shot of an image that contains the user's skin. For example, a real-skin image is a photograph a camera takes of a face directly, whereas a non-real-skin image is a photograph a camera takes of a face photo. In other words, what is actually photographed in a real-skin image is skin; what is actually photographed in a non-real-skin image is an image. In a preferred embodiment of the present invention, the acquired image containing the user's skin is a face image of the user.
Then extract the skin features of the user from the acquired real-skin images and non-real-skin images: first determine the skin region of the user in each acquired image (for example, if the acquired image is a face image of the user, the skin region is determined to be the face region), and then extract the skin features of the user from the determined skin region. The extracted skin features include at least one of skin texture features and skin pore features. Finally, take each extracted skin feature, labeled as real skin or non-real skin, as a training sample.
After the training samples are obtained, train a classification model to obtain the detection model. The classification model may be a deep learning model or a support vector machine; the present invention does not limit the type of the classification model. In a preferred embodiment of the present invention, the classification model is a classification model based on a deep convolutional network. Once the detection model is obtained by training, it can be used to judge whether the skin features in a user image correspond to real skin or non-real skin, thereby performing liveness detection on the user.
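The training procedure above can be sketched in miniature. The patent leaves the classifier type open (a deep learning model or a support vector machine), so the sketch below substitutes a plain logistic-regression classifier trained by gradient descent on synthetic two-dimensional feature vectors (texture strength, pore count); every name and number here is an illustrative assumption, not the claimed implementation.

```python
import numpy as np

def train_detector(features, labels, lr=0.1, epochs=500):
    """Train a minimal logistic-regression 'detection model' on labeled
    skin-feature vectors (labels: 1 = real skin, 0 = non-real skin).
    This is an illustrative stand-in for the classification model."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=features.shape[1])
    b = 0.0
    for _ in range(epochs):
        z = np.clip(features @ w + b, -30, 30)   # clip for numerical stability
        p = 1.0 / (1.0 + np.exp(-z))             # predicted P(real skin)
        grad = p - labels                        # cross-entropy gradient w.r.t. z
        w -= lr * features.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

def is_real_skin(model, feature_vec):
    """Detection-model output: True if the features look like real skin."""
    w, b = model
    z = np.clip(feature_vec @ w + b, -30, 30)
    return bool(1.0 / (1.0 + np.exp(-z)) > 0.5)

# Synthetic training set: real skin is assumed to show stronger texture
# and more pores than a re-photographed (flat) image.
rng = np.random.default_rng(1)
real = np.column_stack([rng.normal(5.0, 0.5, 200), rng.normal(40.0, 5.0, 200)])
fake = np.column_stack([rng.normal(1.0, 0.5, 200), rng.normal(5.0, 2.0, 200)])
samples = np.vstack([real, fake])
labels = np.concatenate([np.ones(200), np.zeros(200)])

model = train_detector(samples, labels)
```

The same two-step shape (labeled feature vectors in, real/non-real decision out) carries over unchanged if the stand-in classifier is replaced by a support vector machine or a deep convolutional network as the preferred embodiment describes.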
The process of performing liveness detection on a user with the above detection model is described in detail next. Fig. 1 is a flowchart of a liveness detection method provided by an embodiment of the present invention; as shown in Fig. 1, the method includes:
In 101, an image of the user is acquired.
In this step, a user image shot in real time is acquired. The acquired user image is an image containing the user's skin, preferably a face image of the user.
To improve the precision of the present invention in liveness detection, a high-definition camera is used in this step to shoot the user, so as to obtain a high-definition face image of the user.
In 102, the skin features of the user are extracted from the image.
In this step, the skin features of the user are extracted from the image acquired in step 101. The skin features extracted from the image include at least one of skin texture features and skin pore features. Skin texture features include the depth and thickness of skin texture, and so on; skin pore features include the number and size of skin pores, and so on. Because a real-skin image is shot directly from real skin, the skin in a real-skin image necessarily shows texture of uneven depth and thickness, together with a certain number of pores of varying sizes. A non-real-skin image, by contrast, is shot from an image that contains real skin, i.e. what is shot directly is not skin but an image, so the skin in a non-real-skin image will not show the texture and pores of real skin. Real skin can therefore be distinguished from non-real skin in the images by means of skin texture features and skin pore features.
Specifically, the skin features of the user can be extracted from the image as follows: first determine the skin region of the user in the image, for example with a skin color model; then extract the skin features of the user from the determined skin region, for example with a Gabor filter for the skin texture features and with threshold segmentation and morphological operations for the skin pore features. For instance, if the acquired image is a face image of the user, the determined skin region is the face region, and the skin features of the user are then extracted from that face region.
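A toy version of this extraction pipeline can be sketched with numpy alone. To stay self-contained it replaces the techniques the text names with simpler stand-ins: a crude RGB skin-color rule instead of a full skin color model, local gradient variance instead of Gabor-filter texture responses, and a dark-pixel fraction instead of threshold segmentation plus morphology for pores; the thresholds and function names are all assumptions made for illustration.

```python
import numpy as np

def skin_mask(rgb):
    """Crude skin-color rule in RGB space -- a stand-in for the skin
    color model mentioned in the text (real systems more often
    threshold in YCrCb or HSV)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)

def texture_strength(gray, mask):
    """Variance of local gradients inside the skin region -- a crude
    stand-in for the Gabor-filter texture features."""
    gray = gray.astype(float)
    gx = np.diff(gray, axis=1)[mask[:, :-1] & mask[:, 1:]]
    gy = np.diff(gray, axis=0)[mask[:-1, :] & mask[1:, :]]
    vals = np.concatenate([gx, gy])
    return float(vals.var()) if vals.size else 0.0

def pore_fraction(gray, mask, dark=0.6):
    """Fraction of skin pixels markedly darker than the mean skin
    brightness -- a stand-in for the threshold-segmentation and
    morphology pore features."""
    skin = gray[mask].astype(float)
    return float((skin < dark * skin.mean()).mean()) if skin.size else 0.0

def extract_skin_features(rgb):
    """Step 102 in miniature: determine the skin region first, then
    extract texture and pore features from that region only."""
    mask = skin_mask(rgb)
    gray = rgb.mean(axis=2)
    return np.array([texture_strength(gray, mask), pore_fraction(gray, mask)])
```

On a synthetic flat skin-colored patch (mimicking a re-photographed face) both feature values come out near zero, while a noisy patch dotted with darker "pores" scores higher on both, which is exactly the separation the detection model relies on.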
In 103, the extracted skin features are taken as the input of the detection model, and the liveness detection result is determined according to the output of the detection model.
In this step, the skin features extracted in step 102 are fed into the detection model, and the liveness detection result is determined from the model's output.
As stated above, the detection model can determine from the skin features whether they correspond to real skin or non-real skin, so the model's judgment on the skin features identifies whether the user is live, in other words whether the current user is the genuine user rather than someone using the genuine user's photo or disguising himself as the genuine user. Specifically, if the output of the detection model indicates that the skin features correspond to real skin, the liveness detection result is that the user is live, i.e. the current user is the genuine user; if the output indicates non-real skin, the liveness detection result is that the user is not live, i.e. the current user is not the genuine user.
A detection attempt may fail because of shooting problems or reasons on the user's side, yielding a result that the user is not live. This step is therefore followed by: if the detection result is that the user is not live, prompting the user to repeat the detection; and if the result is still that the user is not live after a preset number of liveness detections, determining that the user is not live. The preset number can be configured according to the actual application scenario. This further improves the precision of liveness detection on the user.
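The 101-103 flow with the re-detection policy above can be sketched as a short loop. The camera, the step-102 feature extraction, and the trained detection model are injected as callables; their names are illustrative, not taken from the patent.

```python
def liveness_check(capture_image, extract_features, detect, max_attempts=3):
    """Run steps 101-103 with the re-detection policy: on a non-live
    result, prompt the user and try again, up to a preset number of
    attempts (max_attempts corresponds to the configurable preset
    number in the text)."""
    for attempt in range(1, max_attempts + 1):
        image = capture_image()              # 101: acquire a user image
        features = extract_features(image)   # 102: extract skin features
        if detect(features):                 # 103: model output = real skin
            return True                      # user is live
        print(f"Attempt {attempt}: not detected as live, please retry.")
    return False                             # still non-live after the preset attempts
```

The choice to re-capture a fresh image on every attempt (rather than re-scoring the same frame) is what lets transient shooting problems resolve themselves, which is the rationale the text gives for the re-detection step.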
Fig. 2 is a structural diagram of a liveness detection apparatus provided by an embodiment of the present invention. As shown in Fig. 2, the apparatus includes: a training unit 21, an acquiring unit 22, an extraction unit 23, a determining unit 24, and a re-detection unit 25.
The training unit 21 obtains the detection model by training in advance.
Specifically, the training unit 21 can train the detection model on training samples as follows:
The training unit 21 first obtains the training samples: it acquires at least one real-skin image and at least one non-real-skin image. A real-skin image is an image of the user's skin shot directly, while a non-real-skin image is a shot of an image that contains the user's skin. For example, a real-skin image acquired by the training unit 21 is a photograph a camera takes of a face directly, whereas a non-real-skin image acquired by the training unit 21 is a photograph a camera takes of a face photo. In other words, what is actually photographed in a real-skin image is skin; what is actually photographed in a non-real-skin image is an image. In a preferred embodiment of the present invention, the image containing the user's skin acquired by the training unit 21 is a face image of the user.
The training unit 21 then extracts the skin features of the user from the acquired real-skin images and non-real-skin images: it first determines the skin region of the user in each acquired image (for example, if the acquired image is a face image of the user, the skin region is determined to be the face region), and then extracts the skin features of the user from the determined skin region. The skin features extracted by the training unit 21 include at least one of skin texture features and skin pore features. Finally, the training unit 21 takes each extracted skin feature, labeled as real skin or non-real skin, as a training sample.
After the training samples are obtained, the training unit 21 trains a classification model to obtain the detection model. The classification model may be a deep learning model or a support vector machine; the present invention does not limit the type of the classification model. In a preferred embodiment of the present invention, the classification model is a classification model based on a deep convolutional network. Once the training unit 21 has obtained the detection model by training, the detection model can be used to judge whether the skin features in a user image correspond to real skin or non-real skin, thereby performing liveness detection on the user.
The acquiring unit 22 acquires an image of the user.
The acquiring unit 22 acquires a user image shot in real time; the user image acquired by the acquiring unit 22 is an image containing the user's skin, preferably a face image of the user.
To improve the precision of the present invention in liveness detection, the acquiring unit 22 shoots the user with a high-definition camera, so as to obtain a high-definition face image of the user.
The extraction unit 23 extracts the skin features of the user from the image.
The extraction unit 23 extracts the skin features of the user from the image acquired by the acquiring unit 22. The skin features the extraction unit 23 extracts from the image include at least one of skin texture features and skin pore features. Skin texture features include the depth and thickness of skin texture, and so on; skin pore features include the number and size of skin pores, and so on. Because a real-skin image is shot directly from real skin, the skin in a real-skin image necessarily shows texture of uneven depth and thickness, together with a certain number of pores of varying sizes. A non-real-skin image, by contrast, is shot from an image that contains real skin, i.e. what is shot directly is not skin but an image, so the skin in a non-real-skin image will not show the texture and pores of real skin. Real skin can therefore be distinguished from non-real skin in the images by means of skin texture features and skin pore features.
Specifically, the extraction unit 23 can extract the skin features of the user from the image as follows: it first determines the skin region of the user in the image, for example with a skin color model; it then extracts the skin features of the user from the determined skin region, for example with a Gabor filter for the skin texture features and with threshold segmentation and morphological operations for the skin pore features. For instance, if the image acquired by the acquiring unit 22 is a face image of the user, the skin region determined by the extraction unit 23 is the face region, and the extraction unit 23 then extracts the skin features of the user from that face region.
The determining unit 24 takes the extracted skin features as the input of the detection model and determines the liveness detection result according to the output of the detection model.
The determining unit 24 feeds the skin features extracted by the extraction unit 23 into the detection model and determines the liveness detection result from the model's output.
As stated above, the detection model can determine from the skin features whether they correspond to real skin or non-real skin, so the determining unit 24 identifies from the model's judgment on the skin features whether the user is live, in other words whether the current user is the genuine user rather than someone using the genuine user's photo or disguising himself as the genuine user. Specifically, if the output of the detection model indicates that the skin features correspond to real skin, the determining unit 24 determines that the liveness detection result is that the user is live, i.e. the current user is the genuine user; if the output indicates non-real skin, the determining unit 24 determines that the user is not live, i.e. the current user is not the genuine user.
The re-detection unit 25 prompts the user to repeat the detection when the detection result is that the user is not live.
A detection attempt may fail because of shooting problems or reasons on the user's side, yielding a result that the user is not live. The re-detection unit 25 therefore prompts the user to repeat the detection when the result is that the user is not live. If, after a preset number of liveness detections, the result obtained by the determining unit 24 is still that the user is not live, it is determined that the user is not live. The preset number can be configured according to the actual application scenario. This further improves the precision of liveness detection on the user.
Several application scenarios to which the present invention is applicable are listed here:
Scenario 1: airport security check. After a high-definition camera at the security checkpoint shoots the current passenger, the captured face image of the passenger is detected. If the detection result is that the passenger is live, the current passenger is the passenger himself and the security check passes; if the result is that the passenger is not live, the current passenger is not the passenger himself and may have used the passenger's photo or some other means to impersonate him, so the security check fails.
Scenario 2: mobile payment software. The current operator is shot with the phone's camera, and the captured face image of the operator is detected. If the detection result is that the operator is live, the current operator is the genuine user and the payment can proceed; if the result is that the operator is not live, the current operator is not the genuine user and may have used the genuine user's photo or some other means of disguise, so the payment cannot proceed.
Fig. 3 shows a block diagram of an exemplary computer system/server 012 suitable for implementing embodiments of the present invention. The computer system/server 012 shown in Fig. 3 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present invention.
As shown in Fig. 3, the computer system/server 012 takes the form of a general-purpose computing device. Its components may include, but are not limited to: one or more processors or processing units 016, a system memory 028, and a bus 018 connecting the different system components (including the system memory 028 and the processing unit 016).
The bus 018 represents one or more of several classes of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The computer system/server 012 typically includes a variety of computer-system-readable media. These media may be any available media accessible by the computer system/server 012, including volatile and non-volatile media and removable and non-removable media.
The system memory 028 may include computer-system-readable media in the form of volatile memory, such as random access memory (RAM) 030 and/or cache memory 032. The computer system/server 012 may further include other removable/non-removable, volatile/non-volatile computer-system storage media. By way of example only, the storage system 034 may be used to read and write non-removable, non-volatile magnetic media (not shown in Fig. 3, commonly referred to as a "hard disk drive"). Although not shown in Fig. 3, a disk drive for reading and writing removable non-volatile magnetic disks (such as "floppy disks") and an optical disc drive for reading and writing removable non-volatile optical discs (such as CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to the bus 018 through one or more data media interfaces. The memory 028 may include at least one program product having a set of (e.g., at least one) program modules configured to perform the functions of the embodiments of the present invention.
A program/utility 040 having a set of (at least one) program modules 042 may be stored in, for example, the memory 028. Such program modules 042 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination of them, may include an implementation of a network environment. The program modules 042 generally perform the functions and/or methods of the embodiments described in the present invention.
The computer system/server 012 may also communicate with one or more external devices 014 (such as a keyboard, a pointing device, a display 024, and the like); in the present invention, the computer system/server 012 communicates with external radar devices. It may also communicate with one or more devices that enable a user to interact with the computer system/server 012, and/or with any device (such as a network card or a modem) that enables the computer system/server 012 to communicate with one or more other computing devices. Such communication may take place through an input/output (I/O) interface 022. Moreover, the computer system/server 012 may communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 020. As shown, the network adapter 020 communicates with the other modules of the computer system/server 012 through the bus 018. It should be understood that, although not shown in the drawings, other hardware and/or software modules may be used in conjunction with the computer system/server 012, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 016 executes various functional applications and data processing by running programs stored in the system memory 028, for example implementing a liveness detection method, which may include:
acquiring an image of a user;
extracting skin features of the user from the image;
taking the extracted skin features as the input of a detection model, and determining the liveness detection result according to the output of the detection model;
where the detection model is obtained by training in advance.
The above computer program may be provided in a computer storage medium, i.e. the computer storage medium is encoded with a computer program which, when executed by one or more computers, causes the one or more computers to perform the method flows and/or apparatus operations shown in the above embodiments of the present invention. For example, the method flow performed by the above one or more processors may include:
acquiring an image of a user;
extracting skin features of the user from the image;
taking the extracted skin features as the input of a detection model, and determining the liveness detection result according to the output of the detection model;
where the detection model is obtained by training in advance.
With the development of technology over time, the meaning of "medium" has become increasingly broad, and the distribution channel of a computer program is no longer limited to tangible media; it may also be downloaded directly from a network. Any combination of one or more computer-readable media may be used. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by, or in connection with, an instruction execution system, apparatus or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate or transmit a program for use by, or in connection with, an instruction execution system, apparatus or device.
Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wire, optical cable, RF, etc., or any suitable combination of the above. Computer program code for carrying out the operations of the present invention may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
With the technical solution provided by the present invention, the extracted skin features can be examined by the detection model, realizing In vivo detection of the user and thereby verifying more accurately whether the current user is the genuine user.
In the several embodiments provided by the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of the units is only a division by logical function, and other divisions are possible in actual implementation.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated in one processing unit, may each exist physically alone, or two or more units may be integrated in one unit. The above integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
The above integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent substitution, improvement, etc. made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Claims (18)
- 1. A method of In vivo detection, characterized in that the method comprises:
obtaining a user image;
extracting skin features of the user from the image;
taking the extracted skin features as the input of a detection model, and determining the In vivo detection result according to the output of the detection model;
wherein the detection model is obtained by training in advance.
- 2. The method according to claim 1, characterized in that the detection model is obtained by training in advance in the following manner:
obtaining at least one of a real skin image and a non-real skin image;
extracting the skin features in the real skin image and the non-real skin image;
taking the skin features, together with whether each skin feature belongs to real skin or non-real skin, as training samples, and training a classification model to obtain the detection model.
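The training procedure of claim 2 can be sketched as a labeled-feature classification task. The nearest-centroid classifier below is an assumed stand-in: the claim only requires *some* classification model (claim 3 suggests a deep learning model in practice), and the two-dimensional features are toy data.

```python
import numpy as np

def train_detection_model(real_feats: np.ndarray, fake_feats: np.ndarray):
    """Learn a nearest-centroid classifier from skin features labeled
    real vs. non-real. Purely illustrative; not the patent's model."""
    real_centroid = real_feats.mean(axis=0)
    fake_centroid = fake_feats.mean(axis=0)

    def predict(feat: np.ndarray) -> bool:
        # Closer to the real-skin centroid -> classified as real skin.
        return bool(np.linalg.norm(feat - real_centroid)
                    < np.linalg.norm(feat - fake_centroid))
    return predict

# Toy usage: two-dimensional "skin features" for each labeled sample.
real = np.array([[0.9, 0.8], [1.0, 0.7]])   # features from real skin images
fake = np.array([[0.1, 0.2], [0.0, 0.3]])   # features from non-real skin images
detection_model = train_detection_model(real, fake)
```

The returned `predict` closure plays the role of the trained detection model: it maps a feature vector to a real/non-real decision.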
- 3. The method according to claim 2, characterized in that the classification model is a classification model based on a deep learning model.
- 4. The method according to claim 1, characterized in that the obtained user image is a face image of the user.
- 5. The method according to claim 1, characterized in that extracting the skin features of the user from the image comprises:
determining the skin region of the user in the image;
extracting the skin features of the user from the skin region.
- 6. The method according to claim 1, characterized in that the skin features comprise at least one of skin texture features and skin pore features.
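Skin texture features of the kind named in claim 6 are commonly computed with descriptors such as local binary patterns (LBP); the patent names the feature class but not an algorithm, so the 8-neighbor LBP below is only one plausible choice for illustration.

```python
import numpy as np

def lbp_histogram(gray: np.ndarray) -> np.ndarray:
    """Basic 8-neighbor local binary pattern histogram over the interior
    pixels of a grayscale image; a plausible 'skin texture feature'."""
    h, w = gray.shape
    center = gray[1:h-1, 1:w-1]
    code = np.zeros_like(center, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # Set this bit where the neighbor is at least as bright as the center.
        code |= (neighbor >= center).astype(np.uint8) << np.uint8(bit)
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()   # normalized 256-bin texture histogram
```

Real and printed/replayed skin tend to differ in such texture statistics, which is what makes texture a usable liveness cue.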
- 7. The method according to claim 1, characterized in that determining the In vivo detection result according to the output of the detection model comprises:
if the output of the detection model is that the skin features belong to real skin, determining that the user is a living body; otherwise, determining that the user is not.
- 8. The method according to claim 7, characterized in that the method further comprises:
if the detection result is that the user is not a living body, prompting the user to perform detection again;
if the In vivo detection result after a preset number of times is still non-living, determining that the user is not a living body.
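The retry flow of claim 8 amounts to a bounded loop over capture-and-check attempts. In this sketch, `capture` and `check` are caller-supplied callables whose names are illustrative, not from the patent.

```python
def liveness_with_retries(capture, check, preset_times: int = 3) -> bool:
    """Re-prompt and re-detect up to a preset number of times; only if
    every attempt fails is the user declared not to be a living body."""
    for _ in range(preset_times):
        if check(capture()):     # any passing attempt -> living body
            return True
        # In a real system, the user would be prompted here to
        # re-position and re-start detection.
    return False                 # still non-living after preset_times
```

Allowing a few retries tolerates transient failures (poor lighting, motion blur) while still rejecting a presentation attack that fails every attempt.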
- 9. An apparatus for In vivo detection, characterized in that the apparatus comprises:
an acquiring unit, configured to obtain a user image;
an extraction unit, configured to extract skin features of the user from the image;
a determining unit, configured to take the extracted skin features as the input of a detection model and determine the In vivo detection result according to the output of the detection model;
wherein the detection model is obtained by training in advance.
- 10. The apparatus according to claim 9, characterized in that the apparatus further comprises a training unit, configured to obtain the detection model by training in advance in the following manner:
obtaining at least one of a real skin image and a non-real skin image;
extracting the skin features in the real skin image and the non-real skin image;
taking the skin features, together with whether each skin feature belongs to real skin or non-real skin, as training samples, and training a classification model to obtain the detection model.
- 11. The apparatus according to claim 10, characterized in that the classification model is a classification model based on a deep learning model.
- 12. The apparatus according to claim 9, characterized in that the user image obtained by the acquiring unit is a face image of the user.
- 13. The apparatus according to claim 9, characterized in that when extracting the skin features of the user from the image, the extraction unit specifically performs:
determining the skin region of the user in the image;
extracting the skin features of the user from the skin region.
- 14. The apparatus according to claim 9, characterized in that the skin features comprise at least one of skin texture features and skin pore features.
- 15. The apparatus according to claim 9, characterized in that when determining the In vivo detection result according to the output of the detection model, the determining unit specifically performs:
if the output of the detection model is that the skin features belong to real skin, determining that the user is a living body; otherwise, determining that the user is not.
- 16. The apparatus according to claim 15, characterized in that the apparatus further comprises a re-detection unit, configured to prompt the user to perform detection again when the detection result is that the user is not a living body; if the In vivo detection result after a preset number of times is still non-living, the determining unit determines that the user is not a living body.
- 17. A device, characterized in that the device comprises:
one or more processors;
a storage apparatus, configured to store one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1-8.
- 18. A storage medium containing computer-executable instructions, wherein the computer-executable instructions, when executed by a computer processor, are used to perform the method according to any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710641646.1A CN107563289A (en) | 2017-07-31 | 2017-07-31 | A kind of method, apparatus of In vivo detection, equipment and computer-readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710641646.1A CN107563289A (en) | 2017-07-31 | 2017-07-31 | A kind of method, apparatus of In vivo detection, equipment and computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107563289A true CN107563289A (en) | 2018-01-09 |
Family
ID=60974180
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710641646.1A Pending CN107563289A (en) | 2017-07-31 | 2017-07-31 | A kind of method, apparatus of In vivo detection, equipment and computer-readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107563289A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108304708A (en) * | 2018-01-31 | 2018-07-20 | 广东欧珀移动通信有限公司 | Mobile terminal, face unlocking method and related product |
CN108470169A (en) * | 2018-05-23 | 2018-08-31 | 国政通科技股份有限公司 | Face identification system and method |
CN108875530A (en) * | 2018-01-12 | 2018-11-23 | 北京旷视科技有限公司 | Vivo identification method, vivo identification equipment, electronic equipment and storage medium |
CN110141246A (en) * | 2018-02-10 | 2019-08-20 | 上海聚虹光电科技有限公司 | Biopsy method based on colour of skin variation |
CN111507944A (en) * | 2020-03-31 | 2020-08-07 | 北京百度网讯科技有限公司 | Skin smoothness determination method and device and electronic equipment |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101908140A (en) * | 2010-07-29 | 2010-12-08 | 中山大学 | Biopsy method for use in human face identification |
CN106778525A (en) * | 2016-11-25 | 2017-05-31 | 北京旷视科技有限公司 | Identity identifying method and device |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101908140A (en) * | 2010-07-29 | 2010-12-08 | 中山大学 | Biopsy method for use in human face identification |
CN106778525A (en) * | 2016-11-25 | 2017-05-31 | 北京旷视科技有限公司 | Identity identifying method and device |
Non-Patent Citations (3)
Title |
---|
SAJIDA PARVEEN et al.: "Face Liveness Detection Using Dynamic Local Ternary Pattern (DLTP)", MDPI * |
ZHIWEI ZHANG et al.: "Face Liveness Detection by Learning Multispectral Reflectance Distributions", Face and Gesture * |
彭代渊 (PENG Daiyuan): "Railway Information Security Technology", 31 May 2010 * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108875530A (en) * | 2018-01-12 | 2018-11-23 | 北京旷视科技有限公司 | Vivo identification method, vivo identification equipment, electronic equipment and storage medium |
CN108304708A (en) * | 2018-01-31 | 2018-07-20 | 广东欧珀移动通信有限公司 | Mobile terminal, face unlocking method and related product |
CN110141246A (en) * | 2018-02-10 | 2019-08-20 | 上海聚虹光电科技有限公司 | Biopsy method based on colour of skin variation |
CN108470169A (en) * | 2018-05-23 | 2018-08-31 | 国政通科技股份有限公司 | Face identification system and method |
CN111507944A (en) * | 2020-03-31 | 2020-08-07 | 北京百度网讯科技有限公司 | Skin smoothness determination method and device and electronic equipment |
CN111507944B (en) * | 2020-03-31 | 2023-07-04 | 北京百度网讯科技有限公司 | Determination method and device for skin smoothness and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10699103B2 (en) | Living body detecting method and apparatus, device and storage medium | |
CN107563289A (en) | A kind of method, apparatus of In vivo detection, equipment and computer-readable storage medium | |
CN109711243B (en) | Static three-dimensional face in-vivo detection method based on deep learning | |
CN107545241B (en) | Neural network model training and living body detection method, device and storage medium | |
JP7040457B2 (en) | Image processing device, image processing method, face recognition system and program | |
CN107679860A (en) | A kind of method, apparatus of user authentication, equipment and computer-readable storage medium | |
CN107563283B (en) | Method, device, equipment and storage medium for generating attack sample | |
CN107609462A (en) | Measurement information generation to be checked and biopsy method, device, equipment and storage medium | |
CN103577801B (en) | Quality metrics method and system for biometric authentication | |
CN109670487A (en) | A kind of face identification method, device and electronic equipment | |
CN107609463B (en) | Living body detection method, living body detection device, living body detection equipment and storage medium | |
CN110175527A (en) | Pedestrian recognition methods and device, computer equipment and readable medium again | |
US20210027080A1 (en) | Spoof detection by generating 3d point clouds from captured image frames | |
EP3008704B1 (en) | Method of control of persons and application to the inspection of persons | |
CN107767137A (en) | A kind of information processing method, device and terminal | |
KR102257897B1 (en) | Apparatus and method for liveness test,and apparatus and method for image processing | |
JP2020525964A (en) | Face biometrics card emulation for in-store payment authorization | |
CN109766755A (en) | Face identification method and Related product | |
CN111104833A (en) | Method and apparatus for in vivo examination, storage medium, and electronic device | |
CN111652087A (en) | Car checking method and device, electronic equipment and storage medium | |
JP2020526835A (en) | Devices and methods that dynamically identify a user's account for posting images | |
CN106357411A (en) | Identity verification method and device | |
CN107736874A (en) | A kind of method, apparatus of In vivo detection, equipment and computer-readable storage medium | |
CN111738199B (en) | Image information verification method, device, computing device and medium | |
JP6311237B2 (en) | Collation device and collation method, collation system, and computer program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||