CN110163078A - Liveness detection method and device, and service system applying the liveness detection method - Google Patents
Liveness detection method and device, and service system applying the liveness detection method
- Publication number
- CN110163078A CN110163078A CN201910217452.8A CN201910217452A CN110163078A CN 110163078 A CN110163078 A CN 110163078A CN 201910217452 A CN201910217452 A CN 201910217452A CN 110163078 A CN110163078 A CN 110163078A
- Authority
- CN
- China
- Prior art keywords
- biological characteristic
- image
- characteristic region
- visible images
- described image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C9/00—Individual registration on entry or exit
- G07C9/30—Individual registration on entry or exit not involving the use of a pass
- G07C9/32—Individual registration on entry or exit not involving the use of a pass in combination with an identity check
- G07C9/37—Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C9/00—Individual registration on entry or exit
- G07C9/30—Individual registration on entry or exit not involving the use of a pass
- G07C9/38—Individual registration on entry or exit not involving the use of a pass with central registration
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
An embodiment of the invention discloses a liveness detection method. The method includes: obtaining images of an object under detection captured by a binocular camera, the images including an infrared image and a visible-light image; extracting image physical information from the biometric feature region in the images to obtain the image physical information of that region, the biometric feature region indicating the position of the object's biometric feature in the images; if the image physical information of the biometric feature region indicates that the object under detection is a living body, performing deep semantic feature extraction on the biometric feature region based on a machine learning model to obtain the deep semantic features of that region; and determining whether the object under detection is a living body according to those deep semantic features. The invention solves the prior-art problem of liveness detection defending poorly against spoof attacks.
Description
Technical field
The present invention relates to the field of computing, and in particular to a liveness detection method, a liveness detection device, and a service system applying the liveness detection method.
Background art
With the development of biometric identification technology, biometric recognition is widely used, for example in face-scan payment, face recognition in video surveillance, and fingerprint and iris recognition in access-control authorization. Biometric recognition consequently also faces various threats, such as attackers presenting forged faces, fingerprints or irises.
Existing liveness detection schemes are mainly interaction-based: the object under detection must perform prescribed actions such as nodding, blinking or smiling, and liveness is judged by analysing those actions.
The inventor found that such schemes not only demand much of the object under detection, degrading the user experience, but also defend poorly against spoof attacks.
Summary of the invention
Embodiments of the present invention provide a liveness detection method and device, an access control system, a payment system and a service system applying the liveness detection method, as well as an electronic device and a storage medium, which can solve the problem of liveness detection defending poorly against spoof attacks.
The technical solutions adopted by the invention are as follows:
According to one aspect of the embodiments of the present invention, a liveness detection method comprises: obtaining images of an object under detection captured by a binocular camera, the images including an infrared image and a visible-light image; extracting image physical information from the biometric feature region in the images to obtain the image physical information of that region, the biometric feature region indicating the position of the object's biometric feature in the images; if the image physical information of the biometric feature region indicates that the object under detection is a living body, performing deep semantic feature extraction on the biometric feature region based on a machine learning model to obtain the deep semantic features of that region; and determining whether the object under detection is a living body according to the deep semantic features of the biometric feature region.
In an exemplary embodiment, region-position matching is performed between the biometric feature region in the infrared image and the biometric feature region in the visible-light image, comprising: performing region-position detection on each to obtain a first region position corresponding to the biometric feature region in the infrared image and a second region position corresponding to the biometric feature region in the visible-light image; computing the correlation coefficient between the first region position and the second region position; and, if the correlation coefficient exceeds a set correlation threshold, determining that the region positions match.
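One plausible reading of this matching step is a Pearson correlation between the two face-box coordinate vectors; the (x, y, w, h) box format and the 0.95 threshold below are assumptions, not values given in the patent.

```python
import numpy as np

def region_positions_match(ir_box, vis_box, corr_threshold=0.95):
    """Match face-region positions from the infrared and visible images.

    Each box is (x, y, w, h). The patent only says a correlation
    coefficient is computed between the two region positions; the
    Pearson correlation of the coordinate vectors is one plausible
    reading, and the threshold is an assumed value.
    """
    a = np.asarray(ir_box, dtype=float)
    b = np.asarray(vis_box, dtype=float)
    r = np.corrcoef(a, b)[0, 1]          # Pearson correlation coefficient
    return bool(r > corr_threshold)

# A genuine subject yields near-identical boxes in the two views ...
print(region_positions_match((100, 120, 80, 80), (104, 118, 82, 81)))  # True
# ... while a region seen in only one view correlates poorly.
print(region_positions_match((100, 120, 80, 80), (300, 40, 20, 20)))   # False
```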
In an exemplary embodiment, region-position matching between the biometric feature region in the infrared image and that in the visible-light image comprises: determining a first horizontal distance between the object under detection and the vertical plane in which the binocular camera lies; obtaining, from the infrared camera and the visible-light camera of the binocular camera, a second horizontal distance between the two cameras; deriving, from the first horizontal distance and the second horizontal distance, the horizontal-distance difference between the biometric feature region in the infrared image and that in the visible-light image; and, if the horizontal-distance difference exceeds a set distance threshold, determining that the region positions do not match.
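This check corresponds to the usual stereo-disparity relation: a subject at distance Z from a camera pair with baseline B and focal length f (in pixels) appears shifted by roughly f·B/Z pixels between the two views. The sketch below applies that relation with illustrative numbers; none of the parameter values come from the patent.

```python
def regions_mismatch(subject_distance_m, baseline_m, ir_x, vis_x,
                     focal_px=800.0, tol_px=15.0):
    """Horizontal-distance-difference test between the two face regions.

    A subject at distance Z from the camera plane, seen by two cameras a
    baseline B apart, appears shifted by roughly f * B / Z pixels between
    the views. If the measured shift deviates too far from that, the two
    regions are judged not to match. focal_px and tol_px are illustrative
    assumptions, not values from the patent.
    """
    expected_px = focal_px * baseline_m / subject_distance_m
    measured_px = abs(ir_x - vis_x)
    return abs(measured_px - expected_px) > tol_px

# 0.5 m away with a 2.5 cm baseline -> expected shift of 40 px.
print(regions_mismatch(0.5, 0.025, ir_x=400, vis_x=362))  # False: consistent
print(regions_mismatch(0.5, 0.025, ir_x=400, vis_x=250))  # True: mismatch
```

A screen or photo held at the wrong depth shifts the region by an amount inconsistent with the measured subject distance, which is what this embodiment exploits.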
In an exemplary embodiment, the biometric feature region is a face region. After obtaining the images of the object under detection captured by the binocular camera, the method further comprises: performing face detection on the infrared image and the visible-light image respectively; and, if no face region is detected in the infrared image, and/or no face region is detected in the visible-light image, determining that the object under detection is a spoof.
According to one aspect of the embodiments of the present invention, a liveness detection device comprises: an image acquisition module for obtaining images of an object under detection captured by a binocular camera, the images including an infrared image and a visible-light image; an image physical information extraction module for extracting image physical information from the biometric feature region in the images, the biometric feature region indicating the position of the object's biometric feature in the images; a deep semantic feature extraction module for, if the image physical information of the biometric feature region indicates that the object under detection is a living body, performing deep semantic feature extraction on that region based on a machine learning model to obtain its deep semantic features; and an object liveness determination module for determining whether the object under detection is a living body according to the deep semantic features of the biometric feature region.
According to one aspect of the embodiments of the present invention, an access control system applying the liveness detection method comprises a reception device, an identification electronic device and an access control device. The reception device collects images of an entering or exiting subject using a binocular camera, the images including an infrared image and a visible-light image. The identification electronic device comprises a liveness detection device that separately performs image physical information extraction and deep semantic feature extraction on the biometric feature region in the images and judges, from the extracted information and features, whether the subject is a living body. When the subject is a living body, the identification electronic device performs identity recognition on the subject, so that the access control device configures access permission for subjects whose identity is successfully recognized and the barrier gate of the designated area performs a release action under the configured access permission.
According to one aspect of the embodiments of the present invention, a payment system applying the liveness detection method comprises a payment terminal and a payment electronic device. The payment terminal collects images of a paying user using a binocular camera, the images including an infrared image and a visible-light image. The payment terminal comprises a liveness detection device that separately performs image physical information extraction and deep semantic feature extraction on the biometric feature region in the images and judges, from the extracted information and features, whether the paying user is a living body. When the paying user is a living body, the payment terminal performs identity verification on the user and, when the user passes verification, initiates a payment request to the payment electronic device.
According to one aspect of the embodiments of the present invention, a service system applying the liveness detection method comprises a service terminal and an authentication electronic device. The service terminal collects images of a service person using a binocular camera, the images including an infrared image and a visible-light image. The service terminal comprises a liveness detection device that separately performs image physical information extraction and deep semantic feature extraction on the biometric feature region in the images and judges, from the extracted information and features, whether the service person is a living body. When the service person is a living body, the service terminal requests the authentication electronic device to verify the person's identity and dispatches service-business instructions to service persons who pass the verification.
According to one aspect of the embodiments of the present invention, an electronic device comprises a processor and a memory storing computer-readable instructions that, when executed by the processor, implement the liveness detection method described above.
According to one aspect of the embodiments of the present invention, a storage medium stores a computer program that, when executed by a processor, implements the liveness detection method described above.
In the above technical solutions, liveness detection is performed on the object under detection based on the infrared image captured by a binocular camera, combining image physical information with deep semantic features. This effectively filters out attackers' various types of spoof attacks without relying on the cooperation of the object under detection.
Specifically, images of the object under detection captured by the binocular camera are obtained and image physical information is extracted from the biometric feature region. When the image physical information indicates that the object is a living body, deep semantic features are extracted from the biometric feature region based on a machine learning model, and the liveness determination is made from the extracted features. As a result, the infrared image captured by the binocular camera filters out video-replay spoof attacks, the image physical information filters out black-and-white-photo spoof attacks, and the deep semantic features filter out spoof attacks using colour photos, eye-hole masks and the like, while the object under detection may undergo liveness detection in a free, non-cooperative state. This solves the prior-art problem of liveness detection defending poorly against spoof attacks.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory and do not limit the present invention.
Brief description of the drawings
The accompanying drawings, which are incorporated into and form part of this specification, show embodiments consistent with the present invention and serve, together with the specification, to explain its principles.
Fig. 1 is a schematic diagram of the implementation environment of application scenarios of the liveness detection method.
Fig. 2 is a hardware block diagram of an electronic device according to an exemplary embodiment.
Fig. 3 is a flow chart of a liveness detection method according to an exemplary embodiment.
Fig. 4 is a flow chart of one embodiment of step 330 in the embodiment corresponding to Fig. 3.
Fig. 5 is a flow chart of one embodiment of step 333 in the embodiment corresponding to Fig. 4.
Fig. 6 is a schematic diagram of the difference between the colour histogram of a colour image and that of a black-and-white image, involved in the embodiment corresponding to Fig. 5.
Fig. 7 is a flow chart of another embodiment of step 333 in the embodiment corresponding to Fig. 4.
Fig. 8 is a flow chart of one embodiment of step 3332 in the embodiment corresponding to Fig. 7.
Fig. 9 is a schematic diagram of the HSV model involved in the embodiment corresponding to Fig. 8.
Fig. 10 is a flow chart of another embodiment of step 3332 in the embodiment corresponding to Fig. 7.
Figure 11 is a flow chart of one embodiment of step 350 in the embodiment corresponding to Fig. 3.
Figure 12 is a flow chart of one embodiment of step 370 in the embodiment corresponding to Fig. 3.
Figure 13 is a flow chart of one embodiment of step 320 according to an exemplary embodiment.
Figure 14 is a flow chart of another embodiment of step 320 according to an exemplary embodiment.
Figure 15 is a schematic diagram of the horizontal-distance difference involved in the embodiment corresponding to Figure 14.
Figure 16 is a schematic diagram of face key points according to an exemplary embodiment.
Figure 17 is a schematic diagram of a specific implementation of the liveness detection method in an application scenario.
Figure 18 is a block diagram of a liveness detection device according to an exemplary embodiment.
Figure 19 is a block diagram of an electronic device according to an exemplary embodiment.
The above drawings show specific embodiments of the present invention that are described in more detail hereinafter. These drawings and their written description are not intended to limit the scope of the inventive concept in any manner, but rather to illustrate the concepts of the invention to those skilled in the art by reference to specific embodiments.
Detailed description of the embodiments
Exemplary embodiments will now be described in detail, with examples illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of devices and methods, consistent with some aspects of the invention, as detailed in the appended claims.
Fig. 1 is a schematic diagram of the implementation environment of application scenarios of the liveness detection method.
As shown in Fig. 1(a), the implementation environment includes a paying user 510, a smartphone 530 and a payment server 550.
If paying user 510 is judged to be a living body by the liveness detection method, the user can perform identity verification through smartphone 530 and, after passing verification, request payment server 550 to complete the payment process for the pending order.
As shown in Fig. 1(b), the implementation environment includes a reception device 610, an identification server 630 and an access control device 650.
If entering/exiting subject 670 is judged to be a living body by the liveness detection method, then after reception device 610 collects images of the subject, identification server 630 performs identity recognition; once the identity recognition of subject 670 is complete, the barrier gate of the relevant area, controlled under the access permission configured by access control device 650, performs a release action.
As shown in Fig. 1(c), the implementation environment includes a service person 710, a service terminal 730 and an authentication server 750.
If service person 710 is judged to be a living body by the liveness detection method, images are collected through service terminal 730 and identity verification is performed by authentication server 750 based on those images; after verification passes, service terminal 730 dispatches service-business instructions so that the related service can be fulfilled.
In the above three application scenarios, only objects that pass liveness detection, such as paying user 510, entering/exiting subject 670 and service person 710, proceed to subsequent identity verification or identity recognition. This effectively relieves the processing and traffic pressure on identity verification and identity recognition, so that the various verification and recognition tasks are better completed.
Referring to Fig. 2, Fig. 2 is a hardware block diagram of an electronic device according to an exemplary embodiment. Such an electronic device is suitable for the smartphone 530, identification server 630 and authentication server 750 of the implementation environment shown in Fig. 1.
It should be noted that such an electronic device is merely an example adapted to the present invention and must not be taken as imposing any restriction on the scope of use of the invention. Nor can such an electronic device be construed as needing to rely on, or necessarily having, one or more components of the exemplary electronic device 200 shown in Fig. 2.
The hardware configuration of electronic device 200 may vary greatly with configuration or performance. As shown in Fig. 2, electronic device 200 includes: a power supply 210, an interface 230, at least one memory 250, and at least one central processing unit (CPU) 270.
Specifically, power supply 210 provides the operating voltage for each hardware device on electronic device 200.
Interface 230 includes at least one wired or wireless network interface for interacting with external devices.
Of course, in other examples to which the present invention is adapted, interface 230 may further include at least one serial-parallel conversion interface 233, at least one input/output interface 235, at least one USB interface 237, and so on, as shown in Fig. 2; no specific limitation is made here.
Memory 250 serves as a carrier for resource storage and may be a read-only memory, a random-access memory, a magnetic disk, an optical disc or the like. The resources stored on it include operating system 251, application programs 253 and data 255, and storage may be transient or persistent.
Operating system 251 manages and controls the hardware devices and application programs 253 on electronic device 200, enabling central processing unit 270 to operate on and process the mass data 255 in memory 250; it may be Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
Application program 253 is a computer program that performs at least one particular job on top of operating system 251 and may include at least one module (not shown in Fig. 2), each of which may contain a series of computer-readable instructions for electronic device 200. For example, the liveness detection device may be regarded as an application program 253 deployed on the electronic device.
Data 255 may be photos, pictures and the like stored on disk, held in memory 250.
Central processing unit 270 may include one or more processors and is arranged to communicate with memory 250 through at least one communication bus in order to read the computer-readable instructions stored in memory 250 and thereby operate on and process the mass data 255 there. For example, the liveness detection method is carried out by central processing unit 270 reading a series of computer-readable instructions stored in memory 250.
It will be appreciated that the structure shown in Fig. 2 is merely illustrative; electronic device 200 may include more or fewer components than shown in Fig. 2, or components different from those shown in Fig. 2. The components shown in Fig. 2 may be implemented in hardware, software or a combination thereof.
Referring to Fig. 3, in an exemplary embodiment a liveness detection method is applied to an electronic device whose hardware configuration may be as shown in Fig. 2.
The liveness detection method may be executed by the electronic device and may include the following steps:
Step 310: obtain images of the object under detection captured by a binocular camera, the images including an infrared image and a visible-light image.
First, the object under detection may be a paying user with a pending order, a bank-card user depositing or withdrawing money, a subject waiting to pass through an access gate, or a service person about to take a service assignment; this embodiment does not specifically limit the object under detection.
Correspondingly, different objects under detection correspond to different application scenarios: the paying user of a pending order corresponds to an order-payment scenario, a bank-card user depositing or withdrawing money corresponds to a bank-card scenario, a subject waiting to pass through an access gate corresponds to an access-control scenario, and a service person about to take a service assignment corresponds to a commuting-service scenario.
It will be appreciated that spoof attacks by attackers may exist in any of the above scenarios; for example, a criminal might substitute for a service person and use a spoof attack to pass identity verification in order to carry passengers. The liveness detection method provided by this embodiment can therefore suit different application scenarios according to the object under detection.
Second, the images of the object under detection may be captured and collected in real time by the binocular camera, or may be images of the object pre-stored in the electronic device, i.e. read from the electronic device's buffer where images captured by the binocular camera over a historical period were stored; this embodiment does not limit this either.
It is further noted that the images may be a video segment or several photos; subsequent liveness detection is accordingly carried out in units of image frames.
Finally, the binocular camera includes an infrared camera for generating the infrared image and a visible-light camera for generating the visible-light image.
The binocular camera may be installed on a video camera, a video recorder or other electronic device with an image-capture function, for example a smartphone.
It will be appreciated that a video-replay spoof attack generally requires the attacker to play the video on the screen of an electronic device. When the infrared camera projects infrared light onto that screen, reflection occurs, so the infrared image generated fails to contain a biometric feature region.
This facilitates the liveness determination of the object under detection based on the infrared image captured by the binocular camera, i.e. filtering out video-replay spoof attacks.
Step 330: extract image physical information from the biometric feature region in the images to obtain the image physical information of the biometric feature region.
First, the biometric feature of the object under detection may be, for example, a face, eyes, mouth, hand, foot, fingerprint or iris. Correspondingly, the position of the object's biometric feature in the image constitutes the biometric feature region of the image; in other words, the biometric feature region indicates the position of the object's biometric feature in the image.
Second, image physical information reflects the texture and colour of the image, including but not limited to the image's texture information, colour information and the like.
It should be noted that, regarding texture, the texture details that a live body's biometric feature presents in an image differ considerably from those presented by a spoof; regarding colour, visible-light images captured from a live body are usually colour images.
Therefore, once the image physical information of the biometric feature region has been extracted, liveness detection can be performed on the object under detection against these differences between live bodies and spoofs.
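As one concrete instance of such a colour-based physical check, a black-and-white photo attack can be flagged by measuring the saturation of the visible-light face region: a nearly saturation-free patch behaves like a grayscale image. The thresholds below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def looks_grayscale(rgb_patch, sat_threshold=0.08, frac=0.9):
    """Flag a visible-light face region that is effectively black-and-white.

    A live face seen by the visible-light camera is normally in colour,
    while a black-and-white photo yields near-zero saturation almost
    everywhere. Both thresholds are illustrative assumptions.
    """
    rgb = np.asarray(rgb_patch, dtype=float) / 255.0
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    sat = (mx - mn) / np.maximum(mx, 1e-6)   # HSV-style saturation per pixel
    return bool(np.mean(sat < sat_threshold) > frac)

gray_patch = np.full((8, 8, 3), 128, dtype=np.uint8)   # R = G = B everywhere
print(looks_grayscale(gray_patch))    # True: behaves like a B/W photo
color_patch = np.zeros((8, 8, 3), dtype=np.uint8)
color_patch[..., 0] = 200                              # strongly red patch
print(looks_grayscale(color_patch))   # False: clearly a colour image
```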
If the image physical information of the biometric feature region in the images indicates that the object under detection is a living body, execution jumps to step 350 and liveness detection of the object continues.
Conversely, if the image physical information of the biometric feature region indicates that the object under detection is a spoof, liveness detection of the object stops, which improves the efficiency of liveness detection.
Step 350: based on a machine learning model, perform deep semantic feature extraction on the biometric feature region in the images to obtain the deep semantic features of the biometric feature region.
The texture and colour reflected by image physical information not only change considerably with shooting angle, introducing errors into the liveness determination, but also have limited ability to defend against spoof attacks, being suitable only for relatively simple photo attacks. For this reason, this embodiment extracts deep semantic features of the biometric feature region based on a machine learning model, improving the defence of liveness detection against spoof attacks while adapting to the shooting angle.
The machine learning model is trained on a large number of positive samples and negative samples so as to perform the liveness judgment of the object to be detected.
In order to allow liveness detection while the object to be detected is in a non-cooperative free state (for example, nodding, turning the head, or shaking the head), the positive samples and negative samples are images captured by the binocular camera from different angles of living bodies and prostheses, respectively.
Through model training, with the positive samples and negative samples as training inputs, and with the living body corresponding to the positive samples and the prosthesis corresponding to the negative samples as training ground truth, the machine learning model that performs the liveness judgment of the object to be detected is constructed.
Specifically, model training iteratively optimizes the parameters of a specified mathematical model using the positive samples and negative samples, so that the specified algorithm function constructed from those parameters satisfies a convergence condition.
The specified mathematical model includes, but is not limited to: logistic regression, support vector machine, random forest, neural network, and the like.
The specified algorithm function includes, but is not limited to: expectation-maximization function, loss function, and the like.
For example, the parameters of the specified mathematical model are randomly initialized, and the loss value of the loss function constructed from the randomly initialized parameters is computed on the current sample.
If the loss value has not reached its minimum, the parameters of the specified mathematical model are updated, and the loss value of the loss function constructed from the updated parameters is computed on the next sample.
This iteration loops until the loss value reaches its minimum, at which point the loss function is considered to have converged. If the specified mathematical model has also converged and meets the preset accuracy requirement, the iteration stops.
Otherwise, the parameters of the specified mathematical model continue to be updated iteratively, and the loss values of the loss functions constructed from the updated parameters are computed on the remaining samples, until the loss function converges.
It is worth noting that if the number of iterations reaches an iteration threshold before the loss function converges, the iteration also stops, thereby guaranteeing the efficiency of model training.
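As an illustrative sketch of this iterate-until-convergence flow (not the patent's actual training code), the loop below fits a logistic-regression model, one of the specified mathematical models named above, by per-sample gradient steps, stopping when the loss value reaches a preset minimum or the iteration threshold is hit. The toy samples, learning rate, and thresholds are all hypothetical.

```python
import math
import random

# Hypothetical training set: one feature per sample, label 1 = living body
# (positive sample), label 0 = prosthesis (negative sample).
samples = [(0.9, 1), (0.8, 1), (0.7, 1), (0.2, 0), (0.1, 0), (0.3, 0)]

def loss_and_grad(w, b, x, y):
    """Logistic loss for one sample and its gradient with respect to (w, b)."""
    p = 1.0 / (1.0 + math.exp(-(w * x + b)))
    eps = 1e-12
    loss = -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))
    return loss, (p - y) * x, (p - y)

def train(samples, lr=0.5, loss_min=0.05, max_iters=10000):
    random.seed(0)
    w, b = random.random(), random.random()      # random initialization
    for it in range(max_iters):                  # iteration threshold
        avg = sum(loss_and_grad(w, b, x, y)[0] for x, y in samples) / len(samples)
        if avg < loss_min:                       # loss reached its minimum:
            break                                # the loss function converged
        x, y = samples[it % len(samples)]        # otherwise take the next sample
        _, gw, gb = loss_and_grad(w, b, x, y)
        w, b = w - lr * gw, b - lr * gb          # and update the parameters
    return w, b

w, b = train(samples)
predict = lambda x: 1 if w * x + b > 0 else 0    # 1 = living body
```

On separable toy data like this, the loop normally stops on the convergence check rather than the iteration threshold.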
When the specified mathematical model converges and meets the preset accuracy requirement, model training is complete, and the machine learning model that has completed model training is capable of performing deep semantic feature extraction on the biological characteristic region in an image.
Then, based on the machine learning model, deep semantic features can be extracted from the biological characteristic region in the image and used to perform the liveness judgment of the object to be detected.
Optionally, the machine learning model includes, but is not limited to: a convolutional neural network model, a deep neural network model, a residual neural network model, and the like.
The deep semantic features include the color features and texture features of the image; compared with the color information and texture information of the image physical information, they better distinguish a living body from a prosthesis, which helps improve the defense of liveness detection against prosthesis attacks.
Step 370: the liveness judgment of the object to be detected is performed according to the deep semantic features of the biological characteristic region in the image.
Through the process described above, a variety of different types of prosthesis attacks, such as video playback, black-and-white photographs, color photographs, and masks with eye holes, can be effectively resisted, while the object to be detected is allowed to undergo liveness detection in a non-cooperative free state. The accuracy of liveness detection is improved along with the user experience, fully guaranteeing the security of liveness detection.
Referring to Fig. 4, in one exemplary embodiment, step 330 may include the following steps:
Step 331: according to the acquired image, the visible-light image containing the biological characteristic region is obtained.
It should be appreciated that the infrared image captured from a living body by the infrared camera of the binocular camera is essentially a grayscale image, while the visible-light image captured from a living body by the visible-light camera of the binocular camera is a color image.
Therefore, in order to filter out prosthesis attacks such as black-and-white photographs, the liveness judgment of the object to be detected is based on the visible-light image.
Accordingly, after the infrared image and the visible-light image of the object to be detected captured by the binocular camera are acquired, the visible-light image containing the biological characteristic region must first be obtained, so that the subsequent liveness judgment of the object to be detected can be based on whether that visible-light image is a color image.
Step 333: the image physical information of the biological characteristic region in the visible-light image is extracted from the biological characteristic region in the visible-light image.
If the image physical information indicates that the visible-light image is a color image, the object to be detected can be determined to be a living body; conversely, if the image physical information indicates that the visible-light image is a black-and-white image, the object to be detected is determined to be a prosthesis.
Optionally, the image physical information includes, but is not limited to: color information defined by a color histogram, and texture information defined by an LBP (Local Binary Patterns) / LPQ (Local Phase Quantization) histogram.
The extraction of the image physical information is described below.
Referring to Fig. 5, in one exemplary embodiment, the image physical information is color information.
Correspondingly, step 333 may include the following steps:
Step 3331: based on the biological characteristic region in the visible-light image, the color histogram of the biological characteristic region in the visible-light image is computed.
Step 3333: the computed color histogram is used as the color information of the biological characteristic region in the visible-light image.
As shown in Fig. 6, the color histogram (a) of a color image and the color histogram (b) of a black-and-white image show a clear distributional difference. The liveness judgment of the object to be detected can therefore be performed based on the color histogram of the biological characteristic region in the visible-light image, filtering out prosthesis attacks using black-and-white photographs.
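As a minimal sketch of how the color histogram separates the two cases (helper names and sample pixels are hypothetical, assuming 8-bit RGB pixels): in a black-and-white photograph the three channel histograms coincide, while a living body's color capture spreads them apart, mirroring the difference of Fig. 6.

```python
def channel_histograms(pixels, bins=8):
    """Per-channel color histogram of a list of (r, g, b) pixels (0-255)."""
    hists = [[0] * bins for _ in range(3)]
    for p in pixels:
        for c in range(3):
            hists[c][p[c] * bins // 256] += 1
    return hists

def looks_black_and_white(pixels):
    """In a grayscale capture r == g == b, so the three histograms match."""
    r, g, b = channel_histograms(pixels)
    return r == g == b

# Hypothetical biological characteristic regions:
gray_region = [(v, v, v) for v in (10, 80, 80, 200)]         # black-and-white photo
color_region = [(220, 40, 30), (210, 50, 35), (90, 60, 50)]  # living-body skin tones
```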
Under the above embodiment, the extraction of color information is realized, so that the liveness judgment of the object to be detected based on color information becomes possible.
Referring to Fig. 7, in a further exemplary embodiment, the image physical information is texture information.
Correspondingly, step 333 may include the following steps:
Step 3332: based on the biological characteristic region in the visible-light image, a color space corresponding to the biological characteristic region in the visible-light image is created.
The color space corresponding to the biological characteristic region in the visible-light image essentially describes, in a mathematical way, the set of colors of the biological characteristic region in the visible-light image.
Optionally, the color space can be constructed from HSV parameters or from YCbCr parameters. The HSV parameters include hue (H), saturation (S), and value (V); the YCbCr parameters include the luma of the color (Y), the blue-difference chroma offset (Cb), and the red-difference chroma offset (Cr).
Step 3334: for the color space, local binary pattern features are extracted in the spatial domain, and/or local phase quantization features are extracted in the frequency domain.
The local binary patterns (LBP) feature is an accurate description, based on the image pixels themselves, of the texture details of the biological characteristic region in the visible-light image, reflecting the grayscale variation of the visible-light image.
The local phase quantization (LPQ) feature is an accurate description, based on the transform coefficients of the image in the transform domain, of the texture details of the biological characteristic region in the visible-light image, reflecting the gradient distribution of the visible-light image.
In other words, both the local binary patterns feature and the local phase quantization feature essentially analyze the texture details of the biological characteristic region in the visible-light image, and thereby define the texture information of the biological characteristic region in the visible-light image.
Step 3336: an LBP/LPQ histogram is generated from the local binary patterns feature and/or the local phase quantization feature, as the texture information of the biological characteristic region in the visible-light image.
The LBP/LPQ histogram is thus generated from the local binary patterns feature and/or the local phase quantization feature. Through the combination of LBP and LPQ, which complement each other, the texture details of the biological characteristic region in the visible-light image can be described more accurately, fully guaranteeing the accuracy of liveness detection.
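A minimal spatial-domain LBP sketch follows, using the basic 8-neighbour operator over a grayscale patch. The patch values and helper names are hypothetical, and a practical system would compute this over the color-space channels created in step 3332 (LPQ, which works on frequency-domain coefficients, is omitted here).

```python
def lbp_histogram(img):
    """Basic 8-neighbour local binary patterns over a 2-D grayscale list,
    accumulated into a 256-bin histogram (the spatial-domain texture code)."""
    h, w = len(img), len(img[0])
    # neighbour offsets, clockwise from the top-left pixel
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = [0] * 256
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= center:   # threshold against the center
                    code |= 1 << bit
            hist[code] += 1
    return hist

# A flat patch (no texture) codes every interior pixel as 255:
flat_patch = [[7] * 4 for _ in range(4)]
# A textured patch spreads its codes across several bins:
textured_patch = [[(3 * x * y + x) % 16 for x in range(4)] for y in range(4)]
```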
Under the above embodiments, the extraction of texture information is realized, so that the liveness judgment of the object to be detected based on texture information becomes possible.
Further, the creation of the color space is described below.
Referring to Fig. 8, in one exemplary embodiment, step 3332 may include the following steps:
Step 3332a: based on the biological characteristic region in the visible-light image, the HSV parameters corresponding to the biological characteristic region in the visible-light image are obtained; the HSV parameters include hue (H), saturation (S), and value (V).
Step 3332c: an HSV model is constructed from the obtained HSV parameters as the color space corresponding to the biological characteristic region in the visible-light image.
As shown in Fig. 9, the HSV model is essentially a hexagonal pyramid. Correspondingly, constructing the HSV model includes: constructing the boundary of the hexagonal pyramid from hue (H), the horizontal axis of the hexagonal pyramid from saturation (S), and the vertical axis of the hexagonal pyramid from value (V).
The construction of the color space based on HSV parameters is thus completed.
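Obtaining the HSV parameters from the region's RGB pixels can be sketched with the standard conversion in Python's stdlib `colorsys`; the sample pixels below are hypothetical.

```python
import colorsys

def hsv_parameters(pixels):
    """Hue (H), saturation (S) and value (V) for each (r, g, b) pixel of the
    biological characteristic region, with r/g/b in 0-255."""
    return [colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            for r, g, b in pixels]

# Hypothetical region pixels: a saturated red and a neutral gray.
params = hsv_parameters([(255, 0, 0), (128, 128, 128)])
```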
Referring to Fig. 10, in a further exemplary embodiment, step 3332 may include the following steps:
Step 3332b: based on the biological characteristic region in the visible-light image, the YCbCr parameters corresponding to the biological characteristic region in the visible-light image are obtained; the YCbCr parameters include the luma of the color (Y), the blue-difference chroma offset (Cb), and the red-difference chroma offset (Cr).
Step 3332d: the color space corresponding to the biological characteristic region in the visible-light image is constructed from the obtained YCbCr parameters.
Specifically, the obtained YCbCr parameters are converted to RGB parameters, and an RGB image, that is, the color space of the biological characteristic region in the visible-light image, is constructed from the RGB color channels the RGB parameters represent. The RGB color channels include the red channel R representing red, the green channel G representing green, and the blue channel B representing blue.
The construction of the color space based on YCbCr parameters is thus completed.
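The YCbCr-to-RGB conversion can be sketched as below. The patent does not fix the conversion matrix, so this assumes the common full-range ITU-R BT.601 coefficients.

```python
def ycbcr_to_rgb(y, cb, cr):
    """Full-range ITU-R BT.601 conversion from (Y, Cb, Cr) to (R, G, B),
    all components in 0-255. The exact matrix is an assumption here."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    clamp = lambda v: max(0, min(255, round(v)))
    return clamp(r), clamp(g), clamp(b)
```

Neutral chroma (Cb = Cr = 128) maps Y straight onto the gray axis, which is a quick sanity check on the coefficients.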
Through the cooperation of the above embodiments, the creation of the color space is realized, so that texture information extraction based on the color space becomes possible.
In one exemplary embodiment, after step 330, the method described above may further include the following step:
The liveness judgment of the object to be detected is performed according to the image physical information of the biological characteristic region in the visible-light image.
Specifically, the image physical information of the biological characteristic region in the visible-light image is input to a support vector machine classifier, which performs color-class prediction on the visible-light image to obtain the color class of the visible-light image.
First, the support vector machine classifier is generated by training on the color classes of a large number of learning samples, where the learning samples include visible-light images that are black-and-white images and visible-light images that are color images.
Second, the color classes include a color-image class and a black-and-white-image class.
Then, if the predicted color class of the visible-light image is the black-and-white-image class, that is, the color class of the visible-light image indicates that the visible-light image is a black-and-white image, the object to be detected is determined to be a prosthesis.
Conversely, if the predicted color class of the visible-light image is the color-image class, that is, the color class of the visible-light image indicates that the visible-light image is a color image, the object to be detected is determined to be a living body.
Under the above embodiment, the liveness judgment of the object to be detected based on the visible-light image is realized, that is, prosthesis attacks using black-and-white photographs are filtered out.
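As a sketch of this classifier stage, the following trains a tiny one-feature linear SVM by hinge-loss subgradient descent on a hypothetical scalar feature (how far apart the three channel histograms are). A real system would train a library SVM on the full histogram; every name, value, and hyperparameter here is illustrative.

```python
def channel_divergence(hist_r, hist_g, hist_b):
    """Scalar feature for the classifier: how far apart the three channel
    histograms are (near zero for a black-and-white image, where r == g == b)."""
    return float(sum(abs(r - g) + abs(g - b)
                     for r, g, b in zip(hist_r, hist_g, hist_b)))

def train_linear_svm(feats, labels, lr=0.01, lam=0.01, epochs=200):
    """Hinge-loss subgradient descent for a one-feature linear SVM;
    labels are +1 (color-image class) or -1 (black-and-white-image class)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(feats, labels):
            if y * (w * x + b) < 1:        # sample violates the margin
                w += lr * (y * x - lam * w)
                b += lr * y
            else:
                w -= lr * lam * w          # only regularize
    return w, b

# Hypothetical training features: color regions diverge, black-and-white do not.
feats = [40.0, 35.0, 50.0, 0.0, 1.0, 2.0]
labels = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(feats, labels)
color_class = lambda x: 1 if w * x + b >= 0 else -1
```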
In one exemplary embodiment, the machine learning model is a deep neural network model, and the deep neural network model includes an input layer, convolutional layers, connection layers, and an output layer. The convolutional layers are used for feature extraction, and the connection layers are used for feature fusion.
Optionally, the deep neural network model may also include activation layers and pooling layers. The activation layers improve the convergence speed of the deep neural network model, and the pooling layers reduce the complexity of feature connections.
Optionally, a convolutional layer is configured with multiple channels, each channel receiving an input from the same image that carries different channel information, thereby improving the precision of feature extraction.
For example, suppose the image is a color image and the convolutional layer is configured with three channels A1, A2, and A3. The color image can then be input to the three channels of the convolutional layer by RGB color channel: the part of the color image corresponding to the red channel R is input to channel A1, the part corresponding to the green channel G to channel A2, and the part corresponding to the blue channel B to channel A3.
As shown in Fig. 11, step 350 may include the following steps:
Step 351: the image is input to the convolutional layers through the input layer of the deep neural network model.
Step 353: feature extraction is performed by the convolutional layers to obtain the shallow semantic features of the biological characteristic region in the image, which are input to the connection layers.
Step 355: feature fusion is performed by the connection layers to obtain the deep semantic features of the biological characteristic region in the image, which are input to the output layer.
The shallow semantic features include the shape features and spatial relationship features of the image, while the deep semantic features include the color features and texture features of the image.
That is, the feature extraction of the convolutional layers yields shallow semantic features, and the feature fusion of the connection layers then yields deep semantic features. This means that, in the deep neural network model, features of different resolutions and different scales are interrelated rather than isolated, which effectively improves the accuracy of liveness detection.
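A minimal sketch of this forward pass follows, with a tiny hypothetical 4x4 input region and hand-picked weights (none of which come from the patent): a convolution with ReLU produces the shallow feature map, a fully-connected (connection) layer fuses it, and a softmax output layer yields the two class probabilities.

```python
import math

def conv2d(img, kernel):
    """Valid 2-D convolution with ReLU: the convolutional layer's shallow feature map."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(len(img) - kh + 1):
        row = []
        for x in range(len(img[0]) - kw + 1):
            s = sum(img[y + i][x + j] * kernel[i][j]
                    for i in range(kh) for j in range(kw))
            row.append(max(0.0, s))        # ReLU activation
        out.append(row)
    return out

def dense(vec, weights, bias):
    """Fully-connected (connection) layer: fuses the shallow features."""
    return [sum(w * v for w, v in zip(ws, vec)) + b
            for ws, b in zip(weights, bias)]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    return [e / sum(exps) for e in exps]

# Hypothetical 4x4 biological characteristic region and weights.
img = [[0.1, 0.5, 0.2, 0.0],
       [0.4, 0.9, 0.7, 0.1],
       [0.3, 0.8, 0.6, 0.2],
       [0.0, 0.2, 0.1, 0.0]]
edge_kernel = [[1.0, -1.0], [-1.0, 1.0]]
shallow = conv2d(img, edge_kernel)               # 3x3 shallow feature map
flat = [v for row in shallow for v in row]
weights = [[0.5] * 9, [-0.5] * 9]                # two output classes
p_live, p_prosthesis = softmax(dense(flat, weights, [0.0, 0.0]))
```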
Referring to Fig. 12, in one exemplary embodiment, step 370 may include the following steps:
Step 371: classification prediction is performed on the image using the activation-function classifier in the output layer.
Step 373: whether the object to be detected is a living body is judged according to the predicted class of the image.
The activation-function classifier, that is, the softmax classifier, computes the probabilities that the image belongs to the different classes. In this embodiment, the classes of the image include: a living-body class and a prosthesis class.
For example, for an image, the activation-function classifier in the output layer computes the probability P1 that the image belongs to the living-body class and the probability P2 that the image belongs to the prosthesis class.
If P1 > P2, the image belongs to the living-body class, and the object to be detected is determined to be a living body.
Otherwise, if P1 < P2, the image belongs to the prosthesis class, and the object to be detected is determined to be a prosthesis.
Under the above embodiment, the liveness judgment of the object to be detected based on deep semantic features is realized, that is, prosthesis attacks using color photographs and masks with eye holes are filtered out, and liveness detection does not depend on the cooperation of the object to be detected.
In one exemplary embodiment, after step 310, the method described above may further include the following step:
Step 320: region-position matching is performed between the biological characteristic region in the infrared image and the biological characteristic region in the visible-light image.
It can be appreciated that, for the infrared camera and the visible-light camera of the binocular camera, if the two cameras shoot the free state (for example, nodding) of the same object to be detected at the same moment, then there is a strong correlation between the region position of the biological characteristic region in the resulting infrared image and the region position of the biological characteristic region in the resulting visible-light image.
Therefore, in this embodiment, region-position matching judges whether such a strong correlation exists between the two, and thereby whether the object to be detected is a living body.
If the region positions do not match, that is, the correlation between the region position of the biological characteristic region in the infrared image and the region position of the biological characteristic region in the visible-light image is weak, what the infrared camera and the visible-light camera shot is not the same individual, and the object to be detected is determined to be a prosthesis.
Conversely, if the region positions match, that is, that correlation is strong, what the infrared camera and the visible-light camera shot belongs to the same individual, and the object to be detected is determined to be a living body.
It is worth noting that step 320 can be placed before either step 330 or step 350; this embodiment imposes no limitation on this.
The region-position matching process is described below.
Referring to Fig. 13, in one exemplary embodiment, step 320 may include the following steps:
Step 321: region-position detection is performed on the biological characteristic region in the infrared image and on the biological characteristic region in the visible-light image, respectively, to obtain a first region position corresponding to the biological characteristic region in the infrared image and a second region position corresponding to the biological characteristic region in the visible-light image.
The region-position detection can be implemented by projective-geometry methods in computer vision.
Step 323: the correlation coefficient of the first region position and the second region position is computed.
If the correlation coefficient exceeds a set correlation threshold, the region positions are determined to match, the object to be detected is in turn determined to be a living body, and the subsequent liveness-detection steps can continue to be executed.
Conversely, if the correlation coefficient is below the set correlation threshold, the region positions are determined not to match, the object to be detected is in turn determined to be a prosthesis, and the execution of the subsequent liveness-detection steps stops, thereby improving the efficiency of liveness detection.
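One plausible realization of the correlation coefficient (the patent does not name one) is the Pearson correlation over the flattened landmark coordinates of the two region positions; the coordinates and threshold below are hypothetical.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length coordinate lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def positions_match(first_pos, second_pos, threshold=0.9):
    """Region positions match when the correlation exceeds the set threshold."""
    return pearson(first_pos, second_pos) > threshold

# Hypothetical flattened landmark coordinates of the two regions:
infrared_pos = [30, 40, 55, 42, 44, 60]
visible_pos  = [32, 41, 57, 43, 45, 62]    # same face, small camera offset
unrelated    = [5, 90, 12, 70, 33, 8]      # a foreign / replayed region
```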
Referring to Fig. 14, in a further exemplary embodiment, step 320 may include the following steps:
Step 322: the first horizontal distance between the vertical plane of the object to be detected and the vertical plane of the binocular camera is determined.
Step 324: based on the infrared camera and the visible-light camera of the binocular camera, the second horizontal distance between the infrared camera and the visible-light camera is obtained.
Step 326: the horizontal-distance difference between the biological characteristic region in the infrared image and the biological characteristic region in the visible-light image is obtained from the first horizontal distance and the second horizontal distance.
As shown in Fig. 15, A denotes the object to be detected, B1 the infrared camera of the binocular camera, and B2 the visible-light camera of the binocular camera. X1 denotes the vertical plane of the object A to be detected, and X2 the vertical plane of the binocular camera.
Then the horizontal-distance difference D is obtained by the following formula:
D = L / Z,
where D denotes the horizontal-distance difference, that is, the difference between the horizontal coordinates of the infrared image and the visible-light image in the horizontal plane; Z denotes the first horizontal distance; and L denotes the second horizontal distance.
If the horizontal-distance difference D is below a set distance threshold, the region positions are determined to match, the object to be detected is in turn determined to be a living body, and the subsequent liveness-detection steps can continue to be executed.
Conversely, if the horizontal-distance difference D exceeds the set distance threshold, the region positions are determined not to match, the object to be detected is in turn determined to be a prosthesis, and the execution of the subsequent liveness-detection steps stops, thereby improving the efficiency of liveness detection.
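Taking the formula D = L / Z at face value, this check can be sketched as follows. The distances and threshold are hypothetical, and a practical system would express D in the pixel coordinates of the two images rather than as a bare ratio.

```python
def horizontal_distance_difference(l, z):
    """D = L / Z: L is the second horizontal distance (the camera baseline),
    Z the first horizontal distance to the object to be detected."""
    return l / z

def region_positions_match(l, z, distance_threshold=0.01):
    """Per the embodiment above, the match holds while D stays below the set
    distance threshold (all values here are hypothetical)."""
    return horizontal_distance_difference(l, z) < distance_threshold
```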
Through the above process, the liveness judgment of the object to be detected based on region position is realized, effectively filtering out images that belong to prostheses, which helps improve the accuracy of liveness detection.
In one exemplary embodiment, the biological characteristic region is a face region.
Correspondingly, after step 310, the method described above may further include the following step:
Face detection is performed on the infrared image and on the visible-light image, respectively.
As shown in Fig. 16, a face has 68 key points in the image, specifically including: the six key points 43 to 48 of the eyes in the image, the twenty key points 49 to 68 of the mouth in the image, and so on. Each of these key points is uniquely indicated by its coordinates (x, y) in the image.
Based on this, in this embodiment, face detection is implemented by a face key-point model.
The face key-point model essentially constructs an index relationship for the face features in the image, so that the key points of a given face feature can be located and obtained from the image through the constructed index relationship.
Specifically, after the image of the object to be detected, that is, the infrared image or the visible-light image, is input to the face key-point model, the key points of the face features in the image are index-labeled. As shown in Fig. 16, the six key points of the eyes in the image are labeled with indexes 43 to 48, and the twenty key points of the mouth in the image are labeled with indexes 49 to 68.
Meanwhile, the coordinates in the image of the index-labeled key points are stored correspondingly, constructing, for the image corresponding to the face features of the object to be detected, the index relationship between index and coordinates.
Then, based on the index relationship, the coordinates in the image of the key points of the face features of the object to be detected can be obtained by index lookup, which determines the position of the face features of the object to be detected in the image, that is, the face region in the image.
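The index relationship can be sketched as a plain mapping from key-point index to image coordinates; looking up the indexed points then yields the face region as a bounding box. All coordinates below are hypothetical.

```python
# Hypothetical index relationship: key-point index -> (x, y) image coordinates.
index_relationship = {
    43: (60, 50), 44: (66, 48), 45: (72, 48),    # the six eye key points
    46: (78, 50), 47: (72, 54), 48: (66, 54),    # indexed 43 to 48 (Fig. 16)
    49: (58, 80), 52: (72, 74), 55: (86, 80),    # sample mouth key points
}

def face_region(index_relationship):
    """Bounding box of the indexed key points: the position of the face
    features in the image, i.e., the face region."""
    xs = [x for x, _ in index_relationship.values()]
    ys = [y for _, y in index_relationship.values()]
    return (min(xs), min(ys), max(xs), max(ys))

def eye_keypoints(index_relationship):
    """Coordinates of the eye key points, obtained by index lookup."""
    return {i: index_relationship[i] for i in range(43, 49)
            if i in index_relationship}
```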
If it is detected that the infrared image contains no face region, and/or the visible-light image contains no face region, the object to be detected is determined to be a prosthesis and the subsequent liveness detection stops, thereby improving the efficiency of liveness detection.
Conversely, if it is detected that the infrared image contains a face region and the visible-light image contains a face region, the object to be detected is determined to be a living body, and the process can jump to execute step 330.
Based on the above process, the liveness judgment of the object to be detected based on the infrared image captured by the binocular camera is realized, that is, prosthesis attacks using video playback are filtered out.
In addition, face detection based on the face key-point model recognizes the face features of different facial expressions with good stability and accuracy, fully guaranteeing the accuracy of liveness detection.
In one exemplary embodiment, after step 370, the method described above may further include the following step:
If the object to be detected is a living body, a face recognition model is called to perform face recognition on the image of the object to be detected.
The face recognition process is described below with reference to concrete application scenarios.
Fig. 1(a) is a schematic diagram of the implementation environment of an order-payment application scenario. As shown in Fig. 1(a), in this application scenario, the implementation environment includes a paying user 510, a smartphone 530, and a payment server 550.
For an order to be paid, the paying user 510 scans his or her face through the binocular camera configured on the smartphone 530, so that the smartphone 530 obtains an image of the paying user 510 and then performs face recognition on that image using the face recognition model.
The face recognition model extracts the user features of the image in order to compute the similarity between the user features and designated user features; if the similarity is greater than a similarity threshold, the paying user 510 passes identity verification. The designated user features are extracted in advance for the paying user 510 by the smartphone 530 through the face recognition model.
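The patent does not specify the similarity measure; a common choice is cosine similarity between the two feature vectors, sketched here with hypothetical feature values and threshold.

```python
import math

def cosine_similarity(a, b):
    """Similarity between the extracted user features and the designated
    user features, as the cosine of the two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def passes_verification(user_features, designated_features, threshold=0.8):
    """Identity verification succeeds when the similarity is greater than
    the similarity threshold (the threshold value here is hypothetical)."""
    return cosine_similarity(user_features, designated_features) > threshold

# Hypothetical feature vectors extracted by the face recognition model:
designated = [0.2, 0.8, 0.1, 0.5]
same_user  = [0.25, 0.75, 0.12, 0.55]
other_user = [0.9, 0.1, 0.7, 0.0]
```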
After the paying user 510 passes identity verification, the smartphone 530 initiates an order-payment request to the payment server 550 for the order to be paid, thereby completing the payment flow of the order to be paid.
Fig. 1(b) is a schematic diagram of the implementation environment of an access-control application scenario. As shown in Fig. 1(b), this implementation environment includes a reception device 610, a recognition server 630, and an access control device 650.
A binocular camera is installed on the reception device 610 to photograph the face of an entering or leaving person 670, and the obtained image of the entering or leaving person 670 is sent to the recognition server 630 for face recognition. In this application scenario, entering or leaving persons 670 include staff and visitors.
The recognition server 630 extracts the person features of the image through the face recognition model in order to compute the similarity between those person features and multiple designated-person features, obtains the designated-person features with the greatest similarity, and then recognizes the person identity associated with the designated-person features of greatest similarity as the identity of the entering or leaving person 670, thereby completing the identity recognition of the entering or leaving person 670. The designated-person features are extracted in advance for entering or leaving persons 670 by the recognition server 630 through the face recognition model.
Once the identity recognition of the entering or leaving person 670 is completed, the recognition server 630 sends an access-authorization instruction for the entering or leaving person 670 to the access control device 650, so that the access control device 650 configures the corresponding access permission for the entering or leaving person 670 according to the access-authorization instruction, and the access barrier of the designated work area, controlled by that access permission, performs a release action for the entering or leaving person 670.
Of course, in different application scenarios, flexible deployment can be performed according to actual application needs. For example, the recognition server 630 and the access control device 650 can be deployed as the same server, or the reception device 610 and the access control device 650 can be deployed in the same server; this application scenario imposes no limitation on this.
Fig. 1(c) is a schematic diagram of the implementation environment of a ride-service application scenario. As shown in Fig. 1(c), in this application scenario, the implementation environment includes a service person 710, a service terminal 730, and an authentication server 750. In this application scenario, the service person 710 is a passenger-transport driver.
The service terminal 730 installed in the vehicle is equipped with a binocular camera to photograph the service person 710, and the obtained image of the service person 710 is sent to the authentication server 750 for face recognition.
The authentication server 750 extracts the person features of the image through the face recognition model in order to compute the similarity between those person features and designated-person features; if the similarity is greater than a similarity threshold, the service person 710 passes identity verification. The designated-person features are extracted in advance for the service person 710 by the service terminal 730 through the face recognition model.
After the service person 710 passes identity verification, the service terminal 730 can distribute service orders to the service person 710, so that the service person, that is, the passenger-transport driver, can pick up passengers at the destination according to the instructions of the service order.
In the three application scenarios above, the living body detection apparatus can serve as a front-end module for face recognition.
As shown in Fig. 17, by executing steps 801 to 804, the liveness of the object to be detected is determined repeatedly, based in turn on face detection, region matching detection, image physical information detection and deep semantic feature detection.
The living body detection apparatus can thus accurately judge whether the object to be detected is a living body, and thereby defend against various types of prosthesis attacks. This not only fully guarantees the security of identity verification and identity recognition, but also effectively reduces the computing and traffic load of subsequent face recognition, providing better support for various face recognition tasks.
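The four-stage cascade above (face detection, region matching, image physical information, deep semantic features) can be sketched as an early-exit pipeline. Everything below is illustrative: the stage predicates are stubs standing in for the patent's detectors, and the field names and thresholds are invented for the example.

```python
# Illustrative early-exit liveness cascade; the four stage predicates are
# stubs standing in for face detection, region matching, image physical
# information checking and deep-semantic-feature classification.

def liveness_pipeline(ir_image, vis_image, stages):
    """Return True (living body) only if every stage passes; reject early."""
    for check in stages:
        if not check(ir_image, vis_image):
            return False
    return True

stages = [
    # A face must be detected in both the infrared and visible light images.
    lambda ir, vis: ir.get("face") is not None and vis.get("face") is not None,
    # Region positions must roughly match between the two images.
    lambda ir, vis: abs(ir["face"][0] - vis["face"][0]) < 20,
    # Image physical information: the visible light image must be coloured.
    lambda ir, vis: vis.get("is_colour", False),
    # Deep semantic features: a (hypothetical) CNN score must pass.
    lambda ir, vis: vis.get("cnn_score", 0.0) > 0.5,
]

live = liveness_pipeline(
    {"face": (100, 80)},
    {"face": (105, 80), "is_colour": True, "cnn_score": 0.9},
    stages,
)
```

Because cheap checks run first, most prosthesis attacks are rejected before the comparatively expensive deep-semantic stage, which is the load reduction the passage describes.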
The following are apparatus embodiments of the present invention, which can be used to execute the living body detection method according to the present invention. For details not disclosed in the apparatus embodiments, please refer to the method embodiments of the living body detection method according to the present invention.
Referring to Fig. 18, in one exemplary embodiment, a living body detection apparatus 900 includes, but is not limited to: an image acquisition module 910, an image physical information extraction module 930, a deep semantic feature extraction module 950 and an object liveness determination module 970.
The image acquisition module 910 is configured to obtain an image of the object to be detected captured by a binocular camera, the image including an infrared image and a visible light image.
The image physical information extraction module 930 is configured to perform image physical information extraction on a biological feature region in the image, obtaining the image physical information of the biological feature region in the image; the biological feature region indicates the position, in the image, of a biological feature of the object to be detected.
The deep semantic feature extraction module 950 is configured to, if the image physical information of the biological feature region in the image indicates that the object to be detected is a living body, perform deep semantic feature extraction on the biological feature region in the image based on a machine learning model, obtaining the deep semantic feature of the biological feature region in the image.
The object liveness determination module 970 is configured to determine whether the object to be detected is a living body according to the deep semantic feature of the biological feature region in the image.
In one exemplary embodiment, the image physical information extraction module 930 includes, but is not limited to: a visible light image acquiring unit and an image physical information extraction unit.
The visible light image acquiring unit is configured to obtain, from the captured image, the visible light image containing the biological feature region.
The image physical information extraction unit is configured to extract, from the biological feature region in the visible light image, the image physical information of the biological feature region in the visible light image.
In one exemplary embodiment, the image physical information is colour information.
Correspondingly, the image physical information extraction unit includes, but is not limited to: a colour histogram computation subunit and a colour information defining subunit.
The colour histogram computation subunit is configured to calculate, based on the biological feature region in the visible light image, the colour histogram of the biological feature region in the visible light image.
The colour information defining subunit is configured to take the calculated colour histogram as the colour information of the biological feature region in the visible light image.
In one exemplary embodiment, the image physical information is texture information.
Correspondingly, the image physical information extraction unit includes, but is not limited to: a colour space creation subunit, a local feature extraction subunit and a texture information defining subunit.
The colour space creation subunit is configured to create, based on the biological feature region in the visible light image, a colour space corresponding to the biological feature region in the visible light image.
The local feature extraction subunit is configured to extract, for the colour space, local binary pattern features in the spatial domain and/or local phase quantization features in the frequency domain.
The texture information defining subunit is configured to generate an LBP/LPQ histogram according to the local binary pattern features and/or the local phase quantization features, as the texture information of the biological feature region in the visible light image.
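A minimal spatial-domain sketch of the LBP half of this step, assuming a plain 8-neighbour local binary pattern over a single-channel region; the patent does not fix the neighbourhood, radius or binning, and the frequency-domain LPQ transform is omitted here.

```python
import numpy as np

def lbp_histogram(gray):
    """Basic 8-neighbour local binary patterns over a 2-D array, returned
    as a 256-bin normalised histogram (spatial-domain texture feature)."""
    g = gray.astype(int)
    c = g[1:-1, 1:-1]                       # interior pixels (the centres)
    codes = np.zeros_like(c)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        # Neighbour plane aligned with the centre plane.
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(int) << bit   # one bit per neighbour
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()
```

A flat (printed, texture-poor) surface concentrates nearly all mass in a few codes, while real skin spreads across many, which is the texture cue the classifier relies on.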
In one exemplary embodiment, the colour space creation subunit includes, but is not limited to: a first parameter obtaining subunit and a first construction subunit.
The first parameter obtaining subunit is configured to obtain, based on the biological feature region in the visible light image, HSV parameters corresponding to the biological feature region in the visible light image, the HSV parameters comprising hue (H), saturation (S) and value (V).
The first construction subunit is configured to construct an HSV model from the obtained HSV parameters as the colour space corresponding to the biological feature region in the visible light image.
In one exemplary embodiment, the colour space creation subunit includes, but is not limited to: a second parameter obtaining subunit and a second construction subunit.
The second parameter obtaining subunit is configured to obtain, based on the biological feature region in the visible light image, YCbCr parameters corresponding to the biological feature region in the visible light image, the YCbCr parameters comprising luminance (Y), blue-difference chroma offset (Cb) and red-difference chroma offset (Cr).
The second construction subunit is configured to construct, from the obtained YCbCr parameters, the colour space corresponding to the biological feature region in the visible light image.
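The YCbCr construction can be sketched with the standard BT.601 full-range conversion; the coefficients below are the common BT.601 values, which the patent itself does not specify (the HSV model is built analogously from hue, saturation and value).

```python
import numpy as np

def to_ycbcr(rgb):
    """BT.601 full-range RGB -> YCbCr for an H x W x 3 float array in [0, 255]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b                 # luminance
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b      # blue-difference chroma
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b      # red-difference chroma
    return np.stack([y, cb, cr], axis=-1)

# Grey pixels sit exactly at Cb = Cr = 128, so the chroma channels isolate
# the colour cue that the later colour-category classifier exploits.
grey = np.full((2, 2, 3), 200.0)
out = to_ycbcr(grey)
```

Separating luminance from chroma is what makes this space convenient for distinguishing colour captures from black-and-white (recaptured) ones.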
In one exemplary embodiment, the apparatus 900 further includes, but is not limited to: a second object liveness determination module.
The second object liveness determination module is configured to determine whether the object to be detected is a living body according to the image physical information of the biological feature region in the visible light image.
In one exemplary embodiment, the second object liveness determination module includes, but is not limited to: a colour category prediction unit, a first object liveness judging unit and a second object liveness judging unit.
The colour category prediction unit is configured to input the image physical information of the biological feature region in the visible light image into a support vector machine classifier, performing colour category prediction on the visible light image to obtain the colour category of the visible light image.
The first object liveness judging unit is configured to determine that the object to be detected is a prosthesis if the colour category of the visible light image indicates that the visible light image is a black-and-white image.
The second object liveness judging unit is configured to determine that the object to be detected is a living body if the colour category of the visible light image indicates that the visible light image is a colour image.
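As a sketch of the colour-category prediction, the snippet below trains a minimal linear SVM (hinge loss by full-batch sub-gradient descent) on a single hypothetical "chroma spread" feature plus a bias term. The real classifier, its input features and its training data are not specified to this level in the patent.

```python
import numpy as np

def train_linear_svm(X, y, lr=0.1, lam=0.01, iters=2000):
    """Minimal linear SVM: full-batch sub-gradient descent on the
    L2-regularised hinge loss. Labels in {-1, +1}; X carries a bias column."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        active = y * (X @ w) < 1            # samples violating the margin
        grad = lam * w
        if active.any():
            grad = grad - (y[active, None] * X[active]).mean(axis=0)
        w -= lr * grad
    return w

# Hypothetical feature: [mean chroma spread, bias]. Colour images (+1) show
# large chroma spread; black-and-white recaptures (-1) show almost none.
X = np.array([[0.9, 1.0], [0.8, 1.0], [0.05, 1.0], [0.1, 1.0]])
y = np.array([1, 1, -1, -1])
w = train_linear_svm(X, y)
```

At inference time the colour category is simply the sign of `X @ w`; a negative sign (black-and-white) leads directly to the prosthesis verdict described above.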
In one exemplary embodiment, the machine learning model is a deep neural network model comprising an input layer, convolutional layers, a fully connected layer and an output layer.
Correspondingly, the deep semantic feature extraction module 950 includes, but is not limited to: an image input unit, a feature extraction unit and a feature fusion unit.
The image input unit is configured to input the image to the convolutional layers through the input layer of the deep neural network model.
The feature extraction unit is configured to perform feature extraction using the convolutional layers, obtaining shallow semantic features of the biological feature region in the image and inputting them to the fully connected layer.
The feature fusion unit is configured to perform feature fusion using the fully connected layer, obtaining the deep semantic feature of the biological feature region in the image and inputting it to the output layer.
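The convolution-then-fusion structure can be illustrated with a toy numpy forward pass; the kernel and fully connected weights below are random placeholders, not the patent's trained parameters.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Single-channel 'valid' convolution - the convolutional stage that
    yields the shallow semantic features in the structure described above."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def deep_features(img, kernel, fc_weights):
    """Conv -> ReLU -> flatten -> fully connected fusion into a deep
    semantic feature vector (illustrative weights only)."""
    shallow = np.maximum(conv2d_valid(img, kernel), 0)   # shallow features
    return fc_weights @ shallow.ravel()                  # fused deep features

rng = np.random.default_rng(0)
img = rng.random((8, 8))
# 8x8 image, 3x3 kernel -> 6x6 = 36 shallow activations, fused to 4 values.
feats = deep_features(img, np.ones((3, 3)) / 9, rng.random((4, 36)))
```

The output layer's classifier (next) then maps this fused vector to a living-body or prosthesis verdict.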
In one exemplary embodiment, the object liveness determination module 970 includes, but is not limited to: a classification prediction unit and a third object liveness judging unit.
The classification prediction unit is configured to perform classification prediction on the image using an activation function classifier in the output layer.
The third object liveness judging unit is configured to judge whether the object to be detected is a living body according to the predicted category of the image.
In one exemplary embodiment, the apparatus 900 further includes, but is not limited to: a region position matching module and a fourth object liveness determination module.
The region position matching module is configured to perform region position matching between the biological feature region in the infrared image and the biological feature region in the visible light image.
The fourth object liveness determination module is configured to determine that the object to be detected is a prosthesis if the region positions do not match.
In one exemplary embodiment, the region position matching module includes, but is not limited to: a region position detection unit, a correlation coefficient computation unit and a fifth object liveness judging unit.
The region position detection unit is configured to perform region position detection on the biological feature region in the infrared image and on the biological feature region in the visible light image respectively, obtaining a first region position corresponding to the biological feature region in the infrared image and a second region position corresponding to the biological feature region in the visible light image.
The correlation coefficient computation unit is configured to calculate the correlation coefficient between the first region position and the second region position.
The fifth object liveness judging unit is configured to determine that the region positions match if the correlation coefficient exceeds a set correlation threshold.
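One way to realise the correlation-coefficient check is a Pearson correlation between the two bounding-box coordinate vectors; the (x, y, w, h) encoding and the 0.99 threshold below are illustrative choices, not values from the patent.

```python
import numpy as np

def regions_match(box_ir, box_vis, threshold=0.99):
    """Pearson correlation between two bounding-box vectors (x, y, w, h);
    a coefficient above the threshold is read as 'same region position'."""
    r = np.corrcoef(box_ir, box_vis)[0, 1]
    return r > threshold

# Nearly identical boxes in the infrared and visible light images correlate
# strongly; a box in a very different position does not.
same = regions_match([100, 80, 60, 60], [104, 82, 60, 59])
other = regions_match([100, 80, 60, 60], [10, 200, 60, 60])
```

A mismatch here is what triggers the fourth module's prosthesis verdict, e.g. when a photo is shown to only one of the two cameras.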
In one exemplary embodiment, the region position matching module includes, but is not limited to: a first horizontal distance determination unit, a second horizontal distance determination unit, a horizontal distance difference obtaining unit and a sixth object liveness judging unit.
The first horizontal distance determination unit is configured to determine the first horizontal distance between the object to be detected and the vertical plane in which the binocular camera lies.
The second horizontal distance determination unit is configured to obtain, based on the infrared camera and the visible light camera in the binocular camera, the second horizontal distance between the infrared camera and the visible light camera.
The horizontal distance difference obtaining unit is configured to obtain, from the first horizontal distance and the second horizontal distance, the horizontal distance difference between the biological feature region in the infrared image and the biological feature region in the visible light image.
The sixth object liveness judging unit is configured to determine that the region positions do not match if the horizontal distance difference exceeds a set distance threshold.
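This check follows from stereo geometry: a real object at distance Z viewed by two cameras separated by baseline B, with focal length f, appears shifted by roughly f * B / Z pixels between the two images. A measured shift far from that expectation suggests the two detected faces are not one physical object. The parameter values below are purely illustrative.

```python
def expected_horizontal_offset(baseline_mm, distance_mm, focal_px):
    """Stereo-disparity sketch: expected pixel shift of the same face
    between the infrared and visible light views (f * B / Z)."""
    return focal_px * baseline_mm / distance_mm

# 25 mm camera baseline, subject 500 mm away, 800 px focal length.
shift = expected_horizontal_offset(25, 500, 800)
```

Comparing the measured region offset against this value (with a tolerance playing the role of the set distance threshold) yields the match/mismatch decision.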
In one exemplary embodiment, the biological feature region is a face region.
Correspondingly, the apparatus 900 further includes, but is not limited to: a face detection module and a seventh object liveness determination module.
The face detection module is configured to perform face detection on the infrared image and the visible light image respectively.
The seventh object liveness determination module is configured to determine that the object to be detected is a prosthesis if it is detected that the infrared image contains no face region and/or the visible light image contains no face region.
It should be noted that, when the living body detection apparatus provided by the above embodiments performs liveness detection, the division into the functional modules above is merely illustrative. In practical applications, the functions may be assigned to different functional modules as needed; that is, the internal structure of the living body detection apparatus may be divided into different functional modules to complete all or part of the functions described above.
In addition, the living body detection apparatus provided by the above embodiments and the embodiments of the living body detection method belong to the same concept. The specific manner in which each module performs its operations has been described in detail in the method embodiments and is not repeated here.
In one exemplary embodiment, an access control system applying the living body detection method includes a capture device, a recognition server and an access control device.
The capture device is configured to capture, with a binocular camera, an image of an object entering or leaving, the image including an infrared image and a visible light image.
The recognition server includes a living body detection apparatus configured to perform image physical information extraction and deep semantic feature extraction respectively on the biological feature region in the image of the entering or leaving object, and to judge from the extracted image physical information and deep semantic feature whether the entering or leaving object is a living body.
When the entering or leaving object is a living body, the recognition server performs identity recognition on it, so that the access control device grants access permission to objects whose identity has been successfully recognized, whereby the access barrier controlling the designated area performs a release action for the entering or leaving object according to the configured access permission.
In one exemplary embodiment, a payment system applying the living body detection method includes a payment terminal and a payment server.
The payment terminal is configured to capture, with a binocular camera, an image of a paying user, the image including an infrared image and a visible light image.
The payment terminal includes a living body detection apparatus configured to perform image physical information extraction and deep semantic feature extraction respectively on the biological feature region in the image of the paying user, and to judge from the extracted image physical information and deep semantic feature whether the paying user is a living body.
When the paying user is a living body, the payment terminal performs identity verification on the paying user, and when the paying user passes identity verification, initiates a payment request to the payment server.
In one exemplary embodiment, a service system applying the living body detection method includes a service terminal and an authentication server.
The service terminal is configured to capture, with a binocular camera, an image of a service person, the image including an infrared image and a visible light image.
The service terminal includes a living body detection apparatus configured to perform image physical information extraction and deep semantic feature extraction respectively on the biological feature region in the image of the service person, and to judge from the extracted image physical information and deep semantic feature whether the service person is a living body.
When the service person is a living body, the service terminal requests the authentication server to perform identity authentication on the service person, and distributes service business instructions to the service person who has passed identity authentication.
Referring to Fig. 19, in one exemplary embodiment, an electronic device 1000 includes at least one processor 1001, at least one memory 1002 and at least one communication bus 1003.
Computer-readable instructions are stored in the memory 1002, and the processor 1001 reads the computer-readable instructions stored in the memory 1002 through the communication bus 1003.
When the computer-readable instructions are executed by the processor 1001, the living body detection method in the embodiments described above is implemented.
In one exemplary embodiment, a storage medium stores a computer program which, when executed by a processor, implements the living body detection method in the embodiments described above.
The above content is merely a preferred exemplary embodiment of the present invention and is not intended to limit the embodiments of the present invention. A person of ordinary skill in the art can easily make corresponding adaptations or modifications according to the main concept and spirit of the present invention; the protection scope of the present invention shall therefore be determined by the scope claimed in the claims.
Claims (15)
1. A living body detection method, characterized by comprising:
obtaining an image of an object to be detected captured by a binocular camera, the image comprising an infrared image and a visible light image;
performing image physical information extraction on a biological feature region in the image to obtain image physical information of the biological feature region in the image, the biological feature region indicating the position, in the image, of a biological feature of the object to be detected;
if the image physical information of the biological feature region in the image indicates that the object to be detected is a living body, performing deep semantic feature extraction on the biological feature region in the image based on a machine learning model, to obtain a deep semantic feature of the biological feature region in the image; and
determining whether the object to be detected is a living body according to the deep semantic feature of the biological feature region in the image.
2. The method according to claim 1, characterized in that performing image physical information extraction on the biological feature region in the image to obtain the image physical information of the biological feature region in the image comprises:
obtaining, from the captured image, the visible light image containing the biological feature region; and
extracting, from the biological feature region in the visible light image, the image physical information of the biological feature region in the visible light image.
3. The method according to claim 2, characterized in that the image physical information is colour information;
and extracting, from the biological feature region in the visible light image, the image physical information of the biological feature region in the visible light image comprises:
calculating, based on the biological feature region in the visible light image, a colour histogram of the biological feature region in the visible light image; and
taking the calculated colour histogram as the colour information of the biological feature region in the visible light image.
4. The method according to claim 2, characterized in that the image physical information is texture information;
and extracting, from the biological feature region in the visible light image, the image physical information of the biological feature region in the visible light image comprises:
creating, based on the biological feature region in the visible light image, a colour space corresponding to the biological feature region in the visible light image;
extracting, for the colour space, local binary pattern features in the spatial domain and/or local phase quantization features in the frequency domain; and
generating an LBP/LPQ histogram according to the local binary pattern features and/or the local phase quantization features, as the texture information of the biological feature region in the visible light image.
5. The method according to claim 4, characterized in that creating, based on the biological feature region in the visible light image, the colour space corresponding to the biological feature region in the visible light image comprises:
obtaining, based on the biological feature region in the visible light image, HSV parameters corresponding to the biological feature region in the visible light image, the HSV parameters comprising hue (H), saturation (S) and value (V); and
constructing an HSV model from the obtained HSV parameters as the colour space corresponding to the biological feature region in the visible light image.
6. The method according to claim 4, characterized in that creating, based on the biological feature region in the visible light image, the colour space corresponding to the biological feature region in the visible light image comprises:
obtaining, based on the biological feature region in the visible light image, YCbCr parameters corresponding to the biological feature region in the visible light image, the YCbCr parameters comprising luminance (Y), blue-difference chroma offset (Cb) and red-difference chroma offset (Cr); and
constructing, from the obtained YCbCr parameters, the colour space corresponding to the biological feature region in the visible light image.
7. The method according to claim 2, characterized in that, after extracting from the biological feature region in the visible light image the image physical information of the biological feature region in the visible light image, the method further comprises:
determining whether the object to be detected is a living body according to the image physical information of the biological feature region in the visible light image.
8. The method according to claim 7, characterized in that determining whether the object to be detected is a living body according to the image physical information of the biological feature region in the visible light image comprises:
inputting the image physical information of the biological feature region in the visible light image into a support vector machine classifier to perform colour category prediction on the visible light image, obtaining the colour category of the visible light image;
if the colour category of the visible light image indicates that the visible light image is a black-and-white image, determining that the object to be detected is a prosthesis; and
if the colour category of the visible light image indicates that the visible light image is a colour image, determining that the object to be detected is a living body.
9. The method according to claim 1, characterized in that the machine learning model is a deep neural network model comprising an input layer, convolutional layers, a fully connected layer and an output layer;
and performing deep semantic feature extraction on the biological feature region in the image based on the machine learning model, to obtain the deep semantic feature of the biological feature region in the image, comprises:
inputting the image to the convolutional layers through the input layer of the deep neural network model;
performing feature extraction using the convolutional layers to obtain shallow semantic features of the biological feature region in the image, and inputting them to the fully connected layer; and
performing feature fusion using the fully connected layer to obtain the deep semantic feature of the biological feature region in the image, and inputting it to the output layer.
10. The method according to claim 9, characterized in that determining whether the object to be detected is a living body according to the deep semantic feature of the biological feature region in the image comprises:
performing classification prediction on the image using an activation function classifier in the output layer; and
judging whether the object to be detected is a living body according to the predicted category of the image.
11. The method according to claim 1, characterized in that, after obtaining the image of the object to be detected captured by the binocular camera, the method further comprises:
performing region position matching between the biological feature region in the infrared image and the biological feature region in the visible light image; and
if the region positions do not match, determining that the object to be detected is a prosthesis.
12. A living body detection apparatus, characterized by comprising:
an image acquisition module, configured to obtain an image of an object to be detected captured by a binocular camera, the image comprising an infrared image and a visible light image;
an image physical information extraction module, configured to perform image physical information extraction on a biological feature region in the image to obtain image physical information of the biological feature region in the image, the biological feature region indicating the position, in the image, of a biological feature of the object to be detected;
a deep semantic feature extraction module, configured to, if the image physical information of the biological feature region in the image indicates that the object to be detected is a living body, perform deep semantic feature extraction on the biological feature region in the image based on a machine learning model, to obtain a deep semantic feature of the biological feature region in the image; and
an object liveness determination module, configured to determine whether the object to be detected is a living body according to the deep semantic feature of the biological feature region in the image.
13. A service system applying a living body detection method, characterized in that the service system comprises a service terminal and an authentication server, wherein:
the service terminal is configured to capture, with a binocular camera, an image of a service person, the image comprising an infrared image and a visible light image;
the service terminal comprises a living body detection apparatus configured to perform image physical information extraction and deep semantic feature extraction respectively on the biological feature region in the image of the service person, and to judge from the extracted image physical information and deep semantic feature whether the service person is a living body; and
when the service person is a living body, the service terminal requests the authentication server to perform identity authentication on the service person, and distributes service business instructions to the service person who has passed identity authentication.
14. An electronic device, characterized by comprising:
a processor; and
a memory having computer-readable instructions stored thereon, the computer-readable instructions, when executed by the processor, implementing the living body detection method according to any one of claims 1 to 11.
15. A storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the living body detection method according to any one of claims 1 to 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910217452.8A CN110163078A (en) | 2019-03-21 | 2019-03-21 | The service system of biopsy method, device and application biopsy method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110163078A true CN110163078A (en) | 2019-08-23 |
Family
ID=67638988
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910217452.8A Pending CN110163078A (en) | 2019-03-21 | 2019-03-21 | The service system of biopsy method, device and application biopsy method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110163078A (en) |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110519061A (en) * | 2019-09-02 | 2019-11-29 | 国网电子商务有限公司 | A kind of identity identifying method based on biological characteristic, equipment and system |
CN110532957A (en) * | 2019-08-30 | 2019-12-03 | 北京市商汤科技开发有限公司 | Face identification method and device, electronic equipment and storage medium |
CN110555930A (en) * | 2019-08-30 | 2019-12-10 | 北京市商汤科技开发有限公司 | Door lock control method and device, electronic equipment and storage medium |
CN110781770A (en) * | 2019-10-08 | 2020-02-11 | 高新兴科技集团股份有限公司 | Living body detection method, device and equipment based on face recognition |
CN111160299A (en) * | 2019-12-31 | 2020-05-15 | 上海依图网络科技有限公司 | Living body identification method and device |
CN111191527A (en) * | 2019-12-16 | 2020-05-22 | 北京迈格威科技有限公司 | Attribute identification method and device, electronic equipment and readable storage medium |
CN111209870A (en) * | 2020-01-09 | 2020-05-29 | 杭州涂鸦信息技术有限公司 | Binocular living body camera rapid registration method, system and device thereof |
CN111222425A (en) * | 2019-12-26 | 2020-06-02 | 新绎健康科技有限公司 | Method and device for positioning facial features |
CN111401258A (en) * | 2020-03-18 | 2020-07-10 | 腾讯科技(深圳)有限公司 | Living body detection method and device based on artificial intelligence |
CN111582045A (en) * | 2020-04-15 | 2020-08-25 | 深圳市爱深盈通信息技术有限公司 | Living body detection method and device and electronic equipment |
CN111582155A (en) * | 2020-05-07 | 2020-08-25 | 腾讯科技(深圳)有限公司 | Living body detection method, living body detection device, computer equipment and storage medium |
CN111666878A (en) * | 2020-06-05 | 2020-09-15 | 腾讯科技(深圳)有限公司 | Object detection method and device |
CN111951243A (en) * | 2020-08-11 | 2020-11-17 | 华北电力科学研究院有限责任公司 | Method and device for monitoring linear variable differential transformer |
CN112345080A (en) * | 2020-10-30 | 2021-02-09 | 华北电力科学研究院有限责任公司 | Temperature monitoring method and system for linear variable differential transformer of thermal power generating unit |
CN112883762A (en) * | 2019-11-29 | 2021-06-01 | 广州慧睿思通科技股份有限公司 | Living body detection method, device, system and storage medium |
CN113422982A (en) * | 2021-08-23 | 2021-09-21 | 腾讯科技(深圳)有限公司 | Data processing method, device, equipment and storage medium |
WO2022206319A1 (en) * | 2021-04-02 | 2022-10-06 | 腾讯科技(深圳)有限公司 | Image processing method and apparatus, and device, storage medium and computer program product |
CN111582045B (en) * | 2020-04-15 | 2024-05-10 | 芯算一体(深圳)科技有限公司 | Living body detection method and device and electronic equipment |
CN113422982A (en) * | 2021-08-23 | 2021-09-21 | 腾讯科技(深圳)有限公司 | Data processing method, device, equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110163078A (en) | The service system of biopsy method, device and application biopsy method | |
KR102483642B1 (en) | Method and apparatus for liveness test | |
RU2431190C2 (en) | Facial prominence recognition method and device | |
CN109492551A (en) | The related system of biopsy method, device and application biopsy method | |
CA3152812A1 (en) | Facial recognition method and apparatus | |
CN103383723A (en) | Method and system for spoof detection for biometric authentication | |
CN111898538B (en) | Certificate authentication method and device, electronic equipment and storage medium | |
WO2022222575A1 (en) | Method and system for target recognition | |
CN109492550A (en) | The related system of biopsy method, device and application biopsy method | |
CN109871773A (en) | Biopsy method, device and door access machine | |
CN109871845A (en) | Certificate image extracting method and terminal device | |
CN111104833A (en) | Method and apparatus for in vivo examination, storage medium, and electronic device | |
WO2022222569A1 | Target discrimination method and system |
CN106991364A (en) | face recognition processing method, device and mobile terminal | |
CN115147936A (en) | Living body detection method, electronic device, storage medium, and program product | |
CN114387548A (en) | Video and liveness detection method, system, device, storage medium and program product | |
CN107369086A (en) | A kind of identity card stamp system and method | |
CN112308093B (en) | Air quality perception method based on image recognition, model training method and system | |
CN113111810A (en) | Target identification method and system | |
CN113033305B (en) | Living body detection method, living body detection device, terminal equipment and storage medium | |
CN110073406A (en) | Face detection means and its control method and program | |
CN114386805A (en) | Laboratory information management system | |
CN109409325B (en) | Identification method and electronic equipment | |
Chen | Design and simulation of AI remote terminal user identity recognition system based on reinforcement learning | |
WO2023221996A1 (en) | Living body detection method, electronic device, storage medium, and program product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||