CN110969077A - Living body detection method based on color change - Google Patents

Living body detection method based on color change

Info

Publication number
CN110969077A
CN110969077A (application number CN201910871588.0A)
Authority
CN
China
Prior art keywords
color
image
human face
living body
detection method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910871588.0A
Other languages
Chinese (zh)
Inventor
赵晋
苏亚龙
孙奥峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Hengdao Zhirong Information Technology Co Ltd
Original Assignee
Chengdu Hengdao Zhirong Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Hengdao Zhirong Information Technology Co Ltd filed Critical Chengdu Hengdao Zhirong Information Technology Co Ltd
Priority to CN201910871588.0A
Publication of CN110969077A
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Abstract

The method collects face video of a detection object under light sources of several different colors, obtains face images under the corresponding colors according to a color verification code sequence, checks the consistency of the color verification code with a color recognition neural network, and detects spoofing with an anti-fraud network. By using the color verification code, the user completes living body detection passively, which reduces the complexity of use for the user; at the same time, because different media reflect light to different degrees, the differences computed between multi-frame face images of different colors show obvious distinctions, which effectively improves the accuracy of living body detection.

Description

Living body detection method based on color change
Technical Field
The application relates to the technical field of deep learning and artificial intelligence, in particular to a living body detection method based on color change.
Background
Compared with biological characteristics such as fingerprints and irises, facial features are the easiest to acquire. Face recognition systems are gradually being commercialized and are developing towards automation and unsupervised operation; however, current face recognition technology can identify the identity of a face image but cannot accurately distinguish whether the input face is genuine. How to automatically and effectively verify image authenticity, so as to protect the system against spoofing attacks, has become an urgent problem in face recognition technology.
The following methods are commonly used for living body detection:
and (5) judging by using the texture information of the copied human face color image. The reproduced image generates much noise. If the image is shot against the computer screen, the moire interference is generated due to the difference between the time resolution of the screen and the camera frequency, so that stripe noise is generated on the image, and whether the image is a real human face is judged by analyzing according to different noises. In the technology, the resolution of the current part of photographing equipment is high, and the signal-to-noise ratio of an image is high, so that uncertain factors are brought to a noise judgment technology.
Discrimination based on the consistency between a fake face region and its background. A fake face is usually presented statically or dynamically, either as a print or played back on a hardware medium. During the motion of the paper or the playback device, the region or edge background around the fake face shows no motion difference from the fake face itself and remains consistent with it. Based on this property, motion detection in the video image can be used to identify the fake face. This technique requires video frame information, which increases the computational complexity of the system and also degrades the user experience.
Discrimination of non-rigid motion using local face information. A static fake face image undergoes only rigid motion, whereas a real face exhibits slight non-rigid changes in the video, so the difference can be used to judge whether a real person is present. This technique requires a capture device with high temporal resolution, and non-rigid face information must be separated from the rigid changes caused by real human motion (such as head rotation), which makes it difficult to improve the timeliness of living body detection and to reduce algorithm complexity.
Discrimination based on three-dimensional face reconstruction. The 3D information of a real person differs strongly from the 3D information of a person displayed on a spoofing electronic device. The depth of facial key points is reconstructed from multi-angle shots and the camera's intrinsic parameters, and the judgment is made from the reconstructed key-point depths. This technique requires calibration of the camera intrinsics, and the calibration accuracy, the rotation and translation between the cameras that capture the different images, and the image quality all strongly affect the reconstructed depth values, so this method has a high false detection rate in living body detection.
During research and practice on the prior art, it has been found that the algorithms adopted by existing living body detection schemes have low accuracy and cannot effectively resist attacks using synthesized faces; in addition, complex active interaction greatly reduces the pass rate of genuine samples. As a result, the overall living body detection effect of existing schemes is poor, which seriously affects the accuracy and security of identity verification.
Disclosure of Invention
The invention aims to provide a living body detection method based on color change, which can restore the depth of an acquired image to a certain extent and effectively improve the accuracy of living body detection.
The invention is realized by the following steps:
the color change-based in-vivo detection method is provided to reduce the use threshold and improve the in-vivo detection effect, thereby improving the accuracy and the safety of identity verification. The method comprises the following steps:
receiving an identity authentication living body detection request, and generating an encrypted color verification code according to the request;
starting the mobile phone camera, adjusting the screen brightness to maximum, displaying the colors of the corresponding color verification code on the phone screen in sequence at fixed time intervals, and uploading the video file;
acquiring image sequences of a plurality of human faces under different colors, and performing individual feature extraction on the image sequences of the plurality of human faces by adopting a convolutional neural network to generate a plurality of groups of human face feature images;
sending the image sequence characteristics under different colors into a color identification convolutional neural network, and checking whether the color sequence is consistent with the color verification code;
performing difference calculation on the image sequences under different colors over the RGB channels, inputting the multiple groups of face difference feature images into an anti-fraud convolutional neural network, processing the multiple groups of face feature images in a preset manner to obtain living body depth features of the detection object's face, sending the living body depth features to the anti-fraud neural network, and detecting whether the video is fraudulent;
and fusing the identification results of the color identification convolutional neural network and the anti-fraud convolutional neural network to generate a final detection result.
In some embodiments of the invention, when living body detection is started, the distance between the user's face and the device is estimated from the size ratio of the image acquisition area to the face area, and if the distance falls outside the specified range, the user is prompted to move to a suitable distance.
In some embodiments of the present invention, the color sequence verification code has a length of 3 to 5, and its parameters include the color change interval and the color library used; the color change interval is fixed.
In some embodiments of the invention, the color sequence verification code is generated by a random algorithm; the same color does not appear at adjacent positions, but may appear multiple times within one verification code.
In some embodiments of the present invention, after the video is collected, image frames are extracted at the midpoint of each color interval according to the configured color interval, obtaining as many high-intensity highlighted images as there are colors in the color verification code.
In some embodiments of the present invention, the obtained face image frames are first sent to a face range extraction network for face region recognition to obtain the face features.
In some embodiments of the present invention, the input of the face range extraction network is a face frame image, and its output is the key face region delimited by the coordinates of the upper-left and lower-right points of the face position; the non-face part and the background environment in the frame image are removed.
In some embodiments of the present invention, after the face features are input into the color recognition convolutional neural network, three probability values are predicted for red, green and blue; when a probability exceeds the threshold, the current picture contains that color. If the true color of the image is red, green or blue, one result exceeds the threshold; if the image is of a mixed color, results for two colors are produced. These results are used to restore the color verification code of the image, which is then compared to determine whether the color verification code generated by the system is consistent with the code restored from the collected image frames.
In some embodiments of the invention, the image frames that pass the color verification are sent to a depth restoration network, the depth of the collected image is restored from the color change between two adjacent image frames, and the fraud degree prediction is completed.
In some embodiments of the present invention, the living body detection method may be executed by a single electronic device, where the executing device includes a camera and a display screen, and may be specifically one or more of a mobile phone, a tablet computer, a palm computer, and a desktop computer.
The embodiment of the invention at least has the following advantages or beneficial effects:
By using face image frames captured under light sources of several different colors, the depth of the collected image can be restored to a certain degree.
This effectively improves the discrimination accuracy, effectively resists attacks using synthesized faces, reduces complex active interaction, and improves the pass rate of genuine samples.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed for the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a color change-based in vivo detection method according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the embodiments of the present invention, it should be noted that terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner" and "outer" indicate orientations or positional relationships based on those shown in the drawings, or those in which the product of the invention is usually placed in use. They are used only for convenience and simplicity of description and do not indicate or imply that the referred device or element must have a specific orientation or be constructed and operated in a specific orientation, and therefore should not be construed as limiting the invention. Furthermore, the terms "first", "second", "third" and the like are used solely to distinguish one element from another and are not to be construed as indicating or implying relative importance.
Furthermore, the terms "horizontal", "vertical", "overhang", and the like do not require that the components be absolutely horizontal or overhang, but may be slightly inclined. For example, "horizontal" merely means that the direction is more horizontal than "vertical" and does not mean that the structure must be perfectly horizontal, but may be slightly inclined.
In the description of the embodiments of the present invention, "a plurality" represents at least 2.
In the description of the embodiments of the present invention, it should be further noted that unless otherwise explicitly stated or limited, the terms "disposed," "mounted," "connected," and "connected" should be interpreted broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
Examples
A living body detection method based on color change, as shown in FIG. 1, comprises the following steps:
receiving an identity authentication living body detection request, and generating an encrypted color sequence verification code according to the request;
starting a camera, adjusting the brightness to be brightest, sequentially generating colors of corresponding color sequence verification codes at intervals, and uploading a video file;
acquiring image sequences of human faces under different colors, and performing feature extraction on the image sequences by adopting a convolutional neural network to generate a plurality of groups of human face feature images;
sending the image sequence characteristics under different colors into a color identification convolutional neural network, and checking whether the color sequence is consistent with the color verification code;
performing difference calculation on the image sequences under different colors over the RGB channels, inputting the multiple groups of face difference feature images into an anti-fraud convolutional neural network, processing the multiple groups of face feature images in a preset manner to obtain living body depth features of the detection object's face, sending the living body depth features to the anti-fraud convolutional neural network, and detecting whether the video is fraudulent;
and fusing the identification results of the color identification convolutional neural network and the anti-fraud convolutional neural network to generate a final detection result.
It should be noted that the living body detection method of the present disclosure may be executed by a single electronic device having a camera and a display screen; examples include a mobile phone, a tablet computer, a palmtop computer, a desktop computer, and the like, and in the following the electronic device is mainly described as a mobile terminal. The method is used together with a matched client APP or mini-program, hereinafter referred to as the client, and with a matched cloud service, hereinafter referred to as the server.
Because ambient lighting varies greatly, face images under visible light are easily disturbed by external light; for example, side light and backlight can degrade the imaging of the face, making the detection effect unstable. To overcome this problem, face video of the detection object is collected under light sources of several different colors. Driven by the color verification code sequence, the client emits high-intensity light of the corresponding colors; after the light illuminates the face, light of the corresponding color is reflected, and capturing the face at that moment yields a face image under the corresponding color. After the face frame under each color is extracted, it is first sent to the color recognition neural network to judge whether the light reflected by the face is consistent with the color verification code; then the multi-frame images of different colors are differenced on the RGB color channels, a depth map of the face color change is restored, and the result is sent to the anti-fraud network to detect whether spoofing occurs. With the color verification code, the user completes living body detection passively, without reading numbers or performing actions, which reduces the complexity of use. At the same time, because different media reflect light to different degrees, the differences computed between multi-frame face images of different colors show obvious distinctions, which further improves the accuracy of living body detection.
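As a concrete illustration, the following is a minimal Python sketch of how this server-side flow could be chained together. All helper names (extract_frames_at_midpoints, detect_face_box, crop_face, predict_color_probs, decode_color, rgb_difference_features, antispoof_score) and the threshold values are assumptions introduced for illustration, not APIs defined by this application; several of the helpers are sketched further below.

```python
# Hypothetical orchestration of the flow described above; helper functions and
# thresholds are illustrative assumptions, several of them sketched later on.

def liveness_check(video_path, color_code, interval_ms,
                   color_thresh=0.5, spoof_thresh=0.5):
    # 1) One frame from the midpoint of each colour interval.
    frames = extract_frames_at_midpoints(video_path, len(color_code), interval_ms)

    # 2) Keep only the face region of each frame.
    faces = [crop_face(f, detect_face_box(f)) for f in frames]

    # 3) Colour verification: the colours restored from the reflected light must
    #    match the verification code issued by the server.
    restored = [decode_color(predict_color_probs(face), color_thresh) for face in faces]
    if restored != list(color_code):
        return False                                   # treated as a spoofing video

    # 4) Anti-fraud check on the RGB differences between adjacent frames.
    return antispoof_score(rgb_difference_features(faces)) < spoof_thresh
```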
The operation flow of the scheme is specifically described as follows:
the client is internally provided with a face detection module. The face detection module can firstly detect whether the face detection function of the current electronic equipment can be directly called or not, and if so, the face detection function can be directly called; otherwise, the built-in face detection function of the module is used.
When the user starts living body detection, the face detection module detects the user's face. Once the face is detected, the distance between the user's face and the device is estimated from the size ratio of the image acquisition area to the face area. The distance has a permitted range; when the user's face distance falls outside this range, the user is prompted to move to a suitable distance, and only after the adjustment is the living body detection request sent to the server.
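A minimal sketch of this distance check, assuming the face detector returns a bounding box; the ratio bounds below are illustrative assumptions and are not specified in this application.

```python
# Hypothetical distance check based on the face-box to capture-area ratio.
MIN_RATIO, MAX_RATIO = 0.10, 0.60   # assumed bounds, not values from this application

def distance_ok(face_box, frame_w, frame_h):
    x1, y1, x2, y2 = face_box
    face_area = max(0, x2 - x1) * max(0, y2 - y1)
    ratio = face_area / float(frame_w * frame_h)
    return MIN_RATIO <= ratio <= MAX_RATIO   # otherwise prompt the user to move
```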
The server receives the client's detection request and generates the color verification code according to preset parameters, mainly the length of the color sequence verification code, the color change interval, the color library to use, and so on. For example, with the verification code length set to 4, the change interval set to 500 ms, and the color library consisting of red (ff0000), green (00ff00), blue (0000ff), pink (ff00ff), yellow (ffff00) and light blue (00ffff), the server generates the color verification code according to these rules. In the code generated by the random algorithm, the same color does not appear at adjacent positions, but may appear multiple times within one verification code.
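A minimal sketch of such a generator under the rules above (fixed length, colors drawn from the library, no identical colors at adjacent positions); the function name and return structure are assumptions, and the encryption of the code for transport is omitted.

```python
import random

COLOR_LIBRARY = {
    "red":        "ff0000",
    "green":      "00ff00",
    "blue":       "0000ff",
    "pink":       "ff00ff",
    "yellow":     "ffff00",
    "light_blue": "00ffff",
}

def generate_color_code(length=4, interval_ms=500):
    """Generate a colour verification code; encryption for transport is omitted here."""
    names = list(COLOR_LIBRARY)
    code = [random.choice(names)]
    while len(code) < length:
        candidate = random.choice(names)
        if candidate != code[-1]:        # no identical colours at adjacent positions
            code.append(candidate)
    return {"colors": code, "interval_ms": interval_ms}

# Example output: {'colors': ['red', 'yellow', 'red', 'blue'], 'interval_ms': 500}
```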
After the client receives the color verification code, the client can utilize the screen of the terminal to generate a corresponding high-intensity color sequence according to the time interval defined by the server.
When the client starts the request, a detection interface is opened. The interface is divided into two areas: a detection area, which displays the image currently acquired by the camera, and a non-detection area, which is mainly used to flash a color mask; the mask serves as a light source and projects light onto the detection object. The client APP or mini-program uses a large portion of the terminal screen to form the color mask. The client also has a brightness control module, which records the original brightness of the terminal before the color change begins; when the color change begins, the brightness of the terminal is raised to maximum, and after the color change ends, the screen brightness is restored for a better user experience.
After the on-screen color sequence has finished changing, image acquisition stops automatically, the user is informed that image acquisition is complete and that living body detection is in progress, and the acquired video is uploaded to the server.
After receiving the video collected by the client, the server extracts the image frame at the middle position of each color interval according to the configured color interval, obtaining as many images as there are colors in the verification code. For example, if the verification code length is set to 4 and the change interval is 500 ms, four images are obtained at 250 ms, 750 ms, 1250 ms and 1750 ms. In theory these images correspond to high-intensity strong light of the four colors: the light illuminates the face, is reflected, and is captured by the camera as face image frames. If a different length is configured, a matching number of images is obtained, each corresponding to the high-intensity color of its interval.
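A minimal sketch of this midpoint extraction using OpenCV's standard video API; the function name is an assumption.

```python
import cv2

def extract_frames_at_midpoints(video_path, code_length, interval_ms):
    """Extract one frame at the midpoint of each colour interval (e.g. 250/750/1250/1750 ms)."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    for i in range(code_length):
        midpoint_ms = i * interval_ms + interval_ms / 2.0
        cap.set(cv2.CAP_PROP_POS_MSEC, midpoint_ms)   # seek to the interval midpoint
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames
```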
After the corresponding number of face image frames is obtained, the images are first sent to the face range extraction network. Its input is a face frame image and its output is the coordinates of the upper-left and lower-right points of the face position; from these coordinates the key face region is obtained, and the non-face part and the environment are removed to reduce the network parameters and the environmental influence. The face region is then preprocessed to obtain the face features.
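A minimal sketch of cropping and preprocessing the face region from the predicted box; the resize target and normalization are illustrative assumptions.

```python
import cv2
import numpy as np

def crop_face(frame, box, out_size=(224, 224)):
    """Keep only the face region delimited by (x1, y1, x2, y2) and preprocess it."""
    x1, y1, x2, y2 = [int(v) for v in box]
    face = frame[y1:y2, x1:x2]                 # drop the non-face part and background
    face = cv2.resize(face, out_size)          # assumed input size of the downstream networks
    return face.astype(np.float32) / 255.0     # simple normalization
```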
The face features are first input into the color recognition convolutional neural network, whose function is to take a face image and predict the color of the image. After the image passes through the network, probability values for red, green and blue are predicted; when a probability exceeds a certain threshold, the current picture contains that color. For an image whose true color is red, green or blue, only one result exceeds the threshold; for the three mixed colors pink, light blue and yellow, two results theoretically exceed it. From these results the color verification code of the video is restored, and it is compared with the code generated by the server to check whether the server's color verification code is consistent with the code restored from the image frames collected by the client. If the verification passes, the flow proceeds to the depth restoration network; if not, the flow is terminated and a result is returned to the client indicating that the collected video is a spoofing video.
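A minimal sketch of restoring the verification code from the per-frame red/green/blue probabilities under the rule above (one channel above threshold for a primary color, two channels for a mixed color); the threshold value, function names and probability format are assumptions.

```python
# Map from the set of channels above threshold to a colour of the library.
CHANNEL_TO_COLOR = {
    frozenset(["r"]):      "red",
    frozenset(["g"]):      "green",
    frozenset(["b"]):      "blue",
    frozenset(["r", "b"]): "pink",
    frozenset(["r", "g"]): "yellow",
    frozenset(["g", "b"]): "light_blue",
}

def decode_color(probs, thresh=0.5):
    """probs: e.g. {'r': 0.93, 'g': 0.08, 'b': 0.88} predicted for one face frame."""
    active = frozenset(c for c, p in probs.items() if p > thresh)
    return CHANNEL_TO_COLOR.get(active)        # None if no library colour matches

def verify_code(per_frame_probs, expected_colors, thresh=0.5):
    restored = [decode_color(p, thresh) for p in per_frame_probs]
    return restored == list(expected_colors)   # consistent with the issued verification code?
```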
The image frames that pass the color verification are sent to the depth restoration network.
The main function of this network is to restore the depth of the collected image from the color change between two adjacent frames and to complete the fraud degree prediction. The image captured by a monocular camera is a 2D image, and a 2D image alone cannot reliably distinguish spoofing from non-spoofing video; collecting 3D images would require an extra camera or additional infrared hardware, which an ordinary mobile phone cannot provide. By using face image frames under light sources of several different colors, the depth of the collected image can be restored to a certain degree. The underlying principle is that different media reflect light to different degrees: a real face and plain paper reflect diffusely, whereas a glossy photo or a phone-screen recapture reflects specularly. Under the same color sequence, differencing the images of two adjacent frames on the RGB channels produces certain depth features for each type of image. After observing a large number of samples, the applicant found that the features of photos and phone-screen recaptures differ essentially from those of real faces, and that after these features are fed into the neural network, real faces and spoofed faces can be clearly distinguished. The function of this network is to generate a detection result by separating these differences.
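A minimal sketch of forming these difference features before they are fed to the anti-fraud network; the stacking layout and data types are assumptions.

```python
import numpy as np

def rgb_difference_features(faces):
    """faces: list of aligned face crops of identical shape (H, W, 3)."""
    diffs = [faces[i + 1].astype(np.float32) - faces[i].astype(np.float32)
             for i in range(len(faces) - 1)]            # adjacent-frame RGB differences
    return np.stack(diffs, axis=0)                      # (N-1, H, W, 3) input to the anti-fraud CNN
```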
After this network produces its result, the final detection result is obtained and returned to the server's flow control module, which reports whether the living body detection has passed. The flow control module can then decide, based on the result, whether to proceed with the next operation.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and various modifications and changes may be made to the present invention by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A living body detection method based on color change is characterized by comprising the following steps:
receiving an identity authentication living body detection request, and generating an encrypted color sequence verification code according to the request;
starting a camera, adjusting the brightness to be brightest, sequentially generating the colors of corresponding color sequence verification codes at intervals, and uploading a video file;
acquiring image sequences of human faces under different colors, and performing feature extraction on the image sequences by adopting a convolutional neural network to generate a plurality of groups of human face feature images;
sending the image sequence characteristics under different colors into a color identification convolutional neural network, and checking whether the color sequence is consistent with the color verification code;
performing difference calculation on the image sequences under different colors over the RGB channels, inputting the multiple groups of face difference feature images into an anti-fraud convolutional neural network, processing the multiple groups of face feature images in a preset manner to obtain living body depth features of the face of the detection object, sending the living body depth features of the face to the anti-fraud convolutional neural network, and detecting whether the video is fraudulent;
and fusing the identification results of the color identification convolutional neural network and the anti-fraud convolutional neural network to generate a final detection result.
2. The living body detection method based on color change as claimed in claim 1, wherein when the living body detection is started, the distance between the user's face and the device is calculated according to the size ratio of the image acquisition area to the face area, and if the distance exceeds the specified range, the user is prompted to adjust to a suitable distance.
3. The living body detection method based on color change as claimed in claim 1, wherein the color sequence verification code has a length of 3 to 5 and includes the color change interval and the color library used, the color change interval being a fixed interval set between 200 and 1000 ms.
4. The living body detection method based on color change as claimed in claim 1, wherein the color sequence verification code is generated by a random algorithm, and the same color does not appear at adjacent positions but may appear multiple times within one verification code.
5. The living body detection method based on color change as claimed in claim 1, wherein after the video is collected, image frames are extracted at the midpoint of each color interval according to the configured color interval, obtaining as many high-intensity highlighted images as there are colors in the color verification code.
6. The living body detection method based on color change as claimed in claim 5, wherein the obtained face image frames are first sent to a face range extraction network for face region recognition to obtain face features.
7. The living body detection method based on color change as claimed in claim 6, wherein the face range extraction network takes a face frame image as input, outputs the key face region delimited by the coordinates of the upper-left and lower-right points of the face position, and removes the non-face part and the background environment in the frame image.
8. The living body detection method based on color change as claimed in claim 1, wherein after the face features are input into the color recognition convolutional neural network, three probability values are predicted for red, green and blue, and when a probability exceeds the threshold, the current picture contains that color; if the true color of the image is red, green or blue, one result exceeding the threshold is generated; if the image is of another, mixed color, results for two colors are generated; these results are used to restore the color verification code of the image, which is compared to determine whether the color verification code generated by the system is consistent with the code restored from the collected image frames.
9. The living body detection method based on color change as claimed in claim 8, wherein the image frames passing the color verification are sent to a depth restoration network, the depth of the collected image is restored according to the color change of two adjacent image frames, and the fraud degree prediction is completed.
10. The living body detection method based on color change as claimed in claim 1, wherein the living body detection method can be executed by a single electronic device, the executing device comprising a camera and a display screen and being specifically one or more of a mobile phone, a tablet computer, a palmtop computer, and a desktop computer.
CN201910871588.0A 2019-09-16 2019-09-16 Living body detection method based on color change Pending CN110969077A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910871588.0A CN110969077A (en) 2019-09-16 2019-09-16 Living body detection method based on color change

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910871588.0A CN110969077A (en) 2019-09-16 2019-09-16 Living body detection method based on color change

Publications (1)

Publication Number Publication Date
CN110969077A true CN110969077A (en) 2020-04-07

Family

ID=70029593

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910871588.0A Pending CN110969077A (en) 2019-09-16 2019-09-16 Living body detection method based on color change

Country Status (1)

Country Link
CN (1) CN110969077A (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150227781A1 (en) * 2014-02-12 2015-08-13 Nec Corporation Information processing apparatus, information processing method, and program
CN105989357A (en) * 2016-01-18 2016-10-05 合肥工业大学 Human face video processing-based heart rate detection method
WO2018121428A1 (en) * 2016-12-30 2018-07-05 腾讯科技(深圳)有限公司 Living body detection method, apparatus, and storage medium
CN107832735A (en) * 2017-11-24 2018-03-23 百度在线网络技术(北京)有限公司 Method and apparatus for identifying face
CN109376592A (en) * 2018-09-10 2019-02-22 阿里巴巴集团控股有限公司 Biopsy method, device and computer readable storage medium
CN109711243A (en) * 2018-11-01 2019-05-03 长沙小钴科技有限公司 A kind of static three-dimensional human face in-vivo detection method based on deep learning
CN110147721A (en) * 2019-04-11 2019-08-20 阿里巴巴集团控股有限公司 A kind of three-dimensional face identification method, model training method and device
CN110188670A (en) * 2019-05-29 2019-08-30 广西释码智能信息技术有限公司 Face image processing process, device in a kind of iris recognition and calculate equipment
CN110298312A (en) * 2019-06-28 2019-10-01 北京旷视科技有限公司 Biopsy method, device, electronic equipment and computer readable storage medium

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020259128A1 (en) * 2019-06-28 2020-12-30 北京旷视科技有限公司 Liveness detection method and apparatus, electronic device, and computer readable storage medium
CN111523438A (en) * 2020-04-20 2020-08-11 支付宝实验室(新加坡)有限公司 Living body identification method, terminal device and electronic device
CN111523438B (en) * 2020-04-20 2024-02-23 支付宝实验室(新加坡)有限公司 Living body identification method, terminal equipment and electronic equipment
CN112818900A (en) * 2020-06-08 2021-05-18 支付宝实验室(新加坡)有限公司 Face activity detection system, device and method
CN111783640A (en) * 2020-06-30 2020-10-16 北京百度网讯科技有限公司 Detection method, device, equipment and storage medium
CN111914763A (en) * 2020-08-04 2020-11-10 网易(杭州)网络有限公司 Living body detection method and device and terminal equipment
CN111914763B (en) * 2020-08-04 2023-11-28 网易(杭州)网络有限公司 Living body detection method, living body detection device and terminal equipment
CN112395963B (en) * 2020-11-04 2021-11-12 北京嘀嘀无限科技发展有限公司 Object recognition method and device, electronic equipment and storage medium
CN112395963A (en) * 2020-11-04 2021-02-23 北京嘀嘀无限科技发展有限公司 Object recognition method and device, electronic equipment and storage medium
WO2022135073A1 (en) * 2020-12-24 2022-06-30 华为云计算技术有限公司 Authentication system, device and method
CN113807159A (en) * 2020-12-31 2021-12-17 京东科技信息技术有限公司 Face recognition processing method, device, equipment and storage medium thereof
CN113361349B (en) * 2021-05-25 2023-08-04 北京百度网讯科技有限公司 Face living body detection method, device, electronic equipment and storage medium
CN113361349A (en) * 2021-05-25 2021-09-07 北京百度网讯科技有限公司 Face living body detection method and device, electronic equipment and storage medium
CN113569622A (en) * 2021-06-09 2021-10-29 北京旷视科技有限公司 Living body detection method, device and system based on webpage and electronic equipment
CN113792801A (en) * 2021-09-16 2021-12-14 平安银行股份有限公司 Method, device and equipment for detecting dazzling degree of human face and storage medium
CN113792801B (en) * 2021-09-16 2023-10-13 平安银行股份有限公司 Method, device, equipment and storage medium for detecting face dazzling degree
WO2023061123A1 (en) * 2021-10-15 2023-04-20 北京眼神科技有限公司 Facial silent living body detection method and apparatus, and storage medium and device
CN114445898A (en) * 2022-01-29 2022-05-06 北京百度网讯科技有限公司 Face living body detection method, device, equipment, storage medium and program product
CN114445898B (en) * 2022-01-29 2023-08-29 北京百度网讯科技有限公司 Face living body detection method, device, equipment, storage medium and program product

Similar Documents

Publication Publication Date Title
CN110969077A (en) Living body detection method based on color change
US10237459B2 (en) Systems and methods for liveness analysis
CN110852160B (en) Image-based biometric identification system and computer-implemented method
CN103383723B (en) Method and system for spoof detection for biometric authentication
US8675926B2 (en) Distinguishing live faces from flat surfaces
JP4505362B2 (en) Red-eye detection apparatus and method, and program
CN111274928B (en) Living body detection method and device, electronic equipment and storage medium
CN106372629A (en) Living body detection method and device
WO2014025447A1 (en) Quality metrics for biometric authentication
WO2016107638A1 (en) An image face processing method and apparatus
Stein et al. Video-based fingerphoto recognition with anti-spoofing techniques with smartphone cameras
CN112712059A (en) Living body face recognition method based on infrared thermal image and RGB image
CN112153300A (en) Multi-view camera exposure method, device, equipment and medium
US11763601B2 (en) Techniques for detecting a three-dimensional face in facial recognition
EP3888000A1 (en) Device and method for authenticating an individual
CN115968487A (en) Anti-spoofing system
CN115830517B (en) Video-based examination room abnormal frame extraction method and system
ES2947414T3 (en) Assessing the condition of real world objects
RU2798179C1 (en) Method, terminal and system for biometric identification
OA20233A (en) Device and method for authenticating an individual.
CN116959074A (en) Human skin detection method and device based on multispectral imaging
CN114219868A (en) Skin care scheme recommendation method and system
CN112766175A (en) Living body detection method, living body detection device, and non-volatile storage medium
CN113807159A (en) Face recognition processing method, device, equipment and storage medium thereof
Spinsante et al. Access Control in Smart Homes by Android-Based Liveness Detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200407)