CN110298312B - Living body detection method, living body detection device, electronic apparatus, and computer-readable storage medium - Google Patents

Living body detection method, living body detection device, electronic apparatus, and computer-readable storage medium

Info

Publication number
CN110298312B
CN110298312B (application CN201910578487.4A)
Authority
CN
China
Prior art keywords
color
sequence
target object
living body
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910578487.4A
Other languages
Chinese (zh)
Other versions
CN110298312A (en)
Inventor
李念 (Li Nian)
卢江虎 (Lu Jianghu)
李晓彤 (Li Xiaotong)
姚聪 (Yao Cong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kuangshi Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kuangshi Technology Co Ltd filed Critical Beijing Kuangshi Technology Co Ltd
Priority to CN201910578487.4A
Publication of CN110298312A
Priority to PCT/CN2020/090976 (WO2020259128A1)
Application granted
Publication of CN110298312B


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/56 - Extraction of image or video features relating to colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 - Spoof detection, e.g. liveness detection
    • G06V40/45 - Detection of the body part being alive
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 - Network architectures or network communication protocols for network security
    • H04L63/20 - Network architectures or network communication protocols for managing network security; network security policies in general

Abstract

The embodiments of the application relate to the technical field of face recognition, and disclose a living body detection method, a living body detection device, an electronic device, and a computer-readable storage medium, wherein the living body detection method comprises the following steps: when a request sent by a client is received, generating corresponding color information based on a predetermined color generation rule, and sending response information comprising the color information to the client, wherein the color information comprises a color sequence formed by at least two colors; then receiving the video information of the target object, sent by the client, acquired based on the color information; and then performing living body detection processing on the target object based on the video information, and sending the corresponding living body detection result to the client. The method of the embodiments greatly improves the authenticity and reliability of the collected video information, enables living body detection to be performed quickly and accurately, effectively prevents an attacker from defeating the detection by playing pre-recorded action videos in the order indicated by the prompts, and greatly improves the security of living body detection.

Description

Living body detection method, living body detection device, electronic apparatus, and computer-readable storage medium
Technical Field
The embodiment of the application relates to the technical field of face recognition, in particular to a method and a device for detecting a living body, electronic equipment and a computer-readable storage medium.
Background
With the continuing popularization and ever more powerful functions of intelligent terminals, mobile internet applications have become fully integrated into people's lives. In particular, against the background of mobile payment and the rapid development of internet finance, user identity authentication is increasingly important. In this context, face recognition systems are increasingly applied to online scenarios requiring identity verification in fields such as security, finance, and social security, for example online bank account opening, online transaction verification, unattended access control systems, online social security transactions, and online medical insurance transactions. In these high-security application fields, in addition to ensuring that the face of the person being authenticated matches the base library stored in the database, it must first be verified that the person being authenticated is a legitimate living body, i.e., living body detection must be performed.
At present, living body detection usually requires the user to make corresponding actions according to system instructions, which in sensitive environments prevents the user or an attacker from cheating the system with photos, videos, 3D face models, masks, and the like. However, existing living body detection methods are easily exploited by an attacker: for example, the attacker can record each action in advance and, when the video is being recorded, play the recorded action videos in the order indicated by the prompts, thereby defeating the detection. Therefore, a safer and more robust living body detection method is needed.
Disclosure of Invention
The purpose of the embodiments of the present application is to solve at least one of the above technical drawbacks, and to provide the following technical solutions:
in one aspect, a living body detection method is provided, comprising:
when a request sent by a client is received, generating corresponding color information based on a preset color generation rule, and sending response information comprising the color information to the client, wherein the color information comprises a color sequence formed by at least two colors;
receiving video information of the target object, sent by the client, acquired based on the color information;
and performing living body detection processing on the target object based on the video information, and sending a corresponding living body detection result to the client.
In one possible implementation, generating the corresponding color information based on a predetermined color generation rule includes:
generating a color sequence with a preset length according to at least two preset color codes based on a preset color generation rule, and determining the number of image acquisition frames corresponding to each color in the color sequence, wherein each preset color code represents a corresponding color;
and receiving the video information of the target object sent by the client and acquired based on the color information includes:
and receiving video information of the target object, which is sent by the client and is acquired based on the image acquisition frame numbers respectively corresponding to the color sequence and each color.
In one possible implementation, performing a living body detection process on a target object based on video information includes:
extracting each image frame corresponding to each color in the color sequence from the video information;
determining face regions corresponding to the image frames respectively, calculating first RGB values of the face regions, and performing living body detection processing on the target object based on the first RGB values and the color sequence to obtain a first detection result.
In one possible implementation, performing a living body detection process on a target object based on video information includes:
extracting a first preset number of image frames from the video information;
performing living body detection processing on the target object according to a first preset number of image frames through a pre-trained neural network model to obtain a second detection result;
and obtaining a living body detection result of whether the target object is a living body according to the first detection result and the second detection result.
In one possible implementation manner, performing living body detection processing on the target object based on each first RGB value and the color sequence to obtain a first detection result includes:
obtaining a corresponding first R value sequence, a first G value sequence and a first B value sequence according to the first RGB values and the time sequence of each image frame in the video information;
obtaining a corresponding second R value sequence, a second G value sequence and a second B value sequence according to each second RGB value in the color sequence and the corresponding image acquisition frame number;
determining a first matching degree between the first R value sequence and the second R value sequence, determining a second matching degree between the first G value sequence and the second G value sequence, and determining a third matching degree between the first B value sequence and the second B value sequence;
and performing living body detection processing on the target object according to the first matching degree, the second matching degree and the third matching degree to obtain a first detection result.
In one possible implementation, determining a matching degree between the first X value sequence and the second X value sequence, where X is any one of R, G and B, includes any one of:
when the color sequence comprises black and white, performing normalization processing and binarization processing on the first X value sequence to obtain a binarized X sequence; comparing the binarized X sequence with the binarized second X value sequence position by position, determining a first number of positions at which the two sequences have the same value, and determining the matching degree between the first X value sequence and the second X value sequence according to the ratio of the first number to the sequence length of the binarized X sequence;
and determining each first jump point and each first jump direction of the first X value sequence, determining each second jump point and each second jump direction of the second X value sequence, and determining the matching degree between the first X value sequence and the second X value sequence according to the comparison result of the first and second jump points and the comparison result of the first and second jump directions.
In one possible implementation manner, before performing living body detection processing on the target object based on each first RGB value and the color sequence to obtain a first detection result, the method further includes:
and respectively carrying out discrete Fourier transform processing, low-pass filtering processing and inverse discrete Fourier transform processing on each first RGB value so as to filter out high-frequency noise in the first RGB values.
In one possible implementation manner, performing living body detection processing on the target object based on each first RGB value and the color sequence to obtain a first detection result includes:
converting the respective first RGB values into first color space values of a predetermined pattern, respectively, and converting second RGB values of the color sequence into second color space values of the predetermined pattern;
dividing each first color space value according to a color channel of a preset mode, and obtaining a first channel color value sequence comprising channel color values under the same channel based on the time sequence of each image frame in the video information;
dividing the second color space value according to a color channel of a preset mode to obtain a second channel color value sequence comprising channel color values under the same channel;
and determining a fourth matching degree of the first channel color value sequence and the second channel color value sequence under the same channel according to the time sequence position, and performing living body detection processing on the target object according to the fourth matching degree to obtain a first detection result.
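The text does not name the predetermined mode; as one concrete possibility, the sketch below converts RGB values to HSV with Python's standard colorsys module and splits them into the per-channel sequences over which the fourth matching degree would be computed:

```python
import colorsys

def to_channel_sequences(rgb_values):
    """Convert time-ordered (R, G, B) tuples (0-255) into per-channel
    sequences in HSV; HSV is an assumed stand-in for the unnamed
    predetermined color mode."""
    hsv = [colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
           for (r, g, b) in rgb_values]
    h_seq = [x[0] for x in hsv]
    s_seq = [x[1] for x in hsv]
    v_seq = [x[2] for x in hsv]
    return h_seq, s_seq, v_seq
```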
In one possible implementation, the method further includes:
when a color information acquisition request sent by a client is received, generating a request identifier aiming at the color information acquisition request, and recording first receiving time when the color information acquisition request corresponding to the request identifier is received;
the response information comprises a request identification, and the video information comprises the request identification;
wherein detecting whether the target object is a living body based on the video information includes:
determining a second receiving time when the video information is received;
determining first receiving time of a color information acquisition request according to a request identifier included in the video information;
the living body detection processing is performed on the target object based on a time difference value between the first reception time and the second reception time.
In one aspect, a living body detection method is provided, comprising:
sending a request to a server, and receiving response information fed back by the server aiming at the request, wherein the response information comprises color information generated by the server based on a preset color generation rule, and the color information comprises a color sequence formed by at least two colors;
acquiring video information of a target object based on the color information, and sending the video information to a server so that the server performs living body detection processing on the target object based on the video information;
and receiving a living body detection result returned by the server and used for carrying out living body detection processing on the target object.
In one possible implementation, acquiring video information of a target object based on color information includes:
controlling a display screen to display a preset prompt pattern and outputting preset prompt information, so that the target object positions itself within the preset prompt pattern according to the preset prompt information;
and when it is detected that the target object is within the preset prompt pattern, controlling the display screen to display the colors corresponding to the color sequence and controlling the image acquisition device to acquire the video information of the target object.
In a possible implementation manner, the color information is a color sequence of a preset length generated by the server according to at least two preset color codes based on a preset color generation rule, each preset color code represents a corresponding color, and each color in the color sequence has a corresponding image acquisition frame number;
controlling a display screen to display color information and controlling an image capturing device to capture video information of a target object, comprising:
and controlling the display screen to display the color sequence and controlling the image acquisition equipment to acquire video information according to the image acquisition frame number corresponding to each color in the color sequence.
In one aspect, there is provided a living body detection apparatus comprising:
the first processing module is used for generating corresponding color information based on a preset color generation rule when receiving a request sent by a client, and sending response information comprising the color information to the client, wherein the color information comprises a color sequence formed by at least two colors;
the first receiving module is used for receiving video information of a target object which is sent by a client and acquired based on color information;
and the second processing module is used for carrying out living body detection processing on the target object based on the video information and sending a corresponding living body detection result to the client.
In a possible implementation manner, the first processing module is specifically configured to generate a color sequence with a preset length according to at least two preset color codes based on a preset color generation rule, and determine image acquisition frame numbers corresponding to respective colors in the color sequence, where the respective preset color codes represent respective colors;
the first receiving module is specifically configured to receive video information of a target object, which is sent by a client and acquired based on image acquisition frame numbers corresponding to the color sequences and the colors respectively.
In a possible implementation manner, the second processing module is specifically configured to extract, from the video information, each image frame corresponding to each color in the color sequence; and determining face regions corresponding to the image frames respectively, calculating first RGB values of the face regions, and performing living body detection processing on the target object based on the first RGB values and the color sequence to obtain a first detection result.
In one possible implementation manner, the second processing module comprises a sampling sub-module, a detection sub-module and a first living body determining sub-module;
the sampling submodule is used for extracting a first preset number of image frames from the video information;
the detection submodule is used for carrying out living body detection processing on the target object according to a first preset number of image frames through a pre-trained neural network model to obtain a second detection result;
and the first living body determining submodule is used for obtaining a living body detection result of whether the target object is a living body according to the first detection result and the second detection result.
In a possible implementation manner, the second processing module includes a first sequence generation sub-module, a second sequence generation sub-module, a first matching degree determination sub-module, and a second living body determination sub-module;
the first sequence generation submodule is used for obtaining a corresponding first R value sequence, a corresponding first G value sequence and a corresponding first B value sequence according to each first RGB value and the time sequence of each image frame in the video information;
the second sequence generation submodule is used for obtaining a corresponding second R value sequence, a second G value sequence and a second B value sequence according to each second RGB value in the color sequence and the corresponding image acquisition frame number;
the first matching degree determining submodule is used for determining a first matching degree between the first R value sequence and the second R value sequence, determining a second matching degree between the first G value sequence and the second G value sequence and determining a third matching degree between the first B value sequence and the second B value sequence;
and the second living body determining submodule is used for carrying out living body detection processing on the target object according to the first matching degree, the second matching degree and the third matching degree to obtain a first detection result.
In a possible implementation manner, the first matching degree determining sub-module is specifically configured to perform any one of the following:
when the color sequence comprises black and white, performing normalization processing and binarization processing on the first X value sequence to obtain a binarized X sequence; comparing the binarized X sequence with the binarized second X value sequence position by position, determining a first number of positions at which the two sequences have the same value, and determining the matching degree between the first X value sequence and the second X value sequence according to the ratio of the first number to the sequence length of the binarized X sequence;
determining each first jump point and each first jump direction of the first X value sequence, determining each second jump point and each second jump direction of the second X value sequence, and determining the matching degree between the first X value sequence and the second X value sequence according to the comparison result of the first and second jump points and the comparison result of the first and second jump directions;
x is any one of R, G and B.
In a possible implementation manner, the apparatus further comprises a third processing module;
and the third processing module is used for respectively carrying out discrete Fourier transform processing, low-pass filtering processing and inverse discrete Fourier transform processing on each first RGB value so as to filter high-frequency noise in the first RGB values.
In a possible implementation manner, the second processing module includes a space conversion sub-module, a third sequence generation sub-module, a fourth sequence generation sub-module, and a second matching degree determination sub-module;
a spatial conversion sub-module for converting the respective first RGB values into first color space values of a predetermined pattern, respectively, and converting second RGB values of the color sequence into second color space values of the predetermined pattern;
the third sequence generation submodule is used for dividing each first color space value according to a color channel of a preset mode and obtaining a first channel color value sequence comprising channel color values under the same channel based on the time sequence of each image frame in the video information;
the fourth sequence generation submodule is used for dividing the second color space value according to the color channel of the preset mode to obtain a second channel color value sequence comprising channel color values under the same channel;
and the second matching degree determining submodule is used for determining a fourth matching degree of the first channel color value sequence and the second channel color value sequence under the same channel according to the time sequence position, and performing living body detection processing on the target object according to the fourth matching degree to obtain a first detection result.
In a possible implementation manner, the apparatus further includes a fourth processing module;
the fourth processing module is used for generating a request identifier aiming at the color information acquisition request when receiving the color information acquisition request sent by the client, and recording first receiving time when the color information acquisition request corresponding to the request identifier is received;
the response information comprises a request identification, and the video information comprises the request identification;
the second processing module is specifically configured to determine a second receiving time when the video information is received; the color information acquisition module is used for acquiring the color information of the video information, and determining first receiving time of the color information acquisition request according to the request identifier included in the video information; and a processing unit configured to perform living body detection processing on the target object based on a time difference value between the first reception time and the second reception time.
In one aspect, there is provided a living body detection apparatus comprising:
the receiving and sending processing module is used for sending a request to the server and receiving response information fed back by the server aiming at the request, the response information comprises color information generated by the server based on a preset color generation rule, and the color information comprises a color sequence formed by at least two colors;
the acquisition processing module is used for acquiring video information of the target object based on the color information and sending the video information to the server so that the server performs living body detection processing on the target object based on the video information;
and the second receiving module is used for receiving the living body detection result of the target object, returned by the server, for performing the living body detection processing.
In one possible implementation manner, the acquisition processing module comprises a control output sub-module and an acquisition sub-module;
the control output sub-module is used for controlling the display screen to display a preset prompt pattern and outputting preset prompt information so that the target object is in the preset prompt pattern according to the preset prompt information;
and the acquisition sub-module is used for controlling the display screen to display colors corresponding to the color sequence and controlling the image acquisition equipment to acquire the video information of the target object when the target object is detected to be in the preset prompt pattern according to the preset prompt information.
In a possible implementation manner, the color information is a color sequence of a preset length generated by the server according to at least two preset color codes based on a preset color generation rule, each preset color code represents a corresponding color, and each color in the color sequence has a corresponding image acquisition frame number;
the acquisition submodule is specifically used for controlling the display screen to display the color sequence and controlling the image acquisition equipment to acquire video information according to the image acquisition frame number corresponding to each color in the color sequence.
In one aspect, an electronic device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the above-described server-side liveness detection method when executing the program.
In one aspect, an electronic device is provided, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the above-described method for live body detection on the client side when executing the program.
In one aspect, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor, implements the above-described server-side living body detection method.
In one aspect, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor, implements the above-described client-side living body detection method.
According to the living body detection method provided by the embodiments of the application, corresponding color information is generated based on a predetermined color generation rule, and the video information of the target object acquired by the client based on that color information is received, so that the received video information includes not only the biometric information and the specified living body detection actions of the target object, but also the specific color information generated by the server in response to the client's color information acquisition request, which greatly improves the authenticity and reliability of the acquired video information.
According to the living body detection method provided by the embodiments of the application, color information is requested from the server and the video information of the target object is acquired based on that color information, so that the acquired video information includes not only the biometric information and the specified living body detection actions of the target object but also the specific color information generated by the server in response to the client's request; this greatly improves the authenticity and reliability of the acquired video information and resists hacker attacks to a certain degree. By sending the collected video information to the server, the server can perform living body detection processing on the target object based on the video information, so that living body detection can be performed quickly and accurately, an attacker can be effectively prevented from defeating the detection by playing pre-recorded action videos in the order indicated by the prompts, and the security of living body detection is greatly improved.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of embodiments of the present application will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart of a living body detection method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of a living body detection method according to another embodiment of the present application;
FIG. 3 is a schematic diagram illustrating a display screen being controlled to display a predetermined prompt pattern and output predetermined prompt information in a living body detection method according to yet another embodiment of the present application;
FIG. 4 is a schematic diagram illustrating an interaction process of a living body detection method according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram showing a basic structure of a living body detection device according to another embodiment of the present application;
FIG. 6 is a schematic diagram illustrating a detailed structure of a living body detection device according to another embodiment of the present application;
FIG. 7 is a schematic diagram of a basic structure of a living body detection device according to yet another embodiment of the present application;
FIG. 8 is a schematic diagram illustrating a detailed structure of a living body detection device according to yet another embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
To make the objects, technical solutions and advantages of the embodiments of the present application more clear, the embodiments of the present application will be further described in detail with reference to the accompanying drawings.
The living body detection method, the living body detection device, the electronic equipment and the computer readable storage medium provided by the embodiment of the application aim to solve the technical problems in the prior art.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
One embodiment of the present application provides a living body detection method, which is performed by a server. The server may be an individual physical server, a physical server cluster, or a virtual server. As shown in fig. 1, the method includes:
step S110, when receiving a request sent by a client, generating corresponding color information based on a predetermined color generation rule, and sending response information including the color information to the client, where the color information includes a color sequence composed of at least two colors.
Specifically, when the client needs to perform living body detection on a target object (taking a client A as an example), it first sends a corresponding request to the server, which may be a color request or a trigger request for living body detection. When the request is a color request, the client requests the corresponding color information from the server, i.e., it obtains the color information by sending a color information acquisition request to the server; correspondingly, the server receives the color information acquisition request sent by the client. When the request is a trigger request for living body detection, the client requests the color information needed for living body detection through that trigger request, so as to obtain the corresponding color information from the server; correspondingly, the server receives the trigger request sent by the client. The color information comprises a color sequence of at least two colors.
Specifically, the server may generate the corresponding color information based on a predetermined color generation rule. After generating corresponding color information, the server sends the color information to the client to respond to the request of the client, namely sends response information including the color information to the client.
And step S120, receiving the video information of the target object, sent by the client, acquired based on the color information.
Specifically, after the server sends the response information including the color information to the client, the client may collect the video information of the target object according to the received color information and, once collection is complete, send the collected video information to the server. Correspondingly, the server receives the video information of the target object acquired based on the color information.
And step S130, performing living body detection processing on the target object based on the video information, and sending a corresponding living body detection result to the client.
Specifically, after receiving video information of a target object acquired based on color information sent by a client, the server may perform living body detection processing on the target object based on the received video information to detect whether the target object in the video information is a real living body object, that is, detect whether the target object is a living body based on the video information.
Specifically, after completing the detection of whether the target object is a living body based on the video information, the server may send the corresponding living body detection result (i.e., whether or not the target object is a living body) to the client.
According to the living body detection method provided by the embodiments of the application, corresponding color information is generated based on a predetermined color generation rule, and the video information of the target object acquired by the client based on that color information is received, so that the received video information includes not only the biometric information and the specified living body detection actions of the target object, but also the specific color information generated by the server in response to the client's color information acquisition request, which greatly improves the authenticity and reliability of the acquired video information.
In a possible implementation of an embodiment of the present application, the color information includes the color sequence and the image acquisition frame numbers corresponding to the respective colors in the color sequence. Generating the corresponding color information based on a predetermined color generation rule includes: based on the predetermined color generation rule, generating a color sequence of a preset length according to at least two preset color codes, and determining the image acquisition frame number corresponding to each color in the color sequence, wherein each preset color code represents a corresponding color. Receiving the video information of the target object sent by the client and acquired based on the color information includes: receiving the video information of the target object, sent by the client, acquired based on the color sequence and the image acquisition frame numbers corresponding to the respective colors.
Specifically, the server generates corresponding color information based on a predetermined color generation rule, including but not limited to randomly generating corresponding color information by the server, generating corresponding color information according to a rule set in advance, and the like, that is, the color generation rule includes but not limited to a randomly generating rule, a rule set in advance as needed, and the like.
Specifically, the color information may include a color sequence of a preset length composed of at least two preset color codes, or a color sequence of a preset length composed of specific RGB values. Each preset color code represents a corresponding color; the preset color codes include, but are not limited to, the digits 0 to 9, the letters A to Z, special characters (e.g., @, #, &), and any combination of digits, letters, and special characters (e.g., 1A, 2&, A#, 1A@). Each color corresponds uniquely to one preset color code, i.e., a color can be represented by exactly one preset color code.
In practical applications, which preset color code represents which color is a one-to-one correspondence established in advance as needed; for example, red with an RGB value of (255, 0, 0) is represented by the preset color code 0, green with an RGB value of (0, 255, 0) by the preset color code 1, blue with an RGB value of (0, 0, 255) by the preset color code 2, black with an RGB value of (0, 0, 0) by the preset color code A, and white with an RGB value of (255, 255, 255) by the preset color code "@B".
Specifically, when a color sequence of a preset length is composed of specific RGB values, it is possible to uniquely represent red using the RGB value of (255, 0, 0), green using the RGB value of (0, 255, 0), blue using the RGB value of (0, 0, 255), black using the RGB value of (0, 0, 0), white using the RGB value of (255, 255, 255), and the like.
Specifically, the colors (represented by preset color codes) included in the color sequence may be adjusted according to the predetermined color generation rule as actually needed; for example, color sequence 1 may be red, green, blue, red, green, blue, …; color sequence 2 may be blue, green, red, blue, green, red, …; and color sequence 3 may be black, red, green, blue, white, black, red, green, blue, white. If the preset color code for red is 0, for green is 1, for blue is 2, for black is A, and for white is "@B", then color sequence 1 is specifically 012012012 …, color sequence 2 is specifically 210210210 …, and color sequence 3 is specifically A012@BA012@B.
Specifically, in the process of generating a color sequence of a preset length according to at least two preset color codes based on the predetermined color generation rule, the server may simultaneously specify or set the image acquisition frame numbers corresponding to the respective colors in the color sequence; these frame numbers may be referred to as the configuration information of the color sequence, and unless otherwise noted, "configuration information" below refers to them. Taking color sequence 1 as an example, the image acquisition frame number for red may be set to P1 (i.e., P1 frames of the target are captured while red is displayed), for green to P2, and for blue to P3, where P1, P2 and P3 are positive integers whose values may be the same or different.
Further, the server generates a color sequence of a preset length according to at least two preset color codes based on the predetermined color generation rule and, after determining the image acquisition frame number corresponding to each color in the color sequence, sends the color sequence and the configuration information to the client in response to the client's color information acquisition request. The client can then acquire the video information of the target object according to the received color sequence and configuration information, i.e., video information acquired based on the color sequence and the per-color image acquisition frame numbers, and, after acquisition is completed, send the acquired video information to the server. Correspondingly, the server receives this video information.
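As a concrete illustration of one possible generation rule, the sketch below randomly draws preset color codes from the example table given above and attaches a frame count to each color; the table, the sequence length, and the frame count are all illustrative assumptions:

```python
import random

# Example code-to-color table from the text (RGB tuples).
COLOR_TABLE = {
    "0": (255, 0, 0),       # red
    "1": (0, 255, 0),       # green
    "2": (0, 0, 255),       # blue
    "A": (0, 0, 0),         # black
    "@B": (255, 255, 255),  # white
}

def generate_color_info(length=15, frames_per_color=4):
    """Generate a color sequence of a preset length plus its configuration
    information (image acquisition frame number per color)."""
    codes = [random.choice(list(COLOR_TABLE)) for _ in range(length)]
    config = {code: frames_per_color for code in set(codes)}
    return codes, config

codes, config = generate_color_info()
# e.g. codes == ['2', 'A', '0', ...] and config == {'2': 4, 'A': 4, '0': 4}
```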
In one possible implementation manner of an embodiment of the present application, performing living body detection processing on a target object based on video information includes: the method comprises the steps of extracting image frames corresponding to colors in a color sequence from video information, determining face regions corresponding to the image frames, calculating first RGB values of the face regions, and carrying out living body detection processing on a target object based on the first RGB values and the color sequence to obtain a first detection result.
Specifically, after receiving the video information of the target object acquired based on the color sequence and the per-color image acquisition frame numbers sent by the client, the server may extract from the video information the image frames corresponding to each color in the color sequence, where the number of image frames per color is determined by the configuration information generated along with the color sequence. Taking color sequence 1 as an example, if the image acquisition frame number is 4 for red, 4 for green, and 4 for blue, and the color sequence of preset length is red, green, blue repeated five times (15 colors in total), the video information contains 60 image frames; when extracting image frames from the video information, the image frames corresponding to each color in the color sequence may be extracted, for example 20 image frames corresponding to red, 20 corresponding to green, and 20 corresponding to blue.
In practical applications, 60 frames of image frames may be extracted from the video information at predetermined time intervals as needed, 60 frames of image frames may be extracted from predetermined positions such as the frontmost position, the middle position, and the rearmost position of the video information, and other extraction methods may also be used, which is not limited in the embodiment of the present application.
Further, in the process of extracting the image frames corresponding to each color in the color sequence from the video information, the extracted image frames carry a definite extraction timing. Taking color sequence 1 as an example, according to the extraction timing: frames 1 to 4 correspond to red in the first red-green-blue combination, frames 5 to 8 to green in the first combination, and frames 9 to 12 to blue in the first combination; frames 13 to 16 correspond to red in the second combination, frames 17 to 20 to green in the second combination, and frames 21 to 24 to blue in the second combination; and so on, until frames 49 to 52 correspond to red in the fifth combination, frames 53 to 56 to green in the fifth combination, and frames 57 to 60 to blue in the fifth combination. If the 1st to 60th frame images are denoted L1 to L60 respectively, the extraction timing of the image frames is L1, L2, L3, …, L60 in this order.
Further, after the image frames are extracted from the video information, for any image frame, a face region (for example, a human face region) may be determined, and the RGB value of that face region (i.e., the first RGB value) may then be calculated, for example as the per-channel mean of the pixels in the region. After the first RGB values of the face regions corresponding to the respective image frames are determined, whether the target object is a living body may be detected based on these first RGB values, yielding a first detection result.
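A minimal sketch of this per-frame computation, assuming OpenCV's bundled Haar cascade as the face detector (any detector would do) and the per-channel pixel mean as the first RGB value:

```python
import cv2

# Haar cascade face detector shipped with OpenCV (an assumed choice).
FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def first_rgb_value(frame_bgr):
    """Determine the face region of one image frame and return its mean
    RGB value, or None if no face is found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    region = frame_bgr[y:y + h, x:x + w]
    b, g, r = region.reshape(-1, 3).mean(axis=0)  # OpenCV frames are BGR
    return (r, g, b)
```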
Specifically, when the living body detection processing is performed on the target object based on the respective first RGB values and the color sequence, and the first detection result is obtained, the following process may be performed:
firstly, according to each first RGB value and the time sequence of each image frame in the video information, a corresponding first R value sequence, a corresponding first G value sequence and a corresponding first B value sequence are obtained. In practical application, the following processing may be adopted: and dividing each first RGB value into a corresponding first R value, a first G value and a first B value according to the R channel, the G channel and the B channel.
Specifically, taking the color sequence 1 as an example, the RGB values of the 1 st image frame L1 (i.e., the first RGB values) are divided into corresponding first R values (denoted as R1), first G values (denoted as G1) and first B values (denoted as B1), the RGB values of the 2 nd image frame L2 (i.e., the first RGB values) are divided into corresponding first R values (denoted as R2), first G values (denoted as G2) and first B values (denoted as B2), and so on, and the RGB values of the 60 th image frame L60 (i.e., the first RGB values) are divided into corresponding first R values (denoted as R60), first G values (denoted as G60) and first B values (denoted as B60).
Secondly, based on the time sequence of each image frame in the video information, a first R value sequence comprising each first R value, a first G value sequence comprising each first G value and a first B value sequence comprising each first B value are obtained. According to the above description, the first R value sequence is R1, R2, …, R60, the first G value sequence is G1, G2, …, G60, and the first B value sequence is B1, B2, …, B60.
And thirdly, obtaining a corresponding second R value sequence, a second G value sequence and a second B value sequence according to each second RGB value in the color sequence and the corresponding image acquisition frame number. In practical application, the following processing may be adopted: and determining a second RGB value of the color sequence locally generated by the server, and dividing the second RGB value into a corresponding second R value, a second G value and a second B value according to the R channel, the G channel and the B channel to obtain a corresponding second R value sequence, a second G value sequence and a second B value sequence.
Specifically, after generating the corresponding color sequence and configuration information (i.e., the image acquisition frame number corresponding to each color in the color sequence) in response to the client's color information acquisition request, the server stores them locally; it may then expand the locally stored color sequence into the corresponding number of image frames (e.g., 60 frames) according to the configuration information and determine the RGB value of each such frame, these RGB values being the RGB values of the color sequence (i.e., the second RGB values).
After obtaining the second RGB values of the color sequence, the second RGB values may be divided into corresponding second R values, second G values, and second B values according to the R, G, and B channels, so as to obtain the corresponding second R value sequence (denoted R'1, R'2, …, R'60), second G value sequence (denoted G'1, G'2, …, G'60), and second B value sequence (denoted B'1, B'2, …, B'60).
Next, a first matching degree between the first R value sequence (R1, R2, …, R60) and the second R value sequence (R'1, R'2, …, R'60) is determined, a second matching degree between the first G value sequence (G1, G2, …, G60) and the second G value sequence (G'1, G'2, …, G'60) is determined, and a third matching degree between the first B value sequence (B1, B2, …, B60) and the second B value sequence (B'1, B'2, …, B'60) is determined.
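The channel splitting and the construction of the server-side expected sequences can be sketched as follows (the function names are illustrative):

```python
def split_channels(rgb_values):
    """Split time-ordered (R, G, B) values into the R, G, and B value
    sequences used for matching."""
    r_seq = [v[0] for v in rgb_values]
    g_seq = [v[1] for v in rgb_values]
    b_seq = [v[2] for v in rgb_values]
    return r_seq, g_seq, b_seq

def expected_rgb_per_frame(color_sequence, frames_per_color):
    """Expand the server-side color sequence into one expected (second)
    RGB value per captured frame, per the configuration information."""
    return [rgb for rgb in color_sequence
            for _ in range(frames_per_color[rgb])]
```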
Specifically, whether determining a first matching degree between the first R value sequence and the second R value sequence, determining a second matching degree between the first G value sequence and the second G value sequence, or determining a third matching degree between the first B value sequence and the second B value sequence, the corresponding matching degree may be determined in any one of the following manners. For ease of understanding, it can be summarized as: a manner of determining a degree of match between a first sequence of X values and a second sequence of X values, where X is any one of R, G and B, including any one of:
the first method is as follows: when the color sequence includes black and white, for example, the first two colors of the color sequence are fixed as black and white, and for example, the last two colors of the color sequence are fixed as black and white, and for example, the color sequence includes black and white at any position of the color sequence, considering that the client displays corresponding colors through a display screen, and reflects the corresponding colors to a face area of a target object through reflection, and then acquires video information of the target object through an image acquisition device, the color is lost, and a situation that the acquired colors cannot be completely matched with colors stored locally in a server easily occurs, so: firstly, normalizing a first X value sequence, wherein the value range of the normalized first X value sequence is [0, 1 ]; next, the normalized first X-value sequence is subjected to binarization processing, for example, binarization to 0 and 1, by using a preset corresponding threshold, so as to obtain a corresponding binarization X-sequence.
The server-local second X value sequence is usually binarized in advance. Thus, after the binarized X sequence is obtained, it can be compared with the binarized second X value sequence position by position, the number of positions at which the two sequences have the same value (i.e., the first number) is determined, and the matching degree between the first X value sequence and the second X value sequence is determined as the ratio of the first number to the sequence length of the binarized X sequence. Suppose the binarized X sequence obtained after normalizing and binarizing the first X value sequence (denoted sequence T) is 01100110, i.e., its sequence length is 8, and the server-local binarized second X value sequence (denoted sequence T') is 01100100. Comparing T and T' position by position, the two sequences differ only at the penultimate position, and the values at all other positions are the same; the number of positions with the same value is therefore 7, i.e., the first number is 7, and the matching degree between the first X value sequence and the second X value sequence is 7/8 = 87.5%.
When X is R, a first matching degree between the first R value sequence and the second R value sequence is obtained, when X is G, a second matching degree between the first G value sequence and the second G value sequence is obtained, and when X is B, a third matching degree between the first B value sequence and the second B value sequence is obtained.
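A sketch of this first matching method, reproducing the worked example above; the 0.5 threshold is an assumption, as the text only says "a preset corresponding threshold":

```python
import numpy as np

def binarize(values, threshold=0.5):
    """Normalize a value sequence to [0, 1], then binarize it with a
    preset threshold (0.5 is an assumed choice)."""
    v = np.asarray(values, dtype=float)
    span = v.max() - v.min()
    norm = (v - v.min()) / span if span > 0 else np.zeros_like(v)
    return (norm >= threshold).astype(int).tolist()

def match_degree(binary_captured, binary_expected):
    """Fraction of sequence positions at which the two binarized
    sequences carry the same value."""
    same = sum(a == b for a, b in zip(binary_captured, binary_expected))
    return same / len(binary_captured)

t  = [0, 1, 1, 0, 0, 1, 1, 0]   # sequence T from the example
t2 = [0, 1, 1, 0, 0, 1, 0, 0]   # server-side sequence T'
print(match_degree(t, t2))      # 0.875, i.e. 7/8 = 87.5%
```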
The second manner: after the first X value sequence and the second X value sequence are obtained, each sequence value in the first X value sequence may be taken as an ordinate and its sequence position as an abscissa; plotting these coordinate points in order and connecting them yields the corresponding curve of the first X value sequence (denoted as the first X value curve). When X is R, this gives the curve of the first R value sequence (the first R value curve); when X is G, the first G value curve; and when X is B, the first B value curve. The curve of the second X value sequence (denoted as the second X value curve) is obtained in the same way, giving the second R value curve, the second G value curve, or the second B value curve when X is R, G, or B, respectively.
After the first X value curve and the second X value curve are obtained, each first jump point and each first jump direction of the first X value curve, and each second jump point and each second jump direction of the second X value curve, may be determined. The first jump points and the second jump points are then compared in order of abscissa (i.e., sequence position) to obtain a corresponding first comparison result; likewise, the first jump directions and the second jump directions are compared in order of abscissa to obtain a corresponding second comparison result. The matching degree between the first X value sequence and the second X value sequence can then be determined from the first comparison result and the second comparison result. In practical applications, the jump points of a curve can be found by computing its first derivative, and the jump directions by computing its second derivative.
When X is R, a first matching degree between the first R value sequence and the second R value sequence is obtained, when X is G, a second matching degree between the first G value sequence and the second G value sequence is obtained, and when X is B, a third matching degree between the first B value sequence and the second B value sequence is obtained.
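The following is a hedged Python sketch of this second manner, using a first difference as a discrete stand-in for the derivative computations described above; the jump threshold of 0.2 is an assumed parameter, not a value from the patent:

```python
import numpy as np

def transition_match(first_x, second_x, jump_threshold=0.2):
    """Compare jump points and jump directions of two value curves."""
    def transitions(seq):
        diff = np.diff(np.asarray(seq, dtype=float))
        points = np.where(np.abs(diff) > jump_threshold)[0]  # jump positions
        directions = np.sign(diff[points])                   # +1 rising, -1 falling
        return points, directions

    p1, d1 = transitions(first_x)
    p2, d2 = transitions(second_x)
    if len(p1) != len(p2):
        return 0.0          # differing numbers of jumps -> treat as mismatch
    if len(p1) == 0:
        return 1.0          # both curves are flat
    same_points = np.sum(p1 == p2)        # first comparison result
    same_directions = np.sum(d1 == d2)    # second comparison result
    return float(same_points + same_directions) / (2 * len(p1))
```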
Finally, living body detection processing is performed on the target object according to the first matching degree, the second matching degree, and the third matching degree to obtain a first detection result. Here, the first detection result may be a probability value that the target object is a living body, such as 80% or 95% (equivalently 0.8 or 0.95); in that case, whether the target object is a living body may be determined by comparing the first detection result with a predetermined threshold, for example treating the target object as a living body when the first detection result is not less than 90% (or not less than 0.9). Of course, the first detection result may also be an indicator value of whether the target object is a living body, such as 0 or 1, where, for example, 0 represents that the target object is not a living body and 1 that it is, or vice versa.
Specifically, in determining whether the target object is a living body according to the first matching degree, the second matching degree, and the third matching degree, the three matching degrees may be considered jointly. For example, the target object may be determined to be a living body when all three matching degrees are higher than a predetermined matching degree threshold; or when any two of the three are higher than the predetermined matching degree threshold; or the three matching degrees may be weighted-averaged and whether the target object is a living body determined from the weighted result, thereby obtaining the first detection result. Other decision manners are also possible (a sketch of these strategies follows), and the embodiments of the present application do not limit them.
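For illustration, a minimal sketch of these three decision strategies; the matching degree threshold of 0.9 and the equal weights are assumptions chosen for the example:

```python
def is_live(m1, m2, m3, strategy="all", threshold=0.9,
            weights=(1 / 3, 1 / 3, 1 / 3)):
    """Combine the first, second, and third matching degrees."""
    if strategy == "all":        # all three matching degrees must pass
        return m1 >= threshold and m2 >= threshold and m3 >= threshold
    if strategy == "any_two":    # any two of the three must pass
        return sum(m >= threshold for m in (m1, m2, m3)) >= 2
    if strategy == "weighted":   # weighted average compared to the threshold
        w1, w2, w3 = weights
        return w1 * m1 + w2 * m2 + w3 * m3 >= threshold
    raise ValueError(strategy)
```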
It should be noted that, in this implementation, before detecting whether the target object is a living body based on the first RGB values, each first RGB value may first be converted into the frequency domain by a discrete Fourier transform and low-pass filtered there to remove high-frequency noise; an inverse discrete Fourier transform then converts the filtered values back into the time domain, and the detection is performed on these denoised values, which improves the accuracy and reliability of the detection result.
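A minimal denoising sketch along these lines, using numpy's real FFT and an assumed cutoff of 8 frequency bins:

```python
import numpy as np

def lowpass_denoise(values, keep_bins=8):
    """DFT -> zero high-frequency bins -> inverse DFT back to time domain."""
    spectrum = np.fft.rfft(np.asarray(values, dtype=float))
    spectrum[keep_bins:] = 0                  # drop high-frequency components
    return np.fft.irfft(spectrum, n=len(values))
```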
In a possible implementation manner of an embodiment of the present application, performing living body detection processing on the target object based on the first RGB values and the color sequence to obtain a first detection result includes: converting the respective first RGB values into first color space values of a predetermined mode, and converting the second RGB values of the color sequence into second color space values of the predetermined mode; dividing each first color space value according to the color channels of the predetermined mode, and obtaining first channel color value sequences including the channel color values under the same channel based on the time sequence of the image frames in the video information; dividing the second color space value according to the color channels of the predetermined mode to obtain second channel color value sequences including the channel color values under the same channel; and determining a fourth matching degree of the first channel color value sequence and the second channel color value sequence under the same channel according to the time sequence positions, and performing living body detection processing on the target object according to the fourth matching degree to obtain the first detection result.
Specifically, the predetermined modes include, but are not limited to, HSL (Hue, Saturation, Lightness) and HSV (Hue, Saturation, Value).
The following describes the processing procedure of this implementation in detail, taking HSL as the predetermined mode:
First, each first RGB value obtained from the video information is converted into a first color space value in HSL, i.e., a corresponding first HSL value, and the second RGB value of the server-local color sequence is likewise converted into a second color space value in HSL, i.e., a corresponding second HSL value. For example, if there are 60 first RGB values, converting each of them into the first color space value of HSL (i.e., the first HSL value) yields 60 first HSL values; likewise, converting the second RGB value of the server-local color sequence yields the second HSL value of the color sequence.
Next, the 60 first HSL values are divided according to the color channels of HSL (i.e., the H channel, the S channel, and the L channel): the 1st first HSL value is divided into a corresponding first H value (denoted H1), first S value (denoted S1), and first L value (denoted L1); the 2nd first HSL value is divided into H2, S2, and L2; and so on, until the 60th first HSL value is divided into H60, S60, and L60.
After the first HSL values are divided according to the color channels of HSL, first channel color value sequences including the channel color values under the same channel may be obtained based on the time sequence of the image frames in the video information (i.e., the time sequence of the first RGB values): a first H value sequence (H1, H2, …, H60) under the H channel, a first S value sequence (S1, S2, …, S60) under the S channel, and a first L value sequence (L1, L2, …, L60) under the L channel.
Similarly, after the second HSL value of the server-local color sequence is obtained, it may be divided according to the color channels of HSL (i.e., the H channel, the S channel, and the L channel) to obtain second channel color value sequences including the channel color values under the same channel: a second H value sequence (H'1, H'2, …, H'60) under the H channel, a second S value sequence (S'1, S'2, …, S'60) under the S channel, and a second L value sequence (L'1, L'2, …, L'60) under the L channel.
A fourth matching degree of the first channel color value sequence and the second channel color value sequence under the same channel is then determined according to the time sequence positions, and whether the target object is a living body is determined according to the fourth matching degree to obtain the first detection result. In other words, a fifth matching degree between the first H value sequence (H1, H2, …, H60) and the second H value sequence (H'1, H'2, …, H'60), a sixth matching degree between the first S value sequence (S1, S2, …, S60) and the second S value sequence (S'1, S'2, …, S'60), and a seventh matching degree between the first L value sequence (L1, L2, …, L60) and the second L value sequence (L'1, L'2, …, L'60) are determined.
Whether determining the fifth matching degree between the first H value sequence and the second H value sequence, the sixth matching degree between the first S value sequence and the second S value sequence, or the seventh matching degree between the first L value sequence and the second L value sequence, the first or second manner described above may be adopted, and the details are not repeated here. The fourth matching degree may be any one of the fifth, sixth, and seventh matching degrees, or a combination of the three.
Finally, whether the target object is a living body is determined according to the fourth matching degree (i.e., the fifth, sixth, and seventh matching degrees, alone or in combination) to obtain the first detection result.
It should be noted that, for the predetermined mode of HSV, the processing method is the same as the processing method of HSL, and is not described herein again.
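For illustration, the channel split described above can be sketched with Python's standard colorsys module (note that colorsys orders the HLS components as H, L, S); the 0-255 component range of the input is an assumption made for the example:

```python
import colorsys

def hsl_channel_sequences(rgb_values):
    """rgb_values: list of (r, g, b) tuples with components in 0..255.
    Returns the per-channel sequences, e.g. H1..H60, S1..S60, L1..L60."""
    h_seq, s_seq, l_seq = [], [], []
    for r, g, b in rgb_values:
        h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
        h_seq.append(h)
        s_seq.append(s)
        l_seq.append(l)
    return h_seq, s_seq, l_seq
```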
In one possible implementation manner of an embodiment of the present application, performing living body detection processing on the target object based on the video information includes: extracting a first preset number of image frames from the video information; and performing living body detection processing on the target object according to the first preset number of image frames through a pre-trained neural network model to obtain a second detection result. Here, the second detection result may be a probability value that the target object is a living body, such as 85% or 97% (equivalently 0.85 or 0.97); in that case, whether the target object is a living body may be determined by comparing the second detection result with a predetermined threshold, for example treating the target object as a living body when the second detection result is not less than 80% (or not less than 0.8). The second detection result may also be an indicator value of whether the target object is a living body, such as 0 or 1, where, for example, 0 represents that the target object is not a living body and 1 that it is, or vice versa.
Specifically, after receiving the video information of the target object acquired based on the color sequence and the image acquisition frame numbers corresponding to the respective colors sent by the client, the server may extract a first preset number of image frames from the video information according to actual needs, for example 20, 30, or 50 frames. The image frames may be extracted randomly or at regular intervals, for example one image frame every 1, 3, or 5 seconds; other extraction manners are also possible, which the embodiments of the present application do not limit.
Further, after the first preset number of image frames is extracted, they may be input into a pre-trained neural network model, so that living body detection is performed on the target object according to these frames through the neural network model to obtain a corresponding detection result (denoted as the second detection result). The neural network model is trained in advance based on a deep-learning video living body detection algorithm.
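A hedged sketch of this second detection branch; `liveness_model` is a placeholder for whatever pre-trained network a deployment actually uses, and uniform sampling is only one of the extraction manners mentioned above:

```python
import numpy as np

def second_detection(frames, liveness_model, first_preset_number=30):
    """Sample a fixed number of frames at regular intervals and score them."""
    step = max(1, len(frames) // first_preset_number)
    sampled = frames[::step][:first_preset_number]   # uniform sampling
    batch = np.stack(sampled)                        # shape (N, H, W, 3)
    return liveness_model.predict(batch)             # e.g. a probability value
```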
Further, after the second detection result and the first detection result are obtained, whether the target object is a living body may be determined directly from the first detection result, directly from the second detection result, or jointly from both the first detection result and the second detection result.
When whether the target object is a living body is determined jointly from the first detection result and the second detection result, the target object may be determined to be a living body only when both results indicate a living body; that is, the target object is directly determined not to be a living body as soon as either the first or the second detection result indicates a non-living body. Alternatively, the first and second detection results may be weighted-averaged and whether the target object is a living body determined from the weighted result. Scaling factors may also be set as needed: with a scaling factor K for the first detection result (denoted S1) and a scaling factor L for the second detection result (denoted S2), where the sum of K and L may be set to a predetermined value such as 1 (i.e., 100%), the weighted sum K*S1 + L*S2 is compared with a predetermined threshold to obtain the living body detection result of whether the target object is a living body.
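As an illustration, the weighted fusion K*S1 + L*S2 might be sketched as follows; the values of K, L, and the decision threshold are assumptions, chosen so that K + L = 1:

```python
def fuse_results(s1, s2, k=0.6, l=0.4, threshold=0.9):
    """Weighted fusion of the first (s1) and second (s2) detection results."""
    return k * s1 + l * s2 >= threshold   # True -> target object judged live
```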
Further, after obtaining the final detection result of whether the target object is a living body, the server may send it to the client so that the client prompts whether the target object has passed the living body detection; the client may display the received living body detection result on a display screen or play it through a voice player. The final detection result, together with the color information acquisition request, may also be sent to a third-party client for authentication.
In a possible implementation manner of an embodiment of the present application, when receiving a color information acquisition request sent by a client, a server generates a request identifier for the color information acquisition request, and records a first receiving time when receiving the color information acquisition request corresponding to the request identifier.
Specifically, after receiving a color information acquisition request sent by the client, the server generates a request identifier (for example, ID_1) for the request, which uniquely identifies the color information acquisition request. Meanwhile, the server records the first receiving time (for example, time_0) at which the request corresponding to this identifier was received; during recording, the server may establish a one-to-one correspondence between the request identifier (ID_1) and the first receiving time (time_0).
Specifically, when sending the response information including the generated color information to the client, the server includes the request identifier (such as ID_1) in the response information. When the client finishes acquiring the video information of the target object based on the received color information and sends the video information to the server, the request identifier (such as ID_1) is carried in the video information.
Specifically, in the process of detecting whether the target object is a living body based on the video information, the server first determines the second receiving time (such as time_1) at which the video information was received; it then determines the first receiving time (such as time_0) of the color information acquisition request according to the request identifier (such as ID_1) included in the video information, i.e., looks up the time_0 corresponding to ID_1 in the one-to-one correspondence; finally, it detects whether the target object is a living body based on the first receiving time and the second receiving time.
In performing the living body detection processing on the target object based on the time difference between the first receiving time and the second receiving time, the difference between the second receiving time time_1 and the first receiving time time_0 may be calculated, and whether the target object is a living body determined from the comparison of this difference with a predetermined time threshold: for example, when the time difference is greater than the predetermined time threshold, the target object is determined not to be a living body, and when it is less than or equal to the threshold, the target object is determined to be a living body. A minimal sketch of this check follows.
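The sketch below illustrates the timing check; the 60-second threshold and the in-memory mapping from request identifiers to first receiving times are assumptions made for the example:

```python
import time

first_receive_times = {}                      # request_id -> time_0

def is_expired(request_id, predetermined_threshold=60.0):
    """True when the round trip exceeds the threshold (video expired)."""
    time_1 = time.time()                      # second receiving time
    time_0 = first_receive_times[request_id]  # first receiving time (KeyError
                                              # if the request is unknown)
    return (time_1 - time_0) > predetermined_threshold
```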
Yet another embodiment of the present application provides a method for detecting a living body, which is performed by a terminal device. The terminal device may be a desktop device or a mobile terminal. As shown in fig. 2, the method includes:
step S210, sending a request to the server, and receiving response information fed back by the server for the request, where the response information includes color information generated by the server based on a predetermined color generation rule.
Specifically, when the client (for example, Client A) performs living body detection on a target object, it first needs to send a request to the server, which may be a color information acquisition request or a trigger request for living body detection. When the request is a color information acquisition request, the client requests the corresponding color information from the server by sending that request, and the server correspondingly receives the color information acquisition request sent by the client. When the request is a trigger request for living body detection, the client requests the color information used for living body detection through that trigger request, and the server correspondingly receives the trigger request sent by the client. The color information includes a color sequence composed of at least two colors.
Specifically, after receiving the request sent by the client, the server may generate the corresponding color information based on a predetermined color generation rule and send it to the client in response to the request. Correspondingly, the client receives the response information fed back by the server, which includes the color information generated by the server based on the predetermined color generation rule.
Step S220, collecting video information of the target object based on the color information, and sending the video information to the server, so that the server performs living body detection processing on the target object based on the video information.
Specifically, after receiving the response information fed back by the server, the client acquires the video information of the target object according to the color information included in the response information. After the acquisition is finished, the client sends the video information to the server so that the server performs living body detection processing on the target object based on the video information.
In step S230, a living body detection result of the living body detection processing performed on the target object is received from the server.
Specifically, after receiving the video information of the target object acquired based on the color information sent by the client, the server may detect, based on the received video information, whether the target object in the video information is a real living object, i.e., whether the target object is a living body.
Further, after completing the detection of whether the target object is a living body based on the video information, the server may send the corresponding detection result (living body or not) to the client. Correspondingly, the client receives the living body detection result, returned by the server, of whether the target object is a living body.
According to the living body detection method provided by the embodiment of the application, color information is requested from the server and the video information of the target object is acquired based on that color information, so that the acquired video information includes not only the biometric information and the specified living body detection actions of the target object, but also the specific color information generated by the server for the client's color information acquisition request. This greatly improves the reliability and credibility of the acquired video information and can resist hacker attacks to a certain degree. By sending the acquired video information to the server so that the server performs living body detection processing on the target object based on it, the living body detection can be carried out quickly and accurately, and an attacker can be effectively prevented from defeating the detection by playing pre-recorded action videos in the order given by the prompt information, greatly improving the security of living body detection.
In a possible implementation manner of an embodiment of the present application, acquiring video information of a target object based on color information includes: controlling a display screen to display a preset prompt pattern and output preset prompt information so that a target object is in the preset prompt pattern according to the preset prompt information; and when the target object is detected to be in the preset prompt pattern according to the preset prompt information, controlling the display screen to display colors corresponding to the color sequence and controlling the image acquisition equipment to acquire the video information of the target object.
Specifically, after receiving the color information fed back by the server, the client may control the display screen to display a predetermined prompt pattern, such as the circle and face outline shown in the guide frame of fig. 3; the predetermined prompt pattern may also take other forms, which the embodiment of the present application does not limit. Meanwhile, the client outputs predetermined prompt information, for example by displaying it on the display screen or by playing it through a voice player; the predetermined prompt information may be "please place the front face in the guide frame" as shown in fig. 3, or other prompt information with a guiding function, which the embodiment of the present application likewise does not limit.
Specifically, the client controls the display screen to display the predetermined prompt pattern and outputs the predetermined prompt information in order to guide the face of the target object to directly face the screen at a short distance and to remain as steady as possible during shooting, so that the target object is guided into the predetermined prompt pattern according to the predetermined prompt information as preparation for acquiring the video information.
Specifically, when the client detects that the target object is in a predetermined prompt pattern according to predetermined prompt information, the client controls the display screen to display the color information received from the server while controlling an image capture device (e.g., a camera, etc.) to capture video information of the target object. The color information is a color sequence with a preset length generated by the server according to each preset color code based on a preset color generation rule, each preset color code represents a corresponding color, and each color in the color sequence has a corresponding image acquisition frame number.
Specifically, in the process of acquiring the video information of the target object, the client sets the display screen to maximum brightness, controls the display screen to display the received color sequence, and controls the image acquisition equipment to acquire the video information according to the image acquisition frame number corresponding to each color in the color sequence. In other words, during capture the client drives the image acquisition device by the per-color image acquisition frame numbers configured in the server's configuration information, and suppresses on-screen notifications, screen locking, and the like. Shooting may be finished after, for example, 2-3 or 5-6 seconds of recording, at which point the acquisition of the video information is complete. A capture loop along these lines is sketched below.
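The following OpenCV-based sketch illustrates such a capture loop; the camera index, window size, and BGR color ordering are assumptions made for the example, not details from the patent:

```python
import cv2
import numpy as np

def capture_with_colors(color_sequence, frames_per_color):
    """color_sequence: list of (b, g, r) tuples to display full-screen;
    frames_per_color: per-color image acquisition frame numbers."""
    cap = cv2.VideoCapture(0)          # assumed camera index
    frames = []
    for color, n_frames in zip(color_sequence, frames_per_color):
        screen = np.full((720, 1280, 3), color, dtype=np.uint8)
        for _ in range(n_frames):
            cv2.imshow("liveness", screen)   # display the prompt color
            cv2.waitKey(1)
            ok, frame = cap.read()           # grab one camera frame
            if ok:
                frames.append(frame)
    cap.release()
    cv2.destroyAllWindows()
    return frames
```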
In a possible implementation manner of the embodiment of the present application, an interactive process of living body detection between the client and the server is provided, as shown in fig. 4:

Step 401: the client sends a color information acquisition request to the server.
Step 402: the server generates identification information for the color information acquisition request and records the receiving time of the request (such as time_0).
Step 403: the server generates a color sequence of a preset length from the preset color codes based on a predetermined color generation rule, and determines the image acquisition frame numbers corresponding to the respective colors in the color sequence.
Step 404: the server sends the color sequence, the per-color image acquisition frame numbers, and the identification information to the client.
Step 405: the client guides the target object to align with the predetermined prompt pattern, controls the display screen to display the color sequence, and controls the image acquisition equipment to acquire the video information.
Step 406: the client sends the identification information and the acquired video information to the server.
Step 407: the server judges whether the video information has expired, i.e., according to the receiving time time_0 of the color information acquisition request and the time time_1 at which the video information was received.
Step 408: when the video information has expired, a detection result that the target object is not a living body is returned to the client.
Step 409: when the video information has not expired, the server detects whether the target object is a living body through the color RGB values to obtain a first detection result.
Step 410: when the video information has not expired, the server detects whether the target object is a living body through a pre-trained neural network model to obtain a second detection result.
Step 411: the server determines whether the target object is a living body according to the first detection result and the second detection result.
Step 412: the server returns the corresponding living body detection result to the client.
Step 413: the client generates prompt information for the living body detection result, which may be displayed on the display screen or played through a voice player.
Step 414: the living body detection result is sent to a third party for authentication.
Fig. 5 is a schematic structural diagram of a living body detection apparatus according to another embodiment of the present application, and as shown in fig. 5, the apparatus 50 may include a first processing module 51, a first receiving module 52, and a second processing module 53, where:
a first processing module 51, configured to generate corresponding color information based on a predetermined color generation rule when receiving a request sent by a client, and send response information including the color information to the client, where the color information includes a color sequence composed of at least two colors;
the first receiving module 52 is configured to receive video information of a target object, which is sent by a client and acquired based on color information;
and a second processing module 53, configured to perform living body detection processing on the target object based on the video information, and send a corresponding living body detection result to the client.
The device provided by the embodiment of the application generates the corresponding color information based on a predetermined color generation rule and receives the video information of the target object acquired by the client based on that color information, so that the received video information includes not only the biometric information and the specified living body detection actions of the target object, but also the specific color information generated by the server for the client's color information acquisition request, thereby greatly improving the reliability and credibility of the acquired video information.
Fig. 6 is a detailed structural diagram of a living body detection apparatus according to yet another embodiment of the present application. As shown in fig. 6, the apparatus 60 may include a first processing module 61, a first receiving module 62, a second processing module 63, a third processing module 64, and a fourth processing module 65, where the functions implemented by the first processing module 61 in fig. 6 are the same as those of the first processing module 51 in fig. 5, the functions implemented by the first receiving module 62 in fig. 6 are the same as those of the first receiving module 52 in fig. 5, and the functions implemented by the second processing module 63 in fig. 6 are the same as those of the second processing module 53 in fig. 5, which are not repeated here. The living body detection apparatus shown in fig. 6 is described in detail below:
specifically, the first processing module 61 is specifically configured to generate a color sequence with a preset length according to at least two preset color codes based on a preset color generation rule, and determine image acquisition frame numbers corresponding to respective colors in the color sequence, where the respective preset color codes represent respective colors;
the first receiving module 62 is specifically configured to receive video information of a target object, which is sent by a client and acquired based on image acquisition frame numbers corresponding to the color sequences and the colors respectively.
Specifically, the second processing module 63 is specifically configured to: extract, from the video information, each image frame corresponding to each color in the color sequence; determine the face regions corresponding to the respective image frames; calculate the first RGB values of the face regions; and perform living body detection processing on the target object based on the first RGB values and the color sequence to obtain a first detection result.
Further, the second processing module 63 includes a sampling sub-module 631, a detection sub-module 632, and a first living body determining sub-module 633, wherein:
a sampling sub-module 631 for extracting a first preset number of image frames from the video information;
the detection submodule 632 is configured to perform living body detection processing on the target object according to a first preset number of image frames through a pre-trained neural network model to obtain a second detection result;
the first living body determining submodule 633 is configured to obtain a living body detection result of whether the target object is a living body according to the first detection result and the second detection result.
Further, the second processing module 63 includes a first sequence generation submodule 634, a second sequence generation submodule 635, a first matching degree determination submodule 636 and a second living body determination submodule 637, wherein:
the first sequence generation sub-module 634 is configured to obtain a corresponding first R value sequence, a first G value sequence, and a first B value sequence according to each first RGB value and a time sequence of each image frame in the video information;
the second sequence generation submodule 635 is configured to obtain a corresponding second R value sequence, a second G value sequence, and a second B value sequence according to each second RGB value in the color sequence and the corresponding image acquisition frame number;
a first matching degree determining sub-module 636, configured to determine a first matching degree between the first R value sequence and the second R value sequence, determine a second matching degree between the first G value sequence and the second G value sequence, and determine a third matching degree between the first B value sequence and the second B value sequence;
the second living body determining submodule 637 is configured to perform living body detection processing on the target object according to the first matching degree, the second matching degree, and the third matching degree, to obtain a first detection result.
In a possible implementation manner, the first matching degree determining sub-module 636 is specifically configured to perform any one of the following:
when the color sequence comprises black and white, performing normalization processing and binarization processing on the first X value sequence to obtain a binarized X sequence; sequentially comparing the binarized X sequence with the binarized second X value sequence according to sequence position, determining a first number of positions at which the two sequences have the same value, and determining the matching degree between the first X value sequence and the second X value sequence according to the ratio of the first number to the sequence length of the binarized X sequence;
determining each first jump point and each first jump direction of the first X value sequence, determining each second jump point and each second jump direction of the second X value sequence, and determining the matching degree between the first X value sequence and the second X value sequence according to the comparison result of the first jump points with the second jump points and the comparison result of the first jump directions with the second jump directions;
where X is any one of R, G, and B.
In a possible implementation manner, a third processing module 64 is further included, where:
and a third processing module 64, configured to perform discrete fourier transform processing, low-pass filtering processing, and inverse discrete fourier transform processing on each first RGB value, so as to filter out high-frequency noise in the first RGB values.
Further, the second processing module 63 includes a spatial transform submodule 638, a third sequence generation submodule 639, a fourth sequence generation submodule 640, and a second matching degree determination submodule 641, where:
a spatial conversion submodule 638, configured to convert the respective first RGB values into first color space values of a predetermined mode, and to convert the second RGB values of the color sequence into second color space values of the predetermined mode;
the third sequence generation submodule 639 is configured to divide each first color space value according to a color channel in a predetermined mode, and obtain a first channel color value sequence including channel color values in the same channel based on a time sequence of each image frame in the video information;
the fourth sequence generation submodule 640 is configured to divide the second color space value according to a color channel of the predetermined mode, to obtain a second channel color value sequence including the channel color values under the same channel;
the second matching degree determining submodule 641 is configured to determine a fourth matching degree of the first channel color value sequence and the second channel color value sequence in the same channel according to the time sequence position, and perform living body detection processing on the target object according to the fourth matching degree to obtain a first detection result.
In a possible implementation, the apparatus further includes a fourth processing module 65, where:
a fourth processing module 65, configured to generate a request identifier for the color information acquisition request when receiving the color information acquisition request sent by the client, and record first receiving time when receiving the color information acquisition request corresponding to the request identifier;
the response information comprises a request identification, and the video information comprises the request identification;
the second processing module 63 is specifically configured to: determine the second receiving time at which the video information is received; determine the first receiving time of the color information acquisition request according to the request identifier included in the video information; and perform living body detection processing on the target object based on the time difference between the first receiving time and the second receiving time.
It should be noted that this embodiment is an apparatus embodiment corresponding to the method embodiment shown in fig. 1 and can be implemented in cooperation with it. The related technical details mentioned in the method embodiment remain valid in this embodiment and are not repeated here in order to reduce duplication; correspondingly, the related technical details mentioned in this embodiment also apply to the method embodiment.
Fig. 7 is a schematic structural diagram of a living body detection apparatus according to another embodiment of the present application, and as shown in fig. 7, the apparatus 70 may include a transceiver processing module 71, an acquisition processing module 72, and a second receiving module 73, where:
the transceiving processing module 71 is configured to send a request to the server, and receive response information fed back by the server in response to the request, where the response information includes color information generated by the server based on a predetermined color generation rule, and the color information includes a color sequence formed by at least two colors;
the acquisition processing module 72 is used for acquiring video information of the target object based on the color information and sending the video information to the server so that the server performs living body detection processing on the target object based on the video information;
and a second receiving module 73, configured to receive a living body detection result obtained by performing living body detection processing on the target object and returned by the server.
According to the device provided by the embodiment of the application, color information is requested from the server and the video information of the target object is acquired based on that color information, so that the acquired video information includes not only the biometric information and the specified living body detection actions of the target object, but also the specific color information generated by the server for the client's color information acquisition request, greatly improving the reliability and credibility of the acquired video information and resisting hacker attacks to a certain degree. By sending the acquired video information to the server so that it performs living body detection processing on the target object, the living body detection can be carried out quickly and accurately, and an attacker can be effectively prevented from defeating the detection by playing pre-recorded action videos in the order given by the prompt information, greatly improving the security of living body detection.
Fig. 8 is a detailed structural schematic diagram of a living body detection apparatus according to yet another embodiment of the present application, and as shown in fig. 8, the apparatus 80 may include a transceiving processing module 81, an acquisition processing module 82, and a second receiving module 83, where functions implemented by the transceiving processing module 81 in fig. 8 are the same as the transceiving processing module 71 in fig. 7, functions implemented by the acquisition processing module 82 in fig. 8 are the same as the acquisition processing module 72 in fig. 7, and functions implemented by the second receiving module 83 in fig. 8 are the same as the second receiving module 73 in fig. 7, and are not repeated herein. The living body detecting apparatus shown in FIG. 8 will be described in detail below:
specifically, the acquisition processing module 82 includes a control output sub-module 821 and an acquisition sub-module 822, wherein:
a control output sub-module 821 for controlling the display screen to display a predetermined prompt pattern and outputting predetermined prompt information so that the target object is in the predetermined prompt pattern according to the predetermined prompt information;
and the acquisition sub-module 822 is used for controlling the display screen to display colors corresponding to the color sequence and controlling the image acquisition equipment to acquire the video information of the target object when the target object is detected to be in the preset prompt pattern according to the preset prompt information.
Specifically, the color information is a color sequence of a preset length generated by the server according to each preset color code based on a preset color generation rule, each preset color code represents a corresponding color, and each color in the color sequence has a corresponding image acquisition frame number;
the collecting submodule 822 is specifically configured to control the display screen to display a color sequence, and control the image collecting device to collect video information according to image collecting frame numbers corresponding to respective colors in the color sequence.
It should be noted that this embodiment is an apparatus embodiment corresponding to the method embodiment shown in fig. 2 and can be implemented in cooperation with it. The related technical details mentioned in the method embodiment remain valid in this embodiment and are not repeated here in order to reduce duplication; correspondingly, the related technical details mentioned in this embodiment also apply to the method embodiment.
Yet another embodiment of the present application provides an electronic device, as shown in fig. 9, the electronic device 900 shown in fig. 9 includes: a processor 901 and a memory 903. Wherein the processor 901 is coupled to the memory 903, such as via a bus 902. Further, the electronic device 900 may also include a transceiver 904. It should be noted that the transceiver 904 is not limited to one in practical applications, and the structure of the electronic device 900 is not limited to the embodiment of the present application.
The processor 901 is applied to the embodiment of the present application, and is used to implement the functions of the first processing module and the second processing module shown in fig. 5 and fig. 6, and to implement the functions of the third processing module and the fourth processing module shown in fig. 6. The transceiver 904 includes a receiver and a transmitter, and the transceiver 904 is applied in the embodiment of the present application to realize the functions of the first receiving module shown in fig. 5 and 6.
In addition, the processor 901 is applied in the embodiment of the present application to implement the functions of the acquisition processing module shown in fig. 7 and fig. 8. The transceiver 904 includes a receiver and a transmitter and is applied in the embodiment of the present application to implement the functions of the transceiving processing module and the second receiving module shown in fig. 7 and fig. 8.
The processor 901 may be a CPU, a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor 901 may also be a combination implementing computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
Bus 902 may include a path that transfers information between the above components. The bus 902 may be a PCI bus or an EISA bus, etc. The bus 902 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 9, but this does not indicate only one bus or one type of bus.
The memory 903 may be, but is not limited to, a ROM or other type of static storage device that can store static information and instructions, a RAM or other type of dynamic storage device that can store information and instructions, an EEPROM, a CD-ROM or other optical disc storage (including compact disc, laser disc, digital versatile disc, Blu-ray disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The memory 903 is used for storing application program codes for executing the scheme of the application, and the execution is controlled by the processor 901. The processor 901 is configured to execute application program code stored in the memory 903 to implement the actions of the living body detecting device provided in the embodiment shown in fig. 5 or fig. 6, or to implement the actions of the living body detecting device provided in the embodiment shown in fig. 7 or fig. 8.
The electronic device provided by the embodiment of the application comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein when the processor executes the program, the following conditions can be realized:
In the first case: corresponding color information is generated based on a predetermined color generation rule, and the video information of the target object acquired by the client based on that color information is received, so that the received video information includes not only the biometric information and the specified living body detection actions of the target object, but also the specific color information generated by the server for the client's color information acquisition request, greatly improving the reliability and credibility of the acquired video information.
In the second case: color information is requested from the server and the video information of the target object is acquired based on that color information, so that the acquired video information includes not only the biometric information and the specified living body detection actions of the target object, but also the specific color information generated by the server for the client's color information acquisition request, greatly improving the reliability and credibility of the acquired video information and resisting hacker attacks to a certain extent; the acquired video information is sent to the server so that the server can detect whether the target object is a living body based on it, allowing the living body detection to be carried out quickly and accurately while effectively preventing an attacker from defeating the detection by playing pre-recorded action videos in the order given by the prompt information, greatly improving the security of living body detection.
The embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the method shown in the above embodiment. The method specifically comprises the following conditions:
In the first case: corresponding color information is generated based on a predetermined color generation rule, and the video information of the target object acquired by the client based on that color information is received, so that the received video information includes not only the biometric information and the specified living body detection actions of the target object, but also the specific color information generated by the server for the client's color information acquisition request, greatly improving the reliability and credibility of the acquired video information.
In the second case: color information is requested from the server and the video information of the target object is acquired based on that color information, so that the acquired video information includes not only the biometric information and the specified living body detection actions of the target object, but also the specific color information generated by the server for the client's color information acquisition request, greatly improving the reliability and credibility of the acquired video information and resisting hacker attacks to a certain extent; the acquired video information is sent to the server so that the server can detect whether the target object is a living body based on it, allowing the living body detection to be carried out quickly and accurately while effectively preventing an attacker from defeating the detection by playing pre-recorded action videos in the order given by the prompt information, greatly improving the security of living body detection.
The computer-readable storage medium provided by the embodiment of the application is suitable for any embodiment of the method.
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different moments, and not necessarily in sequence; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present application. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be regarded as falling within the protection scope of the present application.

Claims (15)

1. A living body detection method, comprising:
when a request sent by a client is received, generating corresponding color information based on a preset color generation rule, and sending response information comprising the color information to the client, wherein the color information comprises a color sequence formed by at least two colors and image acquisition frame numbers respectively corresponding to all colors in the color sequence;
receiving video information of a target object which is sent by a client and acquired based on the color information;
performing living body detection processing on the target object based on the video information, and sending a corresponding living body detection result to a client;
the generating of the corresponding color information based on the predetermined color generation rule includes:
generating a color sequence with a preset length according to at least two preset color codes based on a preset color generation rule, and determining the number of image acquisition frames corresponding to each color in the color sequence, wherein each preset color code represents the corresponding color;
the live body detection processing of the target object based on the video information includes:
extracting each image frame corresponding to each color in the color sequence from the video information;
determining face regions corresponding to the image frames respectively, calculating first RGB values of the face regions, and performing living body detection processing on the target object based on the first RGB values and the color sequence to obtain a first detection result.
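A minimal sketch, in Python with OpenCV, of the first-detection steps recited at the end of claim 1. The Haar-cascade face detector and the use of the mean color over the detected face box as the "first RGB value" are assumptions, since the claim fixes neither the detector nor the statistic; the sketch also assumes the captured frames appear in the video in the same order and counts as the issued color sequence.

```python
import cv2
import numpy as np

# Haar-cascade face detector shipped with OpenCV (an assumption; the claim
# does not mandate a particular face detection method).
_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_mean_rgb(frame_bgr):
    """Mean RGB value over the first detected face region, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = _face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    b, g, r = cv2.mean(frame_bgr[y:y + h, x:x + w])[:3]  # OpenCV frames are BGR
    return np.array([r, g, b])

def first_rgb_values(video_path, frames_per_color):
    """Per-frame face-region RGB values, grouped by the color shown."""
    cap = cv2.VideoCapture(video_path)
    grouped = []
    for n_frames in frames_per_color:   # frames assumed stored in display order
        group = []
        for _ in range(n_frames):
            ok, frame = cap.read()
            if not ok:
                break
            rgb = face_mean_rgb(frame)
            if rgb is not None:          # skip frames where no face was found
                group.append(rgb)
        grouped.append(group)
    cap.release()
    return grouped
```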
2. The method according to claim 1, wherein the receiving of the video information of the target object that is sent by the client and collected based on the color information comprises:
receiving the video information of the target object that is sent by the client and collected based on the color sequence and the image acquisition frame numbers respectively corresponding to the colors in the color sequence.
3. The method according to claim 1, wherein the performing living body detection processing on the target object based on the video information further comprises:
extracting a first preset number of image frames from the video information;
performing living body detection processing on the target object according to the first preset number of image frames through a pre-trained neural network model to obtain a second detection result;
and obtaining a living body detection result of whether the target object is a living body according to the first detection result and the second detection result.
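One plausible way to combine the two detection results of claim 3 is sketched below; the AND rule and the threshold values are assumptions, since the claim does not specify how the two results are fused.

```python
def fuse_results(first_score, second_score,
                 first_threshold=0.8, second_threshold=0.5):
    """Judge the target object live only if both detections pass.

    first_score: matching degree from the color-sequence check (first result).
    second_score: liveness score from the neural network (second result).
    """
    return first_score >= first_threshold and second_score >= second_threshold
```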
4. The method according to claim 1 or 3, wherein the performing living body detection processing on the target object based on the first RGB values and the color sequence to obtain a first detection result comprises:
obtaining a corresponding first R value sequence, a first G value sequence and a first B value sequence according to the first RGB values and the time sequence of each image frame in the video information;
obtaining a corresponding second R value sequence, a second G value sequence and a second B value sequence according to each second RGB value in the color sequence and the corresponding image acquisition frame number;
determining a first degree of match between the first sequence of R values and the second sequence of R values, determining a second degree of match between the first sequence of G values and the second sequence of G values, and determining a third degree of match between the first sequence of B values and the second sequence of B values;
and performing living body detection processing on the target object according to the first matching degree, the second matching degree and the third matching degree to obtain a first detection result.
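A minimal sketch of claim 4 in Python, assuming Pearson correlation as the matching degree; the claim itself does not fix a particular similarity measure.

```python
import numpy as np

def channel_sequences(rgb_values):
    """Split time-ordered (R, G, B) values into R, G and B sequences."""
    arr = np.asarray(rgb_values, dtype=float)
    return arr[:, 0], arr[:, 1], arr[:, 2]

def expected_sequences(color_sequence, frames_per_color):
    """Expand the issued colors by their frame counts into the second sequences."""
    expanded = [rgb for rgb, n in zip(color_sequence, frames_per_color)
                for _ in range(n)]
    return channel_sequences(expanded)

def matching_degree(observed, expected):
    """Similarity in [0, 1] between an observed and an expected sequence
    (Pearson correlation, with negative values clipped to 0)."""
    observed, expected = np.asarray(observed), np.asarray(expected)
    if observed.std() == 0 or expected.std() == 0:
        return 0.0
    return float(np.clip(np.corrcoef(observed, expected)[0, 1], 0.0, 1.0))
```

The first detection result could then be, for example, whether the mean of the three matching degrees exceeds a threshold; the claims leave this decision rule open.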
5. The method according to claim 4, wherein determining the matching degree between a first X value sequence and a second X value sequence, where X is any one of R, G and B, comprises any one of the following:
when the color sequence comprises black and white, performing normalization processing and binarization processing on the first X value sequence to obtain a binarized X sequence; comparing the binarized X sequence with the binarized second X value sequence position by position, determining a first number of positions at which the two sequences have the same value, and determining the matching degree between the first X value sequence and the second X value sequence according to the ratio of the first number to the sequence length of the binarized X sequence;
or determining each first jump point and each first jump direction of the first X value sequence, determining each second jump point and each second jump direction of the second X value sequence, and determining the matching degree between the first X value sequence and the second X value sequence according to the comparison result between the first jump points and the second jump points and the comparison result between the first jump directions and the second jump directions.
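The two matching options of claim 5 might be sketched as follows; the binarization threshold of 0.5 and the jump-size threshold are assumptions.

```python
import numpy as np

def binarized_match(observed_x, expected_x_binary):
    """Option 1 (black/white sequences): normalize, binarize, compare.

    expected_x_binary is the binarized second X value sequence, assumed to
    have the same length as the observed sequence.
    """
    x = np.asarray(observed_x, dtype=float)
    x = (x - x.min()) / (x.max() - x.min() + 1e-9)    # normalization
    x_bin = (x >= 0.5).astype(int)                    # binarization (threshold assumed)
    same = int(np.sum(x_bin == np.asarray(expected_x_binary)))
    return same / len(x_bin)                          # ratio = matching degree

def jump_points(seq, min_step=30.0):
    """Option 2: indices and directions (+1/-1) of large value jumps."""
    diffs = np.diff(np.asarray(seq, dtype=float))
    idx = np.where(np.abs(diffs) >= min_step)[0]
    return list(zip(idx.tolist(), np.sign(diffs[idx]).astype(int).tolist()))
```

Comparing the jump points and jump directions of the first and second sequences then yields the matching degree for the second option.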
6. The method according to claim 4, wherein before the performing living body detection processing on the target object based on the first RGB values and the color sequence to obtain a first detection result, the method further comprises:
performing discrete Fourier transform processing, low-pass filtering processing and inverse discrete Fourier transform processing on each first RGB value respectively, so as to filter out high-frequency noise in the first RGB values.
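The pre-filtering of claim 6 maps directly onto a DFT, a low-pass mask and an inverse DFT. A sketch with numpy follows, applied separately to each of the R, G and B value sequences; the fraction of retained frequencies is an assumption.

```python
import numpy as np

def lowpass_filter(signal, keep_fraction=0.1):
    """Suppress high-frequency noise in a 1-D sequence of channel values."""
    spectrum = np.fft.rfft(np.asarray(signal, dtype=float))  # DFT
    cutoff = max(1, int(len(spectrum) * keep_fraction))
    spectrum[cutoff:] = 0                                    # low-pass mask
    return np.fft.irfft(spectrum, n=len(signal))             # inverse DFT
```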
7. The method according to claim 1, wherein the performing living body detection processing on the target object based on the first RGB values and the color sequence to obtain a first detection result comprises:
converting the respective first RGB values into first color space values of a predetermined mode, and converting the second RGB values of the color sequence into second color space values of the predetermined mode, respectively;
dividing each first color space value according to the color channel of the preset mode, and obtaining a first channel color value sequence comprising channel color values under the same channel based on the time sequence of each image frame in the video information;
dividing the second color space value according to the color channel of the preset mode to obtain a second channel color value sequence comprising channel color values under the same channel;
and determining a fourth matching degree of the first channel color value sequence and the second channel color value sequence under the same channel according to the time sequence position, and performing living body detection processing on the target object according to the fourth matching degree to obtain a first detection result.
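A sketch of the conversion step in claim 7, assuming HSV as the predetermined mode (the claim leaves the target color space open); the per-channel fourth matching degree can then reuse a sequence-similarity measure such as the matching_degree function sketched after claim 4.

```python
import colorsys
import numpy as np

def to_hsv_sequences(rgb_values):
    """Convert (R, G, B) values in [0, 255] into H, S and V channel sequences."""
    hsv = [colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
           for r, g, b in rgb_values]
    arr = np.asarray(hsv)
    return arr[:, 0], arr[:, 1], arr[:, 2]  # one channel color value sequence each
```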
8. The method of claim 1, further comprising:
when a color information acquisition request sent by a client is received, generating a request identifier aiming at the color information acquisition request, and recording first receiving time when the color information acquisition request corresponding to the request identifier is received;
the response information comprises the request identification, and the video information comprises the request identification;
wherein the performing living body detection processing on the target object based on the video information comprises:
determining a second receiving time when the video information is received;
determining first receiving time of the color information acquisition request according to the request identifier included in the video information;
and performing living body detection processing on the target object based on a time difference between the first receiving time and the second receiving time.
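The timing check of claim 8 is essentially a bound on the delay between issuing the colors and receiving the video. A sketch follows, with the in-memory store and the maximum allowed delay as assumptions.

```python
import time

_request_times = {}   # request_id -> first receiving time (assumed in-memory store)

def record_color_request(request_id):
    """Record the first receiving time of a color information acquisition request."""
    _request_times[request_id] = time.time()

def timing_check(request_id, max_delay_seconds=60.0):
    """Reject videos that arrive implausibly long after the color request."""
    first_time = _request_times.get(request_id)
    if first_time is None:
        return False                      # unknown request id
    return (time.time() - first_time) <= max_delay_seconds
```

Rejecting unknown request identifiers also helps block replays of previously captured videos.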
9. A living body detection method, comprising:
sending a request to a server, and receiving response information fed back by the server aiming at the request, wherein the response information comprises color information generated by the server based on a preset color generation rule, and the color information comprises a color sequence formed by at least two colors and image acquisition frame numbers respectively corresponding to the colors in the color sequence;
acquiring video information of a target object based on the color information, and sending the video information to a server so that the server performs living body detection processing on the target object based on the video information;
receiving the living body detection result, returned by the server, of the living body detection processing performed on the target object;
the color information is a color sequence with a preset length generated by the server according to at least one preset color code based on a preset color generation rule, each preset color code represents a corresponding color, and each color in the color sequence has a corresponding image acquisition frame number;
wherein the server performs living body detection processing on the target object based on the video information as follows:
the server extracts each image frame corresponding to each color in the color sequence from the video information;
the server determines face areas corresponding to the image frames respectively, calculates first RGB values of the face areas, and performs living body detection processing on the target object based on the first RGB values and the color sequence to obtain a first detection result.
10. The method according to claim 9, wherein the acquiring of the video information of the target object based on the color information comprises:
controlling a display screen to display a preset prompt pattern and output preset prompt information, so that the target object positions itself within the preset prompt pattern according to the preset prompt information;
and when it is detected, according to the preset prompt information, that the target object is within the preset prompt pattern, controlling the display screen to display the colors corresponding to the color sequence and controlling an image acquisition device to collect the video information of the target object.
11. The method according to claim 10, wherein the controlling of the display screen to display the colors corresponding to the color sequence and the controlling of the image acquisition device to collect the video information of the target object comprise:
controlling the display screen to display the colors in the color sequence and controlling the image acquisition device to collect the video information according to the image acquisition frame numbers respectively corresponding to the colors in the color sequence.
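On the client side (claims 10 and 11), displaying each color for its prescribed frame count while recording could look like the following OpenCV sketch; the window, resolution and codec choices are illustrative, not part of the claims.

```python
import cv2
import numpy as np

def capture_with_colors(color_sequence, frames_per_color, out_path="live.avi"):
    """Show each color full-screen for its frame count while recording the camera."""
    cap = cv2.VideoCapture(0)
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"XVID"),
                             30.0, (640, 480))
    cv2.namedWindow("prompt", cv2.WINDOW_NORMAL)
    for (r, g, b), n_frames in zip(color_sequence, frames_per_color):
        screen = np.full((480, 640, 3), (b, g, r), dtype=np.uint8)  # BGR fill
        for _ in range(n_frames):
            cv2.imshow("prompt", screen)   # illuminate the target object
            cv2.waitKey(1)
            ok, frame = cap.read()
            if ok:
                writer.write(cv2.resize(frame, (640, 480)))
    cap.release()
    writer.release()
    cv2.destroyAllWindows()
```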
12. A living body detection device, comprising:
a first processing module, configured to generate corresponding color information based on a preset color generation rule when a color information acquisition request sent by a client is received, and to send response information comprising the color information to the client, wherein the color information comprises a color sequence formed by at least two colors and image acquisition frame numbers respectively corresponding to the colors in the color sequence;
a first receiving module, configured to receive video information of a target object that is sent by the client and collected based on the color information;
a second processing module, configured to perform living body detection processing on the target object based on the video information and send a corresponding living body detection result to the client;
the first processing module is specifically configured to generate a color sequence with a preset length according to at least two preset color codes based on a preset color generation rule, and determine image acquisition frame numbers corresponding to respective colors in the color sequence, where the respective preset color codes represent respective colors;
the second processing module is specifically configured to:
extracting each image frame corresponding to each color in the color sequence from the video information;
determining face regions corresponding to the image frames respectively, calculating first RGB values of the face regions, and performing living body detection processing on the target object based on the first RGB values and the color sequence to obtain a first detection result.
13. A living body detection device, comprising:
a transceiving processing module, configured to send a color information acquisition request to a server and to receive response information fed back by the server for the color information acquisition request, wherein the response information comprises color information generated by the server based on a preset color generation rule, and the color information comprises a color sequence formed by at least two colors and image acquisition frame numbers respectively corresponding to the colors in the color sequence;
wherein the color information is a color sequence with a preset length generated by the server according to at least one preset color code based on the preset color generation rule, each preset color code represents a corresponding color, and each color in the color sequence has a corresponding image acquisition frame number;
an acquisition processing module, configured to collect video information of a target object based on the color information and send the video information to the server, so that the server performs living body detection processing on the target object based on the video information; wherein the server performs living body detection processing on the target object based on the video information as follows: the server extracts each image frame corresponding to each color in the color sequence from the video information; the server determines the face regions respectively corresponding to the image frames, calculates first RGB values of the face regions, and performs living body detection processing on the target object based on the first RGB values and the color sequence to obtain a first detection result;
a second receiving module, configured to receive the living body detection result, returned by the server, of the living body detection processing performed on the target object.
14. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the living body detection method of any one of claims 1 to 11 when executing the program.
15. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the living body detection method according to any one of claims 1 to 11.
CN201910578487.4A 2019-06-28 2019-06-28 Living body detection method, living body detection device, electronic apparatus, and computer-readable storage medium Active CN110298312B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910578487.4A CN110298312B (en) 2019-06-28 2019-06-28 Living body detection method, living body detection device, electronic apparatus, and computer-readable storage medium
PCT/CN2020/090976 WO2020259128A1 (en) 2019-06-28 2020-05-19 Liveness detection method and apparatus, electronic device, and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910578487.4A CN110298312B (en) 2019-06-28 2019-06-28 Living body detection method, living body detection device, electronic apparatus, and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN110298312A CN110298312A (en) 2019-10-01
CN110298312B true CN110298312B (en) 2022-03-18

Family

ID=68029420

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910578487.4A Active CN110298312B (en) 2019-06-28 2019-06-28 Living body detection method, living body detection device, electronic apparatus, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN110298312B (en)
WO (1) WO2020259128A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110298312B (en) * 2019-06-28 2022-03-18 北京旷视科技有限公司 Living body detection method, living body detection device, electronic apparatus, and computer-readable storage medium
CN110969077A (en) * 2019-09-16 2020-04-07 成都恒道智融信息技术有限公司 Living body detection method based on color change
CN111340896B (en) * 2020-02-21 2023-10-27 北京迈格威科技有限公司 Object color recognition method, device, computer equipment and storage medium
CN111860455B (en) * 2020-08-04 2023-08-18 中国银行股份有限公司 Living body detection method and device based on HTML5 page
CN112491840B (en) * 2020-11-17 2023-07-07 平安养老保险股份有限公司 Information modification method, device, computer equipment and storage medium
CN112507922B (en) * 2020-12-16 2023-11-07 平安银行股份有限公司 Face living body detection method and device, electronic equipment and storage medium
CN114445898B (en) * 2022-01-29 2023-08-29 北京百度网讯科技有限公司 Face living body detection method, device, equipment, storage medium and program product
CN115174138A (en) * 2022-05-25 2022-10-11 北京旷视科技有限公司 Camera attack detection method, system, device, storage medium and program product

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9396400B1 (en) * 2015-07-30 2016-07-19 Snitch, Inc. Computer-vision based security system using a depth camera
CN108549884A (en) * 2018-06-15 2018-09-18 天地融科技股份有限公司 A kind of biopsy method and device
CN109034102A (en) * 2018-08-14 2018-12-18 腾讯科技(深圳)有限公司 Human face in-vivo detection method, device, equipment and storage medium
CN109376592A (en) * 2018-09-10 2019-02-22 阿里巴巴集团控股有限公司 Biopsy method, device and computer readable storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011243862A (en) * 2010-05-20 2011-12-01 Sony Corp Imaging device and imaging apparatus
CN103116763B (en) * 2013-01-30 2016-01-20 宁波大学 A kind of living body faces detection method based on hsv color Spatial Statistical Character
US9875393B2 (en) * 2014-02-12 2018-01-23 Nec Corporation Information processing apparatus, information processing method, and program
CN106529512B (en) * 2016-12-15 2019-09-10 北京旷视科技有限公司 Living body faces verification method and device
CN107992794B (en) * 2016-12-30 2019-05-28 腾讯科技(深圳)有限公司 A kind of biopsy method, device and storage medium
CN107992842B (en) * 2017-12-13 2020-08-11 深圳励飞科技有限公司 Living body detection method, computer device, and computer-readable storage medium
CN109101949A (en) * 2018-08-29 2018-12-28 广州洪荒智能科技有限公司 A kind of human face in-vivo detection method based on colour-video signal frequency-domain analysis
CN109886080A (en) * 2018-12-29 2019-06-14 深圳云天励飞技术有限公司 Human face in-vivo detection method, device, electronic equipment and readable storage medium
CN110414346A (en) * 2019-06-25 2019-11-05 北京迈格威科技有限公司 Biopsy method, device, electronic equipment and storage medium
CN110298312B (en) * 2019-06-28 2022-03-18 北京旷视科技有限公司 Living body detection method, living body detection device, electronic apparatus, and computer-readable storage medium
CN110969077A (en) * 2019-09-16 2020-04-07 成都恒道智融信息技术有限公司 Living body detection method based on color change

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9396400B1 (en) * 2015-07-30 2016-07-19 Snitch, Inc. Computer-vision based security system using a depth camera
CN108549884A (en) * 2018-06-15 2018-09-18 天地融科技股份有限公司 A kind of biopsy method and device
CN109034102A (en) * 2018-08-14 2018-12-18 腾讯科技(深圳)有限公司 Human face in-vivo detection method, device, equipment and storage medium
CN109376592A (en) * 2018-09-10 2019-02-22 阿里巴巴集团控股有限公司 Biopsy method, device and computer readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Detection of biopsy needle position in a live body using color Doppler imaging system; N. Shibuya et al.; 1998 IEEE Ultrasonics Symposium Proceedings (Cat. No. 98CH36102); 2002-08-06; pp. 1697-1702 *
Research on living body detection methods in face recognition; Luo Hao; China Master's Theses Full-text Database, Information Science and Technology; 2016-04-15; pp. I138-976 *

Also Published As

Publication number Publication date
WO2020259128A1 (en) 2020-12-30
CN110298312A (en) 2019-10-01

Similar Documents

Publication Publication Date Title
CN110298312B (en) Living body detection method, living body detection device, electronic apparatus, and computer-readable storage medium
RU2738325C2 (en) Method and device for authenticating an individual
WO2020151489A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN106156578B (en) Identity verification method and device
CN111274928B (en) Living body detection method and device, electronic equipment and storage medium
CN111611873A (en) Face replacement detection method and device, electronic equipment and computer storage medium
CN110378219B (en) Living body detection method, living body detection device, electronic equipment and readable storage medium
CN107111755B (en) Video counterfeit detection method and system based on liveness evaluation
CN113205057B (en) Face living body detection method, device, equipment and storage medium
CN111814655B (en) Target re-identification method, network training method thereof and related device
CN111401134A (en) Living body detection method, living body detection device, electronic apparatus, and storage medium
CN114387548A (en) Video and liveness detection method, system, device, storage medium and program product
CN111445640A (en) Express delivery pickup method, device, equipment and storage medium based on iris recognition
CN113469085B (en) Face living body detection method and device, electronic equipment and storage medium
CN112989937B (en) Method and device for user identity authentication
CN116798100A (en) Face video detection method and device
CN113255401A (en) 3D face camera device
CN110569760A (en) Living body detection method based on near-infrared and remote photoplethysmography
CN114387674A (en) Living body detection method, living body detection system, living body detection apparatus, storage medium, and program product
CN109271771A (en) Account information retrieval method and device, and computer equipment
CN113554685A (en) Method and device for detecting moving target of remote sensing satellite, electronic equipment and storage medium
CN114299569A (en) Safe face authentication method based on eyeball motion
CN113989870A (en) Living body detection method, door lock system and electronic equipment
WO2017117770A1 (en) Fingerprint imaging system and anti-fake method for fingerprint identification
CN114724257B (en) Living body detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant