CN114445898A - Face living body detection method, device, equipment, storage medium and program product - Google Patents


Info

Publication number: CN114445898A (granted publication: CN114445898B)
Application number: CN202210112290.3A
Authority: CN (China)
Other languages: Chinese (zh)
Legal status: Granted; Active
Prior art keywords: color, differential signal, color channel, sequence, channel
Inventors: 张国生, 岳海潇, 王珂尧
Current and original assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd, with priority to CN202210112290.3A

Classifications

    • G06T 7/90 (Image analysis: determination of colour characteristics)
    • H04L 63/1466 (Network security: countermeasures against active attacks involving interception, injection, modification or spoofing of data unit addresses)
    • G06T 2207/10016 (Image acquisition modality: video; image sequence)
    • G06T 2207/10024 (Image acquisition modality: color image)
    • G06T 2207/20081 (Special algorithmic details: training; learning)
    • G06T 2207/30201 (Subject of image: human being; face)


Abstract

The present disclosure provides a face living body detection method, apparatus, device, storage medium and program product, relating to the field of artificial intelligence, in particular to the technical fields of deep learning and computer vision, and applicable to scenarios such as face image processing and face recognition. The scheme is as follows: a color sequence verification code is acquired, the terminal is controlled to display colors in sequence according to the color sequence verification code, and the face video of the target object, collected by the terminal while the corresponding colors are displayed in sequence, is acquired; the first differential signal of the color intensity sequence of each color channel is extracted from the face video, and the second differential signal representing the color difference of each color channel is obtained from the first differential signals corresponding to the color channels; the color sequence in the face video is obtained from the second differential signals; and if the color sequence in the face video matches the color sequence verification code, it is determined that the face video has not suffered an injection attack, and face living body detection is performed according to the face video. This can prevent injection attacks during face living body detection.

Description

Face living body detection method, device, equipment, storage medium and program product
Technical Field
The present disclosure relates to the field of artificial intelligence, in particular to deep learning and computer vision technologies, which can be applied to scenarios such as face image processing and face recognition, and specifically provides a face living body detection method, apparatus, device, storage medium, and program product.
Background
Face living body detection determines whether an image was captured from a real person. It is a basic module of a face recognition system and safeguards the system's security. Methods based on deep learning are currently mainstream in this field and offer greatly improved accuracy over traditional methods.
However, as hacking techniques continue to advance, attacks on face recognition systems are no longer limited to traditional physical attacks (such as printed-paper attacks, screen replay attacks, and three-dimensional mask attacks). A hacker can replace the face image uploaded by a mobile device at collection time, which is called an injection attack, and thereby compromise the security of the face recognition system.
Disclosure of Invention
The present disclosure provides a face living body detection method, apparatus, device, storage medium, and program product that effectively prevent injection attacks during face living body detection.
According to a first aspect of the present disclosure, there is provided a face living body detection method, including:
acquiring a color sequence verification code, controlling a terminal to sequentially display corresponding colors according to the color sequence verification code, and acquiring a face video of a target object collected by the terminal while the corresponding colors are displayed in sequence;
extracting a first differential signal of a color intensity sequence of each color channel from the face video, and acquiring a second differential signal representing color difference of each color channel according to the first differential signal corresponding to each color channel; wherein, the color intensity sequence of any color channel is a sequence formed by the color intensity mean value of the color channel in each frame of the face video;
acquiring a color sequence in the face video according to the second differential signal corresponding to each color channel;
and if the color sequence in the face video is matched with the color sequence verification code, determining that the face video is not attacked by injection, and performing face living body detection according to the face video.
According to a second aspect of the present disclosure, there is provided a face liveness detection apparatus including:
the verification code generating unit is used for acquiring a color sequence verification code;
the video acquisition unit is used for controlling the terminal to sequentially display corresponding colors according to the color sequence verification codes and acquiring a facial video of the target object acquired by the terminal in the process of sequentially displaying the corresponding colors;
the differential signal acquisition unit is used for extracting a first differential signal of a color intensity sequence of each color channel from the face video and acquiring a second differential signal representing color difference of each color channel according to the first differential signal corresponding to each color channel; wherein, the color intensity sequence of any color channel is a sequence formed by the color intensity mean value of the color channel in each frame of the face video;
the color sequence prediction unit is used for acquiring a color sequence in the face video according to the second differential signal corresponding to each color channel;
and the judging unit is used for judging whether the color sequence in the face video is matched with the color sequence verification code or not, if so, determining that the face video is not attacked by injection, and carrying out face living body detection according to the face video.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising: a computer program, stored in a readable storage medium, from which at least one processor of an electronic device can read the computer program, execution of the computer program by the at least one processor causing the electronic device to perform the method of the first aspect.
According to the face living body detection method, apparatus, device, storage medium and program product provided by the present disclosure, a color sequence verification code is acquired, the terminal is controlled to sequentially display corresponding colors according to the color sequence verification code, and the face video of the target object collected by the terminal during the display is acquired; the first differential signal of the color intensity sequence of each color channel is extracted from the face video, and the second differential signal representing the color difference of each color channel is obtained from the first differential signals, where the color intensity sequence of any color channel is the sequence formed by that channel's mean color intensity in each frame of the face video; the color sequence in the face video is obtained from the second differential signals; and if the color sequence in the face video matches the color sequence verification code, it is determined that the face video has not suffered an injection attack, and face living body detection is performed according to the face video. Because the color sequence in the face video is determined from second differential signals representing the color difference of each color channel, interference from the external environment is effectively resisted and the robustness of recognizing the color sequence in the face video is improved; matching the color sequence in the face video against the color sequence verification code effectively prevents injection attacks during face living body detection, at a low computational cost.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a schematic flowchart of a face living body detection method according to an exemplary embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a face living body detection method according to an exemplary embodiment of the present disclosure;
FIG. 3 is a diagram illustrating classification model processing according to an exemplary embodiment of the present disclosure;
fig. 4 is a schematic flowchart of a face living body detection method according to an exemplary embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a face living body detection apparatus according to an exemplary embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a face living body detection apparatus according to another exemplary embodiment of the present disclosure;
FIG. 7 is a block diagram of an electronic device used to implement methods of embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Face living body detection determines whether an image was captured from a real person. It is a basic module of a face recognition system and safeguards the system's security. Methods based on deep learning are currently mainstream in this field and offer greatly improved accuracy over traditional methods. However, as hacking techniques continue to advance, attacks on face recognition systems are no longer limited to traditional physical attacks (such as printed-paper attacks, screen replay attacks, and three-dimensional mask attacks); a hacker can replace the face image uploaded by a mobile device at collection time, which is called an injection attack.
Some face living body detection methods are based on binary classification of features: features are first extracted from the face image, either traditional handcrafted features or features from a deep neural network (e.g., CNN or LSTM), and the extracted features are then classified, typically with a traditional machine learning Support Vector Machine (SVM) or a fully connected neural network. However, such feature-classification methods can only resist traditional physical attacks; they cannot resist injection attacks mounted with hacking techniques.
Some methods resist injection attacks with active light, using an active-light verification code strategy: the chromaticity and intensity of light are encoded into the screen light, the chromaticity and intensity gradients are regressed from the reflection signal in the captured face image, and the regressed gradient values are used to verify whether the input image is an injection attack. Although such methods can in theory complete accurate verification, the chromaticity and intensity of the reflected light are easily disturbed by complex changes in natural light, and processing images with a two-dimensional convolutional network adds computational burden, which hinders practical application of a living body detection system.
In order to solve this technical problem, embodiments of the present disclosure provide a face living body detection method, apparatus, device, storage medium, and program product, applied to the face recognition field within artificial intelligence. A color sequence verification code is acquired, the terminal is controlled to sequentially display corresponding colors according to the verification code, and the face video of the target object collected by the terminal during the display is acquired; the first differential signal of the color intensity sequence of each color channel is extracted from the face video, and the second differential signal representing the color difference of each color channel is obtained from the first differential signals, where a channel's color intensity sequence is the sequence of that channel's mean color intensity in each frame of the face video; the color sequence in the face video is obtained from the second differential signals; and if the color sequence in the face video matches the color sequence verification code, it is determined that the face video has not suffered an injection attack, and face living body detection proceeds according to the face video. Determining the color sequence from second differential signals that represent the color difference of each channel effectively resists external environment interference and improves the robustness of color sequence recognition; matching against the verification code effectively prevents injection attacks during face living body detection, at a low computational cost.
Fig. 1 is a schematic flowchart of a face living body detection method according to an exemplary embodiment of the present disclosure. The execution subject of the method may be an electronic device such as a server or a terminal. As shown in fig. 1, the face living body detection method provided by the embodiment of the present disclosure includes:
Step 101, acquiring a color sequence verification code, controlling the terminal to sequentially display corresponding colors according to the color sequence verification code, and acquiring the face video of the target object collected by the terminal while the corresponding colors are displayed in sequence.
The color sequence verification code controls the sequence of colors displayed by the terminal. Optionally, it may consist of multiple digits, each digit representing one color switch, with different digits representing different colors. For example, if 0 represents red, 1 represents green, and 2 represents blue, then the verification code 1012 corresponds to the color sequence green, red, green, blue, and the terminal screen can be controlled to display green, red, green, and blue in sequence. The color sequence verification code may also be composed of words or take other forms. In this embodiment, the color sequence verification code may be randomly generated or generated by other methods, which is not limited here.
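As a minimal illustration of the digit encoding described above, the following sketch decodes a verification code into its color sequence. It assumes the 0/1/2 mapping from the example in the text; the function name is hypothetical.

```python
# Decode a digit-based color sequence verification code into a color list.
# Mapping (0=red, 1=green, 2=blue) follows the example in the text.
DIGIT_TO_COLOR = {"0": "red", "1": "green", "2": "blue"}

def decode_verification_code(code: str) -> list:
    """Map each digit of the verification code to the color the screen shows."""
    return [DIGIT_TO_COLOR[digit] for digit in code]

# The example from the text: code "1012" yields green, red, green, blue.
print(decode_verification_code("1012"))  # → ['green', 'red', 'green', 'blue']
```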
Optionally, when a face living body verification request of the terminal is received, a color sequence verification code may be generated, and then the color sequence verification code is sent to the terminal, and the terminal sequentially displays corresponding colors according to the color sequence verification code.
While the terminal sequentially displays the corresponding colors, it can collect the face video of the target object under the illumination of the various colors; if the colors are displayed on the terminal's front screen, the front camera is used to collect the face video. Note that the target object may also be instructed to perform different expressions or actions during face living body detection, which is not limited here.
Step 102, extracting the first differential signal of the color intensity sequence of each color channel from the face video, and obtaining the second differential signal representing the color difference of each color channel from the first differential signals corresponding to the color channels.
The color intensity sequence of any color channel is the sequence formed by that channel's mean color intensity in each frame of the face video.
After the face video of the target object, collected by the terminal while the corresponding colors are displayed in sequence, is acquired, the face video can be processed. For each of the three RGB color channels, the mean color intensity of that channel in each frame of the face video is obtained, giving the color intensity sequences I_r(t), I_g(t), I_b(t), where t is the time variable. Differentiating each channel's color intensity with respect to time then yields the first differential signal of each channel's color intensity sequence:

dI_r(t)/dt, dI_g(t)/dt, dI_b(t)/dt
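The per-channel intensity sequences and their time derivatives can be sketched as follows, assuming the face video is available as a NumPy array of RGB frames; the array shapes and function name are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def first_differential_signals(frames: np.ndarray) -> np.ndarray:
    """frames: array of shape (T, H, W, 3), RGB. Returns shape (3, T-1):
    the discrete time derivative of each channel's mean-intensity sequence."""
    # Mean color intensity of each channel in each frame: I_r(t), I_g(t), I_b(t).
    intensity = frames.reshape(frames.shape[0], -1, 3).mean(axis=1)  # (T, 3)
    # First differential signal: finite difference of each sequence over time.
    return np.diff(intensity, axis=0).T

# Tiny synthetic example: three solid-color 2x2 frames (red, green, green).
frames = np.zeros((3, 2, 2, 3))
frames[0, ..., 0] = 255
frames[1, ..., 1] = 255
frames[2, ..., 1] = 255
d = first_differential_signals(frames)
print(d.shape)  # (3, 2)
```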
Furthermore, the second differential signal representing the color difference of each color channel is obtained from the first differential signals of the color channels. Because each channel's second differential signal is computed from the characteristics of all three colors, it fully exploits the rate of change of the color difference, strongly highlights the characteristics of each color component in the color sequence, and effectively resists interference from the external environment.
Optionally, the second differential signal of each color channel may be obtained from the first differential signals and a preset color difference calculation formula for each channel, where the formula for any color channel subtracts from that channel's first differential signal the products of the other channels' first differential signals and their corresponding coefficients. A coefficient may be 0.
In an alternative embodiment, the preset color difference calculation formula for each color channel takes the form:

T_r(t) = dI_r(t)/dt - a_rg·dI_g(t)/dt - a_rb·dI_b(t)/dt
T_g(t) = dI_g(t)/dt - a_gr·dI_r(t)/dt - a_gb·dI_b(t)/dt
T_b(t) = dI_b(t)/dt - a_br·dI_r(t)/dt - a_bg·dI_g(t)/dt

where T_r, T_g and T_b are the second differential signals of the respective color channels and the a coefficients are the preset per-channel coefficients (any of which may be 0).
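A hedged sketch of this computation follows. The patent's exact coefficients are given in a formula image not reproduced in this text, so the symmetric 0.5 weights below are purely an assumption; the text only requires subtracting coefficient-weighted first differentials of the other channels.

```python
import numpy as np

def second_differential_signals(d: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """d: first differential signals, shape (3, N) for channels (r, g, b).
    Returns the second differential signals T, also shape (3, N).
    The alpha weights are assumed, not taken from the patent."""
    dr, dg, db = d
    t_r = dr - alpha * dg - alpha * db
    t_g = dg - alpha * dr - alpha * db
    t_b = db - alpha * dr - alpha * dg
    return np.stack([t_r, t_g, t_b])

# A red-to-green switch makes T_g strongly positive and T_r strongly negative.
d = np.array([[-255.0, 0.0], [255.0, 0.0], [0.0, 0.0]])
t = second_differential_signals(d)
print(t[0][0], t[1][0], t[2][0])  # T_r=-382.5, T_g=382.5, T_b=0.0
```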
And 103, acquiring a color sequence in the face video according to the second differential signals corresponding to the color channels.
After the second differential signals of the color channels are obtained, changes in color difference can be analyzed from them to determine how the color changes in the face video and thus obtain the color sequence in the face video. Obtaining the color sequence from the second differential signals effectively resists interference from the external environment, at a low computational cost.
In this embodiment, a classification model may be trained in advance that takes the second differential signals of the color channels as input and outputs a color classification. The color changes in the face video are then determined with this preset classification model to obtain the color sequence in the face video.
Step 104, if the color sequence in the face video matches the color sequence verification code, determining that the face video has not suffered an injection attack, and performing face living body detection according to the face video.
If the color sequence in the face video is consistent with the color sequence corresponding to the color sequence verification code, it is determined that no injection attack occurred during face living body detection, and face living body detection then continues according to the face video. This embodiment does not limit the specific face living body detection method; for example, the face video may be input into a liveness model that returns a liveness score, and the detection result determined from that score.
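The match in step 104 reduces to a comparison between the recognized color sequence and the sequence the verification code encodes. The names and the 0/1/2 digit mapping below are illustrative assumptions.

```python
# Check whether the recognized color sequence matches the verification code.
DIGIT_TO_COLOR = {"0": "red", "1": "green", "2": "blue"}

def passes_injection_check(recognized_colors: list, verification_code: str) -> bool:
    """True when the color sequence recognized in the face video matches the
    sequence encoded by the color sequence verification code."""
    expected = [DIGIT_TO_COLOR[digit] for digit in verification_code]
    return recognized_colors == expected

print(passes_injection_check(["green", "red", "green", "blue"], "1012"))  # True
print(passes_injection_check(["green", "red", "blue", "blue"], "1012"))   # False
```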
According to the face living body detection method provided by the embodiment of the present disclosure, a color sequence verification code is acquired, the terminal is controlled to sequentially display corresponding colors according to the verification code, and the face video of the target object collected by the terminal during the display is acquired; the first differential signal of the color intensity sequence of each color channel is extracted from the face video, and the second differential signal representing the color difference of each color channel is obtained from the first differential signals, where a channel's color intensity sequence is the sequence of that channel's mean color intensity in each frame of the face video; the color sequence in the face video is obtained from the second differential signals; and if the color sequence in the face video matches the color sequence verification code, it is determined that the face video has not suffered an injection attack, and face living body detection is performed according to the face video. Determining the color sequence from second differential signals that represent the color difference of each channel effectively resists external environment interference and improves the robustness of recognizing the color sequence in the face video; matching against the verification code effectively prevents injection attacks during face living body detection, at a low computational cost.
Fig. 2 is a schematic flowchart of a face living body detection method according to an exemplary embodiment of the present disclosure. As shown in fig. 2, the face living body detection method provided by the present disclosure includes:
Step 201, acquiring a color sequence verification code, controlling the terminal to sequentially display corresponding colors according to the color sequence verification code, and acquiring the face video of the target object collected by the terminal while the corresponding colors are displayed in sequence.
Step 201 may refer to step 101 in the above embodiment. The color sequence verification code in this embodiment may be randomly generated or generated by other methods.
In addition, if the execution subject of this embodiment is a server, the server may generate a color sequence verification code upon receiving a face living body verification request from the terminal and send it to the terminal; the terminal sequentially displays the corresponding colors according to the verification code, collects the face video of the target object under the illumination of the various colors, and sends the face video to the server, which performs the following steps. If the execution subject is the terminal, the terminal acquires the color sequence verification code, sequentially displays the corresponding colors according to it, collects the face video of the target object under the illumination of the various colors, and performs the following steps itself.
Step 202, for any color channel, obtaining a color intensity mean value of the color channel in each frame of the face video, and obtaining a color intensity sequence of the color channel.
For each of the three RGB color channels, the mean color intensity of that channel in each frame of the face video is obtained, giving the color intensity sequences I_r(t), I_g(t), I_b(t), where t is the time variable. For example, for the R (red) channel, the mean of the R channel values over all pixels of a frame is computed for each frame of the face video, and the per-frame means form the R channel color intensity sequence I_r(t).
Step 203, obtaining the differential of the color intensity of the color channel with respect to time according to the color intensity sequence of the color channel, so as to obtain a first differential signal of the color intensity sequence of the color channel.
For each of the three RGB color channels, the color intensity is differentiated with respect to time according to that channel's color intensity sequence, giving the first differential signals of the color intensity sequences:

dI_r(t)/dt, dI_g(t)/dt, dI_b(t)/dt
Step 204, obtaining a second differential signal representing the color difference of each color channel according to the first differential signal corresponding to each color channel and a preset color difference calculation formula corresponding to each color channel;
the preset color difference calculation formula corresponding to any color channel is obtained by subtracting the product of the first differential signal corresponding to the color channel and the corresponding coefficient from the first differential signal corresponding to the color channel.
Taking the preset color difference calculation formula of the above embodiment as an example, the second differential signals representing the color difference of each color channel take the form:

T_r(t) = dI_r(t)/dt - a_rg·dI_g(t)/dt - a_rb·dI_b(t)/dt
T_g(t) = dI_g(t)/dt - a_gr·dI_r(t)/dt - a_gb·dI_b(t)/dt
T_b(t) = dI_b(t)/dt - a_br·dI_r(t)/dt - a_bg·dI_g(t)/dt

where T_r, T_g and T_b are the second differential signals of the respective color channels.
Of course, the preset color difference calculation formula is not limited to the above example; other color difference calculation formulas may also be used. In particular, which other color channels' first differential signals enter the formula, and the corresponding coefficients, may be determined according to experiments.
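A minimal sketch of such a formula follows; the coefficient values k1 and k2 are illustrative placeholders, since the patent determines the actual coefficients experimentally:

```python
def second_differential(d_self, d_other1, d_other2, k1=0.5, k2=0.5):
    """Sketch of the preset color difference calculation formula: the
    channel's first differential signal minus the other two channels'
    first differential signals weighted by coefficients. The values of
    k1 and k2 are placeholders, not the patent's experimental choice.
    """
    return [a - k1 * b - k2 * c
            for a, b, c in zip(d_self, d_other1, d_other2)]
```

For the R channel, `d_self` would be D_r and `d_other1`, `d_other2` the D_g and D_b signals computed in step 203.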
And 205, segmenting the second differential signals corresponding to the color channels based on the terminal color switching time.
Because the second differential signal exploits the rate of change of the color difference, the second differential signal corresponding to each color channel can be segmented in order to analyze the rate of change of the color difference at each color switch more fully; each segment is used to analyze the rate of change of the color difference during one color switching process.
Optionally, the switching time of each color of the terminal is used as the center of the segment, and the second differential signals corresponding to the color channels are segmented, so that the accuracy of determining the switched color corresponding to the segment can be improved. As shown in fig. 3, the color switching time is at the center of the segment. It should be noted that, if the duration of each color displayed by the terminal is equal, the second differential signals corresponding to each color channel can be segmented uniformly, so that the color switching time is located at the center of the segment.
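The switch-centered segmentation can be sketched as follows, assuming the switching times are given as frame indices into the second differential signal:

```python
def segment_signal(signal, switch_indices, half_width):
    """Cut a per-channel second differential signal into segments
    centered on each color switching index, so that each segment
    covers one color switching process. Windows that would cross the
    signal boundary are clamped to it."""
    segments = []
    for s in switch_indices:
        lo = max(0, s - half_width)
        hi = min(len(signal), s + half_width)
        segments.append(signal[lo:hi])
    return segments
```

When every color is displayed for an equal duration, the switching indices are evenly spaced and this reduces to the uniform segmentation mentioned above.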
Step 206, for the second differential signals corresponding to the color channels of any segment, determining the switched color corresponding to the segment through a preset classification model.
In this embodiment, the second differential signal corresponding to each color channel of any segment may be input into a classification model trained in advance, and the color after switching in the color switching process of the segment may be determined by the classification model.
A lightweight classification model can be adopted in this embodiment. The classification model includes one-dimensional convolutional layers, a one-dimensional average pooling layer, and a three-class fully-connected layer; as shown in fig. 3, the classification model optionally includes three one-dimensional convolutional layers, one one-dimensional average pooling layer, and one three-class fully-connected layer.
For the second differential signals corresponding to the color channels of any segment, features are extracted by the one-dimensional convolutional layers, averaged by the one-dimensional average pooling layer, and classified into three classes by the fully-connected layer, thereby determining the switched color corresponding to the segment. Optionally, the outputs of the fully-connected layer are 0, 1 and 2, where 0 indicates that the switched color is red, 1 that it is green, and 2 that it is blue.
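The forward pass of such a lightweight classifier can be sketched in plain Python with a single 1-D convolution per channel; the weights below are arbitrary illustrative values, not the trained model, and the real model stacks three convolutional layers:

```python
def classify_segment(channels, conv_kernels, fc_weights, fc_bias):
    """Toy forward pass of the lightweight classifier: one 1-D
    convolution over each channel's second differential signal, global
    average pooling, then a three-class fully-connected layer whose
    argmax encodes the switched color (0=red, 1=green, 2=blue)."""
    pooled = []
    for signal, kernel in zip(channels, conv_kernels):
        k = len(kernel)
        # valid 1-D convolution (no padding)
        conv = [sum(signal[i + j] * kernel[j] for j in range(k))
                for i in range(len(signal) - k + 1)]
        pooled.append(sum(conv) / len(conv))  # global average pooling
    logits = [sum(w * p for w, p in zip(row, pooled)) + b
              for row, b in zip(fc_weights, fc_bias)]
    return logits.index(max(logits))  # argmax over the three classes
```

A production implementation would use a deep-learning framework's Conv1d/AvgPool1d/Linear layers with trained weights; this sketch only mirrors the layer structure described above.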
In this embodiment, the second differential signals corresponding to the color channels of each segment are processed by a lightweight classification model to determine the switched color of each segment. Compared with directly recognizing colors from video frame images with a two-dimensional convolutional network, the computation cost is low. Moreover, processing the second differential signal makes full use of the rate of change of the color difference; this feature strongly highlights each colored-light component in the color sequence, effectively resists interference from the external environment, and gives the model high robustness.
In this embodiment, before determining the switched color corresponding to the segment through a preset classification model, the method further includes a preprocessing process, specifically including:
linear interpolation is performed on the second differential signals corresponding to the color channels of any segment to obtain an input vector of a preset length corresponding to that segment, and the input vector is input into the classification model. Through linear interpolation, the second differential signals of each segment can be brought to a fixed length satisfying the model input; for example, the second differential signal of each color channel is resampled to 96 samples, so that the input vector of the classification model is 3 × 96. In addition, the input vector can be normalized, which reduces the computation cost of the model and improves its processing efficiency and accuracy.
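A pure-Python sketch of the linear resampling step; the 96-sample target length follows the example above:

```python
def resample_linear(signal, target_len=96):
    """Linearly interpolate a segment's second differential signal to
    a fixed length (96 samples per channel here, matching the 3 x 96
    input vector described above), so that variable-length segments
    fit the classification model's input."""
    n = len(signal)
    if n == 1:
        return [signal[0]] * target_len
    out = []
    for i in range(target_len):
        pos = i * (n - 1) / (target_len - 1)  # position in the source
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        frac = pos - lo
        out.append(signal[lo] * (1 - frac) + signal[hi] * frac)
    return out
```

Normalization (e.g. scaling each channel to zero mean and unit variance) would then be applied to the resampled vector before it enters the model.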
And step 207, obtaining a color sequence in the face video according to the switched color corresponding to each segment.
The switched color corresponding to each segment can be obtained through the classification model, and arranging these colors in segment order yields the color sequence in the face video.
And step 208, matching the color sequence in the face video with the color sequence verification code.
The color sequence in the face video is compared with the color sequence corresponding to the color sequence verification code; if they are consistent, step 209 is executed, otherwise step 210 is executed.
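The matching step itself reduces to an element-wise comparison of the two sequences; a minimal sketch:

```python
def matches_verification_code(predicted_colors, code_colors):
    """Step 208 sketch: the recognized color sequence matches the
    color sequence verification code only if both sequences have the
    same length and agree element by element."""
    return len(predicted_colors) == len(code_colors) and all(
        p == c for p, c in zip(predicted_colors, code_colors))
```

A `True` result leads to step 209 (continue with living body detection); `False` leads to step 210 (end detection as a suspected injection attack).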
And step 209, if the color sequence in the face video is matched with the color sequence verification code, determining that the face video is not attacked by injection, and performing face living body detection according to the face video.
If the color sequence in the face video matches the color sequence verification code, the face video has not been replaced and was collected in real time, the face living body detection process has not suffered an injection attack, and face living body detection can continue.
When face living body detection continues, the face video can be input into a living body model, which returns a living body score; the result of face living body detection is determined according to the living body score. Specifically, if the living body score is greater than a preset threshold, face living body detection is determined to have passed; otherwise, it is determined not to have passed.
And step 210, if the color sequence in the face video is not matched with the color sequence verification code, determining that the face video is attacked by injection, and finishing the face living body detection.
If the color sequence in the face video does not match the color sequence verification code, the face video may have been replaced rather than collected in real time, and the face living body detection process may have suffered an injection attack; face living body detection is therefore ended, and a prompt may additionally be issued.
The face living body detection method provided by this embodiment determines the color sequence in the face video using the second differential signals representing the color difference of each color channel, which effectively resists interference from the external environment and improves the robustness of recognizing the color sequence in the face video. By matching the color sequence in the face video against the color sequence verification code, injection attacks during face living body detection can be effectively prevented at a low computation cost. Moreover, applying a lightweight classification model to the second differential signals of each segment to determine the switched colors incurs a low computation cost compared with directly recognizing colors from video frame images with a two-dimensional convolutional network.
On the basis of the above embodiment, as shown in fig. 4, the preset color difference calculation formula corresponding to each color channel is obtained through the following processes:
step 401, obtaining a plurality of candidate color difference calculation formulas corresponding to each color channel, where coefficients in different candidate color difference calculation formulas are different.
The candidate color difference calculation formulas corresponding to the R channel may include, but are not limited to, formulas of the form

T_r(t) = D_r(t) − α·D_g(t) − β·D_b(t)

with different coefficient pairs (α, β), where a coefficient may be 0 or any other value. The candidate color difference calculation formulas corresponding to the G channel and the B channel are similar to those of the R channel: each is obtained by subtracting, from the first differential signal corresponding to the color channel, the products of the first differential signals corresponding to the other color channels and the corresponding coefficients, and the coefficients differ between candidate formulas.
Step 402, extracting a first differential signal of each color channel's color intensity sequence from a face video for testing.
A face video for testing can be obtained; the obtaining process may refer to step 101 or step 201 and is not repeated here. After the face video for testing is acquired, the first differential signal of each color channel's color intensity sequence can be extracted using the process of steps 202-203.
Step 403, for the first differential signal corresponding to any color channel, obtaining a second differential signal according to each candidate color difference calculation formula corresponding to the color channel.
For each candidate color difference calculation formula, the corresponding second differential signal may be obtained in the same manner as step 204.
Step 404, visually displaying the second differential signals obtained with the candidate color difference calculation formulas, selecting the candidate color difference calculation formula whose second differential signal jumps at the color switching times, and determining it as the preset color difference calculation formula corresponding to the color channel.
The second differential signals obtained with the candidate color difference calculation formulas are plotted as curves for visual display. The curves of the candidate formulas corresponding to the same color channel are compared, and the candidate formula whose second differential signal jumps (or jumps most markedly) at the color switching times is determined as the final color difference calculation formula, so that the rate of change of the color difference is highlighted most strongly.
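The patent selects the formula by visually comparing plotted curves. As a hypothetical automated analogue (not part of the patent), a candidate could be scored by the magnitude of its second differential signal at the switching times:

```python
def jump_score(second_diff, switch_indices):
    """Hypothetical automated analogue of the visual selection step:
    score a candidate formula by the mean absolute value of its second
    differential signal at the color switching indices. The candidate
    with the largest score shows the most pronounced jumps."""
    return sum(abs(second_diff[i]) for i in switch_indices) / len(switch_indices)
```

Ranking candidates by this score would pick the same formula a human would choose from the plots, provided the jumps dominate the off-switch signal.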
On the basis of the above embodiment, the training process of the classification model can be as follows:
A face video for training is obtained in multiple segments; the second differential signals corresponding to the color channels of a plurality of segments are obtained by adopting the process of steps 202-205; the switched color in each segment's color switching process is labeled to form training data; and the classification model is trained on this training data, so that the classification model can process the second differential signals corresponding to the color channels of any input segment and output the switched color in that segment's color switching process.
It should be noted that the models in the above embodiments are not models for a specific user, and cannot reflect personal information of a specific user. It should be noted that the face video in the present embodiment may be from a public data set.
In the technical scheme of the disclosure, the processing of collecting, storing, using, processing, transmitting, providing, disclosing and the like of the personal face image information of the user meets the regulations of relevant laws and regulations without violating the customs of public order.
Fig. 5 is a schematic structural diagram of a living human face detection apparatus according to an exemplary embodiment of the present disclosure.
As shown in fig. 5, the present disclosure provides a living human face detection apparatus 500, including:
a verification code generation unit 510 for acquiring a color sequence verification code;
the video obtaining unit 520 is configured to control the terminal to sequentially display corresponding colors according to the color sequence verification code, and obtain a facial video of the target object acquired by the terminal in the process of sequentially displaying the corresponding colors;
a differential signal obtaining unit 530, configured to extract a first differential signal of a color intensity sequence of each color channel from the face video, and obtain a second differential signal representing color difference of each color channel according to the first differential signal corresponding to each color channel; wherein, the color intensity sequence of any color channel is a sequence formed by the color intensity mean value of the color channel in each frame of the face video;
a color sequence prediction unit 540, configured to obtain a color sequence in the face video according to the second differential signal corresponding to each color channel;
a determining unit 550, configured to determine whether a color sequence in the face video matches the color sequence verification code, and if the color sequence in the face video matches the color sequence verification code, determine that the face video is not under the injection attack, and perform face live body detection according to the face video.
According to the face living body detection apparatus provided by this embodiment, a color sequence verification code is acquired, the terminal is controlled to sequentially display corresponding colors according to the color sequence verification code, and a face video of the target object collected by the terminal during the process of sequentially displaying the corresponding colors is acquired. A first differential signal of each color channel's color intensity sequence is extracted from the face video, and a second differential signal representing the color difference of each color channel is obtained from the first differential signals; the color intensity sequence of any color channel is the sequence formed by the mean color intensity of that channel in each frame of the face video. The color sequence in the face video is obtained from the second differential signals corresponding to the color channels, and if it matches the color sequence verification code, the face video is determined not to have suffered an injection attack and face living body detection proceeds on the face video. Determining the color sequence with the second differential signals effectively resists interference from the external environment and improves the robustness of recognizing the color sequence in the face video; matching the color sequence against the color sequence verification code effectively prevents injection attacks during face living body detection, at a low computation cost.
Fig. 6 is a schematic structural diagram of a living human face detection apparatus according to another exemplary embodiment of the present disclosure.
As shown in fig. 6, in the living human face detection apparatus 600 provided by the present disclosure, the verification code generation unit 610 shown is similar to the verification code generation unit 510 shown in fig. 5, the video acquisition unit 620 is similar to the video acquisition unit 520 shown in fig. 5, the differential signal acquisition unit 630 is similar to the differential signal acquisition unit 530 shown in fig. 5, the color sequence prediction unit 640 is similar to the color sequence prediction unit 540 shown in fig. 5, and the judgment unit 650 is similar to the judgment unit 550 shown in fig. 5.
Wherein the differential signal acquiring unit 630 includes:
a color intensity sequence acquiring module 631, configured to acquire, for any color channel, a color intensity mean value of the color channel in each frame of the face video, to obtain a color intensity sequence of the color channel;
the first differential signal obtaining module 632 is configured to obtain a differential of the color intensity of the color channel with respect to time according to the color intensity sequence of the color channel, so as to obtain a first differential signal of the color intensity sequence of the color channel.
Wherein the differential signal acquiring unit 630 includes:
the second differential signal obtaining module 633 is configured to obtain a second differential signal representing a color difference of each color channel according to the first differential signal corresponding to each color channel and a preset color difference calculation formula corresponding to each color channel, where the preset color difference calculation formula corresponding to any color channel is a product obtained by subtracting the first differential signal corresponding to the color channel by the first differential signal corresponding to the other color channel and a corresponding coefficient.
Wherein the color sequence prediction unit 640 includes:
a segmenting module 641, configured to segment the second differential signal corresponding to each color channel based on the terminal color switching time;
the color classification module 642 is configured to determine, for a second differential signal corresponding to each color channel of any one segment, a switched color corresponding to the segment through a preset classification model;
a color sequence determining module 643, configured to obtain a color sequence in the face video according to the switched color corresponding to each segment.
The segmentation module 641 is specifically configured to:
and segmenting the second differential signal corresponding to each color channel by taking each color switching time of the terminal as the center of the segment.
Wherein the color sequence prediction unit 640 further comprises:
the preprocessing module 644 is configured to perform linear interpolation on the second differential signals corresponding to each color channel of any segment to obtain an input vector with a preset length corresponding to the segment, and input the input vector into the classification model.
Wherein the classification model comprises: a one-dimensional convolution layer, a one-dimensional average pooling layer, and three classified fully-connected layers;
the color classification module 642 is specifically configured to:
and for the second differential signal corresponding to each color channel of any one segment, extracting features through the one-dimensional convolutional layer, performing average pooling on the features through the one-dimensional average pooling layer, performing three-classification through the full-connection layer, and determining the switched color corresponding to the segment.
The preset color difference calculation formula corresponding to each color channel is obtained through the following processes:
acquiring a plurality of candidate color difference calculation formulas corresponding to each color channel, wherein the coefficients in different candidate color difference calculation formulas are different;
extracting a first differential signal of each color channel's color intensity sequence from a face video for testing;
for the first differential signal corresponding to any color channel, obtaining a second differential signal according to each candidate color difference calculation formula corresponding to the color channel;
and visually displaying the second differential signals obtained with the candidate color difference calculation formulas, selecting the candidate color difference calculation formula whose second differential signal jumps at the color switching times, and determining it as the preset color difference calculation formula corresponding to the color channel.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
According to an embodiment of the present disclosure, the present disclosure also provides a computer program product comprising: a computer program, stored in a readable storage medium, from which at least one processor of the electronic device can read the computer program, the at least one processor executing the computer program causing the electronic device to perform the solution provided by any of the embodiments described above.
FIG. 7 illustrates a schematic block diagram of an example electronic device 700 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the device 700 comprises a computing unit 701, which may perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM)702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 can also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in the device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
Computing unit 701 may be a variety of general purpose and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 701 executes the respective methods and processes described above, such as the face live detection method. For example, in some embodiments, the face liveness detection method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 708. In some embodiments, part or all of a computer program may be loaded onto and/or installed onto device 700 via ROM 702 and/or communications unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the above described living human face detection method may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the face liveness detection method in any other suitable way (e.g. by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), system on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The Server can be a cloud Server, also called a cloud computing Server or a cloud host, and is a host product in a cloud computing service system, so as to solve the defects of high management difficulty and weak service expansibility in the traditional physical host and VPS service ("Virtual Private Server", or simply "VPS"). The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (19)

1. A face in-vivo detection method comprises the following steps:
acquiring a color sequence verification code, controlling a terminal to sequentially display corresponding colors according to the color sequence verification code, and acquiring a face video, collected by the terminal, of a target object during the process of sequentially displaying the corresponding colors;
extracting a first differential signal of a color intensity sequence of each color channel from the face video, and acquiring a second differential signal representing color difference of each color channel according to the first differential signal corresponding to each color channel; wherein, the color intensity sequence of any color channel is a sequence formed by the color intensity mean value of the color channel in each frame of the face video;
acquiring a color sequence in the face video according to the second differential signal corresponding to each color channel;
and if the color sequence in the face video is matched with the color sequence verification code, determining that the face video is not attacked by injection, and performing face living body detection according to the face video.
2. The method of claim 1, wherein said extracting from the face video a first differential signal for each color channel color intensity sequence comprises:
aiming at any color channel, acquiring a color intensity mean value of the color channel in each frame of the face video to obtain a color intensity sequence of the color channel;
and acquiring the differential of the color intensity of the color channel to time according to the color intensity sequence of the color channel to obtain a first differential signal of the color intensity sequence of the color channel.
3. The method according to claim 1 or 2, wherein obtaining, according to the first differential signal corresponding to each color channel, a second differential signal characterizing the color difference of each color channel comprises:
obtaining the second differential signal characterizing the color difference of each color channel according to the first differential signal corresponding to the color channel and a preset color difference calculation formula corresponding to the color channel, wherein the preset color difference calculation formula corresponding to any color channel is the first differential signal of that color channel minus the products of the first differential signals of the other color channels with their corresponding coefficients.
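The formula in claim 3 is a weighted channel subtraction: for channel c, D2_c = D1_c − Σ k_o · D1_o over the other channels o. A minimal sketch, with hypothetical coefficient values (the patent selects the actual coefficients empirically, per claim 8):

```python
import numpy as np

def second_differential(d1, channel, coeffs):
    """Second differential signal of one color channel.

    d1: dict mapping channel name -> first differential signal (ndarray)
    coeffs: dict mapping each *other* channel name -> its coefficient
    Implements: D2_c = D1_c - sum_{o != c} k_o * D1_o
    """
    out = d1[channel].copy()
    for other, k in coeffs.items():
        out = out - k * d1[other]
    return out

d1 = {"R": np.array([0.0, 10.0, 0.0]),
      "G": np.array([0.0, 4.0, 0.0]),
      "B": np.array([0.0, 2.0, 0.0])}
# Hypothetical coefficients, not values from the patent
d2_r = second_differential(d1, "R", {"G": 0.5, "B": 0.5})
# d2_r == [0., 7., 0.]
```

Subtracting the other channels suppresses brightness changes that affect all channels equally (ambient light, exposure), so the surviving jump is attributable to the displayed color switch.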
4. The method according to any one of claims 1-3, wherein obtaining the color sequence in the face video according to the second differential signal corresponding to each color channel comprises:
segmenting the second differential signal corresponding to each color channel based on the color switching times of the terminal;
for the second differential signals of each color channel within any one segment, determining the switched-to color corresponding to the segment through a preset classification model; and
obtaining the color sequence in the face video according to the switched-to color corresponding to each segment.
5. The method of claim 4, wherein segmenting the second differential signal corresponding to each color channel based on the color switching times of the terminal comprises:
segmenting the second differential signal corresponding to each color channel with each color switching time of the terminal as the center of a segment.
6. The method according to claim 4 or 5, wherein before determining the switched-to color corresponding to the segment through the preset classification model, the method further comprises:
performing linear interpolation on the second differential signals of each color channel within any one segment, to obtain an input vector of a preset length corresponding to the segment, and inputting the input vector into the classification model.
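Claims 5 and 6 together describe cutting a window centered on each known switching time and resampling it to a fixed classifier input length. A minimal sketch under those assumptions; the window half-width and target length are illustrative parameters, not values from the patent:

```python
import numpy as np

def segment_and_resample(signal, switch_idx, half_width, target_len):
    """Cut a window centered on a color-switch index (claim 5) and
    linearly interpolate it to a preset input length (claim 6)."""
    lo = max(0, switch_idx - half_width)
    hi = min(len(signal), switch_idx + half_width + 1)
    window = signal[lo:hi]
    # Linear interpolation makes the classifier input size constant
    # regardless of the capture frame rate.
    src = np.linspace(0.0, 1.0, num=len(window))
    dst = np.linspace(0.0, 1.0, num=target_len)
    return np.interp(dst, src, window)

sig = np.array([0.0, 0.0, 5.0, 0.0, 0.0])   # jump at index 2
vec = segment_and_resample(sig, switch_idx=2, half_width=2, target_len=9)
```

Centering on the switching time guarantees the transient lands mid-window, which gives the classifier a consistent view of each color transition.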
7. The method of any of claims 4-6, wherein the classification model comprises: a one-dimensional convolutional layer, a one-dimensional average pooling layer, and a fully-connected layer for three-class classification;
and wherein determining, through the preset classification model, the switched-to color corresponding to a segment from the second differential signals of each color channel within the segment comprises:
for the second differential signals of each color channel within any one segment, extracting features through the one-dimensional convolutional layer, average-pooling the features through the one-dimensional average pooling layer, and performing three-class classification through the fully-connected layer, to determine the switched-to color corresponding to the segment.
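The classifier in claim 7 is small enough to sketch as a plain forward pass. The NumPy sketch below implements a 1-D convolution, global average pooling, and a three-class fully-connected layer, with random untrained weights; the ReLU nonlinearity and all layer shapes are assumptions not specified in the claims:

```python
import numpy as np

def conv1d(x, kernels):
    """Valid 1-D convolution: x of shape (L,), kernels (C, K) -> (C, L-K+1)."""
    C, K = kernels.shape
    L = len(x)
    out = np.empty((C, L - K + 1))
    for c in range(C):
        for i in range(L - K + 1):
            out[c, i] = np.dot(x[i:i + K], kernels[c])
    return np.maximum(out, 0.0)          # ReLU: an assumed nonlinearity

def classify_color(x, kernels, fc_w, fc_b):
    feat = conv1d(x, kernels)            # one-dimensional convolutional layer
    pooled = feat.mean(axis=1)           # one-dimensional average pooling layer
    logits = fc_w @ pooled + fc_b        # three-class fully-connected layer
    return int(np.argmax(logits))        # class index, e.g. 0/1/2 -> R/G/B

rng = np.random.default_rng(0)
x = rng.standard_normal(32)              # fixed-length input vector (claim 6)
kernels = rng.standard_normal((4, 5))    # 4 filters of width 5 (illustrative)
fc_w = rng.standard_normal((3, 4))
fc_b = np.zeros(3)
label = classify_color(x, kernels, fc_w, fc_b)
```

Average pooling collapses the temporal axis, so the model is deliberately tiny: it only needs to tell which of three colors the transient corresponds to, not where in the window it occurs.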
8. The method according to claim 3, wherein the preset color difference calculation formula corresponding to each color channel is obtained by:
acquiring a plurality of candidate color difference calculation formulas corresponding to each color channel, wherein the coefficients differ between different candidate formulas;
extracting, from a face video used for testing, a first differential signal of the color intensity sequence of each color channel;
for the first differential signal corresponding to any color channel, obtaining a second differential signal according to each candidate color difference calculation formula corresponding to the color channel; and
visually displaying the second differential signals obtained with the candidate formulas, selecting the candidate formula whose second differential signal exhibits a jump at the color switching times, and determining it as the preset color difference calculation formula corresponding to the color channel.
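Claim 8 selects coefficients by visually inspecting which candidate formula produces the clearest jump at the switching times. A hypothetical automated stand-in for that inspection scores each candidate by its peak response around the known switch indices; the scoring rule and names below are assumptions, not the patent's procedure:

```python
import numpy as np

def jump_magnitude(d2, switch_indices, half_width=1):
    """Sum of peak absolute responses of a second differential signal
    in small windows around the known color-switch indices."""
    return sum(np.abs(d2[max(0, i - half_width): i + half_width + 1]).max()
               for i in switch_indices)

def select_formula(candidates, switch_indices):
    """candidates: dict of candidate-name -> second differential signal
    computed with that candidate's coefficients. Returns the candidate
    with the clearest jump at the switching times."""
    return max(candidates,
               key=lambda name: jump_magnitude(candidates[name], switch_indices))

cands = {"k=0.3": np.array([0.0, 1.0, 0.0, 1.2, 0.0]),
         "k=0.7": np.array([0.0, 3.0, 0.0, 2.8, 0.0])}
best = select_formula(cands, switch_indices=[1, 3])   # "k=0.7"
```

Ranking by jump magnitude favors coefficient sets that make the color transition stand out from noise, which is the same property the visual inspection in the claim is checking for.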
9. A face liveness detection device, comprising:
the verification code generating unit is used for acquiring a color sequence verification code;
the video acquisition unit is used for controlling the terminal to sequentially display corresponding colors according to the color sequence verification code and acquiring a facial video of the target object acquired by the terminal in the process of sequentially displaying the corresponding colors;
the differential signal acquisition unit is used for extracting, from the face video, a first differential signal of the color intensity sequence of each color channel, and obtaining, according to the first differential signal corresponding to each color channel, a second differential signal characterizing the color difference of each color channel; wherein the color intensity sequence of any color channel is the sequence formed by the mean color intensity of that color channel in each frame of the face video;
the color sequence prediction unit is used for acquiring a color sequence in the face video according to the second differential signal corresponding to each color channel;
and the judging unit is used for judging whether the color sequence in the face video matches the color sequence verification code, and if so, determining that the face video has not suffered an injection attack and performing face liveness detection on the face video.
10. The apparatus of claim 9, wherein the differential signal acquiring unit comprises:
the color intensity sequence acquisition module is used for acquiring the color intensity mean value of the color channel in each frame of the face video aiming at any color channel to obtain the color intensity sequence of the color channel;
and the first differential signal acquisition module is used for acquiring the differential of the color intensity of the color channel to time according to the color intensity sequence of the color channel to obtain a first differential signal of the color intensity sequence of the color channel.
11. The apparatus according to claim 9 or 10, wherein the differential signal acquiring unit comprises:
a second differential signal acquisition module, used for obtaining the second differential signal characterizing the color difference of each color channel according to the first differential signal corresponding to the color channel and a preset color difference calculation formula corresponding to the color channel, wherein the preset color difference calculation formula corresponding to any color channel is the first differential signal of that color channel minus the products of the first differential signals of the other color channels with their corresponding coefficients.
12. The apparatus according to any one of claims 9-11, wherein the color sequence prediction unit comprises:
the segmentation module is used for segmenting the second differential signals corresponding to the color channels based on the terminal color switching time;
the color classification module is used for determining the switched color corresponding to the segment through a preset classification model for the second differential signal corresponding to each color channel of any segment;
and the color sequence determining module is used for obtaining a color sequence in the face video according to the switched color corresponding to each segment.
13. The apparatus of claim 12, wherein the segmentation module is specifically configured to:
and segmenting the second differential signal corresponding to each color channel by taking each color switching time of the terminal as the center of the segment.
14. The apparatus according to claim 12 or 13, wherein the color sequence prediction unit further comprises:
and the preprocessing module is used for performing linear interpolation on the second differential signals corresponding to each color channel of any one segment to obtain an input vector with a preset length corresponding to the segment, and inputting the input vector into the classification model.
15. The apparatus of any of claims 12-14, wherein the classification model comprises: a one-dimensional convolutional layer, a one-dimensional average pooling layer, and a fully-connected layer for three-class classification;
the color classification module is specifically configured to:
for the second differential signals of each color channel within any one segment, extract features through the one-dimensional convolutional layer, average-pool the features through the one-dimensional average pooling layer, and perform three-class classification through the fully-connected layer, to determine the switched-to color corresponding to the segment.
16. The apparatus of claim 11, wherein the preset color difference calculation formula corresponding to each color channel is obtained by:
acquiring a plurality of candidate color difference calculation formulas corresponding to each color channel, wherein the coefficients differ between different candidate formulas;
extracting, from a face video used for testing, a first differential signal of the color intensity sequence of each color channel;
for the first differential signal corresponding to any color channel, obtaining a second differential signal according to each candidate color difference calculation formula corresponding to the color channel; and
visually displaying the second differential signals obtained with the candidate formulas, selecting the candidate formula whose second differential signal exhibits a jump at the color switching times, and determining it as the preset color difference calculation formula corresponding to the color channel.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-8.
19. A computer program product comprising a computer program which, when executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
CN202210112290.3A 2022-01-29 2022-01-29 Face living body detection method, device, equipment, storage medium and program product Active CN114445898B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210112290.3A CN114445898B (en) 2022-01-29 2022-01-29 Face living body detection method, device, equipment, storage medium and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210112290.3A CN114445898B (en) 2022-01-29 2022-01-29 Face living body detection method, device, equipment, storage medium and program product

Publications (2)

Publication Number Publication Date
CN114445898A true CN114445898A (en) 2022-05-06
CN114445898B CN114445898B (en) 2023-08-29

Family

ID=81371998

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210112290.3A Active CN114445898B (en) 2022-01-29 2022-01-29 Face living body detection method, device, equipment, storage medium and program product

Country Status (1)

Country Link
CN (1) CN114445898B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116226821A (en) * 2023-05-04 2023-06-06 成都致学教育科技有限公司 Teaching data center management system

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019114580A1 (en) * 2017-12-13 2019-06-20 深圳励飞科技有限公司 Living body detection method, computer apparatus and computer-readable storage medium
WO2019134536A1 (en) * 2018-01-04 2019-07-11 杭州海康威视数字技术股份有限公司 Neural network model-based human face living body detection
CN110298312A (en) * 2019-06-28 2019-10-01 北京旷视科技有限公司 Biopsy method, device, electronic equipment and computer readable storage medium
WO2020259128A1 (en) * 2019-06-28 2020-12-30 北京旷视科技有限公司 Liveness detection method and apparatus, electronic device, and computer readable storage medium
CN110969077A (en) * 2019-09-16 2020-04-07 成都恒道智融信息技术有限公司 Living body detection method based on color change
US20210397822A1 (en) * 2019-10-31 2021-12-23 Shanghai Sensetime Intelligent Technology Co., Ltd. Living body detection method, apparatus, electronic device, storage medium and program product
CN111460931A (en) * 2020-03-17 2020-07-28 华南理工大学 Face spoofing detection method and system based on color channel difference image characteristics
CN112614060A (en) * 2020-12-09 2021-04-06 深圳数联天下智能科技有限公司 Method and device for rendering human face image hair, electronic equipment and medium
CN113255511A (en) * 2021-05-21 2021-08-13 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for living body identification
CN113361349A (en) * 2021-05-25 2021-09-07 北京百度网讯科技有限公司 Face living body detection method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
REN Yuanyuan: "Research on Face Liveness Detection Algorithms Based on Convolutional Neural Networks", China Master's Theses Full-text Database, Information Science and Technology, no. 02 *


Also Published As

Publication number Publication date
CN114445898B (en) 2023-08-29

Similar Documents

Publication Publication Date Title
CN113705425B (en) Training method of living body detection model, and method, device and equipment for living body detection
CN113205057B (en) Face living body detection method, device, equipment and storage medium
CN112883902B (en) Video detection method and device, electronic equipment and storage medium
CN113343826A (en) Training method of human face living body detection model, human face living body detection method and device
EP4047509A1 (en) Facial parsing method and related devices
CN114092759A (en) Training method and device of image recognition model, electronic equipment and storage medium
CN113569708A (en) Living body recognition method, living body recognition device, electronic apparatus, and storage medium
CN112784760A (en) Human behavior recognition method, device, equipment and storage medium
CN116721460A (en) Gesture recognition method, gesture recognition device, electronic equipment and storage medium
CN114445898B (en) Face living body detection method, device, equipment, storage medium and program product
Chen et al. Fresh tea sprouts detection via image enhancement and fusion SSD
CN114120454A (en) Training method and device of living body detection model, electronic equipment and storage medium
CN113869253A (en) Living body detection method, living body training device, electronic apparatus, and medium
CN113569707A (en) Living body detection method, living body detection device, electronic apparatus, and storage medium
CN113963197A (en) Image recognition method and device, electronic equipment and readable storage medium
CN115116111B (en) Anti-disturbance human face living body detection model training method and device and electronic equipment
CN111862030A (en) Face synthetic image detection method and device, electronic equipment and storage medium
CN114170642A (en) Image detection processing method, device, equipment and storage medium
CN115249281B (en) Image occlusion and model training method, device, equipment and storage medium
CN115273184A (en) Face living body detection model training method and device
CN115019057A (en) Image feature extraction model determining method and device and image identification method and device
CN113963011A (en) Image recognition method and device, electronic equipment and storage medium
CN114359993A (en) Model training method, face recognition device, face recognition equipment, face recognition medium and product
CN113033372A (en) Vehicle damage assessment method and device, electronic equipment and computer readable storage medium
CN115205939A (en) Face living body detection model training method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant