CN111914763A - Living body detection method and device and terminal equipment - Google Patents

Living body detection method and device and terminal equipment

Info

Publication number
CN111914763A
CN111914763A
Authority
CN
China
Prior art keywords
image
background image
optical information
detection
detection object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010775145.4A
Other languages
Chinese (zh)
Other versions
CN111914763B (en)
Inventor
丛恒
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202010775145.4A
Publication of CN111914763A
Application granted
Publication of CN111914763B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/40 Spoof detection, e.g. liveness detection
    • G06V 40/45 Detection of the body part being alive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/08 Speech classification or search
    • G10L 15/10 Speech classification or search using distance or distortion measures between unknown speech and reference templates

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a living body detection method and device and terminal equipment. The method includes: displaying a background image in a graphical user interface, wherein the background image includes at least two colors; using the background image as a light source to emit light toward a detection object; acquiring an image of the detection object; identifying, in the image, first optical information formed by light reflected from the surface of the detection object; and determining whether the detection object passes living body detection according to the first optical information and second optical information of the background image, wherein the second optical information includes fused color information determined according to the at least two colors included in the background image. In general, an AI-forged video does not contain the second optical information of the background image and therefore cannot pass living body detection, so the method can increase the accuracy of living body detection. The method does not require the user to perform a specified action during living body detection, and can therefore improve the user's experience of the detection process.

Description

Living body detection method and device and terminal equipment
Technical Field
The invention relates to the technical field of identity verification, and in particular to a living body detection method and device and terminal equipment.
Background
Living body detection is a technique for confirming, during user identity authentication, that the authenticated object has the real physiological characteristics of a living person. Current living body detection methods mainly comprise dynamic living body detection and silent living body detection. Dynamic living body detection can verify specific actions of the user, such as blinking, opening the mouth, shaking the head, and nodding, and can also verify the user's pronunciation when reading digits aloud.
Dynamic living body detection achieves a high verification success rate against traditional spoofing methods such as screen replay and forged video. However, with the development of AI (Artificial Intelligence) technology, AI-forged videos have emerged that can circumvent dynamic living body detection. As a result, dynamic living body detection not only requires the user to perform specified actions, which degrades the user's experience during detection, but is also easily defeated by spoofing methods such as AI-forged video.
Disclosure of Invention
In view of this, the present invention provides a living body detection method, apparatus and terminal device, so as to increase the accuracy of living body detection and improve the user's experience during the detection process.
In a first aspect, an embodiment of the present invention provides a living body detection method in which a graphical user interface is provided by a terminal device. The method includes: displaying a background image in the graphical user interface, wherein the background image comprises at least two colors; using the background image as a light source to emit light toward the detection object; acquiring an image of the detection object; identifying, in the image, first optical information formed by light reflected from the surface of the detection object; and determining whether the detection object passes living body detection according to the first optical information and second optical information of the background image, wherein the second optical information comprises fused color information determined according to the at least two colors included in the background image.
In a preferred embodiment of the present invention, the graphical user interface includes a first display area and a second display area, and the background image is displayed in the second display area, and the method further includes: an image of the detection object is displayed in the first display area.
In a preferred embodiment of the present invention, the second display area surrounds the first display area.
In a preferred embodiment of the present invention, the method further includes: superposing a specified layer on the image of the detection object displayed in the first display area; the color of the designated layer is a preset color, and the transparency of the designated layer is a preset value.
In a preferred embodiment of the present invention, the background image is a static image.
In a preferred embodiment of the present invention, the background image is an image sequence, and the image sequence at least includes a first image and a second image; the first image and the second image comprise at least two colors.
In a preferred embodiment of the present invention, the step of displaying the background image in the graphical user interface includes: monitoring the duration of the first image; and if the duration reaches a first preset duration, displaying the second image in the graphical user interface.
In a preferred embodiment of the present invention, the step of acquiring the image of the detection object includes: using the images included in a captured video frame sequence as images of the detection object.
In a preferred embodiment of the present invention, the step of determining whether the detection object passes the living body detection according to the first optical information and the second optical information of the background image includes: determining second fused color information included in the second optical information; inputting the first optical information into a neural network model and outputting first fused color information included in the first optical information, where the neural network model is obtained by training on pre-stored background sample images and the fused color information corresponding to those sample images; determining whether the first fused color information and the second fused color information are consistent; and if they are consistent, determining that the detection object passes the living body detection.
In a preferred embodiment of the present invention, before displaying the background image in the graphical user interface, the method further includes: receiving a background image sent by a server; or, acquiring a background image prestored by the terminal device.
In a preferred embodiment of the present invention, the background image includes a game character image, a cartoon image, or an advertisement image.
In a preferred embodiment of the present invention, the method further includes: prompting an indication action, wherein the indication action comprises at least one of blinking, opening the mouth, shaking the head left and right, or nodding; determining the completion status of the indication action according to the image of the detection object; and judging, according to the completion status of the indication action, whether the detection object passes a secondary living body detection.
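The patent does not specify how completion of an indication action is judged. One common way to detect the blink action is the eye-aspect-ratio (EAR) heuristic over a sequence of face-landmark frames; the landmark ordering, threshold value, and function names below are assumptions for illustration only.

```python
def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered around the eye, as in the
    common EAR formulation; a low ratio suggests a closed eye."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def blink_completed(ear_series, threshold=0.2):
    """Count the blink as completed if the eye closes (EAR dips below
    the threshold) and then reopens later in the series."""
    closed = False
    for ear in ear_series:
        if ear < threshold:
            closed = True
        elif closed:
            return True
    return False
```

Other prompted actions (mouth opening, head turns) admit analogous geometric checks over landmark or pose sequences.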
In a second aspect, an embodiment of the present invention further provides a living body detection apparatus, which provides a graphical user interface through a terminal device, and includes: the background image display module is used for displaying a background image in the graphical user interface, wherein the background image at least comprises two colors; the light emitting module is used for emitting light to the detection object by taking the background image as a light source; the image acquisition module is used for acquiring an image of the detection object; the first optical information identification module is used for identifying first optical information formed by reflected light of the detection object surface in the image; and the living body detection module is used for determining whether the detection object passes through living body detection or not according to the first optical information and second optical information of the background image, wherein the second optical information comprises fused color information determined according to at least two colors included in the background image.
In a third aspect, an embodiment of the present invention further provides a terminal device, which includes a processor and a memory, where the memory stores computer-executable instructions that can be executed by the processor, and the processor executes the computer-executable instructions to implement the steps of the above-mentioned living body detection method.
In a fourth aspect, embodiments of the present invention also provide a computer-readable storage medium storing computer-executable instructions that, when invoked and executed by a processor, cause the processor to implement the steps of the liveness detection method described above.
The embodiment of the invention has the following beneficial effects:
according to the living body detection method and device and terminal equipment of the embodiments of the invention, the terminal device determines whether the detection object passes living body detection based on the first optical information formed by the collected light reflected from the surface of the detection object and the second optical information of the background image. The screen of the terminal device emits an optical signal that contains the second optical information of the background image; after being reflected by the detection object, this signal can be collected by the front camera, so the image of the detection object collected by the front camera also contains the second optical information of the background image, and living body detection can be carried out based on the first optical information of the image of the detection object and the second optical information of the background image. In general, an AI-forged video does not contain the second optical information of the background image and therefore cannot pass living body detection, so the method can increase the accuracy of living body detection. The method does not require the user to perform a specified action during living body detection, and can therefore improve the user's experience of the detection process.
Additional features and advantages of the disclosure will be set forth in the description which follows, or in part may be learned by the practice of the above-described techniques of the disclosure, or may be learned by practice of the disclosure.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a schematic diagram of a graphical user interface provided by an embodiment of the present invention;
FIG. 2 is a flow chart of a method for detecting a living subject according to an embodiment of the present invention;
FIG. 3 is a flow chart of another living body detection method provided by an embodiment of the invention;
FIG. 4 is a schematic diagram of another graphical user interface provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of another graphical user interface provided by an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a living body detecting apparatus according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of another living body detection apparatus according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
At present, dynamic living body detection is easily defeated by spoofing methods such as AI-forged video, and it requires the user to perform specified actions during detection, which degrades the user's experience. To increase the accuracy of living body detection and improve the user's experience during detection, the living body detection method, apparatus and terminal device provided by the embodiments of the invention can be applied to computers, mobile phones, tablet computers and other terminal devices capable of human-computer interaction, and are particularly suitable for identity recognition scenarios.
To facilitate understanding of the present embodiment, a detailed description will be given of a method for detecting a living body disclosed in the present embodiment.
The embodiment provides a living body detection method, wherein a graphical user interface is displayed through a terminal device provided with a front camera, and the graphical user interface can display a background image prestored in the terminal device, so that a screen of the terminal device emits an optical signal of the background image.
The terminal device in this embodiment may be a human-computer interaction device configured with a front camera, such as a mobile phone, a computer, a tablet computer, and a notebook computer. Referring to a schematic diagram of a graphical user interface shown in fig. 1, a square area in fig. 1 is the graphical user interface, an oval area in the graphical user interface is a first display area, and the first display area may be used to display an image of a detection object acquired by a front camera of a terminal device, that is, the image of the detection object is displayed in the oval area in the graphical user interface.
In the graphical user interface, the region other than the elliptical region is the second display region, which may be used to present the background image. The background image may be prestored in the terminal device by the user. When the second display area displays a background image A, the terminal device emits the optical signal of background image A through its screen; this optical signal is the light emitted from the screen, and because the terminal device is displaying background image A while emitting light, the emitted light carries the optical information of background image A. Based on the above description, and referring to the flowchart of a living body detection method shown in fig. 2, the living body detection method includes the following steps:
step S202, displaying a background image in the graphical user interface, wherein the background image at least comprises two colors.
Living body detection confirms the real physiological characteristics of the authenticated object during user identity authentication; for example, it can detect whether the user is a real person. If a user wants to perform living body detection, the user can trigger a living body detection operation on the graphical user interface; common trigger operations include clicking and long-pressing. For example, the user can click a living body detection icon displayed on the graphical user interface of the terminal device with a mouse or a finger; alternatively, if the terminal device includes a physical living body detection button, the user can trigger the operation by pressing that button.
The terminal device may display the background image in the graphical user interface in response to a liveness detection operation for the graphical user interface. For example, a background image may be presented in a second display area of the graphical user interface. The background image in this embodiment contains at least two colors, i.e., the background image is not a solid color image but a color image.
Step S204, the background image is used as a light source to emit light to the detection object.
Since the light signal emitted from the screen of the terminal device carries the optical information of the background image, the background image can be used as a light source to emit light toward the detection object. The detection object here may be a human body, an animal, or another object; this embodiment takes a human face as an example. Because the background image in this embodiment includes at least two colors, the emitted light may fuse different colors. For example, if the background image includes red and blue, the red light and blue light they emit may fuse into purple light, so the light emitted toward the detection object with the background image as the light source may be the color-fused light of the colors included in the background image.
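The color fusion described above can be sketched as an area-weighted additive mix of the background image's colors: each color contributes in proportion to the screen area it covers, so changing the layout changes the fused color. The function name and example fractions are illustrative assumptions.

```python
def fuse_colors(color_areas):
    """Weighted additive fusion: each (rgb, fraction) pair is a color
    and the fraction of the background it covers."""
    return tuple(
        round(sum(rgb[i] * frac for rgb, frac in color_areas))
        for i in range(3)
    )

# Equal red and blue areas fuse to a purple hue...
even = fuse_colors([((255, 0, 0), 0.5), ((0, 0, 255), 0.5)])
# ...while a mostly-red layout shifts the fused color toward red.
skewed = fuse_colors([((255, 0, 0), 0.8), ((0, 0, 255), 0.2)])
```

This is why, as noted later in the description, two background images with the same colors but different color distributions can yield different fused colors.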
Step S206, an image of the detection object is acquired.
After the terminal device uses the background image as a light source to emit light toward the detection object, it can start the front camera. The user can manually adjust the shooting direction of the front camera so that it is aimed at the user's face, and then capture an image through the front camera; the captured image of the detection object can be displayed in the first display area of the graphical user interface.
In step S208, first optical information formed by the light reflected from the surface of the detection object is identified in the image.
The light signal emitted by the screen of the terminal device carries the second optical information of the background image, which may include the colors of the background image and the distribution of those colors. Therefore, after the front camera collects an image, the image also carries optical information of the background image, and the terminal device can identify, in the image of the detection object, the first optical information formed by the light reflected from the surface of the detection object.
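A minimal way to extract a first-optical-information estimate is to average the RGB values over the detected face region, which captures the tint added by the reflected screen light. This is a sketch under assumed names and a toy pixel-list image format; a real system would segment the face and may use a learned feature extractor instead.

```python
def region_mean_color(image, region):
    """Estimate the reflected-light color as the mean RGB over a
    rectangular region of the image.

    image is a row-major list of rows of (r, g, b) pixels; region is
    (top, left, bottom, right), half-open.
    """
    top, left, bottom, right = region
    pixels = [image[y][x] for y in range(top, bottom) for x in range(left, right)]
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

# A tiny 2x2 "image" whose lower row is tinted by reflected purple light.
img = [
    [(10, 10, 10), (10, 10, 10)],
    [(130, 20, 130), (130, 20, 130)],
]
tint = region_mean_color(img, (1, 0, 2, 2))
```

The resulting mean color can then be compared against the fused color of the displayed background image.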
Step S210, determining whether the detection object passes the living body detection according to the first optical information and second optical information of the background image, wherein the second optical information includes fused color information determined according to at least two colors included in the background image.
Since both the first optical information and the second optical information have optical information of a background image, it is possible to determine whether the detection object passes the living body detection based on the first optical information and the second optical information.
It should be noted here that the background image in this embodiment includes at least two colors; the multiple colors may fuse during light propagation, so both the second optical information of the background image and the first optical information formed by the light reflected from the surface of the detection object may include fused color information formed by fusing the colors of the background image.
For example, if the background image includes red and green, the light emitted through the background image should be yellow (red and green fuse into yellow), and both the second optical information and the first optical information may include optical information of yellow. This further increases the difficulty of AI forgery: in actual use the background image may include more colors, and different distributions of those colors also change the fused color, so even a counterfeiter who can see that the background image includes multiple colors cannot accurately determine its fused color information, which makes AI forgery difficult.
According to the living body detection method provided by the embodiment of the invention, the terminal device determines whether the detection object passes living body detection based on the first optical information formed by the collected light reflected from the surface of the detection object and the second optical information of the background image. The screen of the terminal device emits an optical signal that contains the second optical information of the background image; after being reflected by the detection object, this signal can be collected by the front camera, so the image of the detection object collected by the front camera also contains the second optical information of the background image, and living body detection can be carried out based on the first optical information of the image of the detection object and the second optical information of the background image. In general, an AI-forged video does not contain the second optical information of the background image and therefore cannot pass living body detection, so the method can increase the accuracy of living body detection. The method does not require the user to perform a specified action during living body detection, and can therefore improve the user's experience of the detection process.
The present embodiment provides another living body detection method, implemented on the basis of the above embodiment; it focuses on a specific way of determining whether the image of the detection object passes living body detection. Referring to the flowchart of another living body detection method shown in fig. 3, the living body detection method in this embodiment includes the following steps:
step S302, displaying a background image in a graphical user interface, wherein the background image at least comprises two colors.
As shown in fig. 1, the graphical user interface in fig. 1 includes a first display area and a second display area, the terminal device may display a background image in the second display area, and may also display an image of the detection object in the first display area, and therefore, the method further includes: an image of the detection object is displayed in the first display area.
As shown in fig. 1, the image of the detection object may be displayed in the middle elliptical region (i.e., the first display region) of the graphical user interface, and the background image may be displayed in the surrounding second display region, wherein the second display region surrounds the first display region.
For example, when a user performs a living body detection operation on the graphical user interface, the terminal device may capture an image of a detection object of the user through the front camera in response to the living body detection operation, and display the captured image of the detection object in the first display area of the graphical user interface. When the front camera captures an image of a detection object of the user, the terminal device may display a background image in the second display area.
According to the method provided by the embodiment of the invention, a first display area and a second display area can be set on the graphical user interface, wherein the first display area displays the image of the detection object, the second display area displays the background image, and the second display area surrounds the first display area. In this way, the image of the detection object and the background image can be respectively displayed on the graphical user interface of the terminal device through the first display area and the second display area, so that the user can conveniently view the detection object and the background image. The second display area surrounds the first display area, so that the screen of the terminal equipment has a better light emitting effect, and living body detection is convenient to carry out.
In this embodiment, the background image displayed in the graphical user interface may be acquired from a server or a terminal device, for example: receiving a background image sent by a server; or, acquiring a background image prestored by the terminal device.
The background image can be uploaded to the server in advance by a worker or a user, and the terminal device is in communication connection with the server; when the background image needs to be displayed, the terminal device can request the background image from the server and receive the background image the server sends. In addition, the terminal device can also prestore background images, and can obtain a background image directly from those prestored images when the graphical user interface needs to display one.
The background image in the invention can thus be acquired either from a server or from the terminal device, providing multiple acquisition channels. Even if the network condition of the terminal device is poor and it cannot communicate with the server, the terminal device can still display a background image prestored locally, so that living body detection can be carried out.
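The server-first, local-fallback acquisition described above can be sketched as follows. The fetch and store are injected as a callable and a mapping so the sketch stays network-free; in practice the fetch would be a request to the liveness service, and all names here are illustrative assumptions.

```python
def load_background_image(fetch_from_server, local_store, image_id):
    """Prefer the server's copy of the background image, but fall back
    to the terminal's prestored copy when the server is unreachable."""
    try:
        return fetch_from_server(image_id)
    except OSError:  # network failure, timeout, etc.
        return local_store[image_id]

def offline_fetch(image_id):
    """Simulate a terminal with no connectivity."""
    raise OSError("server unreachable")

cached = {"bg-01": b"<png bytes>"}
image = load_background_image(offline_fetch, cached, "bg-01")
```

When the server is reachable, its copy wins, which lets the operator rotate background images centrally.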
The background image in this embodiment may be a still image or an image sequence. The still image may be a monochrome image or a color image containing image content. A monochrome image contains only one color, such as red, green, or blue; if a monochrome image is used as the background image, the second optical information of the background image includes optical information of only one color, and the first optical information of the image of the detection object need only include optical information of that color to pass living body detection, which can reduce the probability that the image of a real person fails living body detection.
Referring to the schematic diagram of another graphical user interface shown in fig. 4, the color image in the second display area may be, for example, a cartoon or an advertisement; that is, the background image may include a game character image, a comic image, or an advertisement image. If a color image is used as the background image, the optical information of the background image includes optical information of multiple colors, so more kinds of color information must be obtained for AI forgery; this increases the difficulty of AI forgery and improves the accuracy of living body detection.
The terminal device emits, through its screen, an optical signal containing the second optical information of the background image. After this light signal irradiates the human face, it is reflected toward the front camera and collected by it, so the image of the detection object collected by the front camera also contains the optical information of the background image.
However, if an AI counterfeiter can obtain the background image displayed in the second display area, the counterfeiter can add the optical information of the background image to an AI-forged video; a forged image extracted from that video would then also contain the optical information of the background image and could pass verification.
To prevent an AI counterfeiter from easily acquiring the background image displayed in the second display area, the background image may take the form of an image sequence. The image sequence may include a plurality of images, all of which may be color images; that is, it includes at least a first image and a second image, each of which includes at least two colors.
For the image sequence, the image may be replaced at regular intervals; that is, the background image displayed in the second display area is replaced after a set interval, so that the second optical information of the background image also changes over time. This increases the difficulty for an AI counterfeiter of obtaining the optical information of the background image and creating an AI counterfeit video. Specifically, the background image displayed in the second display area may be replaced over time through steps A1 and A2:
Step A1, monitor the duration of the first image.
The terminal device may monitor the duration for which the first image is currently displayed in the second display area. The first image may be randomly selected by the terminal device from all pre-stored background images; when selecting the first image, the terminal device may record its name and preset the duration for which it is displayed (i.e., the first preset duration). While the first image is displayed in the second display area, the terminal device monitors its duration to determine whether the duration reaches the first preset duration.
Step A2, if the duration reaches the first preset duration, display the second image in the graphical user interface.
If the duration of the first image reaches the first preset duration, the terminal device may end the display of the first image and select a new background image, namely the second image, from all background images pre-stored in the terminal device.
The terminal device may use the second image as the background image displayed in the second display area, preset the duration for which the second image is displayed, and monitor that duration. The display durations of different background images may be the same or different; for example, the first image may be displayed for 4 seconds and the second image for 3 seconds.
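The timed-replacement logic of steps A1 and A2 can be sketched as below. This is a minimal illustrative sketch: the image names, the random-selection policy, and the 4-second/3-second durations are assumptions taken from the example above, not requirements of the method.

```python
import random
import time


class BackgroundRotator:
    """Rotate the background image shown in the second display area
    (steps A1/A2): each image has a preset display duration, and when that
    duration elapses a new image is selected."""

    def __init__(self, images, durations):
        # images: names of pre-stored background images
        # durations: per-image display duration in seconds (first preset duration)
        self.images = list(images)
        self.durations = dict(zip(images, durations))
        self.current = random.choice(self.images)  # step A1: random first image
        self.shown_at = time.monotonic()

    def tick(self, now=None):
        """Step A1/A2: if the current image has been shown for its preset
        duration, switch to a newly selected background image."""
        now = time.monotonic() if now is None else now
        if now - self.shown_at >= self.durations[self.current]:
            self.current = random.choice(self.images)  # step A2: replace image
            self.shown_at = now
        return self.current
```

Calling `tick()` periodically (for example once per rendered frame) keeps the background image of the second display area changing on a schedule that an AI counterfeiter cannot easily predict.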
According to the method provided by this embodiment of the invention, the terminal device monitors the duration of the first image and replaces the first image with the second image when the duration reaches the first preset duration. Each background image thus lasts for a period of time in the second display area, and the durations may differ. It is difficult for an AI counterfeiter to acquire, in sequence, the optical information of each background image and its duration and to create a matching AI counterfeit video, which improves the accuracy of the living body detection method.
Besides replacing the background image displayed in the second display area over time, the difficulty for an AI counterfeiter of obtaining the optical information of the background image and creating an AI counterfeit video can be increased by adding a specified layer. For example, the above method further comprises: superimposing a specified layer on the image of the detection object displayed in the first display area; the color of the specified layer is a preset color, and the transparency of the specified layer is a preset value.
By superimposing a specified layer having transparency on the image of the detection object displayed in the first display area, the optical signal emitted from the screen of the terminal device includes, in addition to the second optical information of the background image, the optical information of the specified layer having transparency. Because that layer is difficult for AI counterfeiters to obtain, this increases the difficulty of AI counterfeiting and improves the accuracy of living body detection.
Referring to the schematic diagram of another graphical user interface shown in fig. 5, a monochrome image may be used as the background image of the second display area, and a specified layer having transparency may be superimposed on the image of the detection object displayed in the first display area.
Further, if a specified layer having transparency is superimposed on the first display area, the optical information of the specified layer is also required as a basis when determining whether the image of the detection object passes the living body detection, for example: determining whether the image of the detection object passes the living body detection based on the first optical information of the image of the detection object in the first display area, the optical information of the specified layer, and the second optical information of the background image.
In this case, the determination cannot be made from the first optical information of the image of the detection object and the second optical information of the background image alone; the optical information of the specified layer must also be considered, for example by checking whether the first optical information of the image of the detection object includes both the optical information of the specified layer and the second optical information of the background image.
For example, if the first optical information contains only the optical information of the specified layer, or contains only the second optical information, the image of the detection object cannot pass the living body detection. Only when the first optical information contains both the optical information of the specified layer and the second optical information can the image of the detection object pass the living body detection.
The method provided by this embodiment of the invention superimposes a specified layer with transparency on the image of the detection object displayed in the first display area and, when determining whether the image of the detection object passes the living body detection, comprehensively considers the first optical information of the image of the detection object, the optical information of the specified layer, and the second optical information of the background image. Because the specified layer with transparency is difficult for AI counterfeiters to obtain, this increases the difficulty of AI counterfeiting and thus improves the accuracy of living body detection.
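The effect of superimposing a specified layer with a preset color and a preset transparency can be illustrated with standard "over" alpha compositing. This is a hypothetical per-pixel sketch, not a formula prescribed by the embodiment.

```python
def overlay_layer(pixel, layer_color, alpha):
    """Alpha-blend a specified layer of one preset color over an image pixel.

    pixel, layer_color: (R, G, B) tuples in 0..255.
    alpha: layer opacity in [0, 1]; 0 leaves the pixel unchanged (fully
    transparent layer), 1 replaces it with the layer color.
    """
    return tuple(round(alpha * lc + (1 - alpha) * pc)
                 for lc, pc in zip(layer_color, pixel))
```

Because the blended result depends on both the preset color and the preset transparency value, a counterfeiter who does not know the layer parameters cannot reproduce the optical information the layer contributes to the emitted screen light.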
The image of the detection object may be captured through the photographing function of the front-facing camera, that is, a plurality of images are captured in sequence, and the captured images are the images of the detection object. Alternatively, the front-facing camera may record video, and the video frame sequence obtained is displayed in the first display area as the image of the detection object. If video recording is used, the images contained in the video frame sequence can be used as the images of the detection object.
The number of images of the detection object displayed in the first display area is not limited here; the first display area may display the image of one detection object or the images of a plurality of detection objects. If the front-facing camera is used for photographing, one or more images of the detection object with the highest definition can be selected from the captured images for display in the first display area, or the user can manually select one or more images for display there. If the front-facing camera is used for video recording, one or more video frames can be extracted from the video frame sequence as the image of the detection object displayed in the first display area.
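One way to pick the captured image "with the highest definition" is the variance-of-Laplacian focus measure sketched below. The sharpness criterion is an assumption for illustration only, since the embodiment does not fix how definition is scored.

```python
def laplacian_variance(gray):
    """Score image sharpness as the variance of a discrete Laplacian.

    gray: 2-D list of grayscale values. A higher score means more
    high-frequency detail, i.e. a sharper image."""
    h, w = len(gray), len(gray[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour Laplacian at (x, y)
            lap = (gray[y - 1][x] + gray[y + 1][x]
                   + gray[y][x - 1] + gray[y][x + 1]
                   - 4 * gray[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)


def pick_sharpest(images):
    """Return the captured image with the highest sharpness score."""
    return max(images, key=laplacian_variance)
```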
Step S304, the background image is used as a light source to emit light to the detection object.
Step S306, collecting the image of the detection object.
In step S308, first optical information formed by reflected light from the surface of the detection object in the image is recognized.
In step S310, second fused color information included in the second optical information is determined.
The background image in this embodiment includes at least two colors, for example, a color A and a color B. Color A and color B are both emitted by the screen as part of the optical signal, and after they illuminate the surface of the detection object (e.g., a human face), they fuse to form a fused color; for example, red and blue may blend into purple, which is then the fused color.
The second optical information is the optical information of the background image. Since the background image in this embodiment includes a plurality of colors, these colors inevitably form a fused color, and the second optical information may include the fused color information formed by them. The fused color information can be represented by numbers, letters, or other symbols.
To determine the second fused color information included in the second optical information, the corresponding fused color information may be labeled when the background image is saved. That is, the background image is displayed by the terminal device and illuminates a real living body, the optical information returned from the living body is detected, and the fused color information in that optical information is used as the second fused color information included in the second optical information.
Alternatively, the second fused color information included in the second optical information may be determined by a specified fused-color determination method: the fused color information corresponding to the background pattern can be determined according to the distribution and proportion of all colors in the background pattern. For example, if the red, green, and blue colors in the background pattern are uniformly distributed, the fused color corresponding to the background pattern is white, and the fused color information is the fused color information corresponding to white.
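The distribution-and-proportion rule can be sketched as additive light mixing: each color contributes its RGB components weighted by its proportion of the pattern, and the mix is normalized so the brightest channel is full scale. The mixing model is an illustrative assumption; the embodiment only requires that the fusion be determined from the colors' distribution and proportion.

```python
def fused_color(colors_with_weights):
    """Determine the fused color of a multi-color background pattern.

    colors_with_weights: list of ((R, G, B), proportion) pairs, with the
    proportions summing to 1. Models screen light fusing additively on the
    face; normalizing preserves the hue, which is what the fused-color code
    represents."""
    mix = [sum(c[i] * w for c, w in colors_with_weights) for i in range(3)]
    peak = max(mix)
    if peak == 0:
        return (0, 0, 0)
    return tuple(round(255 * m / peak) for m in mix)
```

Under this sketch, uniformly distributed red, green, and blue fuse to white, and equal red and blue fuse to purple, matching the examples in the text.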
Step S312, inputting the first optical information into the neural network model, and outputting first fused color information included in the first optical information; the neural network model is obtained by training based on pre-stored background sample images and fused color information corresponding to the background sample images.
The terminal device can collect the first optical information formed by the reflected light on the surface of the detection object, and the first fused color information contained in the first optical information can be identified through a neural network model. When training the neural network model, background sample images carrying fused color information can be used as the training set.
The background sample images carrying the fused color information can be pre-stored in the server or the terminal device, and the terminal device can display a background sample image as the background image, so that the displayed background sample image has corresponding fused color information; the second fused color information is then the fused color information corresponding to the displayed background sample image.
After training of the neural network model is completed, the first optical information may be input into the trained neural network model; the first optical information may also be understood as a pattern, and the neural network model outputs the first fused color information corresponding to it.
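A toy stand-in for the trained model's train/predict interface is sketched below. A real implementation would be a neural network (for example a small CNN) trained on background sample images labelled with their fused color information; this nearest-centroid version, with mean-RGB features and the 001/002 codes borrowed from the example below, is purely illustrative.

```python
class FusedColorModel:
    """Minimal stand-in for the neural network model of step S312: maps first
    optical information (here reduced to a mean RGB vector) to a fused-color
    label, trained from (sample, label) pairs."""

    def __init__(self):
        self.centroids = {}

    def train(self, samples):
        # samples: list of (mean_rgb, fused_color_label) pairs derived from
        # background sample images carrying fused color information
        by_label = {}
        for rgb, label in samples:
            by_label.setdefault(label, []).append(rgb)
        for label, vecs in by_label.items():
            self.centroids[label] = tuple(sum(v[i] for v in vecs) / len(vecs)
                                          for i in range(3))

    def predict(self, rgb):
        # output the fused-color label whose centroid is closest
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(self.centroids, key=lambda lbl: dist(rgb, self.centroids[lbl]))
```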
Because the first optical information may not contain first fused color information, a judgment step may be added before inputting the first optical information into the neural network model, for example: judging whether the first optical information contains first fused color information; if yes, inputting the first optical information into the neural network model; if not, the detection object does not pass the living body detection. Alternatively, for first optical information that does not contain first fused color information, or for which the first fused color information cannot be recognized, the neural network model may output an error prompt indicating that the living body detection is not passed.
Step S314, determining whether the first fused color information and the second fused color information are consistent.
If the first fused color information and the second fused color information are characterized by codes or symbols, determining whether they are consistent may be understood as checking whether they have the same code or symbol. For example: the code of the first fused color information is 001, representing red; the code of the second fused color information is 002, representing blue; since the two codes are not identical, it is determined that the first fused color information and the second fused color information do not match.
In step S316, if they match, the detection object passes the living body detection.
If the first fused color information and the second fused color information match, the first optical information and the second optical information contain the same fused color information, which indicates that the pattern of the detection object is not an AI-counterfeit pattern; the detection object is a living body and passes the living body detection.
If the first fused color information and the second fused color information do not match, the first optical information and the second optical information do not contain the same fused color information, which indicates that the pattern of the detection object is very likely an AI-counterfeit pattern, and the detection object does not pass the living body detection.
Generally, if an AI counterfeiter obtains a solid-color background pattern, it can easily be added to an AI counterfeit video to achieve AI counterfeiting. However, the background pattern in this embodiment mixes multiple colors; even if an AI counterfeiter obtains the background pattern, the corresponding fused color cannot be determined, so it cannot be added to the AI counterfeit video, and the AI counterfeit video cannot pass the living body detection. The method provided by this embodiment of the invention therefore significantly increases the difficulty of AI counterfeiting: an AI counterfeiter would need a long time to determine the fused color corresponding to the background pattern, which improves the success rate of the living body detection.
The above living body verification mainly targets AI counterfeit videos. Besides the above verification mode, this embodiment of the invention can also perform living body verification against traditional spoofing modes such as screen re-shooting and video counterfeiting through steps B1 to B3:
Step B1, prompt an indication action; wherein the indication action comprises at least one of: blinking, opening the mouth, shaking the head left and right, or nodding.
An indication action is an action that the user can complete with the face, for example: blinking, opening the mouth, shaking the head left and right, or nodding. The terminal device prompts the indication action, and the user completes the corresponding action according to the prompt. The number of indication actions may be one or more; for example, the terminal device may prompt the user to nod first and then blink.
Step B2, determine the completion of the indication action from the image of the detection object.
When the user performs the indication action, the front-facing camera of the user terminal collects the image of the detection object, and the collected image can show that the user is performing or has completed the indication action. For example: if the indication action is shaking the head left and right, image 1 of the detection object shows the user's head turned left and image 2 shows it turned right, which shows that the user has completed the head-shaking indication action.
Step B3, judge whether the detection object passes the secondary living body detection according to the completion of the indication action.
If the user has completed the indication action, it can be judged that the image of the detection object passes the secondary living body detection; if the user has not completed the indication action, it can be judged that the image of the detection object does not pass the secondary living body detection.
Further, if a plurality of indication actions are included, whether the image of the detection object passes the secondary living body detection can be determined according to the proportion of indication actions completed by the user. For example: assuming 10 indication actions and a proportion threshold of 80%, when the user completes 9 of them, the completion proportion is 9/10 = 90%, which is greater than the 80% threshold, so it can be determined that the image of the detection object passes the secondary living body detection.
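The proportion check of step B3 reduces to the arithmetic below; the 80% threshold mirrors the example value in the text and would be configurable in practice.

```python
def passes_secondary_detection(completed, total, ratio_threshold=0.8):
    """Step B3 with multiple indication actions: the image of the detection
    object passes the secondary living body detection when the proportion of
    completed indication actions exceeds the threshold."""
    return (completed / total) > ratio_threshold
```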
It should be noted that, besides prompting an indication action, the terminal device may prompt the user to speak an indication voice and determine whether the image of the detection object passes the secondary living body detection based on the voice spoken by the user. The image of the detection object in this case may be a sequence of image frames with speech. A voice is extracted from the image frame sequence; if the extracted voice is the same as the indication voice, the image of the detection object passes the secondary living body detection; if not, the image of the detection object does not pass the secondary living body detection.
According to the method provided by this embodiment of the invention, the terminal device can prompt the user to complete an indication action, determine the completion of the indication action from the image of the detection object, and judge whether the image of the detection object passes the secondary living body detection. Because traditional spoofing modes such as screen re-shooting and video counterfeiting cannot complete the indication action, the secondary living body detection has a high verification success rate against them.
It should be noted that the above method embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other.
Corresponding to the method embodiments, an embodiment of the invention provides a living body detection apparatus, in which a graphical user interface is provided through a terminal device; fig. 6 is a schematic structural diagram of the living body detection apparatus, which includes:
a background image display module 61, configured to display a background image in the graphical user interface, where the background image includes at least two colors;
a light emitting module 62 for emitting light to the detection object with the background image as a light source;
an image acquisition module 63, configured to acquire an image of the detection object;
a first optical information recognition module 64 for recognizing first optical information formed by reflected light from the surface of the detection object in the image;
and a living body detection module 65 for determining whether the detection object passes the living body detection according to the first optical information and second optical information of the background image, wherein the second optical information includes fused color information determined according to at least two colors included in the background image.
According to the living body detection apparatus provided by this embodiment of the invention, the terminal device determines whether the detection object passes the living body detection based on the first optical information formed by the collected reflected light from the surface of the detection object and the second optical information of the background image. The screen of the terminal device emits an optical signal that includes the second optical information of the background image; after being reflected by the detection object, the optical signal can be collected by the front-facing camera, so the image of the detection object collected by the front-facing camera also contains the second optical information of the background image, and living body detection can be performed based on the first optical information of the image of the detection object and the second optical information of the background image. Generally, an AI counterfeit video does not contain the second optical information of the background image and therefore cannot pass the living body detection, so the apparatus increases the accuracy of living body detection. The apparatus also does not require the user to make a specified action during living body detection, which improves the user's experience of the process.
The graphical user interface comprises a first display area and a second display area, and the background image is displayed in the second display area. Referring to the schematic structural diagram of another living body detection apparatus shown in fig. 7, the apparatus further comprises a detection object display module 66, connected with the image acquisition module 63, for displaying the image of the detection object in the first display area.
The second display region surrounds the first display region.
Referring to the schematic structural diagram of another living body detection apparatus shown in fig. 7, the apparatus further includes a specified layer superimposing module 67, connected to the detection object display module 66, configured to superimpose a specified layer on the image of the detection object displayed in the first display area; the color of the specified layer is a preset color, and the transparency of the specified layer is a preset value.
The background image is a still image.
The background image is an image sequence, and the image sequence at least comprises a first image and a second image; the first image and the second image comprise at least two colors.
The background image display module is used for monitoring the duration of the first image; and if the duration reaches the first preset duration, displaying a second image in the graphical user interface.
The image acquisition module is used for taking the images contained in the video frame sequence as the images of the detection object.
The living body detection module is configured to determine second fused color information included in the second optical information; input the first optical information into a neural network model and output first fused color information included in the first optical information, the neural network model being trained based on pre-stored background sample images and the fused color information corresponding to the background sample images; determine whether the first fused color information and the second fused color information are consistent; and, if they are consistent, determine that the detection object passes the living body detection.
Referring to the schematic structural diagram of another living body detection apparatus shown in fig. 7, the apparatus further includes a background image receiving module 68, connected to the background image display module 61, for receiving a background image sent by a server, or for acquiring a background image pre-stored on the terminal device.
The background image includes a game character image, a comic image, or an advertisement image.
Referring to the schematic structural diagram of another living body detection apparatus shown in fig. 7, the apparatus further includes a secondary living body detection module 69, connected to the living body detection module 65, for prompting an indication action, wherein the indication action comprises at least one of: blinking, opening the mouth, shaking the head left and right, or nodding; determining the completion of the indication action according to the image of the detection object; and judging whether the detection object passes the secondary living body detection according to the completion of the indication action.
An embodiment of the invention also provides a terminal device for running the above living body detection method; referring to the schematic structural diagram of a terminal device shown in fig. 8, the terminal device includes a memory 100 and a processor 101, wherein the memory 100 is configured to store one or more computer instructions which, when executed by the processor 101, implement the above living body detection method.
Further, the terminal device shown in fig. 8 further includes a bus 102 and a communication interface 103, and the processor 101, the communication interface 103, and the memory 100 are connected through the bus 102.
The memory 100 may include a high-speed Random Access Memory (RAM) and may further include a non-volatile memory, such as at least one disk memory. The communication connection between the network elements of the system and at least one other network element is realized through at least one communication interface 103 (which may be wired or wireless), using the Internet, a wide area network, a local area network, a metropolitan area network, and the like. The bus 102 may be an ISA bus, a PCI bus, an EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in fig. 8, but this does not indicate only one bus or one type of bus.
The processor 101 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 101. The processor 101 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in RAM, flash memory, ROM, PROM, or EPROM, registers, or other storage media well known in the art. The storage medium is located in the memory 100, and the processor 101 reads the information in the memory 100 and completes the steps of the methods of the foregoing embodiments in combination with its hardware.
An embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions which, when called and executed by a processor, cause the processor to implement the above living body detection method.
The computer program product of the living body detection method, apparatus, and terminal device provided by the embodiments of the present invention includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the methods in the foregoing method embodiments, and for specific implementation, reference may be made to the method embodiments, which are not repeated here.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and/or the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a mechanical or electrical connection; as a direct connection, an indirect connection through an intermediate medium, or internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art on a case-by-case basis.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, an electronic device, or a network device) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (15)

1. A method for living body detection, wherein a graphical user interface is provided by a terminal device, the method comprising:
displaying a background image in the graphical user interface, wherein the background image comprises at least two colors;
emitting light to a detection object by taking the background image as a light source;
acquiring an image of the detection object;
identifying, in the image, first optical information formed by light reflected from the surface of the detection object;
and determining whether the detection object passes the living body detection according to the first optical information and second optical information of the background image, wherein the second optical information comprises fused color information determined according to the at least two colors included in the background image.
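The "fused color information" of the second optical information can be illustrated with a small sketch (not part of the claims or the specification). The following Python function is a hypothetical stand-in: it approximates the tint of the light emitted by a multi-color background image as a weighted average of the image's constituent colors; the function name and the averaging scheme are assumptions.

```python
def fuse_colors(colors, weights=None):
    """Fuse two or more RGB colors into one expected tint.

    Hypothetical illustration of the claimed "fused color information":
    the light emitted by a multi-color background image is approximated
    as the (optionally weighted) average of its constituent colors.
    """
    if len(colors) < 2:
        raise ValueError("the background image must include at least two colors")
    if weights is None:
        weights = [1.0] * len(colors)  # assume equal screen area per color
    total = sum(weights)
    return tuple(
        round(sum(c[channel] * w for c, w in zip(colors, weights)) / total)
        for channel in range(3)  # R, G, B
    )

# A half-red, half-blue background fuses to a purple tint.
print(fuse_colors([(255, 0, 0), (0, 0, 255)]))  # (128, 0, 128)
```

In practice the weights could be derived from the pixel counts of each color region in the displayed background image.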
2. The method of claim 1, wherein the graphical user interface comprises a first display area and a second display area, wherein the background image is displayed in the second display area, and wherein the method further comprises:
and displaying the image of the detection object in the first display area.
3. The method of claim 2, wherein the second display area surrounds the first display area.
4. The method of claim 2, further comprising:
superimposing a designated layer on the image of the detection object displayed in the first display area, wherein the color of the designated layer is a preset color and the transparency of the designated layer is a preset value.
5. The method of claim 1, wherein the background image is a static image.
6. The method of claim 1, wherein the background image is a sequence of images, the sequence of images including at least a first image and a second image, and each of the first image and the second image includes at least two colors.
7. The method of claim 6, wherein the step of displaying a background image in the graphical user interface comprises:
monitoring the display duration of the first image;
and if the display duration reaches a first preset duration, displaying the second image in the graphical user interface.
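The timed switch of claim 7 can be sketched as a small controller (an illustration, not part of the claims; the class name and the use of an injectable clock are assumptions made so the sketch is testable):

```python
import time

class BackgroundSequence:
    """Hypothetical sketch of claim 7: show each image of the background
    sequence, and advance to the next image once the current image's
    display duration reaches a preset duration."""

    def __init__(self, images, preset_duration):
        self.images = list(images)
        self.preset_duration = preset_duration
        self.index = 0
        self.shown_at = time.monotonic()

    def current(self, now=None):
        """Return the image to display at time `now` (defaults to the clock)."""
        if now is None:
            now = time.monotonic()
        # Advance when the monitored duration reaches the preset duration.
        if now - self.shown_at >= self.preset_duration and self.index + 1 < len(self.images):
            self.index += 1
            self.shown_at = now
        return self.images[self.index]
```

In a real graphical user interface, `current()` would be consulted on every redraw so the background switches images on schedule.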
8. The method of claim 1, wherein the step of acquiring an image of the detection object comprises: taking an image contained in a captured video frame sequence as the image of the detection object.
9. The method according to claim 1, wherein the step of determining whether the detection object passes the live body detection based on the first optical information and the second optical information of the background image comprises:
determining second fused color information included in the second optical information;
inputting the first optical information into a neural network model, and outputting first fused color information included in the first optical information, wherein the neural network model is obtained by training on pre-stored background sample images and the fused color information corresponding to the background sample images;
determining whether the first fused color information and the second fused color information are consistent;
and if the first fused color information and the second fused color information are consistent, determining that the detection object passes the living body detection.
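The consistency check of claim 9 can be illustrated outside the claims. Claim 9 derives the first fused color with a trained neural network model; the sketch below replaces that model with a simple channel-wise tolerance comparison, so it is a simplified, non-neural stand-in, and the tolerance value is an assumption.

```python
def colors_consistent(first_fused, second_fused, tolerance=30):
    """Return True when two fused RGB colors agree channel-by-channel
    within a tolerance. This thresholded comparison stands in for the
    neural-network-based consistency decision of claim 9."""
    return all(abs(a - b) <= tolerance for a, b in zip(first_fused, second_fused))

# A face reflecting roughly the expected purple tint passes; a replayed
# photo or screen that does not pick up the emitted light fails.
print(colors_consistent((120, 12, 135), (128, 0, 128)))  # True
print(colors_consistent((40, 200, 60), (128, 0, 128)))   # False
```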
10. The method of claim 1, wherein prior to displaying a background image in the graphical user interface, the method further comprises:
receiving a background image sent by a server; or acquiring a background image prestored in the terminal device.
11. The method of claim 1, wherein the background image comprises a game character image, a caricature image, or an advertisement image.
12. The method of claim 1, further comprising:
prompting the detection object to perform an indicated action, wherein the indicated action comprises at least one of: blinking, opening the mouth, shaking the head left and right, or nodding;
determining the completion status of the indicated action according to the image of the detection object;
and determining, according to the completion status of the indicated action, whether the detection object passes a secondary living body detection.
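The secondary detection of claim 12 can be sketched as a prompt-and-verify loop (a hypothetical illustration, not part of the claims; `detect_action` is an assumed callback that would inspect the captured images, and the number of rounds is an assumption):

```python
import random

# The indicated actions named in claim 12.
ACTIONS = ("blink", "open_mouth", "shake_head", "nod")

def secondary_liveness(detect_action, rounds=2, rng=random):
    """Hypothetical sketch of claim 12: prompt randomly chosen indicated
    actions and pass the secondary living body detection only if every
    prompted action is detected as completed in the captured images."""
    for _ in range(rounds):
        action = rng.choice(ACTIONS)
        print(f"please {action.replace('_', ' ')}")  # prompt the indicated action
        if not detect_action(action):
            return False  # action not completed: secondary detection fails
    return True
```

Randomizing the prompted action makes a pre-recorded video less likely to satisfy the check by chance.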
13. A living body detection apparatus, wherein a graphical user interface is provided by a terminal device, the apparatus comprising:
a background image display module, configured to display a background image in the graphical user interface, wherein the background image comprises at least two colors;
a light emitting module, configured to emit light toward a detection object using the background image as a light source;
an image acquisition module, configured to acquire an image of the detection object;
a first optical information identification module, configured to identify, in the image, first optical information formed by light reflected from the surface of the detection object;
and a living body detection module, configured to determine whether the detection object passes the living body detection according to the first optical information and second optical information of the background image, wherein the second optical information comprises fused color information determined according to the at least two colors included in the background image.
14. A terminal device comprising a processor and a memory, the memory storing computer-executable instructions executable by the processor, wherein the processor executes the computer-executable instructions to implement the steps of the living body detection method of any one of claims 1 to 12.
15. A computer-readable storage medium having stored thereon computer-executable instructions that, when invoked and executed by a processor, cause the processor to perform the steps of the living body detection method of any one of claims 1 to 12.
CN202010775145.4A 2020-08-04 2020-08-04 Living body detection method, living body detection device and terminal equipment Active CN111914763B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010775145.4A CN111914763B (en) 2020-08-04 2020-08-04 Living body detection method, living body detection device and terminal equipment

Publications (2)

Publication Number Publication Date
CN111914763A true CN111914763A (en) 2020-11-10
CN111914763B CN111914763B (en) 2023-11-28

Family

ID=73287120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010775145.4A Active CN111914763B (en) 2020-08-04 2020-08-04 Living body detection method, living body detection device and terminal equipment

Country Status (1)

Country Link
CN (1) CN111914763B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107832712A * 2017-11-13 2018-03-23 深圳前海微众银行股份有限公司 Living body detection method, device and computer-readable storage medium
CN107992794A * 2016-12-30 2018-05-04 腾讯科技(深圳)有限公司 Living body detection method, device and storage medium
US20190095701A1 * 2017-09-27 2019-03-28 Lenovo (Beijing) Co., Ltd. Living-body detection method, device and storage medium
WO2019071739A1 * 2017-10-13 2019-04-18 平安科技(深圳)有限公司 Face living body detection method and apparatus, readable storage medium and terminal device
CN109784215A * 2018-12-27 2019-05-21 金现代信息产业股份有限公司 Living body detection method and system based on an improved optical flow method
CN110414346A * 2019-06-25 2019-11-05 北京迈格威科技有限公司 Living body detection method, device, electronic equipment and storage medium
CN110458063A * 2019-07-30 2019-11-15 西安建筑科技大学 Face living body detection method resistant to video and photo spoofing
US20200082192A1 (en) * 2018-09-10 2020-03-12 Alibaba Group Holding Limited Liveness detection method, apparatus and computer-readable storage medium
CN110969077A (en) * 2019-09-16 2020-04-07 成都恒道智融信息技术有限公司 Living body detection method based on color change
WO2020083183A1 (en) * 2018-10-25 2020-04-30 腾讯科技(深圳)有限公司 Living body detection method and apparatus, electronic device, storage medium and related system using living body detection method
CN111274928A (en) * 2020-01-17 2020-06-12 腾讯科技(深圳)有限公司 Living body detection method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MENG CAIXIA: "Simulation of Violent and Terrorist Personnel Detection Based on Fused Dual-Channel Video", Computer Simulation (计算机仿真), no. 02, pages 428 - 431 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113762179A (en) * 2021-09-13 2021-12-07 支付宝(杭州)信息技术有限公司 Living body detection method and apparatus
CN113762179B (en) * 2021-09-13 2024-03-29 支付宝(杭州)信息技术有限公司 Living body detection method and living body detection device
WO2023197739A1 (en) * 2022-04-14 2023-10-19 京东科技信息技术有限公司 Living body detection method and apparatus, system, electronic device and computer-readable medium

Also Published As

Publication number Publication date
CN111914763B (en) 2023-11-28

Similar Documents

Publication Publication Date Title
CN109508694B (en) Face recognition method and recognition device
CN109376592B (en) Living body detection method, living body detection device, and computer-readable storage medium
CN106156578B (en) Identity verification method and device
CN112328999B (en) Double-recording quality inspection method and device, server and storage medium
CN111597938B (en) Living body detection and model training method and device
CN111611865B (en) Examination cheating behavior identification method, electronic equipment and storage medium
CN110569808A (en) Living body detection method and device and computer equipment
CN108197586A (en) Recognition algorithms and device
CN107577930B (en) Unlocking detection method of touch screen terminal and touch screen terminal
CN111914763A (en) Living body detection method and device and terminal equipment
CN113642639B (en) Living body detection method, living body detection device, living body detection equipment and storage medium
KR102364929B1 Electronic device, server, and system for tracking skin changes
CN111553189A (en) Data verification method and device based on video information and storage medium
CN113505682A (en) Living body detection method and device
CN109543635A (en) Biopsy method, device, system, unlocking method, terminal and storage medium
CN115147936A (en) Living body detection method, electronic device, storage medium, and program product
CN114219868A (en) Skin care scheme recommendation method and system
CN113505756A (en) Face living body detection method and device
JP6621383B2 (en) Personal authentication device
CN116524562A (en) Living body detection model training and detecting method, electronic equipment and storage medium
KR100715321B1 (en) Method for juvenile story embodiment using the image processing
CN111046804A (en) Living body detection method, living body detection device, electronic equipment and readable storage medium
CN115731620A (en) Method for detecting counter attack and method for training counter attack detection model
CN114387674A (en) Living body detection method, living body detection system, living body detection apparatus, storage medium, and program product
CN113544700A (en) Neural network training method and device, and associated object detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant