CN117593801A - Biological attack detection method and device - Google Patents


Info

Publication number
CN117593801A
CN117593801A
Authority
CN
China
Prior art keywords
biological
video
frame
attack detection
frame difference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311631832.9A
Other languages
Chinese (zh)
Inventor
武文琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202311631832.9A
Publication of CN117593801A
Legal status: Pending

Classifications

    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V10/60 Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • G06V10/776 Validation; performance evaluation
    • G06V10/806 Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

One or more embodiments of the present disclosure disclose a biological attack detection method and apparatus. A biological video for biological attack detection, collected under a preset illumination mode, is first acquired, and a plurality of illuminated frames, together with a non-illuminated frame corresponding to each illuminated frame, are extracted from it. The video frame difference feature of each illuminated frame and its corresponding non-illuminated frame is then computed; feature fusion is performed on the plurality of video frame difference features to obtain a fused video frame difference feature; and finally the fused feature is input into a pre-trained biological attack detection model to obtain a biological attack detection result, where the biological attack detection model is trained on biological images, non-biological images, and a preset loss function.

Description

Biological attack detection method and device
Technical Field
The present document relates to the field of attack detection technologies, and in particular, to a method and an apparatus for detecting a biological attack.
Background
With the development of biometric recognition technology and growing concern over personal privacy data, liveness attack detection has become an indispensable stage in biometric systems: it effectively intercepts non-living attack samples such as screens, printed paper, and masks. Meanwhile, as the variety of liveness attacks grows, injection attacks that bypass the camera have emerged. Such attacks have an extremely high success rate and pose a serious risk to biometric systems, so a biological attack detection method and apparatus are needed to effectively intercept camera-bypassing injection attacks.
Disclosure of Invention
In one aspect, one or more embodiments of the present specification provide a biological attack detection method, including: acquiring a biological video for biological attack detection collected under a preset illumination mode, and extracting from it a plurality of illuminated frames and the non-illuminated frame corresponding to each illuminated frame; computing the video frame difference feature of each illuminated frame and its corresponding non-illuminated frame; performing feature fusion on the plurality of video frame difference features to obtain a fused video frame difference feature; and inputting the fused video frame difference feature into a pre-trained biological attack detection model to obtain a biological attack detection result, where the biological attack detection model is trained on biological images, non-biological images, and a preset loss function.
In another aspect, one or more embodiments of the present specification provide a biological attack detection device, including: an illuminated-frame extraction module, configured to acquire a biological video for biological attack detection collected under a preset illumination mode and to extract from it a plurality of illuminated frames and the non-illuminated frame corresponding to each illuminated frame; a video frame difference feature computation module, configured to compute the video frame difference feature of each illuminated frame and its corresponding non-illuminated frame; a multi-feature fusion module, configured to perform feature fusion on the plurality of video frame difference features to obtain a fused video frame difference feature; and a classification module, configured to input the fused video frame difference feature into a pre-trained biological attack detection model to obtain a biological attack detection result, where the biological attack detection model is trained on biological images, non-biological images, and a preset loss function.
In yet another aspect, one or more embodiments of the present specification provide an electronic device, comprising: a processor; and a memory arranged to store computer-executable instructions that, when executed, enable the processor to: acquire a biological video for biological attack detection collected under a preset illumination mode, and extract from it a plurality of illuminated frames and the non-illuminated frame corresponding to each illuminated frame; compute the video frame difference feature of each illuminated frame and its corresponding non-illuminated frame; perform feature fusion on the plurality of video frame difference features to obtain a fused video frame difference feature; and input the fused video frame difference feature into a pre-trained biological attack detection model to obtain a biological attack detection result, where the biological attack detection model is trained on biological images, non-biological images, and a preset loss function.
In yet another aspect, one or more embodiments of the present description provide a storage medium storing a computer program executable by a processor to implement the following flow: acquiring a biological video for biological attack detection collected under a preset illumination mode, and extracting from it a plurality of illuminated frames and the non-illuminated frame corresponding to each illuminated frame; computing the video frame difference feature of each illuminated frame and its corresponding non-illuminated frame; performing feature fusion on the plurality of video frame difference features to obtain a fused video frame difference feature; and inputting the fused video frame difference feature into a pre-trained biological attack detection model to obtain a biological attack detection result, where the biological attack detection model is trained on biological images, non-biological images, and a preset loss function.
Drawings
In order to more clearly illustrate one or more embodiments of the present specification or the prior art, the drawings required for describing the embodiments are briefly introduced below. It is apparent that the drawings described below are only some of the embodiments of the present specification; a person of ordinary skill in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of a biological attack detection method according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of a biological attack detection method according to another embodiment of the present disclosure;
FIG. 3 is a schematic diagram of the implementation principle of the color histogram analysis process in an embodiment of the present specification;
FIG. 4 is a schematic diagram of the implementation principle of biological attack detection using a single-frame difference map in an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of the implementation principle of a biological attack detection method according to an embodiment of the present disclosure;
FIG. 6 is a schematic block diagram of a biological attack detection device according to another embodiment of the present disclosure;
FIG. 7 is a schematic block diagram of an electronic device according to an embodiment of the present description.
Detailed Description
One or more embodiments of the present disclosure provide a method and apparatus for detecting a biological attack.
To enable a person skilled in the art to better understand the technical solutions in one or more embodiments of the present specification, those solutions are described below clearly and completely with reference to the drawings. The described embodiments are obviously only some, not all, of the embodiments of the present specification. All other embodiments obtained from them by a person of ordinary skill in the art without inventive effort fall within the scope of the present disclosure.
As shown in FIG. 1, an embodiment of the present disclosure provides a biological attack detection method whose execution subject may be a terminal device or a server. The terminal device may be a mobile phone or tablet computer, a computer device such as a notebook or desktop computer, or an IoT device (for example, a smart watch or an in-vehicle device). The server may be a standalone server, a server cluster comprising multiple servers, or a background server for a service such as financial services or online shopping, or of an application program. This embodiment takes a server as the example for the detailed description; the execution process on a terminal device is analogous and is not repeated here. The method specifically comprises the following steps:
In step S102, a biological video for biological attack detection collected under a preset illumination mode is acquired, a plurality of illuminated frames are extracted from the video, and the non-illuminated frame corresponding to each illuminated frame is obtained.
Biological attack detection is commonly used for security checks in systems based on biometric identification, such as biometric recognition systems, face-scan payment systems, and network security systems; whether the current behavior is an attack is determined by analyzing the biological video corresponding to that behavior. Colored-light liveness detection is a comparatively effective type of biological attack detection. Taking biometric recognition as an example, it inserts a colored-light interaction into the recognition flow: videos of the user's body part under illumination are first collected (for example, by lighting the screen), and real biological images are then distinguished from attack images based on the collected videos.
The plurality of illuminated frames in the embodiments of the present disclosure are images extracted from the frames of the biological video; they may include illuminated key frames, illuminated non-key frames, or both. A key frame is the frame in which a key action in the motion of a person or object occurs, corresponding to an original drawing in two-dimensional animation. If the images are illuminated key frames, the illuminated frames extracted in step S102 are taken from the key frames of the biological video. Similarly, the non-illuminated frames may include non-illuminated key frames and/or non-illuminated non-key frames of the biological video.
To implement biological attack detection, illuminated frames and non-illuminated frames must be extracted from the biological video. An illuminated frame is an image whose illumination information exceeds a preset threshold (for example, illuminance above a preset illuminance threshold, or luminous flux above a preset luminous-flux threshold); a non-illuminated frame is an image with no illumination information, or with illumination information below the preset threshold. Each illuminated frame is matched with one non-illuminated frame to enable the subsequent image comparison. The matching may be based on the color features of each frame in the biological video, or follow a preset matching scheme defined by the user's requirements.
Illuminated frames may be extracted from the biological video by sampling, i.e. setting a reasonable sampling interval; by clustering, i.e. clustering all video frames from initial cluster centers and taking the frame nearest each cluster center as an illuminated frame; or based on image features such as color features, texture features, local features, and illumination information, computing inter-frame similarity from these features and de-duplicating similar frames by setting a threshold, thereby generating the set of illuminated frames.
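The threshold-based split described above can be sketched as follows. This is a minimal illustration, not the patented implementation: it uses mean pixel brightness as a stand-in for the "illumination information" statistic, and the threshold value of 120 is an assumed parameter chosen only for demonstration.

```python
import numpy as np

def extract_illuminated_frames(frames, brightness_threshold=120.0):
    """Split video frames into illuminated / non-illuminated sets.

    frames: list of HxWx3 uint8 arrays. Mean brightness serves here as a
    simple proxy for the patent's 'illumination information'; the
    threshold value is an assumption for illustration only.
    """
    illuminated, non_illuminated = [], []
    for idx, frame in enumerate(frames):
        mean_brightness = float(np.mean(frame))  # crude illumination measure
        if mean_brightness > brightness_threshold:
            illuminated.append((idx, frame))
        else:
            non_illuminated.append((idx, frame))
    return illuminated, non_illuminated
```

In practice the sampling-, clustering-, or histogram-based strategies mentioned above would replace this single global brightness test.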
The preset illumination mode may be a screen-lighting mode, or a mode using a light source other than the screen, for example a fill light.
In step S104, the video frame difference feature of each illuminated frame and its corresponding non-illuminated frame is computed.
A video frame difference feature is a discriminative feature for biological attack detection, obtained by extracting key frames from the collected biological video and computing difference features on them. Which difference features must be computed in practice depends on the specific detection scenario; for example, in a biometric recognition system, they are mainly determined by masking out color features outside the illuminated area of the part to be recognized.
In implementation, the video frame difference feature may be computed from the similarity of corresponding pixels between the illuminated frame and the non-illuminated frame, or determined by directly comparing the illuminated frame with its corresponding non-illuminated frame.
In step S106, feature fusion is performed on the plurality of video frame difference features to obtain a fused video frame difference feature.
The fusion may directly add the video frame difference features, yielding a more robust fused feature; it may assign a weight to each feature according to the specific detection requirements and then add the weighted features; or it may fuse the features according to a preset algorithm — the embodiments of the present specification do not limit the fusion method.
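The two addition-based variants just described can be sketched in a few lines. This is a hedged illustration: the feature dimensionality and any weight values are application-specific, and the patent leaves both open.

```python
import numpy as np

def fuse_frame_difference_features(features, weights=None):
    """Fuse per-frame difference features into one feature vector.

    features: list of equal-length 1-D numpy arrays. With weights=None this
    is the plain element-wise addition described in the text; passing a
    weight per feature gives the weighted variant. Weight values here are
    assumptions — the text says they are set per detection requirement.
    """
    stacked = np.stack(features)              # shape: (num_features, dim)
    if weights is None:
        return stacked.sum(axis=0)            # direct addition
    w = np.asarray(weights, dtype=float).reshape(-1, 1)
    return (stacked * w).sum(axis=0)          # weighted addition
```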
In step S108, the fused video frame difference feature is input into a pre-trained biological attack detection model to obtain a biological attack detection result.
The biological attack detection model is trained on biological images, non-biological images, and a preset loss function. A biological image is a real image of a living body part. A non-biological image may be an image displayed on a mobile phone, an image presented on paper, a photograph containing a living being, and so on.
The biological attack detection model is a classification model, for example a classification module built on a neural network, and its detection result is one of two classes: a biological image or an attack image. If the result is a biological image, the behavior corresponding to the current biological video is not an attack; if the result is an attack image, it is an attack. The preset loss function may be a classification loss such as the cross entropy loss function.
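As a minimal illustration of the preset loss function mentioned above, here is a numpy sketch of cross entropy over softmax logits for the two classes. The label convention (0 = biological, 1 = attack) is an assumption for the example; the patent only states that a classification loss such as cross entropy may be used.

```python
import numpy as np

def cross_entropy_loss(logits, label):
    """Cross entropy of a softmax over two class logits.

    logits: array of 2 raw scores [biological, attack]; label: 0 or 1.
    The class ordering is an assumed convention for this sketch.
    """
    z = logits - np.max(logits)            # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()    # softmax
    return -np.log(probs[label])           # negative log-likelihood of label
```

A confident, correct prediction yields a loss near zero; a confident, wrong prediction yields a large loss, which is what drives training toward separating the two classes.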
In implementations, the attack image may be one or more of: an image of a living being presented on an electronic screen, an image presented on paper, an image presented through a mask, or an image containing a photograph of a living being.
The embodiments of the present specification provide a biological attack detection method: a biological video for biological attack detection, collected under a preset illumination mode, is acquired; a plurality of illuminated frames and the non-illuminated frame corresponding to each are extracted from it; the video frame difference feature of each pair is computed; feature fusion is performed on these features to obtain a fused video frame difference feature; and the fused feature is input into a pre-trained biological attack detection model, trained on biological images, non-biological images, and a preset loss function, to obtain the detection result. Extracting illuminated and non-illuminated frames from a video, rather than directly collecting still images, captures more illumination information, improves the contrast effect of the video frame difference features, and makes fuller use of the inter-frame information in the biological video to assist the detection task. Computing frame difference features between illuminated and non-illuminated frames amplifies, from the perspective of difference features, the illuminated color characteristics and the feature gap between biological and attack images, so that the model can better distinguish biological images from camera-bypassing attack images, effectively intercept the latter, and improve the accuracy of the detection result.
Fusing the multiple video frame difference features provides the biological attack detection model, through a multi-layer feature fusion mechanism, with features of better separability and robustness, improving their expressive power and thereby the accuracy of the detection result.
Further, the processing in step S102 of extracting a plurality of illuminated frames, and the non-illuminated frame corresponding to each, from the biological video can be implemented in various ways; one alternative is given in steps S1022 to S1026 below.
In step S1022, color histogram analysis is performed on the biological video to obtain the color features of each frame.
Color histogram analysis statistically measures the proportion of each color within an image, thereby extracting the color features of the whole image. In implementation, a color histogram algorithm module may perform the analysis: its input is the biological video with timing information, and its output is the illuminated frames.
The implementation principle of the color histogram analysis is shown in FIG. 3, where the left side is the color histogram analysis result of a biological image and the right side that of an attack image. As the results for the different images in FIG. 3 show, because the biological video in the embodiments of the present disclosure is collected under a preset illumination mode, the color histogram effectively exposes the color difference between illuminated and non-illuminated frames, so images with sufficient illumination information (i.e. illuminated frames) can be extracted more effectively, improving both the accuracy and the efficiency of image extraction.
In step S1024, a plurality of illuminated frames are extracted according to the color features of each frame in the biological video.
Based on the color histogram analysis result, images whose illumination information exceeds the preset threshold are extracted as illuminated frames; for example, 6 illuminated frames might be extracted from a 20-frame video.
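The per-frame color statistic underlying steps S1022-S1024 can be sketched as a normalized per-channel histogram. This is a generic formulation, not the patented module: the bin count of 16 is an assumed parameter, and real systems would compute it per frame across the whole video.

```python
import numpy as np

def color_histogram(frame, bins=16):
    """Per-channel color histogram, normalized to proportions.

    frame: HxWx3 uint8 array. Returns a (3, bins) array giving, for each
    channel, the fraction of pixels in each intensity bin — the
    'proportion of different colors' statistic described above.
    """
    hists = []
    for c in range(3):
        h, _ = np.histogram(frame[:, :, c], bins=bins, range=(0, 256))
        hists.append(h / frame[:, :, c].size)   # counts -> proportions
    return np.stack(hists)
```

Comparing these histograms across frames is what makes the color shift introduced by the colored-light interaction visible, and hence lets illuminated frames be separated from non-illuminated ones.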
In step S1026, the non-illuminated frame corresponding to each illuminated frame is determined according to that illuminated frame and the color features of each frame in the biological video.
A non-illuminated frame may be an image with no illumination information, or with illumination information below the preset threshold.
In the embodiments of the present disclosure, the implementation principle of biological attack detection using a single-frame difference map is shown in FIG. 4, and that of the overall biological attack detection method in FIG. 5.
Further, the illumination mode preset in step S102 may randomize color, position, or both. Color randomization means that both the set of illumination colors and their order are chosen at random. Position randomization means adding randomness to which frames in the sequence are illuminated: taking FIG. 5 as an example, frames 1-5 may be illuminated, frames 6-10 not, and frames 11-15 illuminated; or frames 1-3 illuminated and frames 4-6 not; and other random illumination orders may likewise be set. Introducing color- and/or position-random illumination further increases the effectiveness and reliability of interactive biological attack detection.
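A color-and-position random illumination plan can be sketched as follows. The color palette here is an assumption purely for illustration — the text leaves the set of colors open — and the 50% lit/unlit split is likewise an assumed parameter.

```python
import random

COLORS = ("red", "green", "blue", "white")  # assumed example palette

def make_illumination_plan(num_frames, colors=COLORS, seed=None):
    """Sketch of a color- and position-random illumination plan.

    Each frame is either left unlit (None) or assigned a randomly chosen
    color, so both the color sequence and the positions of the lit frames
    are random, as described in the text.
    """
    rng = random.Random(seed)
    plan = []
    for _ in range(num_frames):
        if rng.random() < 0.5:
            plan.append(None)                # frame left unlit
        else:
            plan.append(rng.choice(colors))  # lit with a random color
    return plan
```

The unpredictability of such a plan is what makes it hard for an injection attack to pre-record or synthesize a video with the correct illumination responses.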
Further, the computation in step S104 of the video frame difference feature of each illuminated frame and its corresponding non-illuminated frame can be implemented in various ways; one alternative is given in steps S1042 to S1044 below.
In step S1042, the pixels of each illuminated frame and its corresponding non-illuminated frame are determined, and the similarity value of each pixel is computed.
In step S1044, the video frame difference feature of each illuminated frame and its corresponding non-illuminated frame is determined from the similarity values of the pixels.
In implementation, the similarity value of each pixel may be computed by subtracting the corresponding pixels of the illuminated and non-illuminated frames, and the video frame difference feature of the pair is then obtained by averaging the similarity values over all pixels.
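The subtract-then-average computation just described can be sketched directly. Reducing the per-pixel differences to a single scalar is one simple reading of the text; a real system might keep the full difference map as the feature instead.

```python
import numpy as np

def frame_difference_feature(lit_frame, unlit_frame):
    """Average per-pixel difference between a matched frame pair.

    Both frames are HxWx3 uint8 arrays. Subtraction is done in a signed
    dtype to avoid uint8 wrap-around, then the absolute per-pixel
    differences are averaged into a scalar feature.
    """
    diff = lit_frame.astype(np.int16) - unlit_frame.astype(np.int16)
    per_pixel = np.abs(diff)          # per-pixel (dis)similarity values
    return float(per_pixel.mean())    # average over all pixels and channels
```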
Comparing the illuminated frame with its corresponding non-illuminated frame on the basis of pixel-level similarity values makes the comparison result more reliable and accurate.
Further, as shown in FIG. 2, after step S104 the method in the embodiments of the present disclosure may further include step S110: performing color amplification on each computed video frame difference feature to obtain color-amplified video frame difference features.
In implementation, a pre-trained color amplification model may be used: the image corresponding to each video frame difference feature is input into the model, which outputs an image with amplified color. During training, the inputs to the color amplification model are picture samples of various colors, and through the model's neural network structure the outputs are pictures of the corresponding colors with a preset contrast difference.
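The patent's color amplification model is a trained neural network; as a non-learned stand-in that only illustrates the intended effect, one can linearly magnify each channel's deviation from its mean. The `gain` value is an assumed parameter.

```python
import numpy as np

def amplify_color_difference(diff_map, gain=4.0):
    """Non-learned stand-in for the trained color-amplification model.

    diff_map: HxWx3 uint8 difference image. Each channel's deviation from
    its own mean is scaled by `gain`, enlarging color contrast in the
    frame-difference map. The real model in the text is a neural network;
    this linear version only sketches the effect.
    """
    diff = diff_map.astype(np.float32)
    mean = diff.mean(axis=(0, 1), keepdims=True)   # per-channel mean
    amplified = mean + gain * (diff - mean)        # magnify deviations
    return np.clip(amplified, 0, 255).astype(np.uint8)
```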
Performing color amplification on each video frame difference feature further enlarges the differences between the features, improving the accuracy of the classification result of the biological attack detection model.
Correspondingly with step S110, the processing of step S106 can also vary; one alternative is given in step S1062 below.
In step S1062, feature fusion is performed on the plurality of color-amplified video frame difference features to obtain the fused video frame difference feature.
Further, the biological video in step S102 is a biological video carrying timing information and inter-frame information.
Because the biological video in the embodiments of the present specification carries timing information and inter-frame information, when a color change occurs during detection, frame-difference processing can be applied to the images before and after the change based on the inter-frame information. This yields video frame difference features that contain a degree of timing information, further improving the accuracy and reliability of biological attack detection.
Accordingly, step S108 may be performed as: inputting the timing information and inter-frame information of the biological video, together with the fused video frame difference feature, into the pre-trained biological attack detection model to obtain the biological attack detection result.
In implementation, the timing information and inter-frame information in the biological video can be used as labels; adding them to the biological attack detection model can further improve the accuracy of model training and thereby improve the accuracy of the biological attack detection result.
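The use of inter-frame information around a lighting-color change can be sketched as a simple pairing rule over timestamped frames. The `(timestamp, color, image)` tuple layout and the function name are illustrative assumptions, not structures specified by the patent.

```python
def pair_frames_around_color_changes(frames):
    """Pair each frame just after a lighting-color change with the frame
    just before it, using per-frame timestamps and color labels.

    `frames` is a list of (timestamp, color, image) tuples ordered in
    time. Each returned pair is a candidate (before-change, after-change)
    pair for frame-difference processing.
    """
    pairs = []
    for prev, cur in zip(frames, frames[1:]):
        if prev[1] != cur[1]:            # lighting color changed between frames
            pairs.append((prev, cur))    # (pre-change frame, post-change frame)
    return pairs
```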
The embodiments of this specification provide a biological attack detection method. First, a biological video collected for biological attack detection under a preset illumination mode is acquired, and a plurality of lit frames, together with the unlit frame corresponding to each lit frame, are extracted from it. Next, the video frame difference feature of each lit frame and its corresponding unlit frame is calculated. The multiple video frame difference features are then fused to obtain fused video frame difference features, which are finally input into a pre-trained biological attack detection model to obtain a biological attack detection result; the model is trained according to biological-class images, non-biological-class images, and a preset loss function. Extracting lit and unlit frames from a biological video, rather than directly collecting still images, captures more lighting information, improves the contrast effect of the video frame difference features, and makes fuller use of the inter-frame information in the biological video to assist the detection task. Calculating the video frame difference features of lit and unlit frames amplifies, from the perspective of difference features, the lighting color characteristics and the feature differences between biological images and attack images, so that the biological attack detection model can better distinguish biological images from camera-bypassing attack images, effectively intercept attack images, and improve the accuracy of the biological attack detection result.
By performing feature fusion on the multiple video frame difference features, a multi-layer feature fusion mechanism provides the biological attack detection model with features of better separability and robustness, improving the expressive power of the features and thereby the accuracy of the biological attack detection result.
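Putting the four steps summarized above together, the overall flow might look like the sketch below. Every callable argument is a placeholder for the corresponding module (frame extraction, frame differencing, fusion, classification) rather than a real implementation, and the function name is an assumption.

```python
def detect_bio_attack(video_frames, extract_pairs, frame_diff, fuse, model):
    """End-to-end sketch of the described pipeline:
    extract lit/unlit frame pairs -> per-pair frame-difference features
    -> feature fusion -> pre-trained classifier.
    """
    pairs = extract_pairs(video_frames)                       # lit-frame extraction
    diffs = [frame_diff(lit, unlit) for lit, unlit in pairs]  # frame-difference features
    fused = fuse(diffs)                                       # multi-feature fusion
    return model(fused)                                       # 'biological' vs 'attack'
```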
In summary, particular embodiments of the present subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may be advantageous.
The foregoing is the biological attack detection method provided by one or more embodiments of this specification. Based on the same concept, one or more embodiments of this specification further provide a biological attack detection apparatus, as shown in fig. 6.
The biological attack detection apparatus includes: a lighting frame extraction module 210, a video frame difference feature calculation module 220, a multi-feature fusion module 230, and a classification module 240, wherein:
the lighting frame extraction module 210 acquires a biological video collected for biological attack detection under a preset illumination mode, and extracts, from the acquired biological video, a plurality of lit frames and an unlit frame corresponding to each lit frame;
the video frame difference feature calculation module 220 respectively calculates the video frame difference feature of each lit frame and its corresponding unlit frame;
The multi-feature fusion module 230 performs feature fusion processing on the multiple video frame difference features to obtain fused video frame difference features;
the classification module 240 inputs the fused video frame difference features into a pre-trained biological attack detection model to obtain a biological attack detection result, wherein the biological attack detection model is a model obtained by training according to biological class images, non-biological class images and a preset loss function.
Further, the illumination mode preset in the lighting frame extraction module 210 includes: illumination with random colors and/or illumination at random positions.
Further, the video frame difference feature calculation module 220 includes:
a pixel similarity value calculation unit, configured to determine a plurality of pixel points in each lit frame and its corresponding unlit frame, and calculate a similarity value for each pixel point;
and a video frame difference feature determination unit, configured to determine the video frame difference feature of each lit frame and its corresponding unlit frame according to the similarity values of the plurality of pixel points.
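The two units above might cooperate as in the following sketch. The similarity measure (a normalized absolute difference per pixel) is one plausible choice, since the text does not fix a specific formula; the function name is an assumption.

```python
import numpy as np

def frame_diff_feature(lit, unlit):
    """Per-pixel similarity between a lit frame and its unlit counterpart,
    turned into a frame-difference feature map.

    A similarity of 1.0 means identical pixels; the feature is large
    exactly where the lighting changed the pixel.
    """
    lit = lit.astype(np.float32)
    unlit = unlit.astype(np.float32)
    similarity = 1.0 - np.abs(lit - unlit) / 255.0   # 1.0 = identical pixels
    return 1.0 - similarity                          # dissimilarity as the feature
```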
Further, the lighting frame extraction module 210 includes:
a video acquisition unit, configured to acquire a biological video collected for biological attack detection under the preset illumination mode;
a color histogram analysis unit, configured to perform color histogram analysis on the biological video to obtain the color feature of each frame image in the biological video;
a lit frame extraction unit, configured to extract a plurality of lit frames according to the color features of the frame images in the biological video;
and an unlit frame determination unit, configured to determine the unlit frame corresponding to each lit frame according to that lit frame and the color features of the frame images in the biological video.
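The chain of units above can be illustrated with a simple histogram-deviation rule: a frame whose color histogram departs strongly from the video average is treated as lit. The `bins` and `threshold` parameters and the L1 distance are illustrative assumptions, not values from the patent.

```python
import numpy as np

def extract_lit_frames(frames, bins=8, threshold=0.2):
    """Pick lit frames by color-histogram analysis.

    Each frame's RGB histogram is compared with the video-average
    histogram; frames whose normalized histogram deviates by more than
    `threshold` (L1 distance) are treated as lit frames. Returns the
    indices of the lit frames.
    """
    def hist(img):
        h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                              range=((0, 256),) * 3)
        return h.ravel() / h.sum()

    hists = [hist(f) for f in frames]
    baseline = np.mean(hists, axis=0)          # average color feature of the video
    return [i for i, h in enumerate(hists)
            if np.abs(h - baseline).sum() > threshold]
```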
Further, the biological attack detection result obtained by the classification module 240 includes an attack-class image, where the attack-class image includes one or more of: an image of a living being presented through an electronic screen, an image of a living being presented on paper, an image of a living being presented through a mask, and a photograph containing the living being.
Further, the biological attack detection model is a binary classification model constructed based on a neural network.
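A binary classification model of the kind described can be sketched as a minimal forward pass. The layer sizes, the single hidden ReLU layer, and the sigmoid output are assumptions for illustration; in the patented scheme the weights would be learned from biological and non-biological images with the preset loss function.

```python
import numpy as np

def tiny_binary_classifier(x, w1, b1, w2, b2):
    """Forward pass of a minimal two-class neural network: one hidden
    ReLU layer followed by a sigmoid output giving P(attack).

    x: (N, D) batch of fused frame-difference feature vectors.
    """
    h = np.maximum(0.0, x @ w1 + b1)        # hidden ReLU layer
    logit = h @ w2 + b2                     # single output logit per sample
    return 1.0 / (1.0 + np.exp(-logit))     # sigmoid -> attack probability
```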
Further, the biological attack detection apparatus also includes a color amplification processing module, which performs color amplification on each calculated video frame difference feature to obtain color-amplified video frame difference features. Accordingly, the multi-feature fusion module 230 performs feature fusion on the plurality of color-amplified video frame difference features to obtain fused video frame difference features.
Further, the biological video acquired by the lighting frame extraction module 210 is a biological video carrying timing information and inter-frame information. Accordingly, the classification module 240 inputs the timing information and inter-frame information in the biological video, together with the fused video frame difference features, into a pre-trained biological attack detection model to obtain a biological attack detection result.
The embodiments of this specification provide a biological attack detection apparatus. First, the lighting frame extraction module acquires a biological video collected for biological attack detection under a preset illumination mode and extracts, from it, a plurality of lit frames together with the unlit frame corresponding to each lit frame. Next, the video frame difference feature calculation module calculates the video frame difference feature of each lit frame and its corresponding unlit frame. The multi-feature fusion module then fuses the multiple video frame difference features to obtain fused video frame difference features, which the classification module finally inputs into a pre-trained biological attack detection model to obtain a biological attack detection result; the model is trained according to biological-class images, non-biological-class images, and a preset loss function. Extracting lit and unlit frames from a biological video, rather than directly collecting still images, captures more lighting information, improves the contrast effect of the video frame difference features, and makes fuller use of the inter-frame information in the biological video to assist the detection task.
Calculating the video frame difference features of lit and unlit frames amplifies, from the perspective of difference features, the lighting color characteristics and the feature differences between biological images and attack images, so that the biological attack detection model can better distinguish biological images from camera-bypassing attack images, effectively intercept attack images, and improve the accuracy of the biological attack detection result. By performing feature fusion on the multiple video frame difference features, a multi-layer feature fusion mechanism provides the biological attack detection model with features of better separability and robustness, improving the expressive power of the features and thereby the accuracy of the detection result.
It should be understood by those skilled in the art that the above biological attack detection apparatus can implement the biological attack detection method described earlier; its detailed implementation is similar to that of the method and, to avoid repetition, is not described again here.
Based on the same idea, one or more embodiments of this specification further provide an electronic device, as shown in fig. 7. Electronic devices may vary considerably in configuration or performance, and may include one or more processors 301 and a memory 302, where the memory 302 may store one or more applications or data. The memory 302 may be transient or persistent storage. An application stored in the memory 302 may include one or more modules (not shown), each of which may include a series of computer-executable instructions for the electronic device. Further, the processor 301 may be configured to communicate with the memory 302 and execute, on the electronic device, the series of computer-executable instructions in the memory 302. The electronic device may also include one or more power supplies 303, one or more wired or wireless network interfaces 304, one or more input/output interfaces 305, and one or more keyboards 306.
In particular, in this embodiment, an electronic device includes a memory, and one or more programs, where the one or more programs are stored in the memory, and the one or more programs may include one or more modules, and each module may include a series of computer-executable instructions for the electronic device, and the one or more programs configured to be executed by one or more processors include instructions for:
acquiring a biological video collected for biological attack detection under a preset illumination mode, and extracting, from the acquired biological video, a plurality of lit frames and an unlit frame corresponding to each lit frame;
respectively calculating a video frame difference feature of each lit frame and the corresponding unlit frame;
performing feature fusion processing on the multiple video frame difference features to obtain fused video frame difference features;
inputting the fused video frame difference characteristics into a pre-trained biological attack detection model to obtain a biological attack detection result, wherein the biological attack detection model is a model obtained by training according to biological images, non-biological images and a preset loss function.
One or more embodiments of the present description provide a storage medium for storing computer-executable instructions that, when executed by a processor, implement the following:
acquiring a biological video collected for biological attack detection under a preset illumination mode, and extracting, from the acquired biological video, a plurality of lit frames and an unlit frame corresponding to each lit frame;
respectively calculating a video frame difference feature of each lit frame and the corresponding unlit frame;
performing feature fusion processing on the multiple video frame difference features to obtain fused video frame difference features;
inputting the fused video frame difference characteristics into a pre-trained biological attack detection model to obtain a biological attack detection result, wherein the biological attack detection model is a model obtained by training according to biological images, non-biological images and a preset loss function.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In the 1990s, an improvement to a technology could clearly be distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, many improvements to method flows can now be regarded as direct improvements to hardware circuit structures: designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD), such as a field programmable gate array (Field Programmable Gate Array, FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually making integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development; the source code to be compiled must likewise be written in a specific programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained merely by briefly programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing a controller purely with computer-readable program code, it is entirely possible to logically program the method steps so that the controller implements the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included in it for implementing various functions may also be regarded as structures within the hardware component, or even as both software modules implementing the method and structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing one or more embodiments of the present description.
One skilled in the art will appreciate that one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Moreover, one or more embodiments of the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
One or more embodiments of the present description are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in computer-readable media, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
One or more embodiments of the present specification may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing description of one or more embodiments is merely illustrative of one or more embodiments of the present disclosure and is not intended to be limiting of the present disclosure. Various modifications and alterations to one or more embodiments of this description will be apparent to those skilled in the art. Any modifications, equivalent substitutions, improvements, or the like, which are within the spirit and principles of one or more embodiments of the present disclosure, are intended to be included within the scope of the claims of one or more embodiments of the present disclosure.

Claims (10)

1. A method of bio-attack detection, comprising:
acquiring a biological video collected for biological attack detection under a preset illumination mode, and extracting, from the acquired biological video, a plurality of lit frames and an unlit frame corresponding to each lit frame;
respectively calculating a video frame difference feature of each lit frame and the corresponding unlit frame;
performing feature fusion processing on the plurality of video frame difference features to obtain fused video frame difference features;
and inputting the fused video frame difference features into a pre-trained biological attack detection model to obtain a biological attack detection result, wherein the biological attack detection model is a model trained according to biological-class images, non-biological-class images, and a preset loss function.
2. The method of claim 1, the extracting a plurality of lit frames from the acquired biological video, and an unlit frame corresponding to each of the lit frames, comprising:
performing color histogram analysis processing on the biological video to obtain color characteristics of each frame of image in the biological video;
extracting a plurality of lit frames according to the color features of the frame images in the biological video;
and determining the unlit frame corresponding to each lit frame according to that lit frame and the color features of the frame images in the biological video.
3. The method of claim 1, wherein the biological attack detection result includes an attack-class image, the attack-class image comprising one or more of: an image of a living being presented through an electronic screen, an image of a living being presented on paper, an image of a living being presented through a mask, and a photograph containing the living being.
4. The method of claim 3, wherein the biological attack detection model is a binary classification model constructed based on a neural network.
5. The method of claim 1, wherein after respectively calculating the video frame difference feature of each lit frame and the corresponding unlit frame, the method further comprises:
performing color amplification processing on each video frame difference feature obtained through calculation to obtain color amplified video frame difference features;
wherein performing feature fusion processing on the plurality of video frame difference features to obtain fused video frame difference features comprises:
performing feature fusion processing on the plurality of color-amplified video frame difference features to obtain the fused video frame difference features.
6. The method of claim 1, wherein the preset illumination mode comprises:
illumination with random colors and/or illumination with random positions.
7. The method of claim 1, wherein calculating the video frame difference feature of each lit frame and the corresponding unlit frame comprises:
determining a plurality of pixel points in each lit frame and the corresponding unlit frame, and calculating a similarity value for each pixel point;
and determining the video frame difference feature of each lit frame and the corresponding unlit frame according to the similarity values of the plurality of pixel points.
8. The method of claim 1, wherein the biological video is a biological video carrying timing information and inter-frame information, and wherein inputting the fused video frame difference features into a pre-trained biological attack detection model to obtain a biological attack detection result comprises:
inputting the timing information and inter-frame information in the biological video, together with the fused video frame difference features, into the pre-trained biological attack detection model to obtain the biological attack detection result.
9. A biological attack detection apparatus, comprising:
a lighting frame extraction module, configured to acquire a biological video collected for biological attack detection under a preset illumination mode, and extract, from the acquired biological video, a plurality of lit frames and an unlit frame corresponding to each lit frame;
a video frame difference feature calculation module, configured to respectively calculate a video frame difference feature of each lit frame and the corresponding unlit frame;
the multi-feature fusion module is used for carrying out feature fusion processing on the multiple video frame difference features to obtain fused video frame difference features;
the classification module inputs the fused video frame difference characteristics into a pre-trained biological attack detection model to obtain a biological attack detection result, wherein the biological attack detection model is a model obtained by training according to biological images, non-biological images and a preset loss function.
10. An electronic device, comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, enable the processor to:
acquiring a biological video collected for biological attack detection under a preset illumination mode, and extracting, from the acquired biological video, a plurality of lit frames and an unlit frame corresponding to each lit frame;
respectively calculating a video frame difference feature of each lit frame and the corresponding unlit frame;
performing feature fusion processing on the multiple video frame difference features to obtain fused video frame difference features;
inputting the fused video frame difference characteristics into a pre-trained biological attack detection model to obtain a biological attack detection result, wherein the biological attack detection model is a model obtained by training according to biological images, non-biological images and a preset loss function.
CN202311631832.9A 2023-11-30 2023-11-30 Biological attack detection method and device Pending CN117593801A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311631832.9A CN117593801A (en) 2023-11-30 2023-11-30 Biological attack detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311631832.9A CN117593801A (en) 2023-11-30 2023-11-30 Biological attack detection method and device

Publications (1)

Publication Number Publication Date
CN117593801A true CN117593801A (en) 2024-02-23

Family

ID=89921694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311631832.9A Pending CN117593801A (en) 2023-11-30 2023-11-30 Biological attack detection method and device

Country Status (1)

Country Link
CN (1) CN117593801A (en)

Similar Documents

Publication Publication Date Title
CN112800997B (en) Living body detection method, device and equipment
US10430694B2 (en) Fast and accurate skin detection using online discriminative modeling
CN111324874B (en) Certificate authenticity identification method and device
CN112200187A (en) Target detection method, device, machine readable medium and equipment
EP2792149A1 (en) Scene segmentation using pre-capture image motion
US9846956B2 (en) Methods, systems and computer-readable mediums for efficient creation of image collages
CN114973049B (en) Lightweight video classification method with unified convolution and self-attention
CN114238904B (en) Identity recognition method, and training method and device of dual-channel hyper-resolution model
CN112784857A (en) Model training and image processing method and device
CN112347512A (en) Image processing method, device, equipment and storage medium
CN112990172B (en) Text recognition method, character recognition method and device
Kompella et al. A semi-supervised recurrent neural network for video salient object detection
CN111310531A (en) Image classification method and device, computer equipment and storage medium
CN113642359B (en) Face image generation method and device, electronic equipment and storage medium
Senthilkumar et al. Suspicious human activity detection in classroom examination
CN110598555B (en) Image processing method, device and equipment
CN112529939A (en) Target track matching method and device, machine readable medium and equipment
CN112598016A (en) Image classification method and device, communication equipment and storage medium
CN117593801A (en) Biological attack detection method and device
CN111652074B (en) Face recognition method, device, equipment and medium
CN111818364B (en) Video fusion method, system, device and medium
Zhong et al. Background modelling using discriminative motion representation
Song et al. Real-time hand gesture recognition on unmodified wearable devices
Youjiao et al. A Hierarchical Scheme for Video-Based Person Re-identification Using Lightweight PCANet and Handcrafted LOMO Features
CN112950732B (en) Image generation method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination