CN117058769A - Facial anomaly attack screening method and device based on similarity calculation - Google Patents

Facial anomaly attack screening method and device based on similarity calculation

Info

Publication number
CN117058769A
CN117058769A CN202310355911.5A CN202310355911A CN117058769A CN 117058769 A CN117058769 A CN 117058769A CN 202310355911 A CN202310355911 A CN 202310355911A CN 117058769 A CN117058769 A CN 117058769A
Authority
CN
China
Prior art keywords
face
order background
image
attack
identified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310355911.5A
Other languages
Chinese (zh)
Inventor
洪叁亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Bank Co Ltd
Original Assignee
Ping An Bank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Bank Co Ltd filed Critical Ping An Bank Co Ltd
Priority to CN202310355911.5A priority Critical patent/CN117058769A/en
Publication of CN117058769A publication Critical patent/CN117058769A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0475 Generative networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Computer Security & Cryptography (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Hardware Design (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a face anomaly attack screening method and device based on similarity calculation, belonging to the technical field of face recognition and identity verification. A face anomaly attack is identified by calculating whether the similarity between the high-order background features of an image to be identified and a high-order background feature library of face anomaly attack samples is greater than a preset threshold, so that face anomaly attack screening based on high-order background feature similarity calculation is realized on a two-dimensional image sequence. A deep twin network is used to extract the high-order background features of the image to be identified, which gives better robustness against complex environmental backgrounds. Whether an image is a face anomaly attack is judged from the similarity between its high-order background features and the face anomaly attack sample high-order background feature library, improving the recognition accuracy and robustness for face anomaly attacks. The method can be widely used to assist face liveness detection and to improve the capability of defending against false face attacks.

Description

Facial anomaly attack screening method and device based on similarity calculation
Technical Field
The application relates to the technical field of face recognition and identity verification, in particular to a face anomaly attack screening method and device based on similarity calculation.
Background
With the development of the mobile internet, identity verification scenarios (determining whether a user object is genuine) are receiving increasing attention in fields such as financial insurance and banking and securities. At the same time, false face attacks pose an ever more serious threat to the reliability of face liveness detection, and the means of mounting such attacks are becoming increasingly varied.
Methods commonly used in the prior art to resist false face attacks mainly include silent liveness recognition based on a single image and action liveness recognition based on random actions. The single-image silent liveness recognition method trains a binary classification network model on massive collected data (face live pictures and face non-live pictures); it requires collecting massive amounts of data, the binary classification network performs relatively poorly on non-live faces synthesized by various abnormal software, and the recognition accuracy for false face attacks is low. The action liveness recognition method based on random actions usually has the device prompt the user to open the mouth, shake the head, blink, and so on for matched verification; when an attacker uses a pre-recorded attack video containing these actions, the action liveness check is easily bypassed, so the reliability of false face attack recognition is poor.
Disclosure of Invention
The application provides a face anomaly attack screening method and device based on similarity calculation, which are used to assist face liveness detection and improve the capability of defending against false face attacks.
The technical scheme of the application is as follows:
According to a first aspect of an embodiment of the present application, a face anomaly attack screening method based on similarity calculation is provided, including:
extracting high-order background features of an image to be identified by using a deep twin network;
calculating the similarity between the high-order background features of the image to be identified and a face anomaly attack sample high-order background feature library;
if the similarity is greater than a preset threshold, determining that the image to be identified is a face anomaly attack sample;
and if the similarity is not greater than the preset threshold, determining that the image to be identified is a normal identification sample.
Optionally, calculating the similarity between the high-order background features of the image to be identified and the face anomaly attack sample high-order background feature library includes:
calculating the cosine similarity between the high-order background feature of the image to be identified and the high-order background features in the face anomaly attack sample high-order background feature library according to the formula

$\mathrm{sim}(f_1, f_2) = \dfrac{f_1 \cdot f_2}{\lVert f_1 \rVert \, \lVert f_2 \rVert}$

where $f_1$ is the high-order background feature of the image to be identified and $f_2$ is a high-order background feature from the face anomaly attack sample high-order background feature library.
Optionally, the face anomaly attack sample high-order background feature library is formed by inputting each face anomaly attack sample of a pre-constructed, manually labeled face anomaly attack sample library into the deep twin network and collecting the extracted high-order background features.
Optionally, the preset threshold is 70-80.
Optionally, the backbone of the deep twin network is MobileNetV2; after its last fully connected layer is removed, MobileNetV2 is connected to a custom convolution layer, a multi-scale convolution layer and a fully connected layer.
According to a second aspect of the embodiment of the present application, there is provided a face anomaly attack screening device based on similarity calculation, including:
the feature extraction module is used for extracting high-order background features of an image to be identified by using a deep twin network;
the computing module is used for computing the similarity between the high-order background features of the image to be identified and a face anomaly attack sample high-order background feature library;
the first judging module is used for determining that the image to be identified is a face anomaly attack sample if the similarity is greater than a preset threshold;
and the second judging module is used for determining that the image to be identified is a normal identification sample if the similarity is not greater than the preset threshold.
Optionally, the computing module is specifically configured to:
according to the formulaCalculating the similarity with cosine between the high-order background feature of the image to be identified and the high-order background feature library of the face anomaly attack sample, wherein ∈10>For the higher-order background features of the image to be identified, < + >>And attacking the high-order background features of the sample high-order background feature library for the face abnormality.
Optionally, the face anomaly attack sample high-order background feature library is formed by inputting each face anomaly attack sample of a pre-constructed, manually labeled face anomaly attack sample library into the deep twin network and collecting the extracted high-order background features.
Optionally, the preset threshold is 70-80.
Optionally, the backbone of the deep twin network is MobileNetV2; after its last fully connected layer is removed, MobileNetV2 is connected to a custom convolution layer, a multi-scale convolution layer and a fully connected layer.
The beneficial effects are that:
the application relates to a face anomaly attack screening method and a face anomaly attack screening device based on similarity calculation, which adopt a depth twin network to extract high-order background characteristics of an image to be identified; the method comprises the steps of identifying a face anomaly attack by calculating whether the similarity between the high-order background feature of an image to be identified and a high-order background feature library of a face anomaly attack sample is larger than a preset threshold value, realizing face anomaly attack screening based on the high-order background feature similarity calculation based on a two-dimensional image sequence, adopting a deep twin network to extract the high-order background feature of the image to be identified, ensuring better robustness against the background of a complex environment, judging whether the image anomaly attack is the face anomaly attack based on the similarity between the high-order background feature of the image to be identified and the high-order background feature library of the face anomaly attack sample, improving the identification precision and the robustness of the face anomaly attack, and being widely used for assisting the living face detection and improving the capability of defending false face attacks.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and, together with the description, serve to explain the principles of the application; they do not constitute an undue limitation on the application.
FIG. 1 is a flow chart of a face anomaly attack screening method based on similarity calculation, according to an exemplary embodiment;
FIG. 2 is a network architecture diagram of a deep twin network, shown in accordance with an exemplary embodiment;
FIG. 3 is a schematic diagram illustrating a face anomaly attack screening method based on similarity calculation, according to an exemplary embodiment;
FIG. 4 is a schematic structural diagram of a face anomaly attack screening device based on similarity calculation, according to an exemplary embodiment.
Detailed Description
In order to enable a person skilled in the art to better understand the technical solutions of the present application, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
The face anomaly attack screening method and device based on similarity calculation according to the embodiment of the application will be described in detail with reference to fig. 1 to 4, wherein fig. 1 is a flowchart of the face anomaly attack screening method based on similarity calculation according to an exemplary embodiment of the application. As shown in fig. 1, the specific steps of the face anomaly attack screening method based on similarity calculation are as follows:
step 110: and extracting high-order background features of the image to be identified by adopting a depth twin network.
Referring to fig. 2, the backbone of the deep twin network adopted in the embodiment of the present application is MobileNetV2; after its last fully connected layer is removed, MobileNetV2 is connected to a custom convolution layer, a multi-scale convolution layer and a fully connected layer. The network structure of the deep twin network adopted in the embodiment of the application is shown in fig. 2, and the final fully connected layer outputs 1024 neurons.
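By way of illustration only, the following is a minimal PyTorch-style sketch of one such twin branch and of the weight-sharing twin network. The channel counts of the custom convolution layer and the multi-scale convolution layer, and the names TwinBranch and DeepTwinNetwork, are assumptions, since the patent does not specify these details; only the MobileNetV2 backbone (with its classifier removed) and the 1024-dimensional output follow the text.

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class TwinBranch(nn.Module):
    """One branch of the deep twin network: MobileNetV2 features plus extra layers."""
    def __init__(self, feat_dim=1024):
        super().__init__()
        # MobileNetV2 with its classifier (last fully connected layer) removed;
        # the convolutional feature extractor outputs 1280 channels.
        self.backbone = mobilenet_v2(weights=None).features
        # "Custom convolution layer" (placeholder configuration).
        self.custom_conv = nn.Sequential(
            nn.Conv2d(1280, 256, kernel_size=3, padding=1),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
        )
        # "Multi-scale convolution layer", approximated here by parallel 1x1/3x3/5x5 branches.
        self.multi_scale = nn.ModuleList(
            nn.Conv2d(256, 128, kernel_size=k, padding=k // 2) for k in (1, 3, 5)
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(128 * 3, feat_dim)  # 1024-dimensional high-order background feature

    def forward(self, x):
        x = self.custom_conv(self.backbone(x))
        x = torch.cat([branch(x) for branch in self.multi_scale], dim=1)
        return self.fc(self.pool(x).flatten(1))

class DeepTwinNetwork(nn.Module):
    """Two identical branches; weight sharing comes from reusing the same module."""
    def __init__(self):
        super().__init__()
        self.branch = TwinBranch()

    def forward(self, x1, x2):
        return self.branch(x1), self.branch(x2)
```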
The training and optimizing process of the deep twin network adopted by the embodiment of the application is as follows:
(1) Preprocess the face anomaly attack training images, where the preprocessing includes data augmentation and data normalization, and the data augmentation includes operations such as random border padding and random color jittering.
(2) The preprocessed face anomaly attack training images are input into the deep twin network in pairs; the two twin branches each output a 1024-dimensional vector, and the two column vectors are input into a binary classification network, where an output of 0 indicates that the backgrounds are similar and an output other than 0 indicates that the backgrounds are dissimilar. Through training, the classifier of the binary classification network obtains suitable weights and can accurately output the probability that the backgrounds of two pictures are similar. The loss function used in training the deep twin network of the embodiment of the application is a contrastive loss function (Contrastive Loss), whose formula is:

$L = (1 - Y)\,\tfrac{1}{2} D_w^2 + Y\,\tfrac{1}{2}\left[\max(0,\; m - D_w)\right]^2$

where $D_w$ is defined as the Euclidean distance between the outputs of the deep twin network:

$D_w = \lVert G_w(X_1) - G_w(X_2) \rVert_2$

Here $G_w(\cdot)$ is the output of one of the twin branches, $X_1$ and $X_2$ are the input data pair, and $Y$ takes the value 1 or 0: if the model predicts that the inputs are similar, $Y$ is 0, otherwise $Y$ is 1. $\max(\cdot)$ takes the larger of 0 and the value in parentheses, and $m$ is a margin value greater than 0.
The deep twin network adopted by the embodiment of the application comprises two identical twin branches; the two branches have identical network structures and all network layers share weights.
(3) Use Adam as the training optimizer, with its hyperparameters (exponential decay rates) set to 0.9 and 0.999, respectively.
(4) Train with an initial learning rate and repeat the iterative training 20 times, then reduce the learning rate and continue for another 20 iterations, adjusting the internal parameters of the deep twin network model (the internal parameters being the weights, gradients and the like of the model), until the final loss value is less than or equal to a preset threshold; the trained deep twin network model is then output as the model used in the embodiment of the application for extracting the high-order background features of the image to be identified.
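To illustrate steps (2)-(4) above, the following is a minimal, hypothetical training sketch in PyTorch. The contrastive loss follows the formula given in step (2); the margin value, the concrete learning-rate numbers (which appear only as images in the original filing) and the pair data loader pair_loader are assumptions, not values taken from the patent.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(f1, f2, y, margin=1.0):
    # D_w: Euclidean distance between the outputs of the two twin branches.
    d = F.pairwise_distance(f1, f2)
    # y = 0 for background-similar pairs, y = 1 for dissimilar pairs (as in the text).
    return torch.mean((1 - y) * 0.5 * d.pow(2)
                      + y * 0.5 * torch.clamp(margin - d, min=0).pow(2))

model = DeepTwinNetwork()  # the weight-sharing twin network sketched earlier (assumption)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))

for stage in range(2):                        # two rounds of 20 iterations each
    for it in range(20):
        for x1, x2, y in pair_loader:         # pairs of preprocessed training images (assumed loader)
            f1, f2 = model(x1, x2)
            loss = contrastive_loss(f1, f2, y.float())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    for g in optimizer.param_groups:          # reduce the learning rate before the second round
        g["lr"] *= 0.1
```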
According to the embodiment of the application, the image to be recognized (the face image to be recognized) is input into the trained deep twin network, and the resulting 1024-dimensional feature vector is output as the high-order background feature of the image to be recognized.
Step 120: and calculating the similarity between the high-order background features of the image to be identified and a high-order background feature library of the face anomaly attack sample.
The cosine similarity between the high-order background feature of the image to be identified and the high-order background features in the face anomaly attack sample high-order background feature library is calculated according to the formula

$\mathrm{sim}(f_1, f_2) = \dfrac{f_1 \cdot f_2}{\lVert f_1 \rVert \, \lVert f_2 \rVert}$

where $f_1$ is the high-order background feature of the image to be identified and $f_2$ is a high-order background feature from the face anomaly attack sample high-order background feature library.
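As a minimal sketch of this cosine similarity, assuming the two 1024-dimensional features are NumPy vectors (the function name and types are illustrative only):

```python
import numpy as np

def cosine_similarity(f1: np.ndarray, f2: np.ndarray) -> float:
    # Dot product of the two feature vectors divided by the product of their norms.
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2)))
```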
The face anomaly attack sample high-order background feature library is formed by inputting each face anomaly attack sample of a pre-constructed, manually labeled face anomaly attack sample library into the deep twin network and collecting the extracted high-order background features.
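The following hypothetical sketch shows how such a feature library could be built by passing each manually labeled attack sample through one branch of the trained deep twin network; load_image and attack_sample_paths are assumed helpers, not part of the patent.

```python
import torch

@torch.no_grad()
def build_feature_library(model, attack_sample_paths):
    """Stack the 1024-d high-order background features of all labeled attack samples."""
    model.eval()
    features = []
    for path in attack_sample_paths:
        img = load_image(path)                  # preprocess to the network input size (assumed helper)
        feat = model.branch(img.unsqueeze(0))   # (1, 1024) high-order background feature
        features.append(feat.squeeze(0))
    return torch.stack(features)                # shape: (N, 1024)
```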
Step 130: and if the similarity is greater than a preset threshold, the image to be identified is a face abnormal attack sample.
Step 140: and if the similarity is not greater than a preset threshold, the image to be identified is a normal identification sample.
Preferably, the face anomaly attack screening method in the embodiment of the application sets the preset threshold to 70-80; within this range, the detection speed is high and the detection is reliable. Referring to fig. 3, a face anomaly attack sample library is constructed through manual labeling, and each sample is input into the deep twin network to extract high-order background features, forming the face anomaly attack sample high-order background feature library. The face image to be recognized acquired at the front end is input into the deep twin network to extract its high-order background features, which are then compared 1:N against the face anomaly attack sample high-order background feature library by similarity calculation. When the similarity is greater than the preset threshold (for example, the preset threshold T = 78), the face image to be recognized is a suspected anomaly attack sample; otherwise, it is a normal sample.
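A hypothetical sketch of this 1:N screening step follows. The scaling of the cosine similarity to a 0-100 range (so that a threshold such as T = 78 is meaningful) is an assumption made for illustration; the patent does not state how the similarity value and the threshold are scaled.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def screen_face_image(model, img, feature_library, threshold=78.0):
    """1:N comparison of the image's background feature against the attack-sample library."""
    feat = model.branch(img.unsqueeze(0))                      # (1, 1024)
    sims = F.cosine_similarity(feat, feature_library, dim=1)   # (N,) values in [-1, 1]
    best = sims.max().item() * 100.0                           # assumed 0-100 scaling
    label = "suspected_anomaly_attack" if best > threshold else "normal"
    return label, best
```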
The application relates to a face anomaly attack screening method and device based on similarity calculation, in which a deep twin network is used to extract the high-order background features of an image to be identified. A face anomaly attack is identified by calculating whether the similarity between the high-order background features of the image to be identified and the face anomaly attack sample high-order background feature library is greater than a preset threshold, so that face anomaly attack screening based on high-order background feature similarity calculation is realized on a two-dimensional image sequence. The deep twin network provides better robustness against complex environmental backgrounds, and judging whether an image is a face anomaly attack from the similarity between its high-order background features and the face anomaly attack sample high-order background feature library improves the recognition accuracy and robustness for face anomaly attacks. The method can be widely used to assist face liveness detection and to improve the capability of defending against false face attacks.
Fig. 4 is a schematic structural diagram of a face anomaly attack screening device based on similarity calculation according to an exemplary embodiment of the present application. The face anomaly attack screening device based on the similarity calculation provided by the embodiment of the application can execute the processing flow provided by the face anomaly attack screening method based on the similarity calculation. As shown in fig. 4, a face anomaly attack screening device 20 based on similarity calculation provided by the present application includes:
the feature extraction module 201 is configured to extract high-order background features of an image to be identified by using a deep twin network;
the computing module 202 is configured to compute a similarity between a high-order background feature of an image to be identified and a high-order background feature library of a face anomaly attack sample;
a first judging module 203, configured to, if the similarity is greater than a preset threshold, determine that the image to be identified is a face abnormal attack sample;
the second determining module 204 is configured to, if the similarity is not greater than a preset threshold, determine that the image to be identified is a normal identification sample.
Optionally, the computing module 202 is specifically configured to:
calculate the cosine similarity between the high-order background feature of the image to be identified and the high-order background features in the face anomaly attack sample high-order background feature library according to the formula

$\mathrm{sim}(f_1, f_2) = \dfrac{f_1 \cdot f_2}{\lVert f_1 \rVert \, \lVert f_2 \rVert}$

where $f_1$ is the high-order background feature of the image to be identified and $f_2$ is a high-order background feature from the face anomaly attack sample high-order background feature library.
Optionally, the face anomaly attack sample high-order background feature library is formed by inputting each face anomaly attack sample of a pre-constructed, manually labeled face anomaly attack sample library into the deep twin network and collecting the extracted high-order background features.
Optionally, the preset threshold is 70-80.
Optionally, the backbone of the deep twin network is MobileNetV2; after its last fully connected layer is removed, MobileNetV2 is connected to a custom convolution layer, a multi-scale convolution layer and a fully connected layer.
The device provided by the embodiment of the present application may be specifically used to execute the scheme provided by the embodiment of the method corresponding to fig. 1, and specific functions and technical effects that can be achieved are not repeated herein.
The embodiment of the application also provides a terminal comprising: a processor, a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory to implement the solution provided by any of the method embodiments described above, and specific functions and technical effects that can be implemented are not described herein. The electronic device may be the server mentioned above.
The embodiment of the application also provides a computer readable storage medium, in which computer executable instructions are stored, and when the computer executable instructions are executed by a processor, the computer executable instructions are used for implementing the scheme provided by any one of the method embodiments, and specific functions and technical effects that can be implemented are not repeated herein.
The application scenario described in the embodiment of the present application is intended to describe the technical solution of the embodiment more clearly and does not constitute a limitation on the technical solution provided by the embodiment of the present application; as a person of ordinary skill in the art will appreciate, the technical solution provided by the embodiment of the present application is equally applicable to similar technical problems as new application scenarios emerge.
In some possible embodiments, an electronic device according to the application may comprise at least one processor and at least one memory. The memory stores program code which, when executed by the processor, causes the processor to perform the face anomaly attack screening method based on similarity calculation according to the various exemplary embodiments of the application described above in this specification. For example, the processor may perform the steps of the face anomaly attack screening method.
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, the features and functions of two or more of the elements described above may be embodied in one element in accordance with embodiments of the present application. Conversely, the features and functions of one unit described above may be further divided into a plurality of units to be embodied.
Furthermore, although the operations of the methods of the present application are depicted in the drawings in a particular order, this should not be understood as requiring that the operations be performed in that particular order or that all of the illustrated operations be performed to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. A face anomaly attack screening method based on similarity calculation, characterized by comprising the following steps:
extracting high-order background features of an image to be identified by using a deep twin network;
calculating the similarity between the high-order background features of the image to be identified and a face anomaly attack sample high-order background feature library;
if the similarity is greater than a preset threshold, determining that the image to be identified is a face anomaly attack sample;
and if the similarity is not greater than the preset threshold, determining that the image to be identified is a normal identification sample.
2. The face anomaly attack screening method according to claim 1, wherein calculating the similarity between the high-order background features of the image to be identified and the face anomaly attack sample high-order background feature library comprises:
calculating the cosine similarity between the high-order background feature of the image to be identified and the high-order background features in the face anomaly attack sample high-order background feature library according to the formula

$\mathrm{sim}(f_1, f_2) = \dfrac{f_1 \cdot f_2}{\lVert f_1 \rVert \, \lVert f_2 \rVert}$

where $f_1$ is the high-order background feature of the image to be identified and $f_2$ is a high-order background feature from the face anomaly attack sample high-order background feature library.
3. The face anomaly attack screening method according to claim 2, wherein the face anomaly attack sample high-order background feature library is formed by inputting each face anomaly attack sample of a pre-constructed, manually labeled face anomaly attack sample library into the deep twin network and collecting the extracted high-order background features.
4. The method for screening for a face anomaly attack according to claim 1, wherein the preset threshold is 70-80.
5. The face anomaly attack screening method according to claim 1, wherein the backbone of the deep twin network is MobileNetV2, and, after its last fully connected layer is removed, MobileNetV2 is connected to a custom convolution layer, a multi-scale convolution layer and a fully connected layer.
6. A face anomaly attack screening device based on similarity calculation, characterized in that the face anomaly attack screening device comprises:
a feature extraction module, configured to extract high-order background features of an image to be identified by using a deep twin network;
a computing module, configured to compute the similarity between the high-order background features of the image to be identified and a face anomaly attack sample high-order background feature library;
a first judging module, configured to determine that the image to be identified is a face anomaly attack sample if the similarity is greater than a preset threshold;
and a second judging module, configured to determine that the image to be identified is a normal identification sample if the similarity is not greater than the preset threshold.
7. The face anomaly attack screening device according to claim 6, wherein the computing module is specifically configured to:
calculate the cosine similarity between the high-order background feature of the image to be identified and the high-order background features in the face anomaly attack sample high-order background feature library according to the formula

$\mathrm{sim}(f_1, f_2) = \dfrac{f_1 \cdot f_2}{\lVert f_1 \rVert \, \lVert f_2 \rVert}$

where $f_1$ is the high-order background feature of the image to be identified and $f_2$ is a high-order background feature from the face anomaly attack sample high-order background feature library.
8. The face anomaly attack screening device according to claim 7, wherein the face anomaly attack sample high-order background feature library is formed by inputting each face anomaly attack sample of a pre-constructed, manually labeled face anomaly attack sample library into the deep twin network and collecting the extracted high-order background features.
9. The face anomaly attack screening device according to claim 6, wherein the preset threshold is 70-80.
10. The face anomaly attack screening device according to claim 6, wherein the backbone of the deep twin network is MobileNetV2, and, after its last fully connected layer is removed, MobileNetV2 is connected to a custom convolution layer, a multi-scale convolution layer and a fully connected layer.
CN202310355911.5A 2023-03-29 2023-03-29 Facial anomaly attack screening method and device based on similarity calculation Pending CN117058769A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310355911.5A CN117058769A (en) 2023-03-29 2023-03-29 Facial anomaly attack screening method and device based on similarity calculation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310355911.5A CN117058769A (en) 2023-03-29 2023-03-29 Facial anomaly attack screening method and device based on similarity calculation

Publications (1)

Publication Number Publication Date
CN117058769A true CN117058769A (en) 2023-11-14

Family

ID=88659602

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310355911.5A Pending CN117058769A (en) 2023-03-29 2023-03-29 Facial anomaly attack screening method and device based on similarity calculation

Country Status (1)

Country Link
CN (1) CN117058769A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118250078A (en) * 2024-04-16 2024-06-25 北京瑞莱智慧科技有限公司 Network request detection method, device, equipment and storage medium


Similar Documents

Publication Publication Date Title
US20230022943A1 (en) Method and system for defending against adversarial sample in image classification, and data processing terminal
TW202036463A (en) Living body detection method, device, apparatus, and storage medium
CN112699786B (en) Video behavior identification method and system based on space enhancement module
CN111652290A (en) Detection method and device for confrontation sample
Zhang et al. Face anti-spoofing detection based on DWT-LBP-DCT features
Zhou et al. Msflow: Multiscale flow-based framework for unsupervised anomaly detection
CN114693607B (en) Tamper video detection method and tamper video detection system based on multi-domain block feature marker point registration
Gong et al. Deepfake forensics, an ai-synthesized detection with deep convolutional generative adversarial networks
Yeh et al. Face liveness detection based on perceptual image quality assessment features with multi-scale analysis
CN117058769A (en) Facial anomaly attack screening method and device based on similarity calculation
Zhu et al. Deepfake detection with clustering-based embedding regularization
CN114842524A (en) Face false distinguishing method based on irregular significant pixel cluster
CN115240280A (en) Construction method of human face living body detection classification model, detection classification method and device
CN114330652A (en) Target detection attack method and device
CN109697240A (en) A kind of image search method and device based on feature
CN116664880B (en) Method for generating depth fake anti-reflection evidence obtaining image
Saealal et al. Three-Dimensional Convolutional Approaches for the Verification of Deepfake Videos: The Effect of Image Depth Size on Authentication Performance
Geradts et al. Interpol review of forensic video analysis, 2019–2022
CN113128427A (en) Face recognition method and device, computer readable storage medium and terminal equipment
CN117351578A (en) Non-interactive human face living body detection and human face verification method and system
CN116978130A (en) Image processing method, image processing device, computer device, storage medium, and program product
CN114898137A (en) Face recognition-oriented black box sample attack resisting method, device, equipment and medium
CN114743148A (en) Multi-scale feature fusion tampering video detection method, system, medium, and device
CN113782033B (en) Voiceprint recognition method, voiceprint recognition device, voiceprint recognition equipment and storage medium
CN114663965B (en) Testimony comparison method and device based on two-stage alternative learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination