CN115081643B - Confrontation sample generation method, related device and storage medium

Confrontation sample generation method, related device and storage medium

Info

Publication number: CN115081643B (application number CN202210855303.6A)
Authority: CN (China)
Prior art keywords: sample, target, similarity, curvature, candidate
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN115081643A
Inventor: not disclosed
Current and original assignee: Beijing Real AI Technology Co Ltd
Application filed by Beijing Real AI Technology Co Ltd
Priority to CN202210855303.6A
Publication of CN115081643A (application publication)
Publication of CN115081643B (grant)

Classifications

    • G06N 20/00: Machine learning
    • G06V 10/74: Image or video pattern matching; proximity measures in feature spaces
    • G06V 10/761: Proximity, similarity or dissimilarity measures
    • G06V 10/764: Image or video recognition or understanding using classification, e.g. of video objects
    • G06V 20/64: Three-dimensional objects

Abstract

The embodiment of the application relates to the field of computer vision and provides a confrontation sample generation method, a related device and a storage medium. The method includes the following steps: acquiring a candidate confrontation sample and a target similarity, wherein the target similarity includes the curvature similarity between the candidate confrontation sample and an original sample and the identification similarity between the candidate confrontation sample and a target sample, the original sample includes a set of points on each surface of a target object, and the target sample is determined based on the attack target of the counter-attack; if the target similarity does not meet a first preset condition, updating the candidate confrontation sample and the target similarity until the target similarity meets the first preset condition, and taking the candidate confrontation sample obtained when the target similarity meets the first preset condition as the target confrontation sample, wherein the first preset condition includes that the curvature similarity is larger than a preset threshold and that the identification similarity meets a second preset condition. The curvature distance between the target confrontation sample and the original sample is small, the attack success rate is high, and the target confrontation sample is well concealed in the physical world.

Description

Confrontation sample generation method, related device and storage medium
Technical Field
The embodiment of the application relates to the field of computer vision, in particular to a confrontation sample generation method, a related device and a storage medium.
Background
In counter-attack research, efficiently generating confrontation samples for different deep learning models helps to discover the vulnerabilities of the deep learning models in time and to evaluate their robustness. Some counter-attack methods generate, in the digital world, confrontation samples to which a small counter-disturbance is added, so that the confrontation samples are incorrectly identified by the deep learning model or are identified as a designated label.
However, some identification systems in practical use (for example, roadblock identification systems of driverless vehicles) typically collect three-dimensional data and perform identification on real objects. Therefore, when a counter-attack test is performed on such an identification system in practical application, a confrontation sample in three-dimensional form needs to be generated to implement the counter-attack.
At present, a three-dimensional confrontation sample is usually obtained by constraining the smoothness of a confrontation point cloud and then performing three-dimensional reconstruction, and the confrontation sample of the physical world can be obtained after 3D printing. However, in the process of three-dimensional reconstruction, a lot of noise and errors are introduced by the constraints used to ensure the smoothness of each point of the confrontation sample and by the point cloud data compensation operation, so that the finally obtained confrontation sample has a low attack success rate and is poorly concealed in the physical world.
Therefore, the three-dimensional confrontation sample generated by the existing counter-attack method is easily distinguished from the original sample and most probably cannot achieve the expected detection effect on identification systems in practical applications.
Disclosure of Invention
The embodiment of the application provides a confrontation sample generation method, a related device and a storage medium. In the process of generating a target confrontation sample based on an original sample, both the identification similarity between the candidate confrontation sample and the target sample and the curvature similarity between the candidate confrontation sample and the original sample are considered, so that the finally generated target confrontation sample has a high similarity with the target sample and a small curvature distance to the original sample; it therefore has a high attack success rate, is less likely to be discovered, and is better concealed in the physical world.
In a first aspect, an embodiment of the present application provides a countermeasure sample generation method, including:
acquiring a candidate confrontation sample;
acquiring target similarity, wherein the target similarity comprises curvature similarity of the candidate confrontation sample and an original sample, and identification similarity of the candidate confrontation sample and the target sample, and the original sample at least comprises a set of points on each surface of the target object; the target sample is determined based on an attack target for resisting the attack;
if the target similarity does not meet a first preset condition, updating the candidate confrontation sample and the target similarity until the target similarity meets the first preset condition, and taking the candidate confrontation sample when the target similarity meets the first preset condition as the target confrontation sample;
the first preset condition comprises that the curvature similarity is larger than a preset threshold value, and the identification similarity meets a second preset condition.
In a second aspect, an embodiment of the present application provides a data processing apparatus having a function of implementing the countermeasure sample generation method corresponding to the first aspect. The functions can be realized by hardware, and the functions can also be realized by executing corresponding software by hardware. The hardware or software includes one or more modules corresponding to the above functions, which may be software and/or hardware.
In one embodiment, the data processing apparatus comprises:
an input-output module configured to obtain candidate confrontation samples;
the processing module is configured to obtain target similarity, wherein the target similarity comprises curvature similarity of the candidate confrontation sample and an original sample, and identification similarity of the candidate confrontation sample and the target sample, and the original sample at least comprises a set of points on each surface of the target object; the target sample is determined based on an attack target for resisting the attack;
the processing module is further configured to update the candidate confrontation sample and the target similarity if the target similarity does not meet a first preset condition until the target similarity meets the first preset condition, and use the candidate confrontation sample when the target similarity meets the first preset condition as the target confrontation sample;
the first preset condition comprises that the curvature similarity is larger than a preset threshold value, and the identification similarity meets a second preset condition.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium, which includes instructions that, when executed on a computer, cause the computer to execute the countermeasure sample generation method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a computing device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the countermeasure sample generation method according to the first aspect when executing the computer program.
Compared with the prior art, in the embodiment of the application the target confrontation sample is obtained by gradually updating the original sample, where the original sample at least includes a set of points on each surface of the target object; that is, the positions of some points on the surface of the original sample (candidate confrontation sample) can be updated at each step. In the process of gradually updating the candidate confrontation sample based on the original sample until the target confrontation sample is obtained, a target similarity comprising two similarities is considered, namely the identification similarity between the candidate confrontation sample and the target sample and the curvature similarity between the candidate confrontation sample and the original sample, so that the finally generated target confrontation sample has both a high identification similarity with the target sample and a high curvature similarity with the original sample. Because the generation process of the target confrontation sample is constrained by curvature, that is, the curvature similarity between the finally generated target confrontation sample and the original sample is high, the deformation of the target confrontation sample relative to the original sample is small, the target confrontation sample is better concealed, and it is not easily perceived by the naked human eye.
Drawings
Objects, features and advantages of embodiments of the present application will become apparent from the detailed description of embodiments of the present application with reference to the accompanying drawings. Wherein:
fig. 1 is a schematic diagram of a communication system of a countermeasure sample generation method in an embodiment of the present application;
FIG. 2 is a schematic flow chart of a challenge sample generation method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a method of generating a table confrontation sample based on a sofa raw sample using the method of an embodiment of the present application;
fig. 4 is a schematic diagram illustrating a point to be disturbed based on an original sample of a road block in the embodiment of the present application;
FIG. 5 is a schematic flow chart illustrating a visualization process of a countermeasure sample generation method in an embodiment of the present application;
FIG. 6 is a flowchart illustrating an exemplary process of iteratively updating candidate challenge samples according to an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a computing device according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a mobile phone in an embodiment of the present application;
fig. 10 is a schematic structural diagram of a server in an embodiment of the present application.
In the drawings, like or corresponding reference characters designate like or corresponding parts.
Detailed Description
The terms "first," "second," and the like in the description and claims of the embodiments of the present application and in the drawings described above are used for distinguishing between similar elements (e.g., a first loss value and a second loss value may be different loss values, respectively, and may be similar or different) and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprise," "include," and "have," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to those steps or modules expressly listed, but may include other steps or modules not expressly listed or inherent to such process, method, article, or apparatus, such that a division of modules presented in an embodiment of the present application is merely a logical division and may be implemented in a practical application in a different manner, such that multiple modules may be combined or integrated into another system or some features may be omitted or not implemented, such that a shown or discussed coupling or direct coupling or communication between modules may be through some interfaces and an indirect coupling or communication between modules may be electrical or other similar, and such that embodiments are not limited in this application. Moreover, the modules or sub-modules described as separate components may or may not be physically separated, may or may not be physical modules, or may be distributed in a plurality of circuit modules, and some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiments of the present application.
The embodiment of the application provides a countermeasure sample generation method, a related device and a storage medium, which can be applied to a countermeasure sample generation system. The data processing device is at least used for updating the candidate confrontation sample and generating the target confrontation sample based on the recognition result fed back by the recognition device. The identification device is used for identifying the candidate confrontation sample to obtain an identification result. At least one recognition result (e.g., recognition probability distribution) obtained by the recognition means can be used by the data processing means to iteratively update the perturbation points of the candidate confrontation samples, such as the positions of the perturbation points in the three-dimensional space. Wherein the data processing device can be an application program for updating the candidate confrontation sample and outputting the target confrontation sample, or a server for installing the application program for updating the candidate confrontation sample and outputting the target confrontation sample; the identification device may be an identification program for identifying the candidate confrontation sample to obtain an identification result, the identification program is a target identification model, and the identification device may also be a terminal device with the target identification model deployed.
The scheme provided by the embodiment of the application relates to technologies such as Artificial Intelligence (AI), Computer Vision (CV) and Machine Learning (ML), and is specifically explained by the following embodiments:
AI is a theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making.
AI technology is a comprehensive discipline that covers a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems and mechatronics. Artificial intelligence software technologies mainly include computer vision technology, speech processing technology, natural language processing technology and machine learning/deep learning.
CV is a science that studies how to make a machine "see"; more specifically, it replaces human eyes with cameras and computers to perform machine vision tasks such as identification, tracking and measurement of a target, and further performs graphics processing, so that the computer produces an image more suitable for human observation or for transmission to an instrument for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include techniques such as counter-disturbance generation, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D techniques, virtual reality, augmented reality, and simultaneous localization and mapping, and also include common biometric techniques such as face recognition and fingerprint recognition.
In some embodiments, the data processing device and the identification device are separately deployed, and referring to fig. 1, the countermeasure sample generation method provided in the embodiment of the present application may be implemented based on one of the communication systems shown in fig. 1. The communication system may include a server 01 and a terminal device 02.
The server 01 may be a data processing apparatus in which a countermeasure sample generation program may be deployed.
The terminal device 02 may be a recognition apparatus, in which a recognition model, such as a target recognition model trained by a machine learning based method, may be deployed. The target recognition model can be a road block recognition model, a vehicle recognition model or a building recognition model.
The server 01 may receive the attack target and the original sample from the outside, then iteratively update a candidate countermeasure sample dedicated to achieving the attack target on the basis of the original sample, and transmit the candidate countermeasure sample to the terminal device 02. The terminal device 02 may process the candidate confrontation sample by using the recognition model to obtain a recognition result, which may be, for example, a recognition probability distribution, and then feed back the recognition result to the server 01. The server 01 may determine, based on the identification result, an identification similarity between the candidate countermeasure sample and the target sample, determine whether the candidate countermeasure sample can achieve the attack target, and if the candidate countermeasure sample can achieve the attack target and the curvature similarity between the candidate countermeasure sample and the original sample is also greater than a preset threshold, determine the candidate countermeasure sample as the target countermeasure sample.
It should be noted that the server according to the embodiment of the present application may be an independent physical server, may also be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server that provides basic cloud computing services such as cloud service, a cloud database, cloud computing, a cloud function, cloud storage, network service, cloud communication, middleware service, domain name service, security service, CDN, and a big data and artificial intelligence platform.
The terminal device referred to in the embodiments of the present application may be a device providing voice and/or data connectivity to a user, a handheld device having a wireless connection function, or another processing device connected to a wireless modem, for example a mobile telephone (or "cellular" telephone) or a computer with a mobile terminal, such as a portable, pocket, handheld, computer-built-in or vehicle-mounted mobile device, which exchanges voice and/or data with a radio access network. Examples of such devices include Personal Communication Service (PCS) phones, cordless phones, Session Initiation Protocol (SIP) phones, Wireless Local Loop (WLL) stations, and Personal Digital Assistants (PDA).
Referring to fig. 2, fig. 2 is a schematic flow chart of a confrontation sample generation method according to an embodiment of the present disclosure. The method can be executed by a data processing device and a recognition device; it updates an original sample to obtain a confrontation sample, which can be materialized as a three-dimensional confrontation sample in the physical world. The confrontation sample generation method comprises the following steps:
in step S110, candidate confrontation samples are obtained.
Wherein the candidate confrontation sample is obtained based on a historical candidate confrontation sample, the historical candidate confrontation sample comprises an original sample, and the original sample at least comprises a set of points on each face of the target object.
Specifically, in the embodiment of the present application, the target confrontation sample is obtained by gradually and iteratively updating based on the original sample. The initial candidate confrontation sample may be the original sample, and in the subsequent process of generating the target confrontation sample, the update is always based on the historical candidate confrontation samples. That is, the target confrontation sample is updated from the candidate confrontation sample obtained in the last time step; for example, assuming that the target confrontation sample A is obtained through 3 updates based on the original sample a1, a first update is performed based on the original sample a1 to obtain a candidate confrontation sample a2, a second update is performed based on the candidate confrontation sample a2 to obtain a candidate confrontation sample a3, and the target confrontation sample A is obtained by updating based on the candidate confrontation sample a3.
In the embodiment of the present application, the original sample is a multi-dimensional sample, and may be, for example, a three-dimensional image, that is, the original sample is at least capable of representing the surface shape of the target object, and may be, for example, a three-dimensional data set (three-dimensional point cloud) obtained by scanning an external surface of a solid object, where the three-dimensional data set includes at least three-dimensional coordinates capable of mapping points on each surface of the solid object. Thus, iteratively generating a challenge sample based on the original sample may be perturbing the position of at least one point of the outer surface of the target object, i.e. the challenge sample has been deformed with respect to the original sample.
It will be appreciated that the target object may be a virtual object of the digital world obtained by scanning or modeling based on a physical object of the physical world, and the virtual object may be a mapping of the physical object in the digital world, such as the three-dimensional digital model depicted by the three-dimensional point cloud in the above-described embodiment.
It should be noted that the surfaces of some solid objects are not all flat; rather, they are irregular surfaces, such as the surfaces of a basketball or a tree, which are uneven, i.e., the surfaces include curved surfaces. In addition, curved surfaces may include closed curved surfaces, which are compact surfaces without boundary, such as spheres, tori and Klein bottles, as well as non-closed curved surfaces. A non-closed curved surface is a surface with a boundary; for some solid objects the attack may only need to target an isolated surface, for example, when attacking a lamp post it may be sufficient to disturb the points on the front surface of the lamp post, while the points on the back surface do not need to be disturbed because they provide no help to the recognition model during recognition.
It will be appreciated that if the original sample itself is obtained by three-dimensional scanning of a regular solid object, for example a cube, then the original sample includes no curved surfaces and all of its faces are flat, so the curvature at each point of the original sample is 0. However, in the embodiment of the present application, the positions of some points in the original sample are updated, so that the surface of the candidate confrontation sample obtained by the update becomes concave or convex in places, that is, the curvature of some disturbed points can change from 0 to other values.
It is to be understood that the multi-dimension of the original sample does not only refer to the three-dimensional coordinates of each point on the surface of the target object, but also includes color information or brightness information of each point, i.e. a fourth dimension or a fifth dimension other than three dimensions, which is not limited by the embodiment of the present application. It should be noted that, when curvature calculation is performed in the embodiment described below in the present application, only three-dimensional coordinates of each point are required, and color information or luminance information is not involved. In one possible design, the update of the candidate confrontation sample may include, in addition to the update of the position of the disturbance point according to the embodiment described below, an update of color information or brightness information of the disturbance point, so as to improve the attack success rate.
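For illustration only, the following minimal Python sketch shows one way such a multi-dimensional sample could be represented and perturbed; the class and method names are assumptions of this sketch and are not terms defined in this application.

```python
import numpy as np

class Sample:
    """Hypothetical container for an original or candidate confrontation sample.

    Only the (n, 3) surface-point coordinates are needed for the curvature
    computations described below; optional per-point color/brightness channels
    can be carried alongside but are not used for curvature.
    """
    def __init__(self, points_xyz, colors=None):
        self.points = np.asarray(points_xyz, dtype=float)  # (n, 3) coordinates of surface points
        self.colors = colors                               # optional extra channels (e.g. RGB)

    def perturb(self, indices, delta):
        """Return a new candidate sample with the selected points displaced by delta (shape (k, 3))."""
        new_points = self.points.copy()
        new_points[indices] = new_points[indices] + np.asarray(delta, dtype=float)
        return Sample(new_points, self.colors)
```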
In the embodiment of the present application, the target sample is determined based on an attack target for resisting an attack, for example, if the attack target for resisting an attack is that the recognition model is expected to recognize the sofa as a table, the target sample may be three-dimensional point cloud data acquired based on the table, which is an entity object.
Referring to fig. 3, fig. 3 shows an exemplary process of using the method of the embodiment of the present application to generate, from an original sample of a sofa, a confrontation sample that is mistakenly recognized as a table by the recognition model; the confrontation sample in fig. 3 has some deformation relative to the original sample, for example, the corners of the sofa are modified into rounded corners. In fig. 3, a plurality of disturbance point sets are marked in the original sample of the sofa, and in the process of iteratively updating the original sample to obtain the target confrontation sample (the confrontation sample labeled as a table), the three-dimensional coordinates of the disturbance points in these disturbance point sets are updated; for example, after the three-dimensional coordinates of each disturbance point in disturbance point set 1 are updated, those points become the corresponding disturbance points of the updated set shown in the figure. It should be noted that, although for convenience of reference fig. 3 labels, in the form of sets, a plurality of to-be-perturbed points that are contiguous within one area of the original sample, during the actual iterative update of the original sample or a candidate confrontation sample the update of each to-be-perturbed point or perturbed point remains independent; that is, the update direction and update amount of each to-be-perturbed point or perturbed point are calculated independently, and the update direction and update amount of one point are not used as the update direction and update amount of its neighboring points.
A three-dimensional data set (three-dimensional point cloud) may be acquired by a three-dimensional scanner (3 Dimensional Scanner), also known as a three-dimensional digitizer (3 Dimensional Digitizer). The three-dimensional scanner is one of the important tools currently used for three-dimensional modeling of solid objects. It can quickly and conveniently convert three-dimensional color information of the real world into digital signals that can be directly processed by a computer, providing an effective means for digitizing real objects. Unlike a traditional flatbed scanner, video camera or graphics acquisition card, it obtains through scanning the three-dimensional space coordinates (namely a three-dimensional point cloud) of each sampling point on the surface of an object.
In the embodiment of the present application, the original sample may also be obtained by performing digital three-dimensional modeling of an entity object, for example with modeling software such as 3DMAX, Softimage, Maya, UG and AutoCAD, or by three-dimensional reconstruction from two-dimensional images of the solid object, i.e., Image-Based Modeling and Rendering (IBMR).
It can be understood that the original sample may also be referred to as an original mesh, that is, the point cloud data obtained based on the three-dimensional scanning may present the surface contour of the solid object in the form of a three-dimensional mesh; similarly, the challenge samples may also be referred to as challenge grids.
It is considered that perturbing the location points included in the original sample once may not result in a satisfactory target challenge sample. Therefore, iteration updating can be carried out for multiple times based on the original sample, multiple candidate confrontation samples are generated in the process, and after a new candidate confrontation sample is obtained each time, whether the candidate confrontation sample meets the requirements or not can be judged once, and whether the candidate confrontation sample can be used as a target confrontation sample or not can be judged.
In the embodiment of the present application, the determination as to whether a candidate can be used as the target confrontation sample is made according to the target similarity obtained based on the candidate confrontation sample.
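To make the overall flow of steps S110 to S130 concrete, the following Python sketch outlines the iterate-and-check loop; every helper callable is supplied by the caller and is an assumption of this sketch rather than an interface defined by this application.

```python
def generate_target_confrontation_sample(original, target, curvature_similarity,
                                          recognition_similarity, meets_first_condition,
                                          update_candidate, max_iters=1000):
    """Illustrative outer loop for steps S110-S130 (all helpers are caller-supplied)."""
    candidate = original  # the initial candidate confrontation sample may be the original sample
    for _ in range(max_iters):
        curv_sim = curvature_similarity(candidate, original)     # step S120: curvature similarity
        recog_sim = recognition_similarity(candidate, target)    # step S120: recognition similarity
        if meets_first_condition(curv_sim, recog_sim):           # step S130: first preset condition
            return candidate                                     # usable as the target confrontation sample
        candidate = update_candidate(candidate, original, target)  # otherwise keep updating
    return None  # no qualifying candidate within the iteration budget
```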
And step S120, acquiring the target similarity.
Wherein the target similarity comprises curvature similarity of the candidate confrontation sample and an original sample, and identification similarity of the candidate confrontation sample and a target sample.
In the embodiment of the application, the curvature similarity and the recognition similarity obtained based on the candidate confrontation sample must both meet their respective requirements before the candidate confrontation sample can be determined to be usable as the target confrontation sample. How to obtain the curvature similarity and the recognition similarity based on the candidate confrontation sample is described below:
for curvature similarity:
In the embodiment of the present application, after obtaining the candidate confrontation sample, it is necessary to determine whether the candidate confrontation sample meets the requirements, i.e., whether it can be used as the target confrontation sample. The goal of the embodiment of the application is to generate a three-dimensional confrontation sample whose disturbance is well hidden; in order to ensure that the target confrontation sample is better concealed and not easily perceived by human eyes, the embodiment of the application applies a curvature constraint, namely, the curvature similarity between the target confrontation sample and the original sample is required to be greater than a preset threshold. Therefore, in this step, each time a new candidate confrontation sample is obtained through updating, the curvature similarity between the new candidate confrontation sample and the original sample is obtained, so as to determine whether the new candidate confrontation sample meets the curvature constraint requirement.
Updating the original sample or the candidate confrontation sample means updating the position of one or more points in the sample, and different points in the sample may lie on a plane or on a curved surface of the target object's surface (that is, different points have different curvatures), so the effect of updating points with different curvatures may differ. In order to improve the update iteration efficiency, so that a confrontation sample meeting the requirements can be obtained with the minimum number of iterative updates, in the embodiment of the present application the points to be perturbed may be predetermined based on the original sample, that is, only the positions of the points to be perturbed are updated when the original sample is updated. Referring to fig. 4, fig. 4 shows a three-dimensional original sample of a roadblock; it can be seen that the points on the top surface of the roadblock are essentially not in the same plane as the points on the periphery of the roadblock and are less noticeable after being disturbed, so the points in the top area of the roadblock can be set as points to be disturbed.
It should be noted that, since a preset number of points to be disturbed have been determined in the original sample, the candidate countermeasure sample updated based on the original sample still includes the preset number of "points to be disturbed". The position of the "point to be perturbed" in the candidate confrontation sample may have changed compared to the point to be perturbed in the original sample. Thus, the "point to be perturbed" in the candidate countermeasure sample may be referred to as a perturbation point.
It can be understood that, although the position of a disturbance point may have changed compared with the corresponding point to be disturbed, the disturbance points still correspond to the points to be disturbed one to one; for example, if the original sample a includes points to be perturbed p1, q1 and r1, then the candidate confrontation sample a1 includes perturbation points p2, q2 and r2, where p2 corresponds to p1 and is obtained by updating the position of point p1, and the corresponding relationships of the other points are not repeated here. Thus, the curvature similarity between the original sample and the candidate confrontation sample can be determined based on the curvatures of the respective corresponding points in the original sample and the candidate confrontation sample.
In the embodiment of the application, the positions of some points in the original sample are changed to generate a confrontation sample whose appearance is inconsistent with that of the original sample; that is, the outer contour of the target confrontation sample is deformed relative to the original sample. It can be understood that the perturbation effect and the concealment after perturbation differ for points with different curvatures; for example, when a point on a plane (curvature of 0) changes position, whether it becomes convex or concave, the change is more abrupt, i.e., easier to detect, whereas a point on a curved surface (curvature not 0), if moved upward or downward by a slight distance, is not particularly abrupt and is not easy for the human eye to detect. Therefore, in the embodiment of the present application, the points to be perturbed of the original sample may be non-planar points of the original sample, that is, the curvature of a point to be perturbed is not 0.
Considering that the shape of an object may be more affected by the vertex, for example, if the vertex position of a triangular pyramid changes, the recognition result may be more affected, and therefore, in order to improve the perturbation efficiency, i.e., the efficiency of generating a countersample capable of successfully attacking, in one possible design, the point to be perturbed is the vertex of the original sample, i.e., the curvature of the point to be perturbed is not 0, and the first derivative of the curvature is 0.
It can be understood that, in one possible design, it is desirable to strike a balance between disturbance efficiency and concealment, that is, to iteratively obtain a target confrontation sample that can attack successfully as quickly as possible while also keeping the disturbance of the target confrontation sample well hidden. Considering that a change in the position of a point on a curved surface of the original sample that is not a vertex is not easily perceived by the naked eye, and that such a change also readily influences the recognition result, the non-planar, non-vertex points of the original sample can be set as the points to be perturbed, that is, the curvature of a point to be perturbed is not 0 and the first derivative of its curvature is also not 0.
It should be noted that the point to be perturbed determined in the embodiment of the present application may be not only a convex point on the surface of the target object, but also a concave point.
In one possible design, the point to be perturbed may also be randomly acquired; for example, a three-dimensional grid drawing based on the target object may be performed to obtain the original sample, where the three-dimensional grid itself includes a plurality of grid points, and thus, the grid points may be set as points to be perturbed.
It should be noted that if each face of the target object includes a curved surface, the points to be perturbed in the original sample necessarily include some points on the curved surface.
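A minimal sketch of the point-selection designs described above is given below, assuming that a per-point curvature estimate and the magnitude of its first derivative are already available for the original sample; how those quantities are estimated is outside the scope of this sketch.

```python
import numpy as np

def select_points_to_perturb(curvature, curvature_derivative, eps=1e-6, exclude_vertices=False):
    """Pick indices of points to be perturbed from per-point curvature estimates.

    curvature:            (n,) estimated curvature at each point of the original sample
    curvature_derivative: (n,) magnitude of the first derivative of the curvature at each point
    exclude_vertices:     if True, additionally require a non-zero first derivative,
                          i.e. keep only non-planar, non-vertex points
    """
    non_planar = np.abs(curvature) > eps  # curvature != 0: the point does not lie on a flat face
    if exclude_vertices:
        return np.where(non_planar & (np.abs(curvature_derivative) > eps))[0]
    return np.where(non_planar)[0]
```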
Based on curvatures of corresponding points (to-be-perturbed points and perturbed points) in an original sample and a candidate antagonistic sample, 3 methods for determining the similarity of the curvatures of the original sample and the candidate antagonistic sample are provided in the embodiment of the present application, and the following methods (1) to (3) are specifically described:
the method (1) may determine the curvature similarity of the original sample and the candidate confrontation sample based on the distance of the curvature between the point to be perturbed and the perturbed point in the preset norm.
Specifically, the curvature of each point to be disturbed in the original sample and of each disturbance point in the candidate confrontation sample can be obtained respectively; then each point to be disturbed and its corresponding disturbance point are taken as a pair, and the preset-norm distance between the curvatures of each pair is obtained. For example, the curvatures K(p1), K(q1) and K(r1) of the points to be perturbed in the original sample a may be obtained first, then the curvatures K(p2), K(q2) and K(r2) of the perturbation points in the candidate confrontation sample a1 may be obtained, and finally the L2 norm distance between the curvatures of each pair of corresponding points may be calculated:
D1 = ||K(p2) - K(p1)||₂;
D2 = ||K(q2) - K(q1)||₂;
D3 = ||K(r2) - K(r1)||₂;
the curvature similarity between the original sample a and the candidate confrontation sample a1 can be determined according to D1, D2 and D3, and the specific way may be to take the sum or average of three norm distances.
It should be noted that, although the embodiment of the present application describes how to calculate the curvature distance between two points by taking the L2 norm as an example, a person skilled in the art may also adopt other norms to calculate the curvature distance between two points according to actual needs, for example, the Lp norm, which is not limited in the embodiment of the present application.
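As an illustration of method (1) (and, via the optional squaring, of method (2) described next), the following sketch computes per-pair curvature distances and maps them to a similarity score; the mapping from distance to a score in (0, 1] is an assumption of this sketch, since the application only requires that the similarity decrease as the curvature distance grows.

```python
import numpy as np

def curvature_distance(curv_orig, curv_adv, reduce="mean", squared=False):
    """Distance between curvatures of corresponding points under the L2 norm.

    curv_orig: (n,) curvatures of the points to be perturbed in the original sample
    curv_adv:  (n,) curvatures of the corresponding perturbation points in the candidate sample
    """
    d = np.abs(np.asarray(curv_adv, float) - np.asarray(curv_orig, float))  # per-pair ||K(p2) - K(p1)||_2
    if squared:                      # squaring gives the method (2) variant
        d = d ** 2
    return float(d.mean() if reduce == "mean" else d.sum())

def curvature_similarity(curv_orig, curv_adv, **kwargs):
    # Assumed monotone mapping from curvature distance to a similarity score in (0, 1].
    return 1.0 / (1.0 + curvature_distance(curv_orig, curv_adv, **kwargs))
```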
The method (2) may determine the curvature similarity of the original sample and the candidate confrontation sample based on a square value of a distance of the curvature between the point to be perturbed and the perturbed point in a preset norm.
Considering that the norm distance between the curvatures of the corresponding points may not be significant enough to clearly express the curvature similarity between the original sample and the candidate confrontation sample, in one possible design the square of the preset-norm distance between the curvatures of each pair of corresponding points may be calculated, and the curvature similarity may be determined based on the sum or the average of these squared values. Determining the curvature similarity from the squared preset-norm distance makes small distance values (less than 1) even smaller and large distance values (greater than 1) more pronounced, so the computed curvature similarity represents the degree of deformation between the two samples more readily: similar curvatures appear closer together, while dissimilar curvatures appear farther apart.
The method (3) may determine the curvature similarity of the original sample and the candidate countermeasure sample based on the distance (or the square value of the preset norm distance) of the multiple curvatures between the point to be disturbed and the disturbance point at the preset norm.
Curvature can be calculated in a variety of ways, for example as approximate curvature, mean curvature or Gaussian curvature. In the embodiment of the present application, when the curvature of each point is calculated, at least one curvature calculation method is used consistently for both samples; that is, if only the approximate curvature is calculated for each point to be perturbed in the original sample, then only the approximate curvature is calculated for each perturbation point in the candidate confrontation sample, and the curvature similarity between the original sample and the candidate confrontation sample is then determined according to the preset-norm distance between each pair of approximate curvatures.
In order to make the disturbance of the target confrontation sample better hidden and less easily perceived, multiple curvature constraints can be set in the embodiment of the application; that is, when the curvature of each point is calculated, multiple curvature calculation methods are used, and the different curvatures between the candidate confrontation sample and the original sample obtained by each calculation method are each required to satisfy a preset distance limit. For example, an approximate curvature constraint, a mean curvature constraint and a Gaussian curvature constraint may be set simultaneously, so that when the curvature similarity is calculated, the approximate curvature similarity, the mean curvature similarity and the Gaussian curvature similarity between each pair of corresponding points are obtained, specifically:
The approximate curvatures {K1(p1), K1(q1), K1(r1)}, the mean curvatures {K2(p1), K2(q1), K2(r1)} and the Gaussian curvatures {K3(p1), K3(q1), K3(r1)} of the points to be perturbed in the original sample a may be obtained first; then the approximate curvatures {K1(p2), K1(q2), K1(r2)}, the mean curvatures {K2(p2), K2(q2), K2(r2)} and the Gaussian curvatures {K3(p2), K3(q2), K3(r2)} of the perturbation points in the candidate confrontation sample a1 may be obtained; finally, the L2 norm distances between the respective curvatures of each pair of corresponding points may be calculated:
D11 = ||K1(p2) - K1(p1)||₂;
D12 = ||K1(q2) - K1(q1)||₂;
D13 = ||K1(r2) - K1(r1)||₂;
D21 = ||K2(p2) - K2(p1)||₂;
D22 = ||K2(q2) - K2(q1)||₂;
D23 = ||K2(r2) - K2(r1)||₂;
D31 = ||K3(p2) - K3(p1)||₂;
D32 = ||K3(q2) - K3(q1)||₂;
D33 = ||K3(r2) - K3(r1)||₂;
According to D11, D12, D13, D21, D22, D23, D31, D32 and D33, the curvature similarity between the original sample a and the candidate confrontation sample a1 can be determined. Specifically, the approximate curvature similarity between the original sample a and the candidate confrontation sample a1 may be determined from D11, D12 and D13, the mean curvature similarity from D21, D22 and D23, and the Gaussian curvature similarity from D31, D32 and D33; it can then be determined whether all of these curvature similarities between the original sample a and the candidate confrontation sample a1 meet the requirements.
It should be noted that, although the way (3) takes the case that the various curvatures of the sets of points are at the preset norm distance as an example, how to determine the curvature similarity between the original sample and the candidate confrontation sample is described; in one possible design, the mode (2) and the mode (3) can be combined according to actual needs; that is, the curvature similarity between the original sample and the candidate confrontation sample is determined based on the square value of the various curvatures of the sets of points at the preset norm distance.
It can be understood that the specific way of calculating the curvature similarity according to the distance between a certain curvature among the plurality of sets of points and the preset norm may be a summation value or an average value, and will not be described herein again.
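The following sketch illustrates method (3), optionally combined with method (2): each curvature type is compared separately and every resulting similarity must clear the preset threshold. The curvature estimators themselves and the distance-to-similarity mapping are assumptions of this sketch.

```python
import numpy as np

def multi_curvature_similarities(curv_orig, curv_adv, squared=False):
    """curv_orig / curv_adv: dicts mapping a curvature type ("approximate", "mean",
    "gaussian") to the (n,) per-point values for the original sample and the
    candidate confrontation sample respectively."""
    sims = {}
    for name in ("approximate", "mean", "gaussian"):
        d = np.abs(np.asarray(curv_adv[name], float) - np.asarray(curv_orig[name], float))
        if squared:                                   # optional combination with method (2)
            d = d ** 2
        sims[name] = 1.0 / (1.0 + float(d.mean()))    # assumed distance-to-similarity mapping
    return sims

def all_curvature_constraints_met(sims, threshold=0.9):
    # Every curvature similarity must exceed the preset threshold (90% mirrors the example used later).
    return all(s > threshold for s in sims.values())
```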
Although how to determine the curvature similarity between the original sample and the candidate confrontation sample is specifically described in the embodiment of the present application through the modes (1) to (3), a person skilled in the art may also adopt other feasible modes to calculate the curvature similarity between the original sample and the candidate confrontation sample according to actual needs, and the key point of the embodiment of the present application is that curvature constraint is introduced in the iterative update process of the confrontation sample, so that the distance between the finally generated target confrontation sample and the original sample in the curvature sense is small, disturbance is not easily perceived by human eyes, and is more hidden in the physical world.
For recognition similarity:
in the embodiment of the application, on one hand, the generated target countermeasure sample is required to be more hidden and not easy to be found by human eyes, and on the other hand, the generated target countermeasure sample is required to be easily confused by an identification model, namely, the target countermeasure sample is identified as having the same identification result as the target sample. Therefore, it is also necessary to obtain the recognition similarity between the candidate confrontation sample and the target sample to determine whether the candidate confrontation sample is easily confused by the recognition model, such as being mistakenly recognized as having the same recognition result as the target sample.
In the embodiment of the application, whether the candidate confrontation sample can attack successfully or not can be determined through the identification similarity of the candidate confrontation sample and the target sample. The recognition similarity of the candidate confrontation sample and the target sample can be determined through a preset recognition model; for example, the candidate confrontation sample may be input into the recognition model, and the recognition model directly outputs the recognition similarity between the candidate confrontation sample and the target sample, or, considering that the recognition model usually performs recognition based on sample features, the features of the candidate confrontation sample may be extracted, and then the candidate confrontation sample and the features of the target sample are subjected to similarity comparison, so as to obtain the recognition similarity between the candidate confrontation sample and the target sample.
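As one possible realization of the feature-comparison option above, the sketch below scores recognition similarity as the cosine similarity between the recognition features that the preset recognition model extracts from the candidate confrontation sample and from the target sample; the concrete similarity measure is an assumption of this sketch.

```python
import numpy as np

def recognition_similarity(features_candidate, features_target):
    """Cosine similarity between two recognition-feature vectors (higher = more similar)."""
    a = np.asarray(features_candidate, dtype=float).ravel()
    b = np.asarray(features_target, dtype=float).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```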
Step S130, if the target similarity does not meet a first preset condition, updating the candidate countermeasure sample and the target similarity until the target similarity meets the first preset condition, and taking the candidate countermeasure sample when the target similarity meets the first preset condition as the target countermeasure sample.
The first preset condition comprises that the curvature similarity is larger than a preset threshold value, and the identification similarity meets a second preset condition.
Referring to fig. 5, in the embodiment of the present application, iterative updating is performed continuously based on the original sample until a target confrontation sample meeting requirements is obtained, and in the iterative updating process, an intermediate product obtained in each updating is called a candidate confrontation sample. After a new candidate confrontation sample is obtained by each update, it is required to determine whether the candidate confrontation sample meets the requirement, that is, whether the obtained target similarity meets a first preset condition, specifically, whether the curvature similarity between the candidate confrontation sample and the original sample is greater than the preset threshold and whether the identification similarity between the candidate confrontation sample and the target sample meets the second preset condition is determined.
In the embodiment of the application, at least two requirements need to be met, because it is hoped that the disturbance is not easily perceived by human eyes, that the confrontation sample is well hidden in the physical world, and that the attack success rate is high. The first requirement is that the disturbance of the target confrontation sample should not be easily detected by human eyes and should be well hidden in the physical world; the embodiment of the present application meets this requirement through the curvature-similarity constraint, specifically by requiring the curvature similarity between the target confrontation sample and the original sample to be greater than the preset threshold (for example, 90%). The second requirement is that the attack success rate of the target confrontation sample should be sufficiently high; the embodiment of the application enforces this through the identification similarity, specifically by requiring the identification similarity between the target confrontation sample and the target sample to meet the second preset condition. Since the counter-attack includes multiple attack modes, the specific content of the second preset condition depends on the attack mode.
Specifically, the counter-attack includes targeted attacks and untargeted attacks. An untargeted attack means that the recognition result of the recognition model on the target confrontation sample is different from its recognition result on the original sample (which in this case can also be regarded as the target sample). A targeted attack may mean that the recognition result of the recognition model on the target confrontation sample is a specific recognition result, which is the same as, or only slightly different from, the recognition result of the recognition model on the target sample.
Therefore, when the countermeasure attack is a targeted attack, the embodiment of the application may require that the recognition similarity between the target countermeasure sample and the target sample is greater than a first preset value (e.g., 80%), that is, the second preset condition may be that the recognition similarity is greater than the first preset value (e.g., 80%); when the countermeasure attack is a non-target attack, the embodiment of the present application may require that the recognition similarity between the target countermeasure sample and the target sample (original sample) is less than a second preset value (e.g., 30%), that is, the second preset condition may be that the recognition similarity is less than the second preset value (e.g., 30%).
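The first preset condition can then be checked as in the sketch below; the 90%, 80% and 30% defaults mirror the example values given above and are illustrative rather than prescribed.

```python
def meets_first_condition(curv_sim, recog_sim, targeted=True,
                          curvature_threshold=0.9, first_preset_value=0.8, second_preset_value=0.3):
    """Check the first preset condition for a candidate confrontation sample."""
    if curv_sim <= curvature_threshold:      # curvature similarity must exceed the preset threshold
        return False
    if targeted:                             # targeted attack: high similarity to the target sample
        return recog_sim > first_preset_value
    return recog_sim < second_preset_value   # untargeted attack: low similarity to the original sample
```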
Considering that two requirements are met at the same time, that is, two constraints are met at the same time, in order to more conveniently perform update iteration of the original sample (the candidate confrontation sample), an objective function can be constructed based on the two constraints, and the candidate confrontation sample is updated in a manner of continuously solving an extreme value of the objective function. Specifically, in one possible design, referring to fig. 6, step S130 may include the following steps S131-S133:
step S131, a first loss value is obtained based on the identification similarity of the candidate confrontation sample and the target sample.
In the embodiment of the application, the first loss value may be determined by a first loss function constructed in advance, which may take the candidate confrontation sample and the target sample as inputs and determine the recognition similarity between them; alternatively, the first loss function may be a classification loss function commonly used in recognition tasks, such as the cross-entropy loss. In one possible design, the first loss function may be C_Mis(M_adv, y), which determines the recognition-feature distance between the candidate confrontation sample and the target sample; for example, the distance in Euclidean space (inversely proportional to the recognition similarity) between the recognition features extracted by the recognition model from the two samples can be used, where M_adv is the candidate confrontation sample and y is the target sample.
Step S132, a second loss value is obtained based on the curvature similarity between the candidate confrontation sample and the original sample.
In the embodiment of the present application, the second loss value may be determined by a second loss function constructed in advance, and the second loss function may take the candidate confrontation sample and the original sample as inputs to determine the curvature similarity between the two samples. In one possible design, the second loss function may be:

C_Reg(M_adv, M) = (1/n) · Σ_{i=1..n} ‖ K(v_adv,i) − K(v_i) ‖²

wherein M_adv is the candidate confrontation sample, M is the original sample, K(v_adv,i) is the curvature of the i-th disturbance point v_adv,i in the candidate confrontation sample M_adv, K(v_i) is the curvature of the corresponding i-th to-be-disturbed point v_i in the original sample M, n is the number of to-be-disturbed points (equivalently, of disturbance points), and ‖·‖ is the preset norm. C_Reg(M_adv, M) therefore determines the curvature distance between the candidate confrontation sample and the original sample, and is inversely proportional to the curvature similarity.
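A corresponding minimal sketch of the curvature term, assuming the per-point curvatures of the two samples have already been estimated and are given as tensors of equal length; the choice of the squared L2 distance as the "preset norm" is an assumption of this sketch:

```python
import torch

def curvature_loss(curv_adv: torch.Tensor, curv_orig: torch.Tensor) -> torch.Tensor:
    """C_Reg(M_adv, M): average of the squared distances between the curvature of
    each disturbance point and that of its corresponding to-be-disturbed point;
    a smaller value corresponds to a higher curvature similarity."""
    # curv_adv, curv_orig: shape (n,), one scalar curvature per corresponding point.
    return ((curv_adv - curv_orig) ** 2).mean()
```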
Step S133, updating the candidate confrontation sample according to the first loss value and the second loss value, and reacquiring the first loss value and the second loss value based on the updated candidate confrontation sample until the sum of the first loss value and the second loss value reaches an extreme value.
When the sum of the first loss value and the second loss value reaches an extreme value (i.e., a maximum or a minimum), the target similarity meets the first preset condition.
Considering that the attack mode of the counterattack includes a targeted attack and an untargeted attack, and that the two attack modes impose different requirements on the target confrontation sample (mainly in the requirement on the recognition similarity), the differing requirements of the targeted attack and the untargeted attack on the recognition similarity are described below respectively.
When a target attack is carried out, the label type of the target sample is different from that of the original sample; the first loss value is inversely proportional to the recognition similarity; the second loss value is inversely proportional to the curvature similarity; and when the sum of the first loss value and the second loss value reaches a minimum value, the target similarity meets the first preset condition.
Specifically, if the attack mode is a targeted attack, the following first objective function may be constructed in advance:
L1(M_adv) = C_Mis(M_adv, y) + β · C_Reg(M_adv, M), which is minimized over the candidate confrontation sample M_adv
wherein β is the curvature constraint coefficient, which may be a predetermined hyper-parameter determined by a person skilled in the art according to the actual attack scenario; for example, if a stronger curvature constraint is desired, i.e., a higher curvature similarity between the confrontation sample and the original sample, a larger curvature constraint coefficient may be set, and vice versa, which will not be described in detail herein.
Then, the first objective function is iteratively optimized: by continuously solving for the candidate confrontation sample that minimizes the first objective function, a target confrontation sample meeting the requirement of the targeted attack can be generated.
When a non-target attack is carried out, the label category of the target sample is the same as that of the original sample; the first loss value is inversely proportional to the recognition similarity; the second loss value is proportional to the curvature similarity; and when the sum of the first loss value and the second loss value reaches a maximum value, the target similarity meets the first preset condition.
Specifically, if the attack mode is an untargeted attack, the following second objective function may be constructed in advance:
L2(M_adv) = C_Mis(M_adv, y) − β · C_Reg(M_adv, M), which is maximized over the candidate confrontation sample M_adv, where in the untargeted case the target sample y is the original sample
Then, the second objective function is iteratively optimized: by continuously solving for the candidate confrontation sample that maximizes the second objective function, a target confrontation sample meeting the requirement of the untargeted attack can be generated.
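For illustration, the two objective functions can be assembled from the loss sketches above as follows; recognition_loss and curvature_loss are the illustrative helpers sketched earlier (not names used in the original disclosure), and the sign handling simply expresses both cases as a quantity to minimize:

```python
import torch

def objective(model, m_adv, target, curv_adv, curv_orig,
              beta: float, targeted: bool) -> torch.Tensor:
    """First/second objective function built from the two loss terms.

    Targeted attack:   minimize  C_Mis + beta * C_Reg   (L1)
    Untargeted attack: maximize  C_Mis - beta * C_Reg   (L2)
    Both are returned as a quantity to minimize, so L2 is sign-flipped.
    """
    c_mis = recognition_loss(model, m_adv, target)
    c_reg = curvature_loss(curv_adv, curv_orig)
    if targeted:
        return c_mis + beta * c_reg
    return -(c_mis - beta * c_reg)
```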
After the objective function to be optimized is defined, at each time step at which the candidate confrontation sample is updated, the embodiment of the present application may determine how to update the candidate confrontation sample according to the objective function value, specifically as follows:
determining gradient change information of each disturbance point in the candidate confrontation sample according to the sum of the first loss value and the second loss value, and then updating the position of each disturbance point in the three-dimensional space according to the gradient change information of each disturbance point.
In the embodiment of the present application, determining the gradient change information of each perturbation point in the candidate confrontation sample based on the sum of the first loss value and the second loss value may be done by first determining the sum of the loss values, for example the first objective function value or the second objective function value, and then calculating the partial derivative of that sum with respect to each perturbation point as the gradient change information of that perturbation point. For example, suppose the candidate confrontation sample a1 includes disturbance points p2, q2 and r2, confrontation sample generation for a targeted attack is performed based on the candidate confrontation sample a1, the original sample a and the target sample b, and the first objective function value at the current time step is determined to be L1(1); the gradient change information gp of disturbance point p2, gq of disturbance point q2 and gr of disturbance point r2 can then be calculated respectively, each disturbance point can be updated based on its gradient change information, and the candidate confrontation sample a2 is thereby obtained.
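A minimal sketch of this gradient step using automatic differentiation (PyTorch is an illustrative choice; the fixed step size lr is an assumption of this sketch and is not specified in the description):

```python
import torch

def update_points(points_adv: torch.Tensor, loss: torch.Tensor,
                  lr: float = 1e-3) -> torch.Tensor:
    """points_adv: (n, 3) coordinates of the disturbance points, requires_grad=True.
    loss: objective value of the current time step, computed from points_adv.
    The gradient with respect to each point gives its update direction in
    three-dimensional space; each point is moved a small step against it."""
    grad, = torch.autograd.grad(loss, points_adv)
    with torch.no_grad():
        new_points = points_adv - lr * grad
    return new_points.requires_grad_(True)
```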
It should be noted that, in this embodiment of the present application, the gradient change information of a target disturbance point at least includes an update direction of the target disturbance point in three-dimensional space, where the target disturbance point is any disturbance point in the candidate confrontation sample; that is, the position of each disturbance point in three-dimensional space can be updated according to its gradient change information so as to update the candidate confrontation sample. The update direction may be, for example, up, down, left, right, forward or backward in three-dimensional space, rather than only the positive and negative directions available to a two-dimensional adversarial perturbation when an image is updated.
In this embodiment of the application, if it is determined that the candidate confrontation sample obtained by the update at the current time step meets the requirements, that is, the curvature similarity between the candidate confrontation sample and the original sample is greater than the preset threshold and the identification similarity between the candidate confrontation sample and the target sample meets the second preset condition, the loop iteration process may be ended. In this case, the current time step is the last time step, and the candidate confrontation sample obtained by the update at the current time step is the target confrontation sample.
It is to be understood that, although the embodiment of the present application uses the values of the curvature similarity between the candidate confrontation sample and the original sample and the recognition similarity between the candidate confrontation sample and the target sample as the update stop condition, the present application is not limited thereto. In other possible designs, the stop condition may also be that the update iterations reach a preset number of times, for example 100 times.
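Putting the pieces together, the overall update loop could be sketched as follows; estimate_curvature, curvature_similarity and recognition_similarity are placeholders for routines the description leaves unspecified, and the iteration cap of 100 mirrors the example above:

```python
import torch

def generate_adversarial(model, points_orig, target, targeted,
                         beta=0.1, lr=1e-3, max_iters=100):
    """Iteratively update the candidate confrontation sample until the first
    preset condition is met or a preset number of iterations is reached."""
    curv_orig = estimate_curvature(points_orig)        # placeholder helper
    points_adv = points_orig.clone().requires_grad_(True)
    for _ in range(max_iters):
        curv_adv = estimate_curvature(points_adv)      # must be differentiable
        loss = objective(model, points_adv, target, curv_adv, curv_orig,
                         beta, targeted)
        curv_sim = curvature_similarity(curv_adv, curv_orig)          # placeholder
        rec_sim = recognition_similarity(model, points_adv, target)   # placeholder
        if meets_first_condition(curv_sim, rec_sim, targeted):
            break
        points_adv = update_points(points_adv, loss, lr)
    return points_adv.detach()
```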
After the target confrontation sample is obtained, the recognition model can be tested directly in the digital world. For example, an untargeted roadblock confrontation sample can be generated based on an original roadblock sample, i.e., a roadblock confrontation sample that the recognition model cannot identify; then, in an automatic-driving test system simulating a road environment, the roadblock is replaced with the roadblock confrontation sample, and it is tested whether the recognition model of the automatic-driving vehicle can still correctly identify the replacement, so that the vehicle avoids the roadblock confrontation sample while driving instead of colliding with it.
After the target confrontation sample is obtained, it can also be materialized, for example by means of 3D printing, and the materialized target confrontation sample can then be used to perform an attack test on a recognition model in the physical world, measure the security of the recognition model, and determine its vulnerabilities so as to optimize it.
For example, the target confrontation samples generated in the embodiment of the present application can be used to measure the safety of the road-condition recognition model of an automatic driving system. Specifically, a three-dimensional roadblock confrontation sample may be generated by the method of the embodiment of the present application in the untargeted attack mode, i.e., a sample that cannot be identified by the recognition model; the three-dimensional roadblock confrontation sample can then be placed in an automatic-driving test site, for example in the middle of the road on the driving route of an automatic-driving vehicle, to test whether the vehicle can correctly recognize the three-dimensional roadblock confrontation sample and thus avoid colliding with it while driving.
It should be noted that, although the target confrontation sample is used as the output result in the embodiment of the present application, in some other possible designs the counter-disturbance may be output instead, which facilitates materialization and attachment to the object to be attacked. For example, taking the measurement of the safety of the road-condition recognition model of an automatic driving system as an example, after the three-dimensional roadblock confrontation sample is obtained by the method of the embodiment of the present application, it may be compared with the original roadblock sample to obtain the difference between them, which is the counter-disturbance; the counter-disturbance is then materialized and attached to the original roadblock to test the safety of the road-condition recognition model of the automatic driving system. Because the volume of the materialized counter-disturbance is obviously smaller than that of a materialized confrontation sample, materials can be saved during materialization and the workload of materialization is smaller, thereby saving time.
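For illustration, extracting that counter-disturbance is a simple per-point difference, assuming the confrontation sample and the original sample are stored as point-coordinate tensors with corresponding point ordering (an assumption of this sketch):

```python
import torch

def counter_disturbance(points_adv: torch.Tensor,
                        points_orig: torch.Tensor) -> torch.Tensor:
    """Per-point displacement (n, 3) between the target confrontation sample and
    the original sample; this difference is what would be materialized and
    attached to the original object."""
    return points_adv - points_orig
```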
In the confrontation sample generation method of the embodiment of the application, in the process of gradually updating the candidate confrontation sample based on the original sample and finally obtaining the target confrontation sample, both the identification similarity between the candidate confrontation sample and the target sample and the curvature similarity between the candidate confrontation sample and the original sample are considered, so that the finally generated target confrontation sample has a high identification similarity with the target sample and a high curvature similarity with the original sample. Because the generation process of the target confrontation sample is constrained by curvature, that is, the curvature similarity between the finally generated target confrontation sample and the original sample is high, the deformation of the target confrontation sample relative to the original sample is small; the target confrontation sample is therefore better concealed and not easily perceived by the naked human eye.
Having described the method of the embodiment of the present application, a data processing apparatus of the embodiment of the present application is described next with reference to fig. 7. The apparatus may also be applied to the server 01 shown in fig. 1, and the apparatus 60 includes:
an input-output module 601 configured to obtain candidate confrontation samples;
a processing module 602 configured to obtain a target similarity, where the target similarity includes a curvature similarity between the candidate confrontation sample and an original sample, and an identification similarity between the candidate confrontation sample and the target sample, and the original sample at least includes a set of points on each face of the target object; the target sample is determined based on an attack target for resisting the attack;
the processing module 602 is further configured to, if the target similarity does not meet a first preset condition, update the candidate confrontation sample and the target similarity until the target similarity meets the first preset condition, and use the candidate confrontation sample when the target similarity meets the first preset condition as the target confrontation sample;
the first preset condition comprises that the curvature similarity is larger than a preset threshold value, and the identification similarity meets a second preset condition.
The input-output module 601 is further configured to output the target countermeasure sample to perform attack testing on the recognition model of the digital world.
The input-output module 601 is further configured to materialize the target confrontation sample to perform an attack test on a recognition model of the physical world.
The input-output module 601 is further configured to output the counter-disturbance between the target confrontation sample and the original sample and attach it to the original sample in the digital world, so as to perform an attack test on the recognition model of the digital world; or to materialize the counter-disturbance and attach it to the physical object corresponding to the original sample, so as to perform an attack test on the recognition model of the physical world.
In one embodiment, the original sample includes a preset number of to-be-disturbed points, the candidate countermeasure sample includes a preset number of disturbed points, and the disturbed points are in one-to-one correspondence with the to-be-disturbed points;
the processing module 602 is further configured to obtain the curvature of each to-be-perturbed point and the curvature of each perturbation point respectively, and to determine the curvature similarity based on a preset norm distance of the curvature between each to-be-perturbed point and the corresponding perturbation point;
the processing module 602 is further configured to update the position of the perturbation point in the candidate confrontation sample in the three-dimensional space.
In one embodiment, one to-be-perturbed point and its corresponding perturbation point form a group, and the processing module 602 is further configured to obtain, for each group, the square value of the preset norm distance between the curvature of the to-be-perturbed point and the curvature of the perturbation point, and to determine the curvature similarity based on the average value of the respective squared values.
In one embodiment, the point to be perturbed is obtained in a random manner;
or the curvature of the point to be perturbed satisfies one of the following (an illustrative selection sketch is given after this list):
the curvature of the point to be disturbed is not 0;
the curvature of the point to be perturbed is not 0 and the first derivative of the curvature is 0;
the curvature of the point to be perturbed is not 0 and the first derivative of curvature is not 0.
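The following minimal sketch illustrates selecting to-be-perturbed points either at random or under the simplest of these criteria (non-zero curvature); the array layout, the tolerance eps and the function name are assumptions of this sketch:

```python
import numpy as np

def select_points(curvatures: np.ndarray, num_points: int,
                  random_mode: bool = False, eps: float = 1e-8) -> np.ndarray:
    """Return indices of the to-be-perturbed points.

    curvatures:  (N,) per-vertex curvature of the original sample.
    random_mode: if True, pick points uniformly at random; otherwise restrict
                 the choice to points whose curvature is not 0.
    """
    if random_mode:
        return np.random.choice(len(curvatures), size=num_points, replace=False)
    candidates = np.flatnonzero(np.abs(curvatures) > eps)   # curvature != 0
    return np.random.choice(candidates, size=num_points, replace=False)
```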
In one embodiment, the processing module 602 is further configured to obtain a first loss value based on the identified similarity between the candidate confrontation sample and the target sample; obtaining a second loss value based on the curvature similarity of the candidate confrontation sample and the original sample; updating the candidate confrontation sample according to the first loss value and the second loss value, and re-acquiring the first loss value and the second loss value based on the updated candidate confrontation sample until the sum of the first loss value and the second loss value reaches an extreme value;
when the sum of the first loss value and the second loss value reaches an extreme value, the target similarity meets the first preset condition.
In one embodiment, the extremum is a maximum or a minimum; the counter attack comprises at least one of a targeted attack and a non-targeted attack;
when a target attack is carried out, the label category of the target sample is different from that of the original sample; the first loss value is inversely proportional to the recognition similarity; the second loss value is inversely proportional to the curvature similarity; when the sum of the first loss value and the second loss value reaches a minimum value, the target similarity meets the first preset condition;
when the non-target attack is carried out, the label categories of the target sample and the original sample are the same; the first loss value is inversely proportional to the recognition similarity; the second loss value is proportional to the curvature similarity; and when the sum of the first loss value and the second loss value reaches a maximum value, the target similarity meets the first preset condition.
In one embodiment, the processing module 602 is further configured to determine gradient change information of each perturbation point in the candidate confrontation sample according to a sum of the first loss value and the second loss value, where the gradient change information of a target perturbation point at least includes an update direction of the target perturbation point in a three-dimensional space, and the target perturbation point is any perturbation point in the candidate confrontation sample; and updating the position of each disturbance point in the three-dimensional space according to the gradient change information of each disturbance point so as to update the candidate confrontation sample.
In the data processing apparatus of the embodiment of the application, in the process of gradually updating the candidate confrontation sample based on the original sample and finally obtaining the target confrontation sample, both the identification similarity between the candidate confrontation sample and the target sample and the curvature similarity between the candidate confrontation sample and the original sample are considered, so that the finally generated target confrontation sample has a high identification similarity with the target sample and a high curvature similarity with the original sample. Because the generation process of the target confrontation sample is constrained by curvature, that is, the curvature similarity between the finally generated target confrontation sample and the original sample is high, the deformation of the target confrontation sample relative to the original sample is small; the target confrontation sample is therefore better concealed and not easily perceived by the naked human eye.
Having described the method and apparatus of the present application, a computer-readable storage medium is described next. The storage medium may be, for example, an optical disc having a computer program (i.e., a program product) stored thereon; when executed by a processor, the computer program implements the steps described in the above method embodiments, for example: acquiring a candidate confrontation sample; acquiring a target similarity, wherein the target similarity comprises the curvature similarity between the candidate confrontation sample and an original sample and the identification similarity between the candidate confrontation sample and the target sample, the original sample at least comprises a set of points on each surface of the target object, and the target sample is determined based on an attack target of the counterattack; if the target similarity does not meet a first preset condition, updating the candidate confrontation sample and the target similarity until the target similarity meets the first preset condition, and taking the candidate confrontation sample when the target similarity meets the first preset condition as the target confrontation sample; the first preset condition comprises that the curvature similarity is greater than a preset threshold and the identification similarity meets a second preset condition. The specific implementation of each step is not repeated here.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, a phase change memory (PRAM), a Static Random Access Memory (SRAM), a Dynamic Random Access Memory (DRAM), other types of Random Access Memories (RAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a flash memory, or other optical and magnetic storage media, which are not described in detail herein.
The data processing apparatus 60 in the embodiment of the present application is described above from the perspective of a modular functional entity, and the server and the terminal device that execute the countermeasure sample generation method in the embodiment of the present application are described below from the perspective of hardware processing.
It should be noted that, in the embodiment of the data processing apparatus of the present application, the entity device corresponding to the input-output module 601 shown in fig. 7 may be an input/output unit, a transceiver, a radio frequency circuit, a communication module, an input/output (I/O) interface, and the like, and the entity device corresponding to the processing module 602 may be a processor. The data processing apparatus 60 shown in fig. 7 may have the structure shown in fig. 8; in that case, the processor and the transceiver in fig. 8 can implement functions the same as or similar to those of the processing module 602 and the input-output module 601 provided in the corresponding apparatus embodiment, and the memory in fig. 8 stores the computer program that the processor needs to call when executing the above confrontation sample generation method.
As shown in fig. 9, for convenience of description, only the portions related to the embodiments of the present application are shown; for specific technical details that are not disclosed, please refer to the method portion of the embodiments of the present application. The terminal device may be any terminal device including a mobile phone, a tablet computer, a personal digital assistant (PDA), a point-of-sale (POS) terminal, a vehicle-mounted computer, and the like; the following description takes a mobile phone as an example:
fig. 9 is a block diagram illustrating a partial structure of a mobile phone related to a terminal device provided in an embodiment of the present application. Referring to fig. 9, the handset includes: radio Frequency (RF) circuit 1010, memory 1020, input unit 1030, display unit 1040, sensor 1050, audio circuit 1060, wireless fidelity (WiFi) module 1070, processor 1080, and power source 1090. Those skilled in the art will appreciate that the handset configuration shown in fig. 9 is not intended to be limiting and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile phone in detail with reference to fig. 9:
The RF circuit 1010 may be used for receiving and transmitting signals during information transmission and reception or during a call; in particular, after downlink information of a base station is received, it is delivered to the processor 1080 for processing, and uplink data is transmitted to the base station. In general, the RF circuit 1010 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 1010 may communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 1020 may be used to store software programs and modules, and the processor 1080 executes various functional applications and data processing of the mobile phone by operating the software programs and modules stored in the memory 1020. The memory 1020 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 1020 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The input unit 1030 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone. Specifically, the input unit 1030 may include a touch panel 1031 and other input devices 1032. The touch panel 1031, also referred to as a touch screen, may collect touch operations by a user (e.g., operations by a user on or near the touch panel 1031 using any suitable object or accessory such as a finger, a stylus, etc.) and drive corresponding connection devices according to a preset program. Alternatively, the touch panel 1031 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 1080, and can receive and execute commands sent by the processor 1080. In addition, the touch panel 1031 may be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 1030 may include other input devices 1032 in addition to the touch panel 1031. In particular, other input devices 1032 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a track ball, a mouse, a joystick, or the like.
The display unit 1040 may be used to display information input by a user or information provided to the user and various menus of the cellular phone. The display unit 1040 may include a display panel 1041, and optionally, the display panel 1041 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 1031 can cover the display panel 1041, and when the touch panel 1031 detects a touch operation on or near the touch panel 1031, the touch operation is transmitted to the processor 1080 to determine the type of the touch event, and then the processor 1080 provides a corresponding visual output on the display panel 1041 according to the type of the touch event. Although in fig. 9, the touch panel 1031 and the display panel 1041 are two independent components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 1031 and the display panel 1041 may be integrated to implement the input and output functions of the mobile phone.
The handset may also include at least one sensor 1050, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 1041 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 1041 and/or the backlight when the mobile phone moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the gesture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The audio circuit 1060, the speaker 1061 and the microphone 1062 may provide an audio interface between the user and the mobile phone. The audio circuit 1060 can transmit the electrical signal converted from received audio data to the speaker 1061, where it is converted into a sound signal and output; on the other hand, the microphone 1062 converts a collected sound signal into an electrical signal, which is received by the audio circuit 1060 and converted into audio data; the audio data is then output to the processor 1080 for processing and subsequently sent, via the RF circuit 1010, to, for example, another mobile phone, or output to the memory 1020 for further processing.
WiFi belongs to short-range wireless transmission technology, and a mobile phone can help a user to send and receive e-mail, browse a web page, access streaming media, etc. through the Wi-Fi module 1070, which provides wireless broadband internet access for the user. Although fig. 9 shows the Wi-Fi module 1070, it is understood that it does not belong to the essential constitution of the cellular phone and can be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 1080 is a control center of the mobile phone, connects various parts of the whole mobile phone by using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 1020 and calling data stored in the memory 1020, thereby integrally monitoring the mobile phone. Optionally, processor 1080 may include one or more processing units; optionally, processor 1080 may integrate an application processor, which primarily handles operating systems, user interfaces, application programs, etc., and a modem processor, which primarily handles wireless communications. It is to be appreciated that the modem processor described above may not be integrated into processor 1080.
The handset also includes a power source 1090 (e.g., a battery) for powering the various components, which may optionally be logically coupled to the processor 1080 via a power management system to manage charging, discharging, and power consumption via the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which will not be described herein.
In the embodiment of the present application, the processor 1080 included in the mobile phone can also control execution of the above-described method flow in which the recognition device recognizes the candidate confrontation sample.
Referring to fig. 10, fig. 10 is a schematic structural diagram of a server provided in the embodiment of the present application. The server 1100 may vary considerably in configuration or performance, and may include one or more central processing units (CPUs) 1122 (e.g., one or more processors), a memory 1132, and one or more storage media 1130 (e.g., one or more mass storage devices) storing an application program 1142 or data 1144. The memory 1132 and the storage medium 1130 may provide transient storage or persistent storage. The program stored on the storage medium 1130 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Furthermore, the central processing unit 1122 may be configured to communicate with the storage medium 1130 so as to execute, on the server 1100, the series of instruction operations stored in the storage medium 1130.
The server 1100 may also include one or more power supplies 1120, one or more wired or wireless network interfaces 1150, one or more input-output interfaces 1158, and/or one or more operating systems 1141, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like.
The steps performed by the server in the above embodiments may be based on the structure of the server 1100 shown in fig. 10. For example, the steps performed by the data processing apparatus 60 shown in fig. 7 in the above-described embodiment may be based on the server structure shown in fig. 10. For example, the central processing unit 1122, by calling the instructions in the memory 1132, performs the following operations:
obtaining candidate confrontation samples through input-output interface 1158;
obtaining target similarity, wherein the target similarity comprises curvature similarity of the candidate confrontation sample and an original sample, and identification similarity of the candidate confrontation sample and the target sample, and the original sample at least comprises a set of points on each surface of a target object; the target sample is determined based on an attack target for resisting the attack;
the candidate confrontation sample may be transmitted to the recognition device, for example, through the input-output interface 1158, to obtain the recognition similarity of the candidate confrontation sample and the target sample;
if the target similarity does not meet a first preset condition, updating the candidate confrontation sample and the target similarity until the target similarity meets the first preset condition, and taking the candidate confrontation sample when the target similarity meets the first preset condition as the target confrontation sample;
the first preset condition comprises that the curvature similarity is larger than a preset threshold value, and the identification similarity meets a second preset condition.
The target countermeasure sample can also be output through the input-output interface 1158 so as to be materialized, attack is carried out on the target model in the physical world, and the safety of the target model is measured.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the apparatus and the module described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the embodiments of the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one position, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may be stored in a computer readable storage medium.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. The procedures or functions described in accordance with the embodiments of the present application are generated in whole or in part when the computer program is loaded and executed on a computer. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one web site, computer, server, or data center to another web site, computer, server, or data center via a wired connection (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or a wireless connection (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that a computer can access, or a data storage device, such as a server or a data center, integrated with one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
The technical solutions provided by the embodiments of the present application have been introduced in detail above. The principles and implementations of the embodiments of the present application are explained herein by means of specific examples, and the descriptions of the embodiments are only intended to help understand the method and core ideas of the embodiments of the present application. Meanwhile, a person skilled in the art may, according to the ideas of the embodiments of the present application, make changes to the specific implementations and the application scope. In summary, the content of this specification should not be construed as limiting the embodiments of the present application.

Claims (9)

1. A method of confrontational sample generation, the method comprising:
acquiring a candidate confrontation sample;
acquiring target similarity, wherein the target similarity comprises curvature similarity of the candidate confrontation sample and an original sample, and identification similarity of the candidate confrontation sample and the target sample, and the original sample at least comprises a set of points on each surface of the target object; the target sample is determined based on an attack target for resisting the attack;
if the target similarity does not meet a first preset condition, updating the candidate confrontation sample and the target similarity until the target similarity meets the first preset condition, and taking the candidate confrontation sample when the target similarity meets the first preset condition as the target confrontation sample;
the first preset condition comprises that the curvature similarity is larger than a preset threshold value, and the identification similarity meets a second preset condition;
wherein the original sample, the candidate confrontation sample and the target confrontation sample are all three-dimensional images;
the original sample comprises a preset number of disturbance points to be disturbed, the candidate confrontation sample comprises a preset number of disturbance points, and the disturbance points are in one-to-one correspondence with the disturbance points to be disturbed;
obtaining curvature similarity of the candidate confrontation sample and the original sample, including:
respectively acquiring the curvature of each to-be-perturbed point and the curvature of each perturbed point;
determining the curvature similarity based on a preset norm distance of the curvature between each disturbance point to be disturbed and the corresponding disturbance point;
the updating the candidate confrontation sample includes:
and updating the position of the disturbance point in the candidate confrontation sample in the three-dimensional space.
2. The method of claim 1, wherein one to-be-perturbed point is grouped with a corresponding perturbation point, and the determining the curvature similarity based on a preset norm distance of curvature between each to-be-perturbed point and the corresponding perturbation point comprises:
acquiring a square value of a preset norm distance between each group of points to be disturbed and the curvature of the disturbed point;
determining the curvature similarity based on an average of the respective squared values.
3. The method of claim 1, wherein the point to be perturbed is obtained randomly;
or the curvature of the point to be disturbed satisfies one of the following items:
the curvature of the point to be perturbed is not 0;
the curvature of the point to be perturbed is not 0 and the first derivative of the curvature is 0;
the curvature of the point to be perturbed is not 0 and the first derivative of curvature is not 0.
4. The method of any one of claims 1-3, wherein if the target similarity does not meet a first predetermined condition, updating the candidate confrontation sample and the target similarity until the target similarity meets the first predetermined condition comprises:
obtaining a first loss value based on the identification similarity of the candidate confrontation sample and the target sample;
obtaining a second loss value based on the curvature similarity of the candidate confrontation sample and the original sample;
updating the candidate confrontation sample according to the first loss value and the second loss value, and re-acquiring the first loss value and the second loss value based on the updated candidate confrontation sample until the sum of the first loss value and the second loss value reaches an extreme value;
when the sum of the first loss value and the second loss value reaches an extreme value, the target similarity accords with the first preset condition.
5. The method of claim 4, wherein the extremum is a maximum or a minimum; the counter attack comprises at least one of a targeted attack and a non-targeted attack;
when a target attack is carried out, the label types of the target sample and the original sample are different; the first loss value is inversely proportional to the recognition similarity; the second loss value is inversely proportional to the curvature similarity; when the sum of the first loss value and the second loss value reaches a minimum value, the target similarity meets the first preset condition;
when a non-target attack is carried out, the label category of the target sample is the same as that of the original sample; the first loss value is inversely proportional to the recognition similarity; the second loss value is proportional to the curvature similarity; and when the sum of the first loss value and the second loss value reaches a maximum value, the target similarity meets the first preset condition.
6. The method of claim 4, wherein the updating the candidate challenge sample comprises:
determining gradient change information of each disturbance point in the candidate confrontation sample according to the sum of the first loss value and the second loss value, wherein the gradient change information of a target disturbance point at least comprises an update direction of the target disturbance point in a three-dimensional space, and the target disturbance point is any disturbance point in the candidate confrontation sample;
and updating the position of each disturbance point in the three-dimensional space according to the gradient change information of each disturbance point so as to update the candidate confrontation sample.
7. A data processing apparatus comprising:
an input-output module configured to obtain candidate confrontation samples;
the processing module is configured to obtain target similarity, wherein the target similarity comprises curvature similarity of the candidate confrontation sample and an original sample, and identification similarity of the candidate confrontation sample and the target sample, and the original sample at least comprises a set of points on each surface of the target object; the target sample is determined based on an attack target for resisting the attack;
the processing module is further configured to update the candidate confrontation sample and the target similarity if the target similarity does not meet a first preset condition until the target similarity meets the first preset condition, and take the candidate confrontation sample with the target similarity meeting the first preset condition as the target confrontation sample;
the original sample comprises a preset number of disturbance points to be disturbed, the candidate confrontation sample comprises a preset number of disturbance points, and the disturbance points are in one-to-one correspondence with the disturbance points to be disturbed;
the processing module is further configured to obtain a curvature similarity of the candidate confrontation sample and the original sample by: respectively acquiring the curvature of each to-be-perturbed point and the curvature of each perturbation point; determining the curvature similarity based on a preset norm distance of the curvature between each to-be-perturbed point and the corresponding perturbation point;
the processing module further configured to update the candidate confrontation sample by: updating the position of a disturbance point in the candidate confrontation sample in a three-dimensional space;
the first preset condition comprises that the curvature similarity is larger than a preset threshold value, and the identification similarity meets a second preset condition;
wherein the original sample, the candidate challenge sample and the target challenge sample are all three-dimensional images.
8. A computing device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1-6 when executing the computer program.
9. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the method of any of claims 1-6.