CN115392434A - Depth model reinforcement method based on graph structure variation test - Google Patents


Info

Publication number
CN115392434A
CN115392434A
Authority
CN
China
Prior art keywords: model, test, mutation, original target, operator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210953766.6A
Other languages
Chinese (zh)
Inventor
陈晋音
陈宇冲
贾澄钰
倪洪杰
赵云波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT
Priority to CN202210953766.6A
Publication of CN115392434A
Legal status: Withdrawn


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Prevention of errors by analysis, debugging or testing of software
    • G06F 11/3668 Testing of software
    • G06F 11/3672 Test management
    • G06F 11/3688 Test management for test execution, e.g. scheduling of test suites
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/061 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using biological neurons, e.g. biological neurons connected to an integrated circuit
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Neurology (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a deep model hardening method based on graph-structure mutation testing. Different mutation operators are injected into a deep learning system to construct different mutation test models; the degree to which the mutations are detected is analyzed to evaluate the quality of the test models and rank them. After the models are mapped into graph structures, graph network indices with guiding significance are identified from the ranking results; a new robust graph structure is then generated and reconstructed back into a model, thereby improving the robustness of the model.

Description

A Deep Model Hardening Method Based on Graph-Structure Mutation Testing

Technical Field

The invention relates to the fields of distributed machine learning and artificial-intelligence security, and in particular to a deep model hardening method based on graph-structure mutation testing.

Background

Over the past few decades, deep learning has achieved great success in many fields, including autonomous driving, artificial intelligence, and video surveillance. However, a recent series of catastrophic accidents related to deep learning has made the robustness and safety of deep learning systems a serious concern. In particular, the adversarial test generation technique recently proposed by Christian et al., which adds a misleading perturbation imperceptible to humans to an original sample image, has further aggravated the problem. Such newly generated test samples are called adversarial examples, and they pose a potential security threat to deep learning systems such as face recognition, automatic verification, and autonomous driving systems.

In the past few years, a large number of defense methods have been proposed to improve model robustness against adversarial examples and avoid potential dangers in real-world applications. These methods can be broadly categorized into adversarial training, input transformation, model-architecture transformation, and adversarial-example detection. However, most of them operate on the pixel space of the input image; few analyze the impact of adversarial perturbations by studying the interlayer structure of the model. This is because, although it is generally accepted that the performance of a neural network depends on its parameters and training samples, there is no systematic understanding of the relationship between a network's accuracy and its underlying graph structure. In a recent study, You et al. proposed a new way to represent neural networks as graphs, called relational graphs, which focus on the exchange of information rather than just directed data flow. However, this method can only build an originally robust model; once the model is attacked by adversarial examples, it is difficult to explain and defend it from a fine-grained perspective. At the same time, because of the unique characteristics of deep learning systems, new quality-evaluation criteria are needed to guide them. Recently, Ma et al. proposed a mutation testing framework specifically for deep learning systems: by defining a set of source-level mutation operators and a set of model-level mutation operators, faults are injected into the deep learning system, and the quality of the test data is then evaluated by analyzing the degree to which the injected faults are detected. However, this kind of mutation testing only detects faults in local regions and cannot provide a guiding optimization method for the overall structure of the model.

More recently, Laura et al. conducted an exploratory experiment on underlying graph-structure indices of simulated human brain networks. They simulated the network structure of the human brain using network topology and modular-organization computation methods, drew a brain structure graph, and collected indices including characteristic path length and clustering coefficient to guide the research. However, these graph indices lack multi-level, multi-perspective study of complex deep learning networks, and lack an interpretable theoretical basis for changes in network robustness.

To address the above problems, the invention proposes a method that injects different mutation operators into a deep learning system to construct different mutation test models, evaluates model quality by analyzing the degree to which the mutations are detected so as to rank the models, and finally, after mapping the models into graph structures, finds graph network indices with guiding significance from the ranking results, thereby generating a new robust graph structure that is reconstructed back into a model to improve its robustness.

Summary of the Invention

To further explore the relationship between multi-level, multi-perspective complex network structure and model robustness, and to provide a guiding, fine-grained explanation for the graph-structure representation of a network, the invention proposes a deep model hardening method based on graph-structure mutation testing: different mutation operators are injected into a deep learning system to construct different mutation test models; the degree to which the mutations are detected is analyzed to evaluate the quality of the test models and rank them; after the models are mapped into graph structures, instructive graph-structure indices are identified from the ranking results to generate a new robust graph structure that is reconstructed back into a model, ultimately improving the robustness of the model and hardening it.

To achieve the above object, the invention provides the following technical solution:

A deep model hardening method based on graph-structure mutation testing, comprising:

1) obtaining training data and test data;

2) constructing an original target model, and training it on the obtained training data;

3) constructing mutation test models of the original target model, including mutation test models generated with source-level mutation operators and/or model-level mutation operators; mutation test models generated with model-level mutation operators are already trained and are saved directly, while untrained mutation test models generated with source-level mutation operators are trained on the obtained training data with the same parameters as the original target model and then saved;

4) performing mutation detection on the mutation test models with the test data, and computing the mutation score of each kind of mutation test model:

$$\mathrm{MutationScore}(T',M') = \frac{\sum_{m'\in M'}\left|\mathrm{KilledClasses}(T',m')\right|}{|M'|\times K}$$

where M′ is the set of mutation test models of a given kind, m′ indexes a mutation test model within M′, K is the number of label classes in the test data, and KilledClasses(T′, m′) denotes the set of test-data classes in T′ that kill mutation test model m′. For a test data point t in T′, t is said to kill class x_i of mutation test model m′ if the following conditions are satisfied:

I. t is correctly classified as x_i by the original target model m;

II. t is not correctly classified as x_i by the mutation test model m′; here the test data are divided into K classes by label, and x_i denotes the set of test data whose label class is i.
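The kill condition and mutation score above can be sketched in a few lines of Python (the class-prediction lists and toy model names below are purely illustrative):

```python
def killed_classes(orig_pred, mut_pred, labels):
    """Classes killed by the test set: a point t kills class x_i when the
    original model classifies t correctly as x_i (condition I) while the
    mutant model does not (condition II)."""
    killed = set()
    for p_orig, p_mut, y in zip(orig_pred, mut_pred, labels):
        if p_orig == y and p_mut != y:
            killed.add(y)
    return killed

def mutation_score(orig_pred, mutant_preds, labels, num_classes):
    """MutationScore(T', M') = sum over m' of |KilledClasses(T', m')|,
    divided by |M'| * K."""
    total = sum(len(killed_classes(orig_pred, mp, labels)) for mp in mutant_preds)
    return total / (len(mutant_preds) * num_classes)

# Toy run: 6 test points, K = 3 classes, a set M' of 2 mutant models.
labels    = [0, 0, 1, 1, 2, 2]
orig_pred = [0, 0, 1, 1, 2, 0]   # original model gets the last point wrong
mut_a     = [0, 1, 1, 1, 2, 0]   # kills class 0
mut_b     = [0, 0, 2, 1, 0, 0]   # kills classes 1 and 2
score = mutation_score(orig_pred, [mut_a, mut_b], labels, num_classes=3)
# (1 + 2) / (2 * 3) = 0.5
```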

5) constructing the graph structure of the original target model and of each kind of mutation test model, where the nodes of the graph correspond to the neurons of each network layer, the edges correspond to connections between two neurons in adjacent layers that have a propagation relation, and the weight of an edge is the component of the feature-vector matrix of the corresponding nodes when propagating from the earlier layer to the later layer;
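As a sketch of this model-to-graph mapping, assuming a plain fully connected network whose layer weight matrices are known (node numbering and shapes below are illustrative):

```python
import numpy as np

def mlp_to_graph(weight_matrices):
    """One node per neuron; one weighted edge per connection between two
    neurons in consecutive layers. The edge weight is the corresponding
    component of the layer's weight matrix (shape: fan_out x fan_in)."""
    edges = {}
    offset = 0
    for W in weight_matrices:
        fan_out, fan_in = W.shape
        for j in range(fan_in):            # neuron j of the earlier layer
            for i in range(fan_out):       # neuron i of the later layer
                edges[(offset + j, offset + fan_in + i)] = float(W[i, j])
        offset += fan_in
    num_nodes = offset + weight_matrices[-1].shape[0]
    return num_nodes, edges

# Toy 3-2-1 network: nodes 0..2 are inputs, 3..4 hidden, 5 output.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(2, 3)), rng.normal(size=(1, 2))
num_nodes, edges = mlp_to_graph([W1, W2])   # 6 nodes, 3*2 + 2*1 = 8 edges
```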

6) computing the graph-structure indices corresponding to the graphs of the original target model and of each kind of mutation test model; readjusting the growth direction of the original target model's graph structure according to how each kind of mutation test model's graph indices vary with the mutation score; and finally reconstructing the adjusted graph structure back into a model, thereby hardening the model.

The technical idea of the invention is as follows: the deep model hardening method based on graph-structure mutation testing injects different mutation operators into a deep learning system to construct different mutation test models, analyzes the degree to which the mutations are detected to evaluate the quality of the test models and rank them, and then, after mapping the models into graph structures, identifies instructive graph network indices from the ranking results to generate a new robust graph structure that is reconstructed back into a model, improving the model's robustness.

Further, the source-level mutation operators include:

LR operator: randomly deleting one layer from the untrained original target model structure;

LA operator: randomly adding one layer to the untrained original target model structure;

and/or AFR operator: randomly removing all activation functions of one layer of the untrained original target model structure.

Further, the model-level mutation operators include:

GF operator: changing weight values in the trained original target model structure according to the Gaussian distribution of the weights in that structure;

WS operator: randomly selecting neurons in the trained original target model structure and shuffling the weights of their connections to the previous layer;

NEB operator: randomly selecting one layer of the trained original target model structure and setting the weights of all its neurons to 0;

NAI operator: randomly selecting one layer of the trained original target model structure and negating the weights of all its neurons;

and/or NS operator: randomly selecting 20% of the neurons within one layer of the trained original target model structure and randomly exchanging their weights.

Further, in the GF operator, when a weight value w is changed in the trained original target model structure, the changed value lies in the range [w-3σ, w+3σ], where σ is the standard deviation of the Gaussian distribution of the weights in the trained original target model structure.
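A minimal numpy sketch of the GF perturbation with the [w-3σ, w+3σ] range just described (the σ value and weight shapes are illustrative):

```python
import numpy as np

def gf_mutate(W, sigma, rng=None):
    """Draw each mutated weight from N(w, sigma^2) and clip the result
    into [w - 3*sigma, w + 3*sigma]."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, sigma, size=W.shape)
    return np.clip(W + noise, W - 3.0 * sigma, W + 3.0 * sigma)

W = np.ones((4, 4))
W_mut = gf_mutate(W, sigma=0.1, rng=np.random.default_rng(42))
# every mutated weight stays within 3*sigma = 0.3 of the original value
```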

Further, the obtained training data and test data are image data, and the original target model is an image-classification network model obtained by training as follows:

each training sample in the training data is taken as input and the predicted class of the sample as output; the constructed original target model is trained by minimizing the loss between the output and the sample's true label, yielding the trained original model.

Further, step 4) also includes:

computing the average error rate of each mutation test model m′ ∈ M′ on the test data T′, which measures the overall behavioral difference effect AveErrorRate(T′, M′) of each kind of mutation model:

$$\mathrm{AveErrorRate}(T',M') = \frac{1}{|M'|}\sum_{m'\in M'}\mathrm{ErrorRate}(T',m')$$

where ErrorRate(T′, m′) is the error rate of the test data T′ on mutation test model m′;

and excluding the one or several kinds of mutation test models M′ whose overall behavioral difference effect AveErrorRate(T′, M′) is high.
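This filtering step can be sketched as follows (the error rates, model names, and the concrete exclusion threshold are all hypothetical; the text only says that models with a high AveErrorRate are excluded):

```python
def ave_error_rate(error_rates):
    """AveErrorRate(T', M'): mean of the per-mutant error rates on T'."""
    return sum(error_rates) / len(error_rates)

def exclude_outliers(rates_by_model, threshold):
    """Drop mutant models whose error rate exceeds the threshold, keeping
    only mutants whose overall behaviour stays close to the original."""
    return {m: e for m, e in rates_by_model.items() if e <= threshold}

rates = {"GF_1": 0.12, "GF_2": 0.15, "NEB_1": 0.64}
avg = ave_error_rate(list(rates.values()))     # (0.12 + 0.15 + 0.64) / 3
kept = exclude_outliers(rates, threshold=0.2)  # NEB_1 is excluded
```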

Further, in step 6), the graph-structure indices include characteristic path length, degree, and/or clustering coefficient.
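The three graph indices named here can be computed on an undirected graph with standard-library Python (the toy adjacency list is illustrative; in practice the graph would come from the model-to-graph mapping of step 5)):

```python
from collections import deque

def degree(adj):
    """Degree of each node: number of incident edges."""
    return {u: len(vs) for u, vs in adj.items()}

def clustering_coefficient(adj, u):
    """Fraction of pairs of u's neighbours that are themselves connected."""
    nbrs = list(adj[u])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

def characteristic_path_length(adj):
    """Mean shortest-path length over all connected node pairs, via BFS."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t, d in dist.items():
            if t != s:
                total += d
                pairs += 1
    return total / pairs

# Toy graph: a triangle 0-1-2 with a pendant node 3 attached to 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
```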

Further, step 6) also includes: applying one or more adversarial attack methods to the reconstructed model, to evaluate the improvement in the hardened model's robustness.

The method of the invention can be applied to the construction and hardening of various neural network models, and is especially suitable for constructing and hardening K-class image-classification network models. Specifically, a method for constructing an image-classification network model based on graph-structure mutation testing includes:

1) obtaining training data and test data, both of which are images;

2) constructing an original target model and training it on the obtained training data: each training sample is taken as input and its predicted class as output, and the model is trained by minimizing the loss between the output and the sample's true label, yielding the trained original model;

3) constructing mutation test models of the original target model, including mutation test models generated with source-level mutation operators and/or model-level mutation operators;

4) performing mutation detection on the mutation test models with the test data, and computing the mutation score of each kind of mutation test model;

5) constructing the graph structure of the original target model and of each kind of mutation test model;

6) computing the graph-structure indices corresponding to the graphs of the original target model and of each kind of mutation test model; readjusting the growth direction of the original target model's graph structure according to how each kind of mutation test model's graph indices vary with the mutation score; and finally reconstructing the adjusted graph structure back into a model, thereby hardening the model.

The beneficial effects of the invention are mainly reflected in: 1) compared with traditional model-testing methods, the mutation operators proposed by mutation testing modify and test the model in a finer-grained way, which both ensures that model accuracy does not drop noticeably and preserves the diversity of the experiments; 2) the model-level mutation operators in mutation testing operate on only a small number of neurons in the neural network, largely preserving the integrity of the network; 3) the robust optimization of model structure guided by graph structure provides a new reference direction for future research on model robustness.

Brief Description of the Drawings

To explain the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative work.

Fig. 1 is a flow chart of the method of the invention.

Fig. 2 is a schematic diagram of the mutation-testing workflow in the invention.

Detailed Description

To make the object, technical solution, and advantages of the invention clearer, the invention is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the invention and do not limit its scope of protection.

Referring to Figs. 1 and 2, a deep model hardening method based on graph-structure mutation testing is described; this embodiment takes the hardening of an image-classification model as an example and includes the following steps.

1) Constructing the target-model dataset

In this embodiment, the CIFAR-10 dataset is used to construct the brain-like graph of the image-classification model and verify its robustness. CIFAR-10 contains 60,000 RGB color images of size 32*32, divided into 10 classes with 6,000 samples per class: 50,000 training samples and 10,000 test samples. All 50,000 training samples are taken as the training set of the target model, and all 10,000 test samples as its test set.

2) Training the original model

Each training sample is taken as input and its class as output, and the constructed original model is trained by minimizing the loss between the output and the sample's true label, yielding the trained original model. In this embodiment, three 5-layer MLPs with 512 hidden units are used on CIFAR-10 as the original model structure of the image-classification model. The input of the MLP is the 3072-dimensional flattened vector of a CIFAR-10 image (32*32*3), and the output is a 10-dimensional image-class prediction probability; each MLP layer has a ReLU activation function and a BatchNorm regularization layer. Uniform hyperparameters are used for training: 200 training epochs, batch size 128, stochastic gradient descent (SGD), a cosine learning-rate schedule with initial rate 0.1, and a loss function that adds a regularization term with coefficient λ to the cross-entropy loss, expressed by the following formula (but not limited thereto):

$$L(\theta) = -\frac{1}{n}\sum_{i=1}^{n} p(y_i)\log q(y_i) + \lambda\,\|\theta\|_2$$

where p(·) denotes the true label of a sample, q(·) the model's predicted probability, y_i the input sample, θ the model parameters, n the total number of samples, λ the regularization coefficient, and ||·||_2 the two-norm.
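A numpy sketch of this regularized loss (the batch values below are illustrative; whether the two-norm enters squared or not is a design choice, and the formula above uses the plain two-norm):

```python
import numpy as np

def regularized_ce_loss(probs, labels, params, lam):
    """Mean cross-entropy of the predicted probabilities against the true
    labels, plus lam times the two-norm of all model parameters."""
    n = len(labels)
    ce = -np.mean(np.log(probs[np.arange(n), labels]))
    l2 = lam * np.sqrt(sum(np.sum(w ** 2) for w in params))
    return ce + l2

probs  = np.array([[0.8, 0.2], [0.3, 0.7]])   # q(y_i) for a 2-point batch
labels = np.array([0, 1])                      # true classes p(y_i)
params = [np.array([3.0, 4.0])]                # two-norm of theta is 5
value = regularized_ce_loss(probs, labels, params, lam=0.1)
```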

3) Constructing the mutation test models

3.1) Defining the source-level mutation operators

Layer Removal (LR): the LR operator randomly deletes one layer of the deep neural network, under the condition that the deleted layer's input and output structures are the same. The operation is performed 5 times; each time a layer not chosen before is randomly selected from the untrained original model structure and deleted, constructing 5 mutation test models {m′_LR1, ..., m′_LR5}.

Layer Addition (LA): compared with the LR operator, the LA operator adds one layer to the deep-neural-network structure. The operation is performed 5 times; each time a layer not chosen before is randomly selected from the untrained original model structure, and a new layer is added after it, constructing 5 mutation test models {m′_LA1, ..., m′_LA5}.

Activation Function Removal (AFR): since deep neural networks are highly representational, activation functions play an important role in their nonlinearity. The AFR operation randomly removes all activation functions of one layer, simulating the case in which a developer forgets to add an activation layer. The operation is likewise performed 5 times; each time a layer not chosen before is randomly selected from the untrained original model structure and all of its activation functions are removed, constructing 5 mutation test models {m′_AFR1, ..., m′_AFR5}.
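The three source-level operators can be sketched on a simple layer-specification list (the dict-based layer format and the sizes are invented for illustration; the real operators act on an actual network definition):

```python
import random

def lr_mutate(layers, rng):
    """Layer Removal: delete one hidden layer whose input and output sizes
    match, so the surrounding layers still fit together."""
    candidates = [i for i in range(1, len(layers) - 1)
                  if layers[i]["in"] == layers[i]["out"]]
    i = rng.choice(candidates)
    return layers[:i] + layers[i + 1:]

def la_mutate(layers, rng):
    """Layer Addition: insert a new same-sized layer after a random layer."""
    i = rng.randrange(len(layers))
    size = layers[i]["out"]
    new = {"in": size, "out": size, "activation": "relu"}
    return layers[:i + 1] + [new] + layers[i + 1:]

def afr_mutate(layers, rng):
    """Activation Function Removal: strip the activation of a random layer."""
    out = [dict(l) for l in layers]
    out[rng.randrange(len(out))]["activation"] = None
    return out

# Toy 5-layer MLP specification (sizes echo the embodiment's 3072-512-...-10 MLP).
mlp = ([{"in": 3072, "out": 512, "activation": "relu"}]
       + [{"in": 512, "out": 512, "activation": "relu"} for _ in range(3)]
       + [{"in": 512, "out": 10, "activation": None}])
```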

3.2) Defining the model-level mutation operators

Gaussian Fuzzing (GF): weights are basic elements of a deep neural network that describe the importance of connections between neurons, and they are an important part of the network's decision logic. A natural way to change a weight is to fuzz its value, changing the connection importance it represents. The GF operator changes a given weight w following a Gaussian distribution N(w, σ²), where σ is a user-configurable standard-deviation parameter. The GF operator mostly fuzzes a weight into a nearby value range (typically the fuzzed value lies in [w-3σ, w+3σ]). The operation is performed 5 times, each time with a different σ, applying the fuzzing to the trained original model to construct 5 mutation test models {m′_GF1, ..., m′_GF5}.

Weight Shuffling (WS): the output of a neuron is usually determined by the neurons of the previous layer, each connected by a weight. The WS operator randomly selects neurons and shuffles the weights of their connections to the previous layer; a selected neuron's weights may stay unchanged or take the weights of the shuffled previous-layer neurons. The operation is performed 5 times; each time a layer of the trained original model is randomly selected and 10% of its neurons are shuffled with the previous layer, constructing 5 mutation test models {m′_WS1, ..., m′_WS5}.

Neuron Effect Blocking (NEB): when a test data point is read into a deep neural network, it is processed and propagated through connections of different weights and layers of neurons until the final result is produced. Each neuron contributes to the network's final decision to an extent determined by its connection strengths. The NEB operator blocks a neuron's influence on all connected neurons in the next layer by resetting its connection weights to the next layer to zero. The operation is performed 5 times; each time a layer of the trained original model is randomly selected and all of its neurons are zeroed, constructing 5 mutation test models {m′_NEB1, ..., m′_NEB5}.

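A sketch of the NEB operator, assuming the outgoing connections of a layer are stored as a (fan_in, fan_out) numpy matrix whose rows index the blocked layer's neurons; the helper name is illustrative:

```python
import numpy as np

def neuron_effect_block(weights, neuron_idx):
    """NEB operator sketch: zero the outgoing weights of the given neurons so
    they no longer influence any neuron in the next layer. Passing all row
    indices reproduces the whole-layer variant used above."""
    mutated = weights.copy()
    mutated[list(neuron_idx), :] = 0.0
    return mutated
```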
Neuron Activation Inversion (NAI): The activation function plays a key role in shaping the nonlinear behavior of a neural network. Many widely used activation functions (such as ReLU) behave differently depending on their activation state. The NAI operator inverts the activation state of a neuron, which can be achieved by flipping the sign of the neuron's output value before the activation function is applied. This produces more mutated neuron activation patterns, each of which may expose new mathematical properties of the deep neural network (such as linearity). Here, 5 operations are performed; each time a layer of the trained original model is randomly selected and the weights of all neurons in that layer are negated to invert their activation, building 5 mutation test models.
Figure BDA0003790286520000071

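The sign flip behind the NAI operator is a one-liner; a sketch assuming a numpy weight matrix:

```python
import numpy as np

def neuron_invert(weights):
    """NAI operator sketch: negate a layer's weights, flipping the sign of each
    neuron's pre-activation before the activation function is applied."""
    return -weights
```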
Neuron Switch (NS): Neurons in one layer of a deep neural network usually have different effects on the connected neurons in the next layer. The NS operator swaps the roles and influence of several neurons within a layer on the next layer. Here, 5 operations are performed; each time a layer of the trained original model is randomly selected and the weights of 20% of its neurons are randomly exchanged, building 5 mutation test models.
Figure BDA0003790286520000072

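A sketch of the NS operator: permute the outgoing-weight rows of 20% of a layer's neurons; the (fan_in, fan_out) layout and all names are assumptions:

```python
import numpy as np

def neuron_switch(weights, frac=0.2, seed=None):
    """NS operator sketch: randomly choose a fraction of this layer's neurons
    and swap their outgoing weight rows among themselves."""
    rng = np.random.default_rng(seed)
    mutated = weights.copy()
    n = weights.shape[0]
    k = max(2, int(n * frac))
    chosen = rng.choice(n, size=k, replace=False)
    # Fancy indexing copies the right-hand side first, so this is a safe swap.
    mutated[chosen, :] = mutated[rng.permutation(chosen), :]
    return mutated
```

Since rows are only exchanged, the multiset of rows is preserved, distinguishing NS from the value-changing operators GF and NEB.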
3.3) Train and save the models

Mutation test models generated with model-level mutation operators are already trained and are saved directly; untrained mutation test models generated with source-level mutation operators are trained with the parameters from step 2) and then saved.

4) Mutation detection

4.1) Define detection metrics

For a K-class classification problem, let Z = {x1, ..., xK} denote the input data of all K classes, where xi denotes the data of class i (i = 1, 2, ..., K). For a test data point t in the test set T′, t is said to kill class xi of the mutation test model m′ (where m′ ∈ M′, and M′ is the set of each kind of mutation test model; this embodiment uses 8 kinds of mutation test models, each containing 5 models) if the following conditions hold:

I. t is correctly classified as xi by the original model m;

II. t is not correctly classified as xi by the mutation test model m′.

The mutation score MutationScore(T′, m′) of the mutation test model m′ is defined as follows, where KilledClasses(T′, m′) is the set of classes of m′ killed by test data in T′:

MutationScore(T′, m′) = |KilledClasses(T′, m′)| / K

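The kill condition and mutation score above can be sketched as follows; predictions and labels are assumed to be parallel lists of integer class ids, an illustrative format not prescribed by the patent:

```python
def mutation_score(orig_pred, mut_pred, labels, num_classes):
    """Sketch: a test point kills class y of the mutant when the original model
    predicts y correctly (condition I) and the mutant does not (condition II).
    The score is |KilledClasses| / K."""
    killed = {y for o, m, y in zip(orig_pred, mut_pred, labels)
              if o == y and m != y}
    return len(killed) / num_classes
```

For labels [0, 1, 2] with original predictions [0, 1, 2] and mutant predictions [0, 2, 2], only class 1 is killed, giving a score of 1/3.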
In general, it is hard to precisely predict the behavioral differences introduced by a mutation operator. Therefore, as a preferred scheme, a quality-control procedure for DL mutants is introduced to avoid introducing too large a behavioral difference between a DL mutant and the original model. The error rate of each mutation test model m′ on T′ is measured. If the error rate of m′ on T′ is too high, m′ is not considered a good mutation test sample, because it introduces a large behavioral difference; such high-error-rate mutants are excluded from M′ before further analysis. The average error rate (AER) of the mutation test models m′ ∈ M′ on T′ is defined to measure the overall behavioral difference introduced by all mutation operators:

AveErrorRate(T′, M′) = (1/|M′|) · Σ_{m′∈M′} ErrorRate(T′, m′)

where ErrorRate(T′, m′) denotes the error rate of the test data T′ on the mutation test model m′, summed over all m′ ∈ M′;

4.2) Rank the mutation test models

Using the definitions in 4.1), mutation detection is performed on every saved mutation test model. First, the few abnormal mutation test models with a markedly higher average error rate (AER) are excluded; the remaining models are then sorted by mutation score from high to low. In general, the higher the score, the lower the robustness of the model.

5) Build the model graph structure

5.1) Define the neural network as a graph

Define a graph G = (V, E), where V = {v1, ..., vn} is the set of nodes, E ⊆ V × V is the set of edges, and each node v has a feature vector Wv.

5.2) Define the computation graph of the model

Using the forward-propagation algorithm, define the node set V = {v1, ..., vn} as the neurons of each layer, and the edge set E as the set of connections between pairs of neuron nodes vi, vj that have a propagation relation between two adjacent layers. The weight of an edge is set to the component of the feature-vector matrix of the corresponding node used when propagating from the previous layer to the next. Taking a fully connected network as an example, this is described by the formula:

Wv = [wi1, wi2, ..., wij, ...]

where, for each component wij, i is the subscript (i.e. the position) of the neuron in the previous layer to which the weight is connected, and j is the subscript of the neuron in the next layer. In a fully connected network, every neuron of one layer has an edge to every neuron of the next layer, i.e. from the 1st to the nth.

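A sketch of the construction in 5.2) for a fully connected network, assuming the weights come as a list of (fan_in, fan_out) numpy matrices; naming nodes as (layer, index) pairs is an illustrative choice:

```python
import numpy as np

def fc_to_graph(weight_mats):
    """Build the computation graph: nodes are (layer, neuron) pairs and the
    edge ((l, i), (l + 1, j)) carries the weight component w_ij of W_v above."""
    edges = {}
    for l, W in enumerate(weight_mats):
        fan_in, fan_out = W.shape
        for i in range(fan_in):
            for j in range(fan_out):
                edges[((l, i), (l + 1, j))] = float(W[i, j])
    nodes = {v for edge in edges for v in edge}
    return nodes, edges
```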
5.3) Using the computation-graph construction defined in 5.2), build a computation graph structure for every model that underwent mutation testing in 4.2).

6) Reconstruct the model under the guidance of graph-structure metrics

6.1) Define graph-structure metrics; this embodiment uses the following three:

① Characteristic path length: The characteristic path length measures efficiency and is defined as the average shortest path length of the network. The distance matrix used to compute shortest paths must be a connection-length matrix, usually obtained by mapping weights to lengths. Here the most commonly used weighted path length is taken as the standard of calculation, with the formula:

Figure BDA0003790286520000084

where wij is the edge weight defined in step 5.2), l denotes the l-th layer, l = 1, 2, ..., L, with L the total number of layers of the neural network; l is generally taken to be the layer containing the neuron with subscript i in wij.

② Degree: The degree of a node is the number of edges in the graph connected to it. Let Li denote the degree of the i-th node, defined as:

Li = {vi : eij ∈ E ∧ eji ∈ E}

where vi denotes the i-th node, eij denotes the edge between nodes i and j, and ∧ denotes logical AND.

③ Clustering coefficient: The clustering coefficient describes the degree to which the vertices of a graph cluster together; specifically, the extent to which the neighbours of a node are connected to one another. Let CLi denote the clustering coefficient of the i-th node, defined as:

Figure BDA0003790286520000091

where each e denotes an edge between two nodes.

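Degree and clustering coefficient can be sketched directly on an edge list like the one built in 5.2), treating edges as undirected; the helper names are illustrative:

```python
def degree(edges, v):
    """Degree sketch: number of edges incident to node v."""
    return sum(1 for e in edges if v in e)

def clustering(nodes, edges, v):
    """Clustering-coefficient sketch: fraction of possible links among v's
    neighbours that actually exist."""
    und = {frozenset(e) for e in edges}
    nbrs = [u for u in nodes if u != v and frozenset((u, v)) in und]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a in range(k) for b in range(a + 1, k)
                if frozenset((nbrs[a], nbrs[b])) in und)
    return 2 * links / (k * (k - 1))
```

On a triangle every node has degree 2 and clustering coefficient 1; on a path of three nodes the middle node's coefficient drops to 0.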
6.2) Reconstruct the model

Using the method in 6.1), compute each graph metric for the graph structure of the original target model and of every mutation test model, and observe how the metrics change along the ranking of mutation scores; metrics showing a clearly varying trend guide the growth direction of the graph structure. Taking the characteristic path length as an example: if the characteristic path length is observed to increase as the mutation score of each mutation test model increases, then a larger characteristic path length indicates lower model robustness. Accordingly, the characteristic path length of the original target model is adjusted downward and fixed, the individual weights are back-calculated to obtain a new graph structure, and finally the graph structure is reconstructed back into a model, yielding mnew.

7) Further, adversarial attacks are launched on the original model and the reconstructed model to evaluate the robustness of the reconstructed model.

Several adversarial attack methods are adopted, including the FGSM, CW and PGD attacks. For each attack, 1000 images are randomly selected from each dataset to generate adversarial examples. The three attacks use different parameters: for FGSM, ε = 2; for CW, the L2-norm variant with initial value c = 0.01, confidence k = 0 and 200 iterations; for PGD, ε = 2, step size α = ε/10 and 20 iterations.

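The FGSM step used in the evaluation is simple enough to sketch without a specific framework; the gradient of the loss with respect to the input would come from the attacked model and is passed in here as an assumption:

```python
import numpy as np

def fgsm(x, grad, eps):
    """One-step FGSM sketch: perturb the input in the sign direction of the
    input gradient, x_adv = x + eps * sign(dL/dx)."""
    return x + eps * np.sign(grad)
```

A full attack would clip x_adv back into the valid pixel range; PGD iterates this step with projection onto the ε-ball.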
Model robustness metric: when subjected to adversarial attack, the accuracy of the model is commonly used as the metric of robust performance.

Accuracy: for a given test dataset, the accuracy is the ratio of the number of samples the classifier classifies correctly to the total number of samples:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

where TP denotes positives judged positive, FP negatives judged positive, FN positives judged negative, and TN negatives judged negative. The higher the accuracy under attack, the better the robust performance and the more stable the model. Experiments show that, under the three attacks, the accuracy of the reconstructed model on the CIFAR-10 dataset is on average 30.7% higher than that of the original model.

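The accuracy formula above as a trivially checkable helper:

```python
def accuracy(tp, fp, fn, tn):
    """Accuracy = (TP + TN) / (TP + TN + FP + FN)."""
    return (tp + tn) / (tp + tn + fp + fn)
```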
The specific embodiments described above explain the technical solution and beneficial effects of the present invention in detail. It should be understood that the above is only the most preferred embodiment of the present invention and is not intended to limit it; any modification, supplement or equivalent replacement made within the scope of the principles of the present invention shall fall within its protection scope.

Claims (8)

1. A deep model reinforcement method based on graph-structure mutation testing, characterized by comprising:

1) acquiring training data and test data;

2) constructing an original target model, and training the original target model on the acquired training data;

3) constructing mutation test models of the original target model, the mutation test models including models generated with source-level mutation operators and/or model-level mutation operators; wherein already-trained mutation test models generated with model-level mutation operators are saved directly, and untrained mutation test models generated with source-level mutation operators are trained on the acquired training data with the same parameters as the original target model and then saved;

4) performing mutation detection on the mutation test models using the test data, and computing the mutation score of each kind of mutation test model:
MutationScore(T′, m′) = |KilledClasses(T′, m′)| / K
where M′ is the set of each kind of mutation test model, m′ indexes the mutation test models within each kind, K is the number of label classes of the test data, and KilledClasses(T′, m′) is the set of test-data classes in the test data T′ that kill the mutation test model m′; for a test data point t in the test data T′, t is said to kill class xi of the mutation test model m′ if the following conditions hold:

I. t is correctly classified as xi by the original target model m;

II. t is not correctly classified as xi by the mutation test model m′; wherein the test data are divided into K classes according to their labels, and xi denotes the set of test data whose label class is i;

5) constructing the graph structure of the original target model and of each kind of mutation test model, wherein the nodes of the graph correspond to the neurons of each layer of the neural network, the edges correspond to connections between pairs of neuron nodes with a propagation relation between adjacent layers, and the weight of an edge is the component of the feature-vector matrix of the corresponding node used when propagating from the previous layer to the next;

6) computing the graph-structure metrics corresponding to the graph structures of the original target model and of each kind of mutation test model; according to the trend of each kind of mutation test model's graph-structure metrics with the mutation score, re-adjusting the growth direction of the graph structure of the original target model; and finally reconstructing the adjusted graph structure back into a model, thereby achieving model reinforcement.
2. The method according to claim 1, wherein the source-level mutation operators comprise:

LR operator: randomly deleting a layer from the untrained original target model structure;

LA operator: randomly adding a layer to the untrained original target model structure;

and/or AFR operator: randomly removing all activation functions of a layer from the untrained original target model structure.

3. The method according to claim 1, wherein the model-level mutation operators comprise:

GF operator: changing weight values in the trained original target model structure according to a Gaussian distribution over the weights of the trained original target model structure;

WS operator: randomly selecting neurons in the trained original target model structure and shuffling the weights of their connections to the previous layer;

NEB operator: randomly selecting a layer of the trained original target model structure and setting the weights of all its neurons to 0;

NAI operator: randomly selecting a layer of the trained original target model structure and setting the weights of all its neurons to negative values;

and/or NS operator: randomly selecting the weights of 20% of the neurons within a layer of the trained original target model structure and exchanging them at random.

4. The method according to claim 3, wherein, in the GF operator, a weight value w of the trained original target model structure is changed to a value in the range [w − 3σ, w + 3σ], where σ is the standard deviation of the Gaussian distribution over the weights of the trained original target model structure.

5. The method according to claim 1, wherein the acquired training data and test data are image data and the original target model is an image-classification network model, obtained by training as follows: taking each training sample of the training data as input and the predicted class of the training sample as output, the constructed original target model is trained by minimizing the loss between the output and the true label of the sample, obtaining the trained original model.

6. The method according to claim 1, wherein step 4) further comprises: computing the average error rate of each mutation test model m′ ∈ M′ on the test data T′ to measure the overall behavioral-difference effect AveErrorRate(T′, M′) of each kind of mutation model:
AveErrorRate(T′, M′) = (1/|M′|) · Σ_{m′∈M′} ErrorRate(T′, m′)
where ErrorRate(T′, m′) denotes the error rate of the test data T′ on the mutation test model m′, summed over all m′ ∈ M′; and excluding the one or several kinds of mutation test models M′ whose overall behavioral-difference effect AveErrorRate(T′, M′) is high.
7. The method according to claim 1, wherein, in step 6), the graph-structure metrics comprise characteristic path length, degree and/or clustering coefficient.

8. The method according to claim 7, wherein step 6) further comprises: attacking the reconstructed model with one or more adversarial attack methods.
CN202210953766.6A 2022-08-10 2022-08-10 Depth model reinforcement method based on graph structure variation test Withdrawn CN115392434A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210953766.6A CN115392434A (en) 2022-08-10 2022-08-10 Depth model reinforcement method based on graph structure variation test

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210953766.6A CN115392434A (en) 2022-08-10 2022-08-10 Depth model reinforcement method based on graph structure variation test

Publications (1)

Publication Number Publication Date
CN115392434A true CN115392434A (en) 2022-11-25

Family

ID=84117864

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210953766.6A Withdrawn CN115392434A (en) 2022-08-10 2022-08-10 Depth model reinforcement method based on graph structure variation test

Country Status (1)

Country Link
CN (1) CN115392434A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116361190A (en) * 2023-04-17 2023-06-30 南京航空航天大学 Deep learning variation test method based on neuron correlation guidance
CN116361190B (en) * 2023-04-17 2023-12-05 南京航空航天大学 A deep learning mutation testing method based on neuron correlation guidance

Similar Documents

Publication Publication Date Title
Dai et al. Adversarial attack on graph structured data
CN109492582B (en) Image recognition attack method based on algorithm adversarial attack
Połap et al. Meta-heuristic as manager in federated learning approaches for image processing purposes
Barbalau et al. Black-box ripper: Copying black-box models using generative evolutionary algorithms
EP3620990A1 (en) Capturing network dynamics using dynamic graph representation learning
Bui et al. Autofocus: interpreting attention-based neural networks by code perturbation
CN111931505A (en) Cross-language entity alignment method based on subgraph embedding
Ruan et al. Global robustness evaluation of deep neural networks with provable guarantees for the $ L_0 $ norm
Han et al. Backdooring multimodal learning
Wangperawong Attending to mathematical language with transformers
Rodrigues et al. Fitness landscape analysis of convolutional neural network architectures for image classification
CN113343225B (en) Poisoning defense method and device based on deep learning of neural pathway
CN117390407B (en) Fault identification method, system, medium and equipment for substation equipment
Hegazy et al. Dimensionality reduction using an improved whale optimization algorithm for data classification
CN112052933A (en) Security testing method and repair method of deep learning model based on particle swarm optimization
Hu et al. EAR: An enhanced adversarial regularization approach against membership inference attacks
CN115392434A (en) Depth model reinforcement method based on graph structure variation test
CN111882042A (en) Neural network architecture automatic search method, system and medium for liquid state machine
Stock et al. Lessons learned: defending against property inference attacks
CN114048837A (en) A Deep Neural Network Model Reinforcement Method Based on Distributed Brain-like Graph
CN115909027B (en) Situation estimation method and device
CN113468046B (en) Method for generating induction input of multi-target-oriented DNN model
Pan et al. The definitions of interpretability and learning of interpretable models
Kiritoshi et al. L1-Norm Gradient Penalty for Noise Reduction of Attribution Maps.
CN115203690A (en) A security reinforcement method of deep learning model based on abnormal deviation neuron

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20221125