CN109063775A - Instruction SDC vulnerability prediction method based on long short-term memory network - Google Patents

Instruction SDC vulnerability prediction method based on long short-term memory network

Info

Publication number
CN109063775A
CN109063775A
Authority
CN
China
Prior art keywords
instruction
sdc
vulnerability
feature
lstm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810893739.8A
Other languages
Chinese (zh)
Inventor
Liu Yunfei
Li Jing
Zhuang Yi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN201810893739.8A priority Critical patent/CN109063775A/en
Publication of CN109063775A publication Critical patent/CN109063775A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention discloses a novel instruction SDC vulnerability prediction method based on the long short-term memory (LSTM) network. Once the silent-data-corruption (SDC) vulnerability-related features of each instruction in the code have been obtained, large numbers of fault-injection experiments are no longer needed to find the most vulnerable instructions in a program: the prediction model identifies them. Previously, obtaining the SDC vulnerability of an instruction required a large number of fault-injection runs, an extremely time-consuming process. The invention analyzes the intrinsic features of each instruction together with the dependent features of the instruction along its propagation path, identifies the features most strongly associated with instruction SDC vulnerability, and combines them with the LSTM model, which excels at processing sequence data, to recognize instruction SDC vulnerability without large-scale fault injection, saving substantial time and resources.

Description

Instruction SDC vulnerability prediction method based on long short-term memory network
Technical field
The invention is a method for predicting the SDC vulnerability of instructions in LLVM intermediate code, which is common to different programming languages. From the extracted intrinsic features and dependent features of the intermediate-code instructions, a long short-term memory network learns how to predict the SDC vulnerability of each instruction.
Background technique
Over recent decades, processor designers have continuously improved processor performance and shrunk processor feature sizes through new technologies, but processors have also become increasingly fragile and unreliable: they are more prone to transient faults than before. Transient faults differ from errors in processor design: a transient fault is intermittent, the processor recovers after some time, and the hardware circuit is not damaged; such faults are called soft errors. Soft errors can cause a running program to fail by corrupting transmitted and stored values, leading to accidents such as loss of control of a satellite. A soft error can affect a running program in three ways: (1) it has no impact on the running program (benign/masked); (2) it causes the program to crash or hang; (3) it causes the program to produce wrong output, i.e., a silent data corruption (SDC) problem. Compared with a crash or a hang, SDC is more covert and, once it occurs, may lead to serious consequences. To address SDC caused by soft errors, designers have introduced hardware-redundancy hardening, such as ECC and parity bits in memories (cache, main memory) to detect such failures, but hardware hardening is too expensive and is unsuitable for the desktop and notebook markets.
Software-based redundancy hardening offers a cheaper and more flexible option. Among software-based approaches, selective redundancy hardening is the most advantageous: it can substantially reduce a program's SDC vulnerability while limiting the time and space overhead that redundancy introduces. Before selective hardening can be applied, predicting the SDC vulnerability of each instruction in the program is the most critical step; only by correctly finding the most vulnerable instructions can selective hardening realize its advantage. Fault injection is the simplest way to find SDC-vulnerable instructions: by injecting faults into an intermediate-code instruction a certain number of times and counting how often an SDC occurs, the instruction's SDC vulnerability can be obtained, but this is very time-consuming, especially for large programs. Traditional machine learning has been used to detect instruction SDC vulnerability in this setting: features relevant to SDC vulnerability are extracted, initial labels are obtained by fault injection, and models such as support vector machines (SVM) and decision trees classify or regress the SDC vulnerability of instructions. In recent years, however, deep-learning models have far surpassed traditional machine-learning methods on many classification and prediction problems. Among them, the recurrent neural network (RNN) is a neural network for processing sequence data with outstanding performance, but because of the exploding-gradient problem a traditional RNN cannot handle long-term dependencies. The long short-term memory (LSTM) network is an improved RNN created precisely to solve this problem; since instructions execute in sequence, its ability in instruction SDC vulnerability classification prediction deserves exploration.
Summary of the invention
The objective of the invention is, for code written in different programming languages, to extract the intrinsic features and dependent features of its LLVM intermediate-code instructions and to complete the classification prediction of instruction SDC vulnerability with a long short-term memory network model. The method mainly comprises the following contents:
1) Extraction of SDC vulnerability features and acquisition of vulnerability labels. The LLFI fault-injection tool injects faults into the destination register of each intermediate-code instruction, and the average SDC vulnerability over the bits of the destination register is taken as the instruction's SDC vulnerability; at the same time, the trace files of the program execution are obtained, and dynamic intrinsic features such as the dynamic execution count of each instruction are extracted by analyzing them. Different languages have different characteristics and grammars, so extracting features of intermediate instructions from program source code would be very costly; the LLVM compiler provides a common intermediate code for different programming languages and solves this problem, so fault injection and feature extraction are performed on the LLVM intermediate code of different programs to build the data set. In the generation and propagation of an SDC, some features of the instruction itself may have an influence; these include the instruction type, operand types, data width, loop-nesting depth, and so on. For example, arithmetic instructions have a markedly higher probability of causing an SDC than address-calculation instructions, so different instruction types differ in SDC vulnerability. Within a loop, the deeper the nesting, the more critical the enclosed instructions tend to be, so the loop-nesting depth of an instruction also affects its SDC vulnerability. Features on an instruction's propagation path that affect the propagation of an SDC include masking instructions, address-calculation instructions, and operand types; these are called dependent features. Masking instructions are the logical and shift instructions, which often mask errors in the instructions that produce or propagate them, while a corrupted address-calculation instruction is more likely to cause a crash or hang. Therefore, to describe the environment of each instruction in more detail, LLVM passes are used to extract both the intrinsic features of the instruction itself and the dependent features of its environment.
2) Feature selection. For a classifier, including the long short-term memory network (LSTM), more features do not mean more useful information; past a point, classifier performance declines as the feature dimension grows, because high-dimensional feature sets contain irrelevant and redundant features. Therefore, to train an efficient and compact classification model, the extracted features must be selected before classifier training: features carrying little information about the SDC problem are removed, and features carrying much are retained. Feature-selection methods fall into three families: filter, wrapper, and embedding; the univariate feature selection method from the filter family is chosen to screen the features. Univariate feature selection is a statistics-based method independent of the classification algorithm: it examines each feature independently and computes a statistical indicator that measures the relationship between the feature and the predicted class. For a classification task, univariate feature selection performs an analysis of variance (ANOVA) between each feature and the labels of the data set and computes a P-value for each feature; by statistical principle, the smaller the P-value, the more important the feature, so an importance score is obtained for each feature through a log function, and the features are screened according to this score.
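The per-feature statistic behind this univariate selection can be sketched in pure Python. This is a generic one-way ANOVA F-statistic, shown for illustration only; in practice a library routine (for example scikit-learn's f_classif) would compute both the F-value and the P-value.

```python
# Minimal sketch of the one-way ANOVA F-statistic used by filter-style
# univariate feature selection: for one feature, split its values by class
# and compare between-group to within-group variance.
def anova_f(feature_by_class):
    """feature_by_class: list of lists, one list of feature values per class.
    Returns F = mean square between groups / mean square within groups."""
    groups = [g for g in feature_by_class if g]
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-group (SSB) and within-group (SSW) sums of squares
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    msb = ssb / (k - 1)   # between-group mean square
    msw = ssw / (n - k)   # within-group mean square
    return msb / msw
```

A large F-value (hence a small P-value under the F distribution) indicates that the feature separates the classes well.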
3) Training the classification model. Instruction SDC vulnerability prediction is essentially a classification problem; to obtain the best classification effect, the most suitable model must be selected. After different programs are converted to LLVM intermediate code, the feature data of each instruction are extracted, yielding data for static instruction sequences, and the long short-term memory network (LSTM) has a great advantage on sequence-data problems, so the LSTM is chosen as the training model. However, the LSTM has many model parameters, and different parameter settings all influence the final prediction result, so different models are configured according to the four most critical parameters: the number of network layers, the number of cells, the dropout rate, and the learning rate. This forms a candidate model set; the data are used to train each model in the set, and the model with the best accuracy is finally selected from among them as the prediction model.
Brief description of the drawings
Fig. 1 is the overall framework of the method proposed by the invention;
Fig. 2 shows the importance scores of the features after feature selection;
Fig. 3 is the flow chart of data-set construction;
Fig. 4 is the flow chart of prediction-model training;
Fig. 5 is the internal structure of the long short-term memory network.
Specific embodiments
The invention is described in detail below with reference to the drawings and a concrete example.
The benchmark suite MiBench is used as the set of test programs, from which a strongly representative subset is selected, covering six categories: automotive and industrial manufacture, consumer electronics, office automation, network, security, and telecommunications. Intrinsic features and dependent features relevant to instruction SDC vulnerability are extracted from the LLVM intermediate code of the selected programs. LLFI injection is used to obtain the SDC vulnerability value of each instruction, and the SDC vulnerability of the instructions is classified according to the injection results of the LLFI tool on the test programs, completing the construction and division of the data set. Finally, the long short-term memory network learns the SDC vulnerability of the instructions, yielding an LSTM-based instruction SDC vulnerability prediction model. The overall framework of the method is shown in Fig. 1; the concrete implementation is as follows:
Step 1: Extraction of SDC vulnerability features and acquisition of vulnerability labels
Instruction SDC vulnerability labels are obtained with the LLFI fault-injection tool: when the result of running the program after injection differs from the result before injection, the program is considered to have suffered an SDC error. Each bit of the instruction's destination register is injected; Ti is the number of injections per bit, Si is the number of injections in which an SDC error occurs, and Wi is the data width of the destination register. The mean of the per-bit SDC vulnerabilities of the destination register is taken as the SDC vulnerability label of the instruction, with the following formula (reconstructed from these definitions):
SDC(Ii) = Si / (Ti · Wi)
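The label computation above can be sketched directly from the symbol definitions. The aggregation (total SDC count over total injections across all bits) is reconstructed from the text, and the classification threshold is an illustrative assumption; the patent classifies vulnerability but does not state a cut-off here.

```python
# Sketch of the per-instruction SDC-vulnerability label: each bit of a
# W_i-bit destination register is injected T_i times, and S_i of the
# W_i * T_i total injections produce an SDC. Variable names follow the text.
def sdc_vulnerability(s_i, t_i, w_i):
    """Mean SDC rate over all injections into the destination register."""
    total_injections = t_i * w_i
    return s_i / total_injections

def sdc_label(vulnerability, threshold=0.5):
    """Binarize the vulnerability into a class label.
    The 0.5 threshold is a hypothetical choice for illustration."""
    return 1 if vulnerability >= threshold else 0
```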
Feature extraction for instruction SDC vulnerability covers intrinsic features and dependent features. Every instruction of the program's intermediate code is traversed, and information such as the instruction's basic block, its enclosing function, and its instruction type is obtained through the LLVM compiler framework, yielding the static intrinsic features of each instruction; dynamic intrinsic features, such as the dynamic execution count of an instruction and the number of calls to its enclosing function, are obtained by analyzing the trace files generated by the LLFI fault-injection tool. Likewise, every instruction of the program is traversed and, exploiting the static-single-assignment property of LLVM intermediate code, the def-use chains in LLVM yield the set of other instructions that use the current instruction's result. This operation is iterated until a terminating instruction is reached (store, br, or call: store and br instructions have no destination register, and a call instruction creates a new stack frame, so all of these end the propagation of the data). Every instruction encountered during the iteration is added to the set, finally forming the propagation path of each instruction, and the dependent features of the instruction, such as the number of masking instructions, the number of address-calculation instructions, and the types of operands, are extracted from the path through the LLVM compiler framework. The construction process of the data set is shown in Fig. 3.
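The propagation-path construction described above amounts to a graph walk over def-use edges that stops at the terminating opcodes. The sketch below makes that concrete; the graph encoding (opcode plus user list per instruction id) is an illustrative assumption, since a real pass would walk the users of each llvm::Value inside the compiler.

```python
# Minimal sketch of propagation-path construction: a breadth-first walk over
# a def-use graph that records every instruction reached, but does not expand
# past instructions that end data propagation (store, br, call).
from collections import deque

TERMINATORS = {"store", "br", "call"}

def propagation_path(start, instrs):
    """instrs: {id: (opcode, [ids of instructions using this result])}.
    Returns the ids on the propagation path starting at `start`;
    terminators are included but not expanded."""
    path, queue, seen = [], deque([start]), {start}
    while queue:
        cur = queue.popleft()
        opcode, users = instrs[cur]
        path.append(cur)
        if opcode in TERMINATORS:
            continue  # propagation ends at store/br/call
        for u in users:
            if u not in seen:
                seen.add(u)
                queue.append(u)
    return path
```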
Step 2: Feature selection
After feature extraction, feature selection is applied to the instruction SDC vulnerability features: too many redundant or useless extracted features degrade classifier performance. Univariate feature selection is a statistics-based method independent of the classification algorithm. For a classification task, it performs an analysis of variance (ANOVA) between each feature and the labels of the data set. First, the null hypothesis is set up, namely that the feature variable and the label are unrelated, and a significance level is chosen (by default usually 0.05). Then, the total sum of squares and the between-group and within-group sums of squares are computed from the sample data set, and the F-value is computed from the between-group and within-group mean squares. Finally, the P-value corresponding to the obtained F-value is found from the probability density function of the corresponding F distribution; when the P-value is below the chosen significance level, the null hypothesis that the feature variable and the label are unrelated can be rejected, i.e., the feature is important to the label. Moreover, the smaller the P-value, the more important the feature, so the log function in the following formula, together with a mapping function, turns each feature's P-value into an importance score mapped to [0, 1], and the features are screened according to this importance score.
The importance scores of the features after feature selection are shown in Fig. 2.
score′i = −log10(P-valuei)
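The scoring and the mapping to [0, 1] can be sketched as follows. Min-max normalization is an assumption consistent with "map to [0, 1]" in the text; the patent does not name the mapping function.

```python
# Sketch of the importance-score mapping: score'_i = -log10(p_i),
# then min-max normalization onto [0, 1] (the normalization choice is
# an illustrative assumption).
import math

def importance_scores(p_values):
    raw = [-math.log10(p) for p in p_values]
    lo, hi = min(raw), max(raw)
    if hi == lo:
        return [1.0 for _ in raw]
    return [(s - lo) / (hi - lo) for s in raw]
```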
Step 3: Training the classification model
After different programs are converted to LLVM intermediate code, the feature data of each instruction are extracted, yielding data for static instruction sequences; the long short-term memory network (LSTM) has a great advantage on sequence-data problems. The LSTM has many model parameters; different parameter settings yield different models and influence the training result, so different models are configured according to the four most critical parameters: the number of network layers, the number of cells, the dropout rate, and the learning rate, forming a candidate model set. The obtained data set is used to train each model in the candidate set and the accuracy of each model on the test set is recorded; finally, the model with the best accuracy is selected from the candidates as the prediction model.
For the data set D = {X1, X2, ..., Xi, ..., Xd} obtained after feature-selection processing, where each sample is Xi = {x1, x2, ..., xn, y}, the training set and test set are constructed at a ratio of 5:1. Before training, the data set is divided along the time dimension into k equal parts of length timestep, with timestep × k = d, and training proceeds as follows:
Step 1. Configure a set of different models M = {m1, m2, ..., mg} according to the four parameters: number of network layers, number of cells, dropout rate, and learning rate.
Step 2. Take one model from the model set and initialize its network parameters.
Step 3. Feed timestep training samples into the network, take the output of the last time step as the output of the hidden layer, use it as the input of the fully connected layer, and output the final classification result through the softmax function. Finally, pass the output to the cross-entropy loss function to compute the loss.
Step 4. If the loss has not converged, repeat step 3, iteratively updating the network parameters according to the learning rate until the loss converges.
Step 5. Apply the converged model to the test set and record its accuracy; if unused models remain in the model set M, go to step 2.
Step 6. Select the model with the highest accuracy from the model set M as the optimal model.
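The candidate-set construction and final selection (steps 1, 5 and 6) can be sketched as a grid search harness. The parameter grids are illustrative, and train_fn is a stub standing in for real LSTM training and test-set evaluation; it is not the patent's training code.

```python
# Sketch of the model-selection loop: enumerate LSTM hyper-parameter
# candidates (layers, cells, dropout, learning rate), evaluate each one,
# and keep the candidate with the highest test-set accuracy.
from itertools import product

def candidate_models(layers, cells, dropouts, lrs):
    """Cartesian product of the four parameter grids (step 1)."""
    return [dict(layers=l, cells=c, dropout=d, lr=r)
            for l, c, d, r in product(layers, cells, dropouts, lrs)]

def select_best(candidates, train_fn):
    """train_fn(params) -> test-set accuracy; returns the best candidate
    (steps 5 and 6)."""
    return max(candidates, key=train_fn)
```

In a real setup, train_fn would build and train the LSTM with the given parameters and return its measured accuracy.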
The flow chart of prediction-model training is shown in Fig. 4.

Claims (6)

1. An instruction SDC vulnerability prediction method based on a long short-term memory network, characterized in that:
1) the instructions targeted by the method are the language-independent LLVM intermediate-code instructions of the program;
2) the SDC vulnerability of an instruction is defined, and the SDC vulnerability of every instruction in the program is obtained with the LLVM-based Fault Injection (LLFI) tool;
3) features relevant to instruction SDC vulnerability are extracted from the intermediate-code instruction itself and from its propagation path;
4) after instruction-feature extraction, feature selection is applied to the SDC vulnerability features of the instructions;
5) the method is the first application of the long short-term memory network to instruction SDC vulnerability classification prediction.
2. The instruction SDC vulnerability prediction method based on a long short-term memory network of claim 1, characterized in that the instructions targeted by the method are the language-independent LLVM intermediate-code instructions of the program. Research on program SDC vulnerability detection has targeted program source code and also program assembly code; however, whether for source code or assembly code, their types are too many and too complex, which greatly complicates SDC detection. With the help of the LLVM compiler, any language can be converted to common LLVM intermediate code, which brings great convenience to SDC detection. Compared with SDC vulnerability prediction at the basic-block and function level, instruction-level SDC vulnerability prediction obtains the program's vulnerability information at a finer granularity.
3. The instruction SDC vulnerability prediction method based on a long short-term memory network of claim 1, characterized in that the SDC vulnerability of an instruction is defined and obtained for every instruction in the program by LLFI fault injection. For each LLVM intermediate-code instruction Ii in the program, its SDC vulnerability is the mean of the SDC vulnerabilities obtained over the injections into each bit of the instruction's destination register, defined as follows (reconstructed from the definitions below):
SDC(Ii) = Si / (Ti · Wi)
where Wi is the bit width of instruction Ii's destination register, Si is the number of SDC errors that occur after fault injection with the LLFI tool, and Ti is the number of fault injections performed per bit.
4. The instruction SDC vulnerability prediction method based on a long short-term memory network of claim 1, characterized in that features relevant to instruction SDC vulnerability are extracted from the intermediate-code instruction itself and from its propagation path. Intrinsic features of an instruction, such as the size of the basic block containing the instruction, the instruction type, and the dynamic execution count, reflect the instruction's SDC vulnerability to some extent. Features on the instruction's propagation path are called dependent features; on the path of propagation they may mask the SDC error produced by the instruction or cause a more serious error. They include the number of masking instructions, operand types, the number of address-calculation instructions, the operand types of address-calculation instructions, and so on.
5. The instruction SDC vulnerability prediction method based on a long short-term memory network of claim 1, characterized in that after instruction-feature extraction, feature selection is applied to the SDC vulnerability features of the instructions: too many redundant or useless extracted features degrade classifier performance. Univariate feature selection is applied to the data set obtained by feature extraction: an analysis of variance (ANOVA) is performed on the data set, the relationship between each feature and the class label is measured by the statistic, the F-value of each feature is computed and its P-value obtained from it, and the importance score of each feature is then computed by the following formula and mapped to [0, 1]:
score′i = −log10(P-valuei)
where the F-value is the ratio of the between-group mean square to the within-group mean square and follows the F distribution, and the P-value, the parameter for deciding the relevance between feature and label, is obtained by consulting the F-distribution table; when it is below the significance level, the feature is considered important to the label, and the smaller the P-value, the more important the feature.
6. The instruction SDC vulnerability prediction method based on a long short-term memory network of claim 1, characterized in that the method is the first application of the long short-term memory network to instruction SDC vulnerability classification prediction. Instruction SDC vulnerability prediction methods proposed in recent years have been based mainly on traditional machine-learning methods, including support vector machines, support vector regression, and classification and regression trees; an instruction SDC vulnerability prediction scheme based on the long short-term memory network from deep learning is proposed here for the first time. For the data set D = {X1, X2, ..., Xi, ..., Xd} obtained after feature-selection processing, where each sample is Xi = {x1, x2, ..., xn, y}, the training set and test set are constructed at a ratio of 5:1; before training, the data set is divided along the time dimension into k equal parts of length timestep, with timestep × k = d, and training proceeds as follows:
Step 1. Configure a set of different models M = {m1, m2, ..., mg} according to the four parameters: number of network layers, number of cells, dropout rate, and learning rate.
Step 2. Take one model from the model set and initialize its network parameters.
Step 3. Feed timestep training samples into the network, take the output of the last time step as the output of the hidden layer, use it as the input of the fully connected layer, and output the final classification result through the softmax function. Finally, pass the output to the cross-entropy loss function to compute the loss.
Step 4. If the loss has not converged, repeat step 3, iteratively updating the network parameters according to the learning rate until the loss converges.
Step 5. Apply the converged model to the test set and record its accuracy; if unused models remain in the model set M, go to step 2.
Step 6. Select the model with the highest accuracy from the model set M as the optimal model.
CN201810893739.8A 2018-08-03 2018-08-03 Instruction SDC vulnerability prediction method based on long short-term memory network Pending CN109063775A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810893739.8A CN109063775A (en) 2018-08-03 2018-08-03 Instruction SDC vulnerability prediction method based on long short-term memory network


Publications (1)

Publication Number Publication Date
CN109063775A true CN109063775A (en) 2018-12-21

Family

ID=64678709

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810893739.8A Pending CN109063775A (en) Instruction SDC vulnerability prediction method based on long short-term memory network

Country Status (1)

Country Link
CN (1) CN109063775A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103984632A (en) * 2014-05-29 2014-08-13 东南大学 SDC vulnerable instruction recognition method based on error propagation analysis

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FANG WU et al.: "Vulnerability Detection with Deep Learning", 2017 3rd IEEE International Conference on Computer and Communications *
ZHANG Qianwen et al.: "Machine-learning-based instruction SDC vulnerability analysis method", Journal of Chinese Computer Systems *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111159011A (en) * 2019-12-09 2020-05-15 南京航空航天大学 Instruction vulnerability prediction method and system based on deep random forest
CN111159011B (en) * 2019-12-09 2022-05-20 南京航空航天大学 Instruction vulnerability prediction method and system based on deep random forest
CN112765609A (en) * 2020-12-31 2021-05-07 南京航空航天大学 Multi-bit SDC fragile instruction identification method based on single-class support vector machine
CN112765609B (en) * 2020-12-31 2022-06-07 南京航空航天大学 Multi-bit SDC fragile instruction identification method based on single-class support vector machine
CN112965854A (en) * 2021-04-16 2021-06-15 吉林大学 Method, system and equipment for improving reliability of convolutional neural network
CN112965854B (en) * 2021-04-16 2022-04-29 吉林大学 Method, system and equipment for improving reliability of convolutional neural network
CN113610154A (en) * 2021-08-06 2021-11-05 吉林大学 GPGPU program SDC error detection method and device
CN113610154B (en) * 2021-08-06 2023-12-29 吉林大学 GPGPU program SDC error detection method and device

Similar Documents

Publication Publication Date Title
Li et al. Incremental learning imbalanced data streams with concept drift: The dynamic updated ensemble algorithm
CN109063775A (en) Instruction SDC vulnerability prediction method based on long short-term memory network
CN112579477A (en) Defect detection method, device and storage medium
US10698697B2 (en) Adaptive routing to avoid non-repairable memory and logic defects on automata processor
US10521210B1 (en) Programming language conversion
CN114238100A (en) Java vulnerability detection and positioning method based on GGNN and layered attention network
CN101404033A (en) Automatic generation method and system for noumenon hierarchical structure
CN108804558A (en) A kind of defect report automatic classification method based on semantic model
CN105893876A (en) Chip hardware Trojan horse detection method and system
CN105095756A (en) Method and device for detecting portable document format document
US20170270424A1 (en) Method of Estimating Program Speed-Up in Highly Parallel Architectures Using Static Analysis
CN111400713B (en) Malicious software population classification method based on operation code adjacency graph characteristics
CN110955892B (en) Hardware Trojan horse detection method based on machine learning and circuit behavior level characteristics
Yu et al. Deep learning-based hardware Trojan detection with block-based netlist information extraction
CN116361788A (en) Binary software vulnerability prediction method based on machine learning
Chaudhuri et al. Functional criticality analysis of structural faults in AI accelerators
CN112035345A (en) Mixed depth defect prediction method based on code segment analysis
Hamerly et al. Using machine learning to guide architecture simulation.
CN101487876B (en) Optimization method and apparatus for verification vectors
CN111738290B (en) Image detection method, model construction and training method, device, equipment and medium
CN113626034A (en) Defect prediction method based on combination of traditional features and semantic features
CN115599698B (en) Software defect prediction method and system based on class association rule
Gunasekara et al. On natural language processing applications for military dialect classification
Hashemi et al. Graph centrality algorithms for hardware trojan detection at gate-level netlists
Brzezinski et al. Structural XML classification in concept drifting data streams

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20181221