CN114626106A - Hardware Trojan horse detection method based on cascade structure characteristics - Google Patents

Hardware Trojan horse detection method based on cascade structure characteristics

Info

Publication number
CN114626106A
Authority
CN
China
Prior art keywords
gate
trojan horse
neural network
cascade structure
feature vector
Prior art date
Legal status
Pending
Application number
CN202210159164.3A
Other languages
Chinese (zh)
Inventor
刘鸿瑾
陈嘉伟
张绍林
施博
李天文
李宾
王巧凤
王红霞
白星
李亚妮
李康
史江义
Current Assignee
Beijing Sunwise Space Technology Ltd
Original Assignee
Beijing Sunwise Space Technology Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sunwise Space Technology Ltd filed Critical Beijing Sunwise Space Technology Ltd
Priority to CN202210159164.3A priority Critical patent/CN114626106A/en
Publication of CN114626106A publication Critical patent/CN114626106A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/70 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F21/71 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information
    • G06F21/76 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information in application-specific integrated circuits [ASIC] or field-programmable devices, e.g. field-programmable gate arrays [FPGA] or programmable logic devices [PLD]
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/243 Classification techniques relating to the number of classes
    • G06F18/24323 Tree-organised classifiers
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G06N3/08 Learning methods
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/01 Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound

Abstract

The invention discloses a hardware Trojan detection method based on cascade structure features, relates to the technical field of hardware security, and solves the problems of the high misjudgment rate and cumbersome workflow of existing hardware Trojan detection techniques. The method comprises the following steps: step S1: extracting a feature vector of the cascade structure features of the hardware Trojan circuit from a Verilog-based gate-level netlist file; step S2: constructing and training a neural network model; step S3: using the trained neural network model to perform Trojan detection. The method has the advantage of high Trojan detection accuracy on hardware circuits.

Description

Hardware Trojan horse detection method based on cascade structure characteristics
Technical Field
The invention relates to the technical field of hardware security, and in particular to hardware Trojan detection methods based on cascade structure features.
Background
Recently, with the globalization of the integrated circuit industry, hardware security has become an urgent issue. Typically, to reduce development costs and shorten time to market, developers use intellectual property (IP) cores and EDA tools provided by third parties. However, the services provided by third-party vendors are not necessarily trustworthy: a malicious vendor may implant a hardware Trojan in its IP. The presence of a hardware Trojan can cause an integrated circuit chip to function incorrectly, suffer reduced performance, leak confidential information, or even be damaged. A hardware Trojan typically consists of a trigger and a payload; when the trigger is activated, the payload performs a malicious function. To evade detection, hardware Trojans are generally designed at nodes with low trigger probability, which makes them difficult to detect in a circuit.
In recent years, machine learning has been widely applied to Trojan detection on gate-level netlists; commonly used models include SVM, K-means and random forests. Although machine learning has greatly improved the accuracy of gate-level netlist Trojan detection, the miss rate and misjudgment rate of traditional machine learning models remain high. The problems and defects of the prior art are as follows: the miss rate or misjudgment rate of existing hardware Trojan detection techniques is too high, i.e. when all Trojan circuits are detected, many normal circuits are misjudged, and when all normal circuits are judged to be normal, many Trojan circuits are missed; in addition, existing neural-network-based Trojan detection methods require a large number of features, which makes feature extraction cumbersome.
Solving the problems and defects of the prior art is difficult, because a hardware Trojan circuit is extremely small in scale and has an extremely low trigger rate: it evades traditional detection methods by using nodes with low trigger probability, and exploits its small scale to hide within the normal circuit. In addition, machine learning methods place high demands on the features: training the same model with different features can produce very different results, so the features must be chosen to match the characteristics of both the model and the hardware Trojan.
Disclosure of Invention
The invention aims to solve the problems of the high misjudgment rate and cumbersome workflow of existing hardware Trojan detection techniques. To solve this technical problem, the invention provides a hardware Trojan detection method based on cascade structure features.
The invention specifically adopts the following technical scheme for realizing the purpose:
a hardware Trojan horse detection method based on cascade structure characteristics comprises the following steps:
step S1: extracting a characteristic vector of a cascade structure of a hardware Trojan horse circuit from a gate-level netlist file based on Verilog;
step S2: constructing and training a neural network model;
step S3: and (5) using the trained neural network model to perform Trojan detection.
Preferably, in step S1, the extracting the feature vector of the cascade structure of the hardware trojan horse circuit includes:
classifying gate-level modules of the netlist according to functions, and setting the size of the feature vector according to the number of classified types;
and traversing through a depth-first search algorithm to obtain the feature vector of the cascade structure.
Preferably, the classifying the gate-level modules of the netlist according to functions and setting the size of the feature vector according to the number of classified types includes:
classifying the gate-level modules according to the functional information of the gates to obtain the classification types of the gate-level modules;
combining the classifications in pairs to obtain a static structure characteristic type;
and setting the size of the feature vector according to the number of the feature types of the static structure.
Preferably, the method for obtaining the feature vector of the cascade structure by traversal with a depth-first search algorithm includes:
step S401: initializing, selecting a target gate, setting it as the visited gate, and setting the depth to the initial value 0;
step S402: moving to an unvisited adjacent gate of the visited gate, which becomes the new visited gate; increasing the depth by 1, recording the connection type, and adding 1 to the count at the corresponding position of the connection-type vector;
step S403: judging whether the visited gate has an unvisited adjacent gate; if so, returning to step S402; if not, judging that the maximum depth has been reached, returning to the previous gate as the visited gate, and decreasing the depth by 1; then judging whether this visited gate has an unvisited adjacent gate: if so, returning to step S402; if not, ending the process to obtain the feature vector of the target gate.
Preferably, in the step S2, the neural network model includes a fully-connected neural network and a decision tree;
the fully-connected neural network comprises an input layer, a hidden layer and an output layer; the input layer is used for inputting the feature vectors into a neural network; the hidden layer and the output layer are used for acquiring and outputting a calculation result;
and the decision tree is used for judging whether the gate-level netlist contains the Trojan horse or not according to the calculation result.
Preferably, in step S3, performing Trojan detection using the trained neural network model includes:
performing feature extraction on the netlist to be processed to obtain the feature vectors of the whole netlist;
putting the feature vectors into the trained neural network model one by one to obtain the model output;
and checking the output of the model: if the output is 1, it is judged that a Trojan is contained; if the output is 0, the circuit is judged to be normal.
The invention has the following beneficial effects:
the performance of hardware Trojan horse detection is improved, and the safety of integrated circuit chip design is enhanced; a new idea is provided for the application of the neural network in the field of hardware Trojan horse detection; the feature extraction method is very simple and can be realized only by the most basic depth-first search, the feature vector contains the gate-level connection structure features of the circuit near the target gate, the gate-level fan-in and fan-out information is abandoned, but the fan-in and fan-out information is contained in the feature vector in a phase-change manner due to the small search depth; the characteristic method of the invention omits the redundant information of other characteristic methods, so that a better neural network model is obtained under the condition that the input characteristic data is smaller; the feature extraction method can be used for extracting and detecting features of any gate-level module of the gate-level netlist, so that the module position of the Trojan horse circuit in the gate-level netlist can be determined.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a symbolic illustration of a gate level module;
FIG. 3 is a schematic diagram of the Trojan trigger circuit of embodiment 1;
fig. 4 is a schematic diagram of the neural network of embodiment 1.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
As shown in FIG. 1, this embodiment provides a hardware Trojan detection method based on cascade structure features, including the following steps:
step S1: extracting a feature vector of the cascade structure features of the hardware Trojan circuit from a Verilog-based gate-level netlist file;
In this step, the local cascade-structure features of the gate-level netlist are extracted and converted into feature-vector form, which facilitates training and learning by the neural network.
step S2: constructing and training a neural network model;
Through this step, a neural network model is constructed and then trained on netlists containing known Trojans, thereby determining each weight in the model so that it can distinguish the features in the hardware Trojan feature vectors; the trained model can then detect hardware Trojans in unknown netlists.
step S3: using the trained neural network model to perform Trojan detection.
In this embodiment, preferably, in the step S1, the extracting the feature vector of the cascade structure of the hardware trojan horse circuit includes:
classifying gate-level modules of the netlist according to functions, and setting the size of the feature vector according to the number of classified types;
and traversing through a depth-first search algorithm to obtain the feature vector of the cascade structure.
As a preferred scheme of this embodiment, the classifying the gate-level modules of the netlist according to functions and setting the size of the feature vector according to the number of types of classification includes:
classifying the gate-level modules according to the functional information of the gates to obtain the classification types of the gate-level modules;
combining the classifications in pairs to obtain a static structure characteristic type;
and setting the size of the feature vector according to the number of the feature types of the static structure.
In the present embodiment, the gate-level module types are classified into 14 types according to the functional information of the gates; see Table 1 for details, and FIG. 2 for the symbolic representation of the commonly used gate-level modules;
TABLE 1 netlist gate level module functional classification
[Table 1 is provided only as an image in the original publication and is not reproduced here.]
After this classification, the static structure feature types are formed by combining the classes of Table 1 in pairs, for example NOR-NOR, AND-NOR and AND-OR, giving 14 × 14 = 196 static structure feature types in total;
the size of the feature vector is set to 196, which represents the number of elements in the feature vector.
The feature vector of the cascade structure is then obtained by traversal with the depth-first search algorithm, as follows (a code sketch of this traversal is given after Table 2):
step S401: initializing, selecting a target gate, setting it as the visited gate, and setting the depth to the initial value 0;
step S402: moving to an unvisited adjacent gate of the visited gate, which becomes the new visited gate; increasing the depth by 1, recording the connection type, and adding 1 to the count at the corresponding position of the connection-type vector;
step S403: judging whether the visited gate has an unvisited adjacent gate; if so, returning to step S402; if not, judging that the maximum depth has been reached, returning to the previous gate as the visited gate, and decreasing the depth by 1; then judging whether this visited gate has an unvisited adjacent gate: if so, returning to step S402; if not, ending the process to obtain the feature vector of the target gate.
Take the trigger circuit of the hardware Trojan in s35932-T100 as an example, as shown in FIG. 3. Assume the netlist gate-level modules are classified into only four types, AND, NOR, INV and OTHER; the size of the feature vector is then set to 16. The maximum depth of the depth-first search algorithm is set to 2, a NOR gate in the circuit diagram is taken as the target gate, and its feature vector is extracted as shown in Table 2.
Let the feature vector of a single sample be x = [x1, x2, x3, …, xn], where xi is the count of a certain feature type in the sample and n is the number of feature types, i.e. the size of the feature vector. The feature vector of Table 2 can then be expressed as x = [0, 8, 0, 0, 2, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0].
TABLE 2 feature vectors
Feature type Count
AND-AND 0
AND-NOR 8
AND-INV 0
AND-OTHER 0
NOR-AND 2
NOR-NOR 0
NOR-INV 0
NOR-OTHER 0
INV-AND 1
INV-NOR 0
INV-INV 0
INV-OTHER 0
OTHER-AND 0
OTHER-NOR 0
OTHER-INV 0
OTHER-OTHER 0
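A minimal sketch of the traversal described in steps S401 to S403, which would produce a count vector like the one in Table 2, is shown below. It assumes the netlist has already been parsed into an adjacency map `neighbours` from each gate to its adjacent gates, a `gate_class` lookup and the `TYPE_INDEX` mapping from the previous snippet; these helpers are assumptions for illustration, not part of the patent:

```python
def extract_feature_vector(target_gate, neighbours, gate_class, type_index, max_depth=2):
    """Count the connection types reachable from target_gate within max_depth hops (steps S401-S403)."""
    vector = [0] * len(type_index)
    visited = {target_gate}                      # step S401: the target gate is the first visited gate

    def visit(gate, depth):
        if depth == max_depth:                   # step S403: maximum depth reached, backtrack
            return
        for nxt in neighbours.get(gate, []):     # adjacent gates of the currently visited gate
            if nxt in visited:
                continue
            visited.add(nxt)                     # step S402: move to an unvisited adjacent gate
            key = f"{gate_class(gate)}-{gate_class(nxt)}"
            vector[type_index[key]] += 1         # count this connection type
            visit(nxt, depth + 1)                # depth + 1; backtracking reduces it again

    visit(target_gate, 0)
    return vector
```

With the four-class example above (AND, NOR, INV, OTHER), `type_index` would map the 16 pair names of Table 2 to positions 0 to 15.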
Referring to FIG. 4, as a preferred solution, in step S2 the neural network model is constructed and trained as follows:
The neural network model includes a fully connected neural network and a decision tree.
The fully connected neural network comprises an input layer, a hidden layer and an output layer: the input layer feeds the feature vectors into the network, while the hidden layer and the output layer compute and output the result. The decision tree judges from this result whether the gate-level netlist contains a Trojan. The forward propagation is computed as follows:
z_j = Σ_i ( x_i · W_ij^(1) ) + b_j^(1)
y_j = f(z_j)
c = Σ_j ( y_j · W_j^(2) ) + b^(2) , j ∈ (1, 2, 3, …, n)
y_o = Sigmoid(c)
where W^(1) is the weight of the hidden layer, b^(1) is the bias of the hidden layer, f is the activation function of the hidden layer, W_j^(2) is the weight of the output layer, and b^(2) is the bias of the output layer. The training of these values is achieved by back-propagation.
Since the Sigmoid function is
Sigmoid(x) = 1 / (1 + e^(-x)),
its output lies in (0, 1), so the calculation result y_o can be taken as the probability that the input sample is a Trojan circuit, i.e. the closer y_o is to 1, the greater the probability that the sample is a Trojan.
After the probability is obtained, a threshold is needed to classify the sample; a decision tree is used for this purpose. y_o is input into the decision tree, which classifies it and produces the output y. y can only be 0 or 1: 0 indicates that the gate belongs to a normal circuit, and 1 indicates that the gate belongs to a Trojan circuit.
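A minimal sketch of such a model is shown below, using the Keras library (mentioned later in this embodiment) for the fully connected network and scikit-learn for the decision tree; the hidden-layer width of 32 and the ReLU activation are assumptions, since the patent does not state them:

```python
from tensorflow import keras
from sklearn.tree import DecisionTreeClassifier

FEATURE_SIZE = 196   # number of connection types (14 x 14)

# Fully connected network: input layer -> hidden layer -> sigmoid output y_o in (0, 1).
net = keras.Sequential([
    keras.Input(shape=(FEATURE_SIZE,)),
    keras.layers.Dense(32, activation="relu"),     # hidden layer (width and activation assumed)
    keras.layers.Dense(1, activation="sigmoid"),   # output y_o, interpreted as the Trojan probability
])
net.compile(optimizer="adam", loss="binary_crossentropy")

# Decision tree that turns y_o into the final 0/1 decision, i.e. it learns the threshold.
tree = DecisionTreeClassifier(max_depth=1)
```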
The model training process of this embodiment may be as follows:
Step 1: take several netlists with known Trojans, i.e. netlists whose Trojan positions are known, and extract features from them to obtain samples, labelling Trojan gates 1 and normal gates 0; then balance the samples so that the proportions of Trojan samples and normal samples are approximately equal.
Step 2: train the fully connected network with the extracted samples, so that it outputs a probability close to 1 when it encounters a Trojan sample.
Step 3: form new samples from the probabilities output by the trained fully connected network together with the corresponding labels, and train the decision tree with these new samples; this learns a suitable threshold that distinguishes Trojan samples from normal samples. The output of the decision tree is 0 or 1: 0 denotes a normal circuit and 1 denotes a Trojan circuit.
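A sketch of this three-step procedure under the same assumptions is given below; `X` is the array of feature vectors extracted from the known-Trojan netlists, `y` the corresponding 0/1 labels, and `balance` a hypothetical resampling helper standing in for the sample-balancing step:

```python
def train_model(net, tree, X, y, epochs=100):
    """Step 1: balance the samples; step 2: fit the network; step 3: fit the tree on its outputs."""
    X_bal, y_bal = balance(X, y)                      # hypothetical over-/under-sampling helper
    net.fit(X_bal, y_bal, epochs=epochs, verbose=0)   # step 2: train the fully connected network
    probs = net.predict(X_bal, verbose=0)             # probabilities y_o for the training samples
    tree.fit(probs, y_bal)                            # step 3: learn the threshold on y_o
    return net, tree
```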
Finally, in step S3, performing Trojan detection using the trained neural network model may include:
performing feature extraction on the netlist to be processed to obtain the feature vectors of the whole netlist;
putting the feature vectors into the trained neural network model one by one to obtain the model output;
and checking the output of the model: if the output is 1, the gate is judged to belong to a Trojan circuit; if the output is 0, it is judged to be normal.
Taking the Trust-HUB benchmark circuits as an example, 15 benchmark circuits from Trust-HUB are used for testing; their information is shown in Table 3 below:
TABLE 3 Trust-HUB benchmark circuit information
Netlist name Normal circuit gates Trojan circuit gates
RS232-T1000 202 13
RS232-T1100 204 12
RS232-T1200 202 14
RS232-T1300 204 9
RS232-T1400 202 13
RS232-T1500 202 14
RS232-T1600 202 12
s15850-T100 2155 27
s35932-T100 5426 15
s35932-T200 5422 16
s35932-T300 5426 36
s38417-T100 5329 12
s38417-T200 5329 15
s38417-T300 5329 44
s38584-T100 6473 9
14 netlists are used as the training set and 1 netlist as the test set, and the true positive rate (TPR) and the true negative rate (TNR) are used as the evaluation indices.
the classification results can be classified into true negative TN, false positive FP, false negative FN and true positive TP. TN is the number of normal circuits correctly identified as normal circuits. FP is the number of normal circuits that are misidentified as trojan circuits. FN is the number of trojan circuits that are misidentified as normal circuits. TP is the number of correctly identified Trojan horse circuits. The calculation formulas of the true positive rate TPR and the true negative rate TNR are respectively as follows:
TPR=TP/(TP+FN);
TNR=TN/(TN+FP)。
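For example, both rates can be computed directly from the per-gate labels and predictions with a small helper (a sketch, not taken from the patent):

```python
def tpr_tnr(y_true, y_pred):
    """True positive rate and true negative rate for 0/1 labels (1 = Trojan circuit)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```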
this example uses the Keras library of python to construct a model. The model is trained using 14 of the netlists as a training set. The trained model is tested with the remaining 1 netlist as a test set. The final experimental results are shown in table 4:
TABLE 4 results of the experiment
Netlist name TPR (%) TNR (%)
RS232-T1000 100 99.5
RS232-T1100 100 100
RS232-T1200 100 100
RS232-T1300 100 99.5
RS232-T1400 100 99.5
RS232-T1500 100 100
RS232-T1600 100 99
s15850-T100 100 91.1
s35932-T100 93.3 100
s35932-T200 87.5 100
s35932-T300 94.4 100
s38417-T100 83.3 100
s38417-T200 73.3 100
s38417-T300 86.4 100
s38584-T100 66.7 95.3
Mean value of 92.3 98.9
As can be seen from Table 4, the TPR of the hardware Trojan detection method of this embodiment ranges from 66.7% to 100%, and the TNR from 91.1% to 100%; the mean TPR is 92.3% and the mean TNR is 98.9%. With repeated training of the neural network model, the method exhibits good hardware Trojan detection capability.

Claims (6)

1. A hardware Trojan horse detection method based on cascade structure characteristics is characterized by comprising the following steps:
step S1: extracting a characteristic vector of the cascade structure characteristic of the hardware Trojan horse circuit from a gate-level netlist file based on Verilog;
step S2: constructing and training a neural network model;
step S3: using the trained neural network model to perform Trojan detection.
2. The method according to claim 1, wherein in step S1, the extracting the feature vector of the cascaded structure of the hardware Trojan horse circuit comprises:
classifying gate-level modules of the netlist according to functions, and setting the size of the feature vector according to the number of classified types;
and traversing through a depth-first search algorithm to obtain the feature vector of the cascade structure.
3. The hardware Trojan horse detection method based on the cascade structure features as claimed in claim 2, wherein the step of classifying the gate-level modules of the netlist according to functions and setting the size of the feature vectors according to the number of classified types comprises:
classifying the gate-level modules according to the functional information of the gates to obtain the classification types of the gate-level modules;
combining the classification types in pairs to obtain the static structure feature types;
and setting the size of the feature vector according to the number of the feature types of the static structure.
4. The hardware Trojan horse detection method based on the cascade structure features as claimed in claim 2, wherein the method for obtaining the feature vector of the cascade structure by traversing through a depth-first search algorithm comprises:
step S401: initializing, selecting a target gate, setting it as the visited gate, and setting the depth to the initial value 0;
step S402: moving to an unvisited adjacent gate of the visited gate, which becomes the new visited gate; increasing the depth by 1, recording the connection type, and adding 1 to the count at the corresponding position of the connection-type vector;
step S403: judging whether the visited gate has an unvisited adjacent gate; if so, returning to step S402; if not, judging that the maximum depth has been reached, returning to the previous gate as the visited gate, and decreasing the depth by 1; then judging whether this visited gate has an unvisited adjacent gate: if so, returning to step S402; if not, ending the process to obtain the feature vector of the target gate.
5. The hardware Trojan horse detection method based on cascade structure characteristics as claimed in claim 1, wherein in the step S2, the neural network model comprises a fully connected neural network and a decision tree;
the fully-connected neural network comprises an input layer, a hidden layer and an output layer; the input layer is used for inputting the feature vectors into a neural network; the hidden layer and the output layer are used for acquiring and outputting a calculation result;
and the decision tree is used for judging whether the gate-level netlist contains the Trojan horse or not according to the calculation result.
6. The hardware Trojan horse detection method based on cascade structure features of claim 1, wherein in step S3, performing Trojan detection using the trained neural network model comprises:
performing feature extraction on the netlist to be processed to obtain the feature vectors of the whole netlist;
putting the feature vectors into the trained neural network model one by one to obtain the model output;
and checking the output of the model: if the output is 1, it is judged that a Trojan is contained; if the output is 0, the circuit is judged to be normal.
CN202210159164.3A 2022-02-21 2022-02-21 Hardware Trojan horse detection method based on cascade structure characteristics Pending CN114626106A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210159164.3A CN114626106A (en) 2022-02-21 2022-02-21 Hardware Trojan horse detection method based on cascade structure characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210159164.3A CN114626106A (en) 2022-02-21 2022-02-21 Hardware Trojan horse detection method based on cascade structure characteristics

Publications (1)

Publication Number Publication Date
CN114626106A true CN114626106A (en) 2022-06-14

Family

ID=81900993

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210159164.3A Pending CN114626106A (en) 2022-02-21 2022-02-21 Hardware Trojan horse detection method based on cascade structure characteristics

Country Status (1)

Country Link
CN (1) CN114626106A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115984633A (en) * 2023-03-20 2023-04-18 南昌大学 Gate-level circuit component identification method, system, storage medium and equipment


Similar Documents

Publication Publication Date Title
WO2020073664A1 (en) Anaphora resolution method and electronic device and computer-readable storage medium
CN106709349B (en) A kind of malicious code classification method based on various dimensions behavioural characteristic
CN109359439A (en) Software detecting method, device, equipment and storage medium
CN103618744B (en) Intrusion detection method based on fast k-nearest neighbor (KNN) algorithm
CN111581092B (en) Simulation test data generation method, computer equipment and storage medium
CN109388944A (en) A kind of intrusion detection method based on KPCA and ELM
CN107590313A (en) Optimal inspection vector generation method based on genetic algorithm and analysis of variance
CN109740348B (en) Hardware Trojan horse positioning method based on machine learning
CN109359551A (en) A kind of nude picture detection method and system based on machine learning
CN109960727A (en) For the individual privacy information automatic testing method and system of non-structured text
Chowdhury et al. ReIGNN: State register identification using graph neural networks for circuit reverse engineering
Brewer et al. The impact of proton-induced single events on image classification in a neuromorphic computing architecture
Yu et al. Deep learning-based hardware trojan detection with block-based netlist information extraction
CN114626106A (en) Hardware Trojan horse detection method based on cascade structure characteristics
CN112380534B (en) Hardware Trojan horse detection method based on circuit structure analysis
Tebyanian et al. SC-COTD: Hardware trojan detection based on sequential/combinational testability features using ensemble classifier
Hasegawa et al. Empirical evaluation and optimization of hardware-trojan classification for gate-level netlists based on multi-layer neural networks
Hao Evaluating attribution methods using white-box LSTMs
CN111738290B (en) Image detection method, model construction and training method, device, equipment and medium
CN109858246B (en) Classification method for control signal type hardware trojans
CN110929301B (en) Hardware Trojan horse detection method based on lifting algorithm
CN116975881A (en) LLVM (LLVM) -based vulnerability fine-granularity positioning method
CN106991171A (en) Topic based on Intelligent campus information service platform finds method
CN113821840A (en) Bagging-based hardware Trojan detection method, medium and computer
CN113486347B (en) Deep learning hardware Trojan horse detection method based on semantic understanding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination