CN108880781A - A mask-free neural network attack method for masked encryption devices - Google Patents

A mask-free neural network attack method for masked encryption devices Download PDF

Info

Publication number
CN108880781A
CN108880781A CN201810614068.7A CN201810614068A
Authority
CN
China
Prior art keywords
training
attack
energy
neural network
trace
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810614068.7A
Other languages
Chinese (zh)
Inventor
王燚
吴震
杜之波
王敏
向春玲
黄洁
王恺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Tian Rui Xin An Technology Co Ltd
Chengdu Xinan Youlika Information Technology Co Ltd
Chengdu University of Information Technology
Original Assignee
Chengdu Tian Rui Xin An Technology Co Ltd
Chengdu Xinan Youlika Information Technology Co Ltd
Chengdu University of Information Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Tian Rui Xin An Technology Co Ltd, Chengdu Xinan Youlika Information Technology Co Ltd, Chengdu University of Information Technology filed Critical Chengdu Tian Rui Xin An Technology Co Ltd
Priority to CN201810614068.7A priority Critical patent/CN108880781A/en
Publication of CN108880781A publication Critical patent/CN108880781A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L 9/002 Countermeasures against attacks on cryptographic mechanisms
    • H04L 9/003 Countermeasures against attacks on cryptographic mechanisms for power analysis, e.g. differential power analysis [DPA] or simple power analysis [SPA]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 2209/00 Additional information or applications relating to cryptographic mechanisms or cryptographic arrangements for secret or secure communication H04L 9/00
    • H04L 2209/26 Testing cryptographic entity, e.g. testing integrity of encryption key or encryption algorithm

Abstract

The invention belongs to the field of cryptographic algorithm analysis and detection, and discloses a mask-free neural network attack method and system for masked encryption devices. Without needing to know the masks of the training device, a neural network template of the unmasked combined intermediate value of the attack target is trained, and the key of the masked encryption device is then attacked. Before the neural network is trained, a partial-feature PCA preprocessing is applied to the feature vectors of the training power traces: the first n features with the highest correlation coefficients are retained, PCA is applied to the remaining features, and the retained features are merged with the PCA principal-component features. During neural network training, alpha cross entropy is used as the loss function, so that the neural network template attains higher attack efficiency. In the attack phase, for a similar encryption device with an unknown key, the unmasked combined intermediate value is used as the attack target and the neural network template is used to compute the joint probability of each guessed key, thereby identifying the correct key of the encryption device.

Description

A mask-free neural network attack method for masked encryption devices
Technical field
The invention belongs to the technical field of cryptographic algorithm analysis and detection, and relates in particular to a mask-free neural network attack method for masked encryption devices.
Background technique
The basic method of most block cipher algorithms is to apply multiple rounds of non-linear and linear transformations to the plaintext. In each round, the output of the previous round (the plaintext, for the first round) is divided into multiple parts; each part is XORed with the corresponding sub-key of the round key and then passed through a non-linear transformation, and the results of the non-linear transformations are finally transformed linearly as a whole (row shifts and column mixing) to produce the round output. The intermediate operations of the encryption process, such as the sub-key XOR and the S-box substitution, leak energy consumption in the form of current, voltage, or electromagnetic radiation, and the leaked energy consumption has a certain linear relationship with the operand of the operation (called the intermediate value) or with its Hamming weight. An attacker can therefore identify the operand from the energy consumption, deduce the sub-key used by the device, and finally recover the master key from multiple sub-keys. Such attacks are known as power analysis attacks.
The masking countermeasure is a protection measure against power analysis of encryption devices. As mentioned above, power analysis exploits the linear dependence between the energy consumption leaked by certain intermediate operations of the encryption process and the operands of those operations (the intermediate values) to attack the key of the device. Masking XORs the operand of the intermediate operation with random masks, randomizing the leaked energy consumption so that it no longer correlates directly with the operand, thereby preventing DPA attacks. For example, DPA usually attacks the input or output operand of the SBOX operation. With d masks m1, ..., md the input operand of the SBOX is masked as x ⊕ m1 ⊕ ... ⊕ md, and with another d masks m1', ..., md' the output operand of the SBOX is masked as SBOX(x) ⊕ m1' ⊕ ... ⊕ md'.
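As a concrete illustration of Boolean masking of an S-box operation, here is a hedged Python sketch. The 4-bit S-box, the table-recomputation scheme, and all names are illustrative assumptions for this sketch, not details taken from the patent.

```python
import secrets

# Toy 4-bit S-box (hypothetical, for illustration only)
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def masked_sbox_lookup(x, d=2):
    """Mask the S-box input with d random masks, look it up through a
    remasked table, and return the output masked with d fresh masks,
    so the device never handles x or SBOX(x) in the clear."""
    in_masks = [secrets.randbelow(16) for _ in range(d)]
    out_masks = [secrets.randbelow(16) for _ in range(d)]
    mx = x
    for m in in_masks:
        mx ^= m                      # masked input x ^ m1 ^ ... ^ md
    # Recompute a masked lookup table for this mask set
    mtab = [0] * 16
    for i in range(16):
        u = i
        for m in in_masks:
            u ^= m                   # unmask the index
        v = SBOX[u]
        for m in out_masks:
            v ^= m                   # remask the output
        mtab[i] = v
    my = mtab[mx]                    # equals SBOX(x) ^ m1' ^ ... ^ md'
    return my, in_masks, out_masks

def unmask(my, out_masks):
    """Remove the output masks (only possible if the masks are known)."""
    for m in out_masks:
        my ^= m
    return my
```

The point of the sketch is that every value the device touches (mx, mtab entries, my) is statistically independent of x, which is what defeats first-order DPA.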
Current attacks on encryption algorithms protected by masking fall into two types: higher-order differential power analysis (higher-order DPA) and template attacks. The technique of the present invention belongs to the latter type. The advantage of higher-order DPA is that it does not need to know the masks of the device and can attack a masked encryption device directly; its drawbacks are that the attack requires many power traces and its computational complexity is very high. The advantage of the template attack is that only a few power traces are needed for a successful attack, so attack efficiency is high; its drawback is that its attack conditions are harsh and hard to satisfy, because the template attack requires the attack templates to be trained in advance. In the training phase, the attacker needs a fully controllable training device so as to know the random mask used for each training trace. In most cases, the attacker cannot have such a training device. Moreover, since the training device must output the masks it uses, the code used in the training phase necessarily differs from the encryption code used by the real device, which causes the templates obtained in the training phase to fail when the real device is attacked. The present invention removes the harsh requirement that the attacker fully control the training device during template training: it only requires a device with a known key and does not need knowledge of the random masks it uses, making template attacks on masked encryption devices far more practicable. And since the training device runs exactly the same encryption code as the attacked device, the validity of the templates is ensured. The prior art of higher-order DPA attacks and template attacks on masked encryption devices is briefly described below.
The differential power attack (DPA) against masked encryption devices is called a "higher-order DPA attack". Its principle is that, although the energy consumption leaked by the operation on a single masked intermediate value no longer has a linear dependence on that value, a certain linear dependence still exists between combinations of intermediate values (combined intermediate values, comb) and the corresponding combinations of their leaked energy consumptions (combined energy values, pre), i.e. ρ(comb, pre) ≠ 0, and the attack can proceed on that basis. For an encryption implementation masked with d masks, combined values of d intermediate values must be used (called a d-th-order DPA); the higher the order d, the more attack traces are needed. In a real attack, since the attacker cannot find the exact leakage positions of each intermediate value by statistical means, the energy consumptions at multiple arbitrary positions on the trace must be combined and tested, so the amount of computation is very large and takes considerable time. Although FFT-based methods can accelerate the computation of the combined energy values, they cannot fundamentally remedy the low efficiency of higher-order DPA attacks.
The template attack is another type of side-channel attack. The basic technique of template attacks is introduced first below, followed by the prior art of template attacks on masked encryption devices.
The template attack is divided into two phases: a template training phase and an attack phase. In the training phase, using a training device that the attacker can fully control (i.e. the attacker can see every intermediate value generated during encryption), a probabilistic model Pr(e | v(x,k)) of the leaked energy consumption e of the key-dependent intermediate value v(x,k) is established by statistical methods; this model is the template. During the attack, the template is used to compute the probability of a guessed key k: Pr(k) = Pr(e | v(x,k)).
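A minimal sketch of the template-attack scoring step described above, under the common assumption of Gaussian leakage templates. The toy target function v(p, k) = p ⊕ k, the Hamming-weight leakage model, the 4-bit key space, and all names are illustrative assumptions of this sketch, not the patent's construction.

```python
import numpy as np

def hw(x):
    """Hamming weight of an integer."""
    return bin(x).count("1")

def template_scores(traces, plaintexts, templates, v, n_keys=16):
    """Score every key guess k by the Gaussian log-likelihood of the
    observed leakages under per-intermediate-value templates:
    log Pr(k) ~ sum_i log Pr(e_i | v(x_i, k)).
    `templates` maps each intermediate value to a (mean, std) pair."""
    scores = np.zeros(n_keys)
    for k in range(n_keys):
        ll = 0.0
        for e, p in zip(traces, plaintexts):
            mu, sigma = templates[v(p, k)]
            # log of the Gaussian density, constant terms dropped
            ll += -0.5 * ((e - mu) / sigma) ** 2 - np.log(sigma)
        scores[k] = ll
    return scores
```

With accurate templates, the correct key accumulates the highest log-likelihood over the attack traces, which is exactly the Pr(k) ranking the text describes.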
When template attacks are applied to encryption devices protected by masking, there are two different classes of usage. One is to use the template attack to assist a higher-order DPA attack; the other is to perform template attacks on the mask and the key separately.
Using template attacks to assist higher-order DPA takes three forms. The first method uses templates to identify multiple masked intermediate values and substitutes these for the combined energy values in a higher-order DPA attack. Its greatest advantage is that the exact energy-leakage positions no longer need to be found, which effectively reduces the computational complexity of higher-order DPA. In addition, because sophisticated intermediate-value combination methods and Hamming-weight combination methods can effectively raise the theoretical correlation coefficient between the two, the number of attack traces required by higher-order DPA is reduced. This method clearly requires knowledge of the masks used when the templates of the masked intermediate values are built. The second method first attacks the mask with a template. In the training phase a template Pr(p | r) of the mask is built from known masks; during the attack, this template is used to recover the mask of each attack trace. By discarding the traces carrying certain specific masks, the mask distribution over the attack traces becomes non-uniform, so the intermediate values are no longer completely covered by the masks, and first-order DPA can then be used to attack the device key. The number of attack traces required by this method is larger than for the first method, and this method obviously also requires knowledge of the masks in the training phase. The third method uses the template attack on the unmasked combined intermediate value. In the training phase, the combined energy value pre is used to build a template of the unmasked combined intermediate value. In the attack phase, this template is used to identify the unmasked combined intermediate value of each trace, and the correlation coefficient between it and the guessed unmasked combined intermediate value comb(k) corresponding to the guessed key k is then computed, thereby identifying the correct key k. Because this method uses the combined energy value as the basis for judging the unmasked combined intermediate value, and the combined energy value is some combination of the energy consumptions at only a few specific positions on the trace, the abundant information leaked at the remaining positions is lost when it is computed, so the judgment of the combined intermediate value is very inaccurate. The advantage of this method is that the masks used in the training phase need not be known; the disadvantage is that the number of traces required for a successful attack exceeds that of plain higher-order DPA, so the templates bring no benefit at all.
The other class of template attacks on masked encryption devices relies entirely on templates during the attack. One attack mode builds templates Pr(e | v, r) of <intermediate value, mask> pairs, where e is the energy consumption and v, r are the intermediate value and mask. During the attack, the key and the mask must be guessed simultaneously, where v(x, k) is the unmasked intermediate value computed from the plaintext x and the guessed key k. Since the number of <k, r> combinations is very large, only a bit-by-bit attack can be implemented. The weakness of the bit-by-bit attack is that, when the i-th bit is attacked, the energy consumption generated by the remaining bits is pure noise with respect to the i-th bit (so-called algorithmic noise). This lowers the signal-to-noise ratio of the leaked energy consumption, so more traces are needed for a successful attack. This method, too, requires knowledge of the masks used in the training phase. The other mode first attacks the mask with one set of templates, then unmasks the encryption process according to the recovered mask, and applies a second set of templates to mount a first-order attack. A support-vector-machine template SVM(e | r) of the mask can be built and used during the attack to identify the mask of each attack trace; neural-network mask templates can also be used, achieving higher mask-recognition accuracy.
In summary, the problems of the existing attack techniques against masked encryption devices are:
(1) The existing template-attack techniques must know, in the training phase, the masks used by the training traces. This requires the attacker to have full control over the training device and to be able to modify the encryption code of the device at will. In most cases the attacker cannot meet this condition, and the template-based attack cannot be carried out. Even if the attacker does meet it, the encryption code used by the training device during training necessarily differs from that of the real device during the attack, so the energy-consumption leakage is not identical, causing the templates to fail or to be insufficiently accurate at attack time.
(2) Although higher-order DPA can attack without knowledge of the masks, it requires a large number of attack traces, its computational complexity is very high, and its attack efficiency is low.
The mask-free neural network attack method for masked encryption devices proposed by the present invention solves the above technical problems. Its significance is as follows. First, it lowers the threshold for mounting template attacks on masked encryption devices and guarantees the validity of the templates in the attack phase. The attacker builds attack templates directly for the unmasked combined intermediate value, without knowing the random masks used by the training device. Thus the attacker neither needs full control over the training device nor any special treatment of its encryption code. Second, through the invented partial principal component analysis (PCA) preprocessing of the feature vectors and the alpha cross entropy loss function, combined with L2-regularized training, this method solves the problem of training neural network templates under heavy noise and guarantees the quality of the attack templates. The minimum number of traces needed to successfully attack the device key is far less than for higher-order DPA, and the attack efficiency is much higher than that of higher-order DPA. The method of the invention therefore poses a more serious threat to masked encryption devices and places more stringent requirements on device security.
Summary of the invention
The purpose of the present invention is a method of mounting template attacks on masked encryption devices in the case where the attacker cannot know the random masks used by the training device. Through the invention and use of a series of techniques, the quality of the attack templates is guaranteed, so that efficient attacks can be mounted on encryption devices protected by masking.
To achieve the above goal, the present invention uses the unmasked combined intermediate value as the attack target, so that the attacker need not know the random masks of the training device. A neural network is trained as the attack template, and the invented partial-PCA preprocessing and alpha cross entropy loss function, together with techniques such as L2-regularized training, effectively prevent overfitting when the neural network is trained, thereby guaranteeing the quality of the neural network template and reaching very high attack efficiency.
The technical solution of the invention comprises two phases: a template training phase and a key attack phase. The attack on the device master key uses a divide-and-conquer strategy: the template of each sub-key of the device is trained separately in the template training phase, multiple sub-keys of the device are recovered in the attack phase, and the master key is then derived from them. The overall scheme is shown in Figure 1. The template training phase includes:
T1. Collect the training trace set
Using a masked encryption device with a known key, encrypt a group of random plaintexts and collect the energy-consumption curve set of the encryption process (the training trace set) for training the templates.
T2. Determine the number of templates to train
According to the encryption algorithm used by the device, determine the number of attack rounds R needed to derive the master key and the number of sub-keys S per round; the number of templates to train is then R*S.
T3. Train the templates of the R*S sub-keys
The template attack requires a template to be trained for each sub-key to be attacked. The training process of each template is shown in the "training phase" of Figure 2.
In Figure 1, the key attack phase of this technical solution includes:
A1. Collect the attack trace set
Encrypt a group of random plaintexts with the attacked device and collect the energy-consumption curve set of the encryption process (the attack trace set) for attacking the device sub-keys.
A2. Attack the R*S sub-keys
According to the number of rounds R and the number of sub-keys S per round determined in T2, attack each sub-key using the templates trained in T3. The attack process for a sub-key is shown in the "attack phase" of Figure 2.
A3. Derive the device master key and verify it
For each round key, combine the S sub-keys of that round and, using the inversion algorithm corresponding to the specific encryption algorithm of the device, derive the device master key. Using this master key and the same encryption algorithm as the device, compute the ciphertext of any one attack trace and compare it with the ciphertext the device output for that trace. If the ciphertexts are identical, the attack succeeds; otherwise, increase the number of attack traces and repeat A1 to A3 until success.
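The retry loop of the key attack phase can be sketched as follows. All callbacks (attack_subkeys, derive_master, encrypt) are hypothetical placeholders standing in for the algorithm-specific steps; only the control flow mirrors the description above.

```python
def recover_master_key(attack_subkeys, derive_master, encrypt,
                       trace_budgets, plaintext, ciphertext):
    """A1-A3 sketch: attack_subkeys(n) returns the sub-key guesses
    obtained from n attack traces, derive_master inverts the key
    schedule, and the recovered master key is accepted only if
    re-encrypting one attacked plaintext reproduces the ciphertext the
    device actually output.  On failure, retry with a larger budget."""
    for n in trace_budgets:              # growing attack-trace counts
        subkeys = attack_subkeys(n)
        mk = derive_master(subkeys)
        if encrypt(plaintext, mk) == ciphertext:
            return mk, n                 # verified master key
    return None, None                    # all budgets exhausted
```

The ciphertext comparison is what makes the procedure self-verifying: a wrong master key is rejected without any knowledge of the true key.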
The core technique of the invention is the attack method on a single sub-key shown in Figure 2, which is divided into a training phase and an attack phase. The training phase trains the specific template for a certain sub-key; the attack phase uses that specific template to attack the corresponding sub-key. The detailed processes of the two phases are:
S1. Training phase
The training phase uses the training trace set to train the neural network template for a certain sub-key. Its specific steps include:
S11. Determine the attack target: the unmasked combined intermediate value
The energy consumption of a masked device does not directly leak any single unmasked intermediate value; there is only indirect leakage of certain combinations of multiple unmasked intermediate values. Specifically, suppose the unmasked intermediate values of the encryption process are v1, ..., and their combined value is comb, where d denotes the masking order, i.e. d random masks are used in the encryption process. The combination method of these intermediate values can differ between encryption algorithms, implementations, and masking schemes; for example, for an encryption with d = 2 masks, the combined intermediate value can take forms such as the XOR of two unmasked intermediate values. These unmasked intermediate values have corresponding leaked energy consumptions e1, ..., and the combined energy value pre is generally the absolute value of the difference of the energy consumptions, e.g. pre = |e1 − e2|, or their centered product pre = ē1 · ē2 · ... , where ēi is the centered energy consumption. This step requires that the linear correlation coefficient between the selected unmasked combined intermediate value and the corresponding combined energy value be non-zero, i.e. ρ(comb, pre) ≠ 0.
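A small numpy sketch of the two combined-energy constructions just mentioned (absolute difference and centered product) and of the correlation check ρ(comb, pre) ≠ 0. The d = 2 Boolean-share simulation in the accompanying check is an illustrative assumption, not the patent's data.

```python
import numpy as np

def combined_energy(e, mode="centered_product"):
    """e: (N, d) array of the leakages of the d masked shares per trace.
    Returns the combined energy value pre per trace, either the absolute
    difference of two leakages or the centered product of all d."""
    if mode == "abs_diff":
        return np.abs(e[:, 0] - e[:, 1])
    centered = e - e.mean(axis=0)        # center each leakage column
    return centered.prod(axis=1)

def corr(a, b):
    """Pearson correlation coefficient between two vectors."""
    return np.corrcoef(a, b)[0, 1]
```

For a Boolean-masked bit v split into shares s1 and v ⊕ s1, the centered product of the two share leakages correlates (here, perfectly anti-correlates) with v even though each share alone is independent of v, which is exactly the second-order leakage the text describes.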
For different implementations of different encryption algorithms, the selectable unmasked intermediate values and their combination methods may differ. For example, in a second-order attack one can generally choose the XOR of the input and output values of an SBOX in the first or last round as the unmasked combined intermediate value. The energy combination generally uses the product of the energy consumptions at the leakage positions.
S12. Find the leakage positions and extract the trace feature vectors
Because of the correlation between the unmasked combined intermediate value and the combined energy value, the leakage positions can be detected with the correlation coefficient, according to the attack target determined in S11. The specific method is:
1) Compute the average trace of the training trace set, plot the average trace curve, observe the variation of the energy consumption in the figure, and visually determine the rough time range in which the combined intermediate value is computed, i.e. the leakage sample range. See Figure 4.
2) Using the known key of the training device, compute the unmasked combined intermediate value comb for the plaintext of each training trace.
3) For each group of d sample positions in the leakage sample range, compute the combined value pre_i of their energy consumptions and its Pearson correlation coefficient ρ_i with the unmasked combined intermediate value comb, where i indexes the i-th grouping of energy-consumption positions in the leakage sample range.
4) Screen out the combined energy values whose correlation coefficient ρ_i exceeds a set threshold, and take the union of their corresponding sample-position groups to obtain the leakage position set. The information leakage weight w_t of a leakage position t is the maximum correlation coefficient, over all screened combined energy values that contain the sample position t, between the combined energy value and the unmasked combined intermediate value.
5) If step 4) screens out n leakage positions, extract from each training trace the energy consumptions at the sample positions of the leakage position set to obtain the feature vector of the trace. If the training trace set contains N traces, the resulting training feature vector set is an N×n matrix E.
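The leakage-position search of steps 3)-5) can be sketched for d = 2 as follows. The pair enumeration, the centered product as the combined energy value, and all parameter names are illustrative assumptions consistent with the description, not code from the patent.

```python
import numpy as np
from itertools import combinations

def find_leak_positions(traces, comb, window, threshold):
    """For every pair of sample positions in the leakage window,
    correlate the centered product of their leakages with the unmasked
    combined value comb; keep the positions of pairs whose |rho| exceeds
    the threshold, each with its leakage weight (the best |rho| of any
    qualifying pair containing it)."""
    c = traces - traces.mean(axis=0)
    weights = {}
    for i, j in combinations(window, 2):
        pre = c[:, i] * c[:, j]                  # combined energy value
        rho = abs(np.corrcoef(pre, comb)[0, 1])
        if rho > threshold:
            for t in (i, j):
                weights[t] = max(weights.get(t, 0.0), rho)
    positions = sorted(weights)
    return positions, weights

def extract_features(traces, positions):
    """Each trace's feature vector keeps only the leaking samples."""
    return traces[:, positions]
```

On simulated traces where two columns carry the Boolean shares of comb and the rest are noise, only the two share positions survive the threshold.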
S13. Preprocess the training feature vector set
1) zscore preprocessing of the feature vectors
The training feature vector set will be used to train the neural network template. As required by neural network training, the training feature vector set is first z-score normalized, i.e. each dimension of the feature vectors in matrix E is mapped to mean 0 and variance 1. The mean μ and standard deviation σ of each dimension of the training feature vectors are computed first, and each feature is then normalized as e' = (e − μ)/σ.
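A minimal numpy sketch of the z-score normalization step; the function names are ours, not the patent's.

```python
import numpy as np

def zscore_fit(E):
    """Per-dimension mean and standard deviation of the training set."""
    return E.mean(axis=0), E.std(axis=0)

def zscore_apply(E, mu, sigma):
    """Map each feature dimension to mean 0 and variance 1."""
    return (E - mu) / sigma
```

The statistics are fitted on the training set only; the same (mu, sigma) would then be applied unchanged to the attack traces so that both sets live on the same scale.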
2) Partial PCA processing of the energy feature matrix
Partial PCA processing is an important feature of the invention. Its purpose is to retain the effective leaked-information features as much as possible while reducing the noise features through PCA dimensionality reduction, so as to prevent the overfitting of the neural network caused by noise features. Overfitting means that, as training proceeds, features peculiar to the training set (rather than features shared by the training and validation sets) are used to further optimize the fit to the target value, producing the phenomenon that while training accuracy keeps rising and training loss keeps falling, the validation loss instead rises and validation accuracy falls (see the accuracy and loss curves of the training and validation sets under "no anti-overfitting" in Figure 6). Because the energy features contain not only device noise but also a large amount of numerical noise generated by the masks, the signal-to-noise ratio is very low, and the noise features are heavily used to fit the target value (the unmasked combined intermediate value) on the training set; severe overfitting therefore occurs in training, the validation accuracy cannot be effectively improved, and the generalization ability of the model is extremely poor.
PCA maps high-dimensional data onto a lower-dimensional space, eliminating the correlation between dimensions and reducing noise and redundancy, and can therefore prevent overfitting. However, since the energy consumption contains very little effective information, PCA processing also destroys the effective-information features to some extent while eliminating the noise. As a result, although overfitting during training can be effectively prevented, the accuracy of neither the training set nor the validation set is effectively improved. Since the final goal of training is to improve the validation accuracy, applying PCA to the training vectors alone does not help reach this goal. For this reason, the invention proposes the method of partial-feature PCA processing.
According to the method of finding the leakage positions described in S12, each leakage position has a corresponding leakage weight. The larger the leakage weight, the more effective information the energy consumption at that position contains. To retain the effective leaked information as much as possible, the features with larger leakage weights should not be processed with PCA. The method of partial PCA processing is therefore:
Select a threshold and retain, without PCA processing, the p energy features whose information leakage weight exceeds the threshold; denote this part of the features E_keep. Apply PCA to the remaining energy features, keeping the PCA transition matrix M, and obtain the reduced features E_pca.
Merge these two parts of features to obtain the feature matrix E' = [E_keep, E_pca] for neural network training. In this way, the dimension of the original feature vectors is reduced from n to q after partial-feature PCA processing.
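The partial-feature PCA split can be sketched in numpy as follows. The SVD-based PCA, the parameter names, and the threshold convention are illustrative assumptions; only the keep/reduce/concatenate structure follows the description.

```python
import numpy as np

def partial_pca(E, leak_weights, weight_threshold, n_components):
    """Features whose leakage weight exceeds the threshold are kept
    untouched; PCA (via SVD of the centered data) is applied only to the
    remaining, noisier features, and the two parts are concatenated."""
    keep = leak_weights > weight_threshold
    E_keep = E[:, keep]                  # high-weight features, unchanged
    E_rest = E[:, ~keep]
    X = E_rest - E_rest.mean(axis=0)     # center before PCA
    # Rows of Vt are principal directions of the low-weight features
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    M = Vt[:n_components].T              # PCA transition matrix
    E_pca = X @ M                        # principal-component features
    return np.hstack([E_keep, E_pca]), M
```

The transition matrix M must be stored so that the same projection can be applied to the attack-trace features at attack time.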
S14. Train the neural network template
Using alpha cross entropy as the loss function and the unmasked combined intermediate value as the training target, train the neural network with L2 regularization as the attack template.
1) Neural network structure
Artificial neural networks come in many structures, including feedforward neural networks, convolutional neural networks, recurrent neural networks, and so on. Both the existing literature and the experiments of the invention show that, for classification and recognition based on energy features, a simple single-hidden-layer feedforward neural network learns best. The network structure is shown in Figure 3 and comprises an input layer, one hidden layer, and an output layer.
The input layer of the neural network contains q neurons for inputting the q-dimensional energy feature vector e. The number of hidden neurons is influenced by factors such as the signal-to-noise ratio of the input data, the number of input features, the number of output classes, and the size of the training set, and the optimal number needs to be determined experimentally. The hidden neurons use the tanh activation function. The number of output-layer neurons equals the number of classes, i.e. the number of values of the unmasked combined intermediate value. The output of the output layer is converted by softmax into the probability distribution of the unmasked combined intermediate value.
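The forward pass of this single-hidden-layer network can be sketched in a few lines of numpy. The function and parameter names are ours; the tanh hidden layer and softmax output follow the structure just described.

```python
import numpy as np

def mlp_forward(e, W1, b1, W2, b2):
    """Forward pass of the single-hidden-layer template network: a tanh
    hidden layer followed by a softmax output layer that yields a
    distribution over the values of the unmasked combined intermediate
    value.  e: (batch, q); W1: (q, h); W2: (h, classes)."""
    h = np.tanh(e @ W1 + b1)
    logits = h @ W2 + b2
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(z)
    return p / p.sum(axis=-1, keepdims=True)
```

Each output row is a valid probability distribution, which is what the attack phase multiplies across traces to form the joint probability of a key guess.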
2) The loss function of neural network training: alpha cross entropy
The alpha cross entropy loss function is a new neural network training loss function proposed in the present invention and is one of its important features. The common cross entropy loss function is introduced first below, followed by the origin of the alpha cross entropy loss function.
Neural network training reduces a specified loss with a gradient descent algorithm. The cross entropy loss is widely used for training classification (i.e. pattern recognition) neural networks. Cross entropy describes the difference between the probability distribution predicted by the model and the true probability distribution of the data. If the probability distribution output by the network is ŷ and the true distribution corresponding to the trace feature vectors is y, the cross entropy is defined as: H(y, ŷ) = −(1/N) Σ_i Σ_c [ y_{i,c}·log ŷ_c(e_i) + (1 − y_{i,c})·log(1 − ŷ_c(e_i)) ]
where e_i denotes the i-th trace feature vector, y_{i,c} is the one-hot encoding of its unmasked combined intermediate value v_i, and N is the number of feature vectors in the training trace set. The first part of the cross entropy formula is called the true-positive (True Positive) cross entropy, and the second part the false-negative (False Negative) cross entropy.
Because the energy consumption contains a great deal of noise, the features of the classes are not obvious. In the probability distribution predicted by a neural network trained with the cross entropy loss function, the probabilities of all classes (i.e. of each no-mask intermediate combined value) are very close; in fact, after final optimisation the differences between the output class probabilities mostly lie within 1%. In this situation, even a tiny error in the predicted class probabilities may cause a classification mistake. From the viewpoint of the attack, the present invention wishes to raise as far as possible the joint probability of the correct classes in the predicted distributions, because this is the basis for judging the key correctly. Accordingly, the invention proposes a modified cross entropy loss function: the alpha cross entropy.
Suppose the correct classes of the N energy-consumption feature vectors e_1, …, e_N are c_1, …, c_N, and the probability that the network outputs the correct class is p_i = Pr(c_i|e_i). The joint probability is P = p_1·p_2·…·p_N. The mean of the negative logarithm of the joint probability is:

J = -(1/N) Σ_{i=1..N} log p_i = -(1/N) log P
Making the joint probability of the correct classes output by the model as large as possible amounts to making its negative logarithm as small as possible. In other words, the negative logarithm of the joint probability of the correct classes should be taken as part of the training loss and reduced as far as possible during training.
Note that the negative logarithm of the joint probability is exactly the true-positive part of the cross entropy loss function. This is because the one-hot encoding corresponding to the i-th no-mask intermediate combined value label is:

Q_i(v) = 1 if v = c_i, and Q_i(v) = 0 otherwise.
Therefore the true-positive part of the cross entropy loss function simplifies to:

-(1/N) Σ_{i=1..N} Σ_v Q_i(v)·log Pr(v|e_i) = -(1/N) Σ_{i=1..N} log Pr(c_i|e_i)
The right-hand side of the above equation is exactly the mean of the negative logarithm of the joint probability of the correct classes. One can therefore attach a weight α ≥ 1 to the true-positive part of the cross entropy loss function, expressing a penalty factor on the negative logarithm of the joint probability of the correct classes. This yields the alpha cross entropy:

αH = -(1/N) Σ_{i=1..N} Σ_v [ α·Q_i(v)·log Pr(v|e_i) + (1 − Q_i(v))·log(1 − Pr(v|e_i)) ]
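The alpha cross entropy can be sketched directly from its definition; this numpy version (function name and the clipping constant are illustrative assumptions) weights the true-positive term by α and reduces to the ordinary two-part cross entropy at α = 1:

```python
import numpy as np

def alpha_cross_entropy(P, Q, alpha=4.0, eps=1e-12):
    """
    P: (N, C) predicted class probabilities; Q: (N, C) one-hot true labels.
    The true-positive part is weighted by alpha >= 1; alpha = 1 recovers the
    ordinary cross entropy with true-positive and false-negative parts.
    """
    P = np.clip(P, eps, 1.0 - eps)        # avoid log(0)
    tp = alpha * Q * np.log(P)            # true-positive part, scaled by alpha
    fn = (1.0 - Q) * np.log(1.0 - P)      # false-negative part
    return -(tp + fn).sum(axis=1).mean()
```

Because only the true-positive term is scaled, increasing α pushes the optimizer to raise the probability of the correct class specifically, which is the stated motivation for the modification.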
3) Training the neural network
For training the neural-network template on energy consumption, the present invention proposes three main points: first, the full data set must be used in every iteration; second, training uses the L2 regularization method; third, early stopping is used.
In modern neural network training, because of the huge data scale and in order to prevent over-fitting, the mini-batch training method is commonly used: each iteration feeds in only part of the data and updates the network weights with the loss produced by that part. Experiments show, however, that this approach is unsuitable for training neural networks on the energy-consumption data of a masked encryption device. The reason is that the energy consumption produced by a masked encryption device contains a large amount of digital noise from the masks, so the signal-to-noise ratio is very low. With mini-batch training the training-set accuracy oscillates heavily and may even fail to converge, and the validation-set accuracy likewise cannot improve effectively. Only when every iteration uses the full data set can the optimizer find the correct gradient-descent direction from a global viewpoint, thereby guaranteeing the stability and convergence of training.
L2 regularization is a measure for preventing over-fitting in neural network training. Its method is to add the sum of squared connection weights to the loss function, which keeps the connection weights from growing excessively and biases the model toward simplicity, thereby preventing over-fitting. The loss function with L2 regularization is:

Loss = αH + λ Σ_j w_j²
Here αH is the alpha cross entropy and λ is the coefficient of the L2 weight regularization. The larger λ is, the simpler the model and the less prone it is to over-fitting, but the weaker its fitting ability; the smaller λ is, the more complex the model and the stronger its fitting ability, but the greater the possibility of over-fitting. The optimal value must be measured by experiment.
Early stopping is another method for preventing excessive over-fitting in network training. Its usual form is to terminate training early when the validation-set loss rises several times in succession (the initial stage of over-fitting). In practice, however, the present invention adopts a slightly different method: a fixed maximum number of training iterations is used, but the network model is saved at the point during training where the validation-set accuracy reaches its maximum. This is because the validation accuracy often still rises further during the period when mild over-fitting first appears.
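The save-best variant of early stopping described above can be sketched as a generic training loop; `step_fn` and `eval_fn` are assumed callbacks (one full-batch gradient step, and a validation-accuracy evaluation) and are not part of the patent:

```python
import copy

def train_with_best_checkpoint(step_fn, eval_fn, max_iters=10000):
    """
    Run a fixed number of iterations, but keep a copy of the model at the
    point of highest validation accuracy instead of stopping at the first
    sign of over-fitting.
    """
    best_acc, best_model = -1.0, None
    for _ in range(max_iters):
        model = step_fn()               # one full-batch update, returns the model
        acc = eval_fn(model)            # validation-set accuracy
        if acc > best_acc:
            best_acc, best_model = acc, copy.deepcopy(model)
    return best_model, best_acc
```

The returned model is the checkpoint with peak validation accuracy, even if training was allowed to continue into mild over-fitting afterwards.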
Attack stage
In the attack stage, the template obtained in the training stage is used to extract the subkeys of a masked encryption device of the same type whose key is unknown. The attack on any one subkey uses the attack energy trace set extracted in A1 (Fig. 1).
Extract the attack feature vector set and preprocess it
1) Extract the feature vectors of the attack energy traces according to the leakage position set obtained in training-stage step S12;
2) Using the mean and variance of the training feature vectors obtained in training-stage step S13 (2), perform zscore regularization on the attack feature vectors;
3) Using the PCA transition matrix obtained in training-stage step S13 (3), perform partial PCA processing on the attack feature vectors.
Finally the attack feature vector set {e_1, …, e_M} is obtained, where M denotes the number of attack energy traces.
Calculate, for each attack energy trace, the no-mask intermediate combined value of the guessed subkey

According to the combination of no-mask intermediate values selected in training-stage step S11, for a guessed subkey k, calculate from the plaintext x_i of each attack energy trace the guessed no-mask intermediate combined value comb_i(k).
Calculate the probability of the guessed subkey

Using the neural network trained in S14, calculate for each attack feature vector e_i the probability distribution over the no-mask intermediate combined values: Pr(v|e_i).

The probability of the guessed subkey equals the joint probability of the guessed no-mask intermediate combined values over all attack energy traces:

Pr(k) = Π_{i=1..M} Pr(comb_i(k)|e_i)
S24. Select the most probable subkey

The guessed subkey with the highest probability is selected as the most probably correct subkey:

k* = argmax_k Pr(k)
In conclusion advantages of the present invention and good effect are:
Of the invention focuses on proposing a kind of new template attack method plus cover with protection encryption equipment.Its skill Art effect is shown:
(1) The attack method of the invention does not need to know the random masks used by the encryption device during the training stage; it only needs an encryption device with a known key as the training device. By contrast, existing template attack techniques require knowledge, in the training stage, of the random masks used by the training energy traces. This requires the attacker to fully control the training device and to modify the implementation code of the encryption device so that it outputs the random mask used in every encryption. For most attackers this requirement is impossible to meet, and because the encryption code of the training device then differs from that of the real device being attacked, it may cause the template to fail or perform poorly during the attack. The method of the invention therefore lowers the threshold for mounting template attacks on masked encryption devices while also guaranteeing the stability of the attack template.
(2) The technical solution of the invention achieves very high attack efficiency against masked encryption devices.
In a particular embodiment, using the attack method of the invention, the minimum number of energy traces needed to successfully attack the device key is only 1, and the number of attack traces needed to reach a 100% attack success rate is only 8. In a comparative experiment on the same encryption device, a high-order DPA attack that likewise does not know the masks needs a minimum of 100 energy traces for a successful attack, and 1500 attack traces to reach a 100% success rate. Table 1 compares the number of attack energy traces and the success rates of the two attack methods:
Table 1: Success-rate comparison of the no-mask neural network attack of the invention and the high-order DPA attack
Attack trace number           1       5       10      50      100     500     1000    1500
High-order DPA success rate   0.00%   0.00%   0.00%   0.00%   8.00%   52%     85%     100%
Invention success rate        18.4%   95.6%   100%    100%    100%    100%    100%    100%
It can be seen that the invention effectively guarantees the efficiency of the template attack while lowering the threshold for template attacks on masked encryption devices.
Detailed description of the invention
Fig. 1 is the overall flow chart of the template attack provided in an embodiment of the present invention.
Fig. 2 is the template training and attack flow chart for a specific subkey provided in an embodiment of the present invention.
Fig. 3 is the neural network structure diagram of the template provided in an embodiment of the present invention.
Fig. 4 is the mean energy trace and the operation sample regions of the respective rounds in an embodiment of the present invention.
Fig. 5 is the leakage position map (partial) of the no-mask intermediate combined values discovered using correlation coefficients, provided in an embodiment of the present invention.
Fig. 6 is the neural network training effect diagram with partial-PCA and L2 weight regularization against over-fitting, provided in an embodiment of the present invention.
Fig. 7 is the relationship between the number of attack energy traces and the attack success rate in the no-mask neural network attack provided in an embodiment of the present invention.
Fig. 8 (abstract figure) is the template training and attack flow chart of the invention.
In the embodiment, the encryption device is an Atmel ATMega-163 smart card running an AES-256 encryption algorithm protected by rotating-SBOX masking (RSM). During training, this device encrypts 40,000 random plaintexts, and 40,000 energy traces are obtained by electromagnetic measurement; 30,000 of them are used for training the neural network and 10,000 for validating the training effect. During the attack, a device with the identical implementation but a different key is used, and 10,000 traces are acquired for attack testing.

The key length of the AES-256 algorithm used by the device is 256 bits, i.e. 32 bytes in total. Each round contains 16 Sboxes, and each round key is 16 bytes (128 bits). The round keys of the first two rounds are exactly the master key of the device. During the attack it therefore suffices to recover all subkeys of the first two rounds; combining them yields the master key. The overall attack process is shown in Fig. 1.

The template training and attack process of Fig. 2 for a specific subkey is described in detail below, taking the attack on the 1st subkey of the 1st round as an example. In the following description, x denotes the part of the plaintext (one byte) corresponding to the 1st subkey, and sk denotes the 1st subkey of the first round.
Training stage
S11: Select the attack target, a no-mask intermediate combined value

In this embodiment the XOR of the no-mask input and output of the Sbox is selected as the no-mask intermediate combined value to attack. The no-mask input of the Sbox is x ⊕ sk. The no-mask output of the Sbox is Sbox(x ⊕ sk). The attack target is the XOR of the no-mask input and output:

comb = (x ⊕ sk) ⊕ Sbox(x ⊕ sk)

It should be pointed out that if the masked input and output of the Sbox are respectively (x ⊕ sk) ⊕ m and Sbox(x ⊕ sk) ⊕ m′, where m and m′ are masks, then the XOR of the masked input and output is comb ⊕ m ⊕ m′. It can be seen that the Sbox input, the Sbox output and their XOR are all covered by masks.
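The relation between the no-mask target and its masked counterpart can be checked with a small numpy-free sketch. The 4-bit S-box below is a hypothetical toy table for illustration only (the embodiment uses the AES S-box):

```python
# Toy 4-bit S-box (hypothetical; NOT the AES S-box).
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def comb(x, sk):
    """No-mask intermediate combined value: Sbox input XOR Sbox output."""
    u = x ^ sk
    return u ^ SBOX[u]

def masked_io_xor(x, sk, m_in, m_out):
    """XOR of the *masked* Sbox input and output, as the device computes them."""
    u = (x ^ sk) ^ m_in         # masked input
    v = SBOX[x ^ sk] ^ m_out    # masked output
    return u ^ v                # equals comb(x, sk) ^ m_in ^ m_out
```

This makes the point of S11 concrete: the value the device actually manipulates is comb ⊕ m_in ⊕ m_out, so the input, the output and their XOR are all mask-covered, yet the attack targets the mask-free comb.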
S12: Discover the leakage positions of the no-mask intermediate combined value and extract the training feature vector set

1) Discover the leakage positions of the no-mask intermediate combined value from correlation coefficients. The detailed process is:

Calculate the mean energy trace of the training trace set, and identify by visualization the sample region of the 1st-round encryption operation (as shown in Fig. 4).

From the plaintext of each training trace and the known key, calculate the no-mask intermediate combined value comb_i of each trace in the training set.
For any two sample positions t1, t2 in the 1st-round sample region, compute the energy product combination

pre(t1, t2) = e1 · e2

where e1 and e2 are the centralized energy-consumption values at sample positions t1 and t2 respectively. Then compute the correlation coefficient between the energy product combination and the no-mask intermediate combined value:

ρ(t1, t2) = corr(pre(t1, t2), comb)

A threshold ρ0 is set for the correlation coefficient, and the sample-position pairs (t1, t2) whose energy product combination yields a correlation coefficient greater than ρ0 are selected; their union gives the leakage position set. In total 193 leakage positions are obtained. Some of the sample positions and their correlation coefficients are shown in Fig. 5, where key=108 is the value of the 1st subkey of the first round of the training device, corr denotes the correlation coefficient, and t1 and t2 denote the two sample positions.
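The pairwise product-correlation search above can be sketched in numpy as follows (function name and the brute-force double loop are illustrative assumptions; a real trace set would need a vectorised or windowed search):

```python
import numpy as np

def find_leak_pairs(traces, comb_values, threshold):
    """
    traces: (N, T) centred power traces (mean removed per sample position).
    comb_values: (N,) no-mask intermediate combined values of the traces.
    Returns sample-position pairs (t1, t2) whose product combination
    e1*e2 correlates with comb_values above `threshold`.
    """
    N, T = traces.shape
    c = comb_values - comb_values.mean()
    leaks = []
    for t1 in range(T):
        for t2 in range(t1, T):
            pre = traces[:, t1] * traces[:, t2]   # energy product combination
            p = pre - pre.mean()
            denom = np.sqrt((p * p).sum() * (c * c).sum())
            if denom == 0:
                continue
            rho = (p * c).sum() / denom           # Pearson correlation
            if rho > threshold:
                leaks.append((t1, t2))
    return leaks
```

The union of the positions in the returned pairs gives the leakage position set; the maximum ρ seen at each position can serve as its information-leakage weight.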
2) Extract the trace feature vector set

From each training trace the energy consumption at the 193 leakage positions is extracted, forming a 193-dimensional feature vector. The 30,000 training traces yield a 30000 × 193 training feature vector matrix, and the 10,000 validation traces yield a 10000 × 193 validation feature vector matrix.
S13: Preprocessing of the training feature vector set

1) Zscore regularization preprocessing

Regularization converts the energy consumption of each feature to mean 0 and variance 1. First the mean and variance of the training feature vectors are calculated:

μ_j = (1/N) Σ_{i=1..N} e_{ij},  σ_j² = (1/N) Σ_{i=1..N} (e_{ij} − μ_j)²

Then the training feature vectors and validation feature vectors are regularized accordingly:

e′_{ij} = (e_{ij} − μ_j) / σ_j
2) Partial PCA processing of the energy-consumption feature vector matrix

With the information-leakage weight threshold set to 0.08, the features whose information-leakage weight exceeds this threshold are retained (40 in total), and PCA processing is applied to the remaining 153 features. In the PCA processing, the principal components covering 80% of the variance are retained; the transition matrix M is 153 × 17, reducing the 153 features to 17 PCA principal components. Merging the retained features with the PCA principal components gives 57 features in total. That is, the training feature vector matrix is converted to 30000 × 57, and the validation feature vector matrix to 10000 × 57.
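The zscore-plus-partial-PCA preprocessing can be sketched in numpy; function names and the eigendecomposition route are illustrative assumptions, but the split into retained high-weight features and PCA-reduced remaining features follows the description above:

```python
import numpy as np

def fit_preprocess(X, weights, w_threshold=0.08, var_ratio=0.80):
    """
    X: (N, d) training feature matrix; weights: (d,) information-leakage weights.
    Learns the zscore statistics, the kept-feature mask and the PCA transition
    matrix (components covering var_ratio of the remaining features' variance).
    """
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    Z = (X - mu) / sigma                            # zscore on the training set
    keep = weights > w_threshold                    # features kept unchanged
    R = Z[:, ~keep]                                 # features sent through PCA
    cov = np.cov(R, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]                  # eigenvalues, largest first
    vals, vecs = vals[order], vecs[:, order]
    k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), var_ratio)) + 1
    return mu, sigma, keep, vecs[:, :k]             # transition matrix: (d-n) x k

def apply_preprocess(X, mu, sigma, keep, M):
    """Zscore with the training statistics, then the partial PCA projection."""
    Z = (X - mu) / sigma
    return np.hstack([Z[:, keep], Z[:, ~keep] @ M])
```

`apply_preprocess` is reused unchanged on the validation and attack feature matrices, so all three sets live in the same 57-dimensional space learned from the training traces.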
S14: Train the neural network template of the no-mask intermediate combined value

In this stage, with the no-mask intermediate combined value as the training target, a neural network is trained with the alpha cross entropy as the loss function and L2 regularization, and serves as the attack template.

1) Neural network structure

The neural network structure is shown in Fig. 3. A three-layer network is selected, with structure 57 × 100 × 163. That is: the input layer contains 57 neurons, used to input one trace feature vector; the number of hidden neurons is 100, with tanh as the activation function; the number of output-layer neurons is 163, i.e. the number of values (classes) of the training target comb. After softmax processing, the output of the output layer is converted into the probability of each class.
2) Training the neural network

The loss function of the neural network uses the alpha cross entropy, and the L2 weight regularization method is used during training. The combined loss function is:

Loss = -(1/N) Σ_{i=1..N} Σ_v [ α·Q_i(v)·log Pr(v|e_i) + (1 − Q_i(v))·log(1 − Pr(v|e_i)) ] + λ Σ_{j=1..M} w_j²

Here α = 4 and the L2 weight regularization coefficient λ = 0.01. N = 30000 for training and N = 10000 for validation. The w_j are the connection weights of the network and M is the number of connections in the network. Q_i(v) is the one-hot encoding of the no-mask intermediate combined value.
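Assembling the combined loss is then a one-line addition of the L2 weight term to the alpha cross entropy; this numpy sketch (function names are illustrative assumptions) shows the shape of the computation:

```python
import numpy as np

def l2_penalty(weight_mats, lam=0.01):
    """lambda times the sum of squared connection weights over all layers."""
    return lam * sum(float((W ** 2).sum()) for W in weight_mats)

def total_loss(alpha_h, weight_mats, lam=0.01):
    """Combined training loss: alpha cross entropy plus the L2 weight term."""
    return alpha_h + l2_penalty(weight_mats, lam)
```

With the embodiment's λ = 0.01, the penalty grows with every connection weight, which is what keeps the 57 × 100 × 163 network from over-fitting the noisy traces.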
The classification recognition rate (accuracy) of the neural network is:

acc = (1/N) Σ_{i=1..N} I( argmax_v Pr(v|e_i) = c_i )

Here Pr(·|e_i) is the class prediction probability distribution of the neural network for feature vector e_i, and c_i is the correct class, i.e. the no-mask intermediate combined value corresponding to that trace. N = 30000 for training and N = 10000 for validation.
The network is trained for 10000 iterations, and the network parameters at the point of highest validation accuracy are saved as the final template. Network training is implemented with tensorflow. Fig. 6 compares the training process using the anti-over-fitting measures (partial PCA feature processing and L2 weight regularization) with the process that does not use them.

Fig. 6 consists of 4 subgraphs, showing, as training progresses (the horizontal axis is the number of training epochs), the changes of the accuracy and loss of the training set (upper two subgraphs) and of the validation set (lower two subgraphs). Since the training set and validation set are completely independent, their accuracies and losses are also completely independent. Specifically, since the optimization of the model during training is carried out only on the training-set data, the data features the neural network extracts during training are totally independent of the validation-set data. If the extracted features are universal, the generalization ability of the model is strong, and its ability to recognize new data (i.e. the validation data) is correspondingly strong; this shows up as the validation-set accuracy rising and its loss falling together with the training set's. This is exactly what the "partial PCA plus L2 anti-over-fitting" experiment in Fig. 6 exhibits. In contrast, in the "no anti-over-fitting" experiment, although the accuracy of the training set keeps improving and its loss keeps decreasing, the accuracy of the validation set starts to fall after a slight rise at the beginning of training, and the validation loss clearly rises, showing the exactly opposite trend to the training set. This is the typical behaviour of over-fitting. In terms of training results, the validation accuracy reaches only about 3% in the "no anti-over-fitting" experiment, whereas it reaches about 18% in the "partial PCA plus L2 anti-over-fitting" experiment.
Attack stage

In total 10000 independent energy traces are acquired as the attack trace set. Each attack uses M of these traces, and the attack is repeated 50 times in order to compute the attack success rate. One attack on the 1st subkey of the first round proceeds as follows:
S21: Prepare the attack feature vector set

Using the leakage positions found in S12, convert the attack trace set into an M × 193 attack feature vector matrix.

Using the mean μ and variance σ² of the training set obtained in S13 (1), perform zscore regularization on the attack feature vectors.

Using the method of S13 (2), perform partial-feature PCA processing on the attack trace set, obtaining an M × 57 attack feature vector matrix.
S22: Calculate, for each attack energy trace, the no-mask intermediate combined value of the guessed subkey

For each guessed subkey sk = 0, …, 255, calculate the no-mask intermediate combined value from the plaintext x_i of each attack trace:

comb_i(sk) = (x_i ⊕ sk) ⊕ Sbox(x_i ⊕ sk)
S23: Calculate the probability of the guessed subkey

Using the neural network trained in S14, calculate for each attack trace e_i the probability Pr(comb_i(sk)|e_i) of the guessed no-mask intermediate combined value. The probability of the guessed subkey sk equals the joint probability of the guessed no-mask intermediate combined values over the M attack traces:

Pr(sk) = Π_{i=1..M} Pr(comb_i(sk)|e_i)
S24: Select the correct subkey

The most probably correct subkey is the guessed subkey with the maximum probability:

sk* = argmax_{sk} Pr(sk)
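Steps S22–S24 can be sketched end-to-end in numpy. Summing log-probabilities is used instead of multiplying raw probabilities (an implementation choice, equivalent up to monotonicity and numerically safer), and the 4-bit S-box is a hypothetical toy stand-in for the AES S-box of the embodiment:

```python
import numpy as np

# Toy 4-bit S-box (hypothetical; the embodiment uses the AES S-box and 256 keys).
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
N_KEYS = 16

def recover_subkey(plaintexts, probs):
    """
    plaintexts: (M,) plaintext values of the attack traces.
    probs: (M, C) template output, probs[i, v] = Pr(comb = v | e_i),
           with C covering every possible combined value.
    Returns the subkey guess maximising the joint (log) probability.
    """
    scores = np.zeros(N_KEYS)
    for sk in range(N_KEYS):
        u = np.array([x ^ sk for x in plaintexts])
        comb = u ^ np.array([SBOX[v] for v in u])   # guessed combined values (S22)
        # sum of log probabilities == log of the joint probability (S23)
        scores[sk] = np.log(probs[np.arange(len(plaintexts)), comb] + 1e-30).sum()
    return int(scores.argmax())                     # argmax over guesses (S24)
```

Given a template that assigns high probability to the true combined value of each trace, the argmax recovers the subkey from only a handful of traces, which is the behaviour Fig. 7 reports.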
Fig. 7 shows the attack success rate on the 1st subkey of the first round for different numbers of attack traces M. As can be seen from Fig. 7, when M = 1 the attack success rate is 18%; that is, it is possible to successfully obtain the device key with only 1 attack trace. The number of attack traces needed to reach a 100% success rate is only 8, showing that the no-mask neural network attack has very high attack efficiency.
In the above embodiments, the encryption algorithm can be any existing block cipher algorithm, such as AES-128, DES, SM1, SM2, SM4, etc. The selected no-mask intermediate combined value can be any intermediate value with information leakage, or any combination of such values. The masking order is not limited to 2 and can be any order. The neural network structure is not limited to the three-layer feedforward network of the embodiment: it can be a feedforward network with more layers, a convolutional neural network (CNN), a recurrent neural network (RNN), etc. The hidden layer can use different activation functions, such as ReLU or Sigmoid. The method of converting the network output into class probabilities is not limited to softmax; other differentiable calculation methods can also be used.
The foregoing is merely a description of preferred embodiments of the present invention and is not intended to limit the invention. Any modifications, equivalent replacements and improvements made within the spirit and principles of the invention shall all be included in the protection scope of the present invention.

Claims (4)

1. A no-mask neural network attack method against masked encryption devices, characterized in that the combined value of multiple no-mask intermediate values of the encryption process (called the no-mask intermediate combined value) is taken as the attack target; in the training stage, without needing to know the random mask used in each encryption by the training device, a neural network is used as the probability model from energy consumption to the no-mask intermediate combined value, i.e. the attack template; in the attack stage, the attack template is used to efficiently obtain the key of a same-type device under attack; its key steps are:
(1) training stage:
Using the training device (a device with a known key), encrypt a group of random plaintexts and acquire the energy curves (called energy traces) of the energy consumption or radiation during the encryption process as the training trace set; train a neural network as the attack template. Specifically this includes:
(1-1) Select the combination comb() of no-mask intermediate values and the combination pre() of energy consumption such that the linear correlation between them is not equal to 0;
(1-2) Using the linear correlation between the energy combination values and the no-mask intermediate combined values, discover the information leakage positions of the no-mask intermediate combined value;
(1-3) Extract the feature vectors of the training traces according to the leakage positions, and preprocess the trace feature vectors using partial-feature principal component analysis (PCA);
(1-4) With the no-mask intermediate combined value as the training target, train a neural network with the alpha cross entropy as the training loss function and the L2 regularization method, obtaining the probability model from energy consumption to the no-mask intermediate combined value;
(2) phase of the attack:
Using the device under attack (a device with an unknown key), encrypt a group of random plaintexts and acquire the energy traces of the energy consumption or radiation during the encryption process as the attack trace set; use the neural network obtained in the training stage to predict the joint probability of the guessed no-mask intermediate combined values corresponding to a guessed key, and take this as the basis for judging whether the guessed key holds, so as to screen out the correct device key.
2. The no-mask neural network attack method against masked encryption devices according to claim 1, characterized in that in training-stage step (1-2), the specific method of using correlation to discover the information leakage positions of the no-mask intermediate combined value is:
(1) Using the mean energy trace of the training trace set, judge by visualization the approximate range of sample positions where the encryption device actually computes each intermediate value in the no-mask intermediate combined value, i.e. the leakage sample range;
(2) Centralize the energy consumption at each sample position in the range, i.e. map the mean energy consumption at each sample position to 0;
(3) Compute the product of the energy consumption at any two or more sample positions in the range as the energy combination value;
(4) Compute the Pearson correlation coefficient between each energy combination value and the no-mask intermediate combined value; take the sample positions corresponding to the energy combination values whose correlation coefficient exceeds a set threshold, use their union as the information leakage position set, and retain the maximum correlation coefficient at each leakage position as the weight of its information leakage.
3. The no-mask neural network attack method against masked encryption devices according to claim 1, characterized in that partial PCA processing is applied to the trace feature vectors in training-stage step (1-3); the specific method is:
According to the leakage positions of the no-mask intermediate combined value, the energy consumption extracted at the leakage positions of each trace in the training trace set serves as the feature vector set of the training traces; the energy consumption at each leakage position is called a feature of the trace, and each feature carries the information-leakage weight of its corresponding leakage position;
Retain the first n features whose information-leakage weight exceeds a given threshold, and apply PCA dimension reduction to the remaining features; merge the retained features with the principal components obtained from the PCA dimension reduction to form the new trace feature vectors.
4. The no-mask neural network attack method against masked encryption devices according to claim 1, characterized in that the alpha cross entropy is used as the loss function of neural network training in training-stage step (1-4); the alpha cross entropy is defined as:

αH = -(1/N) Σ_{i=1..N} Σ_{comb} [ α·Q_i(comb)·log Pr(comb|e_i) + (1 − Q_i(comb))·log(1 − Pr(comb|e_i)) ]

Here Q() is the idealized probability distribution (one-hot encoding) of the no-mask intermediate combined value comb, and Pr() is the probability distribution of comb predicted by the neural network; α is the weight increasing the negative logarithm of the joint probability of the correct comb, with α ≥ 1; when α = 1, the alpha cross entropy degenerates into the ordinary cross entropy; N is the number of training traces.
CN201810614068.7A 2018-06-14 2018-06-14 It is a kind of to add cover protection encryption equipment without mask neural network attack method Pending CN108880781A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810614068.7A CN108880781A (en) 2018-06-14 2018-06-14 It is a kind of to add cover protection encryption equipment without mask neural network attack method

Publications (1)

Publication Number Publication Date
CN108880781A true CN108880781A (en) 2018-11-23

Family

ID=64338394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810614068.7A Pending CN108880781A (en) 2018-06-14 2018-06-14 It is a kind of to add cover protection encryption equipment without mask neural network attack method

Country Status (1)

Country Link
CN (1) CN108880781A (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109347636A (en) * 2018-12-05 2019-02-15 中国信息通信研究院 A kind of key recovery method, system, computer equipment and readable medium
CN109981252A (en) * 2019-03-12 2019-07-05 中国科学院信息工程研究所 A kind of artificial intelligence process device safety enhancing system and method based on critical path encryption
CN110008714A (en) * 2019-01-24 2019-07-12 阿里巴巴集团控股有限公司 The method, apparatus and electronic equipment of data encryption based on confrontation neural network
CN110037683A (en) * 2019-04-01 2019-07-23 上海数创医疗科技有限公司 The improvement convolutional neural networks and its training method of rhythm of the heart type for identification
CN110048827A (en) * 2019-04-15 2019-07-23 电子科技大学 A kind of class template attack method based on deep learning convolutional neural networks
CN110572251A (en) * 2019-08-13 2019-12-13 武汉大学 Template attack method and device template attack resistance evaluation method
CN111464304A (en) * 2019-01-18 2020-07-28 江苏实达迪美数据处理有限公司 Hybrid encryption method and system for controlling system network security
CN111565189A (en) * 2020-04-30 2020-08-21 衡阳师范学院 Side channel analysis method based on deep learning
CN111586071A (en) * 2020-05-19 2020-08-25 上海飞旗网络技术股份有限公司 Encryption attack detection method and device based on recurrent neural network model
CN111934852A (en) * 2020-08-10 2020-11-13 北京邮电大学 AES password chip electromagnetic attack method and system based on neural network
CN111970280A (en) * 2020-08-18 2020-11-20 中南大学 Attack detection method of continuous variable quantum key distribution system
CN112000972A (en) * 2020-08-21 2020-11-27 中国人民解放军陆军工程大学 Encryption equipment security evaluation method
CN112131563A (en) * 2019-06-24 2020-12-25 国民技术股份有限公司 Template attack testing method, device, equipment and storage medium
CN112615714A (en) * 2020-12-29 2021-04-06 清华大学苏州汽车研究院(吴江) Side channel analysis method, device, equipment and storage medium
CN112887323A (en) * 2021-02-09 2021-06-01 上海大学 Network protocol association and identification method for industrial internet boundary security
CN113158179A (en) * 2021-03-17 2021-07-23 成都信息工程大学 Learning side channel attack method for automatically discovering leakage model and encryption equipment
CN113919395A (en) * 2021-10-12 2022-01-11 大连理工大学 Water supply pipe network leakage accident diagnosis method based on one-dimensional convolutional neural network
CN116094815A (en) * 2023-02-03 2023-05-09 广州万协通信息技术有限公司 Data encryption processing method and device based on flow self-adaptive control adjustment
CN112131563B (en) * 2019-06-24 2024-04-26 国民技术股份有限公司 Template attack testing method, device, equipment and storage medium

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109347636A (en) * 2018-12-05 2019-02-15 中国信息通信研究院 A kind of key recovery method, system, computer equipment and readable medium
CN109347636B (en) * 2018-12-05 2021-09-24 中国信息通信研究院 Key recovery method, system, computer equipment and readable medium
CN111464304A (en) * 2019-01-18 2020-07-28 江苏实达迪美数据处理有限公司 Hybrid encryption method and system for controlling system network security
CN111464304B (en) * 2019-01-18 2021-04-20 江苏实达迪美数据处理有限公司 Hybrid encryption method and system for controlling system network security
CN110008714A (en) * 2019-01-24 2019-07-12 阿里巴巴集团控股有限公司 The method, apparatus and electronic equipment of data encryption based on confrontation neural network
CN109981252B (en) * 2019-03-12 2020-07-10 中国科学院信息工程研究所 Artificial intelligence processor security enhancement system and method based on critical path encryption
CN109981252A (en) * 2019-03-12 2019-07-05 中国科学院信息工程研究所 Artificial intelligence processor security enhancement system and method based on critical path encryption
CN110037683A (en) * 2019-04-01 2019-07-23 上海数创医疗科技有限公司 Improved convolutional neural network for heart rhythm type identification and its training method
CN110048827A (en) * 2019-04-15 2019-07-23 电子科技大学 Class template attack method based on deep learning convolutional neural network
CN110048827B (en) * 2019-04-15 2021-05-14 电子科技大学 Class template attack method based on deep learning convolutional neural network
CN112131563A (en) * 2019-06-24 2020-12-25 国民技术股份有限公司 Template attack testing method, device, equipment and storage medium
CN112131563B (en) * 2019-06-24 2024-04-26 国民技术股份有限公司 Template attack testing method, device, equipment and storage medium
CN110572251A (en) * 2019-08-13 2019-12-13 武汉大学 Template attack method and device, and template attack resistance evaluation method
CN111565189A (en) * 2020-04-30 2020-08-21 衡阳师范学院 Side channel analysis method based on deep learning
CN111565189B (en) * 2020-04-30 2022-06-14 衡阳师范学院 Side channel analysis method based on deep learning
CN111586071B (en) * 2020-05-19 2022-05-20 上海飞旗网络技术股份有限公司 Encryption attack detection method and device based on recurrent neural network model
CN111586071A (en) * 2020-05-19 2020-08-25 上海飞旗网络技术股份有限公司 Encryption attack detection method and device based on recurrent neural network model
CN111934852A (en) * 2020-08-10 2020-11-13 北京邮电大学 AES cryptographic chip electromagnetic attack method and system based on neural network
CN111970280B (en) * 2020-08-18 2022-05-06 中南大学 Attack detection method of continuous variable quantum key distribution system
CN111970280A (en) * 2020-08-18 2020-11-20 中南大学 Attack detection method of continuous variable quantum key distribution system
CN112000972A (en) * 2020-08-21 2020-11-27 中国人民解放军陆军工程大学 Encryption equipment security evaluation method
CN112000972B (en) * 2020-08-21 2024-04-19 中国人民解放军陆军工程大学 Encryption equipment security assessment method
CN112615714A (en) * 2020-12-29 2021-04-06 清华大学苏州汽车研究院(吴江) Side channel analysis method, device, equipment and storage medium
CN112615714B (en) * 2020-12-29 2022-07-12 清华大学苏州汽车研究院(吴江) Side channel analysis method, device, equipment and storage medium
CN112887323A (en) * 2021-02-09 2021-06-01 上海大学 Network protocol association and identification method for industrial internet boundary security
CN112887323B (en) * 2021-02-09 2022-07-12 上海大学 Network protocol association and identification method for industrial internet boundary security
CN113158179A (en) * 2021-03-17 2021-07-23 成都信息工程大学 Learning-based side-channel attack method with automatic leakage model discovery, and encryption device
CN113158179B (en) * 2021-03-17 2022-07-22 成都信息工程大学 Learning-based side-channel attack method with automatic leakage model discovery, and encryption device
CN113919395A (en) * 2021-10-12 2022-01-11 大连理工大学 Water supply pipe network leakage accident diagnosis method based on one-dimensional convolutional neural network
CN116094815A (en) * 2023-02-03 2023-05-09 广州万协通信息技术有限公司 Data encryption processing method and device based on flow self-adaptive control adjustment
CN116094815B (en) * 2023-02-03 2023-12-22 广州万协通信息技术有限公司 Data encryption processing method and device based on flow self-adaptive control adjustment

Similar Documents

Publication Publication Date Title
CN108880781A (en) A mask-free neural network attack method against masking-protected encryption devices
CN107005404A (en) Hardened white-box implementation 1
CN106375079A (en) Chaotic encryption method for voice information
CN106030668A (en) Methods and systems for multi-key veritable biometric identity authentication
CN103258312A (en) Digital image encryption method with rapid key stream generative mechanism
CN109635530A (en) An intelligent password guessing method based on user group attributes
CN107241324A (en) Machine-learning-based power consumption compensation method and circuit for resisting side-channel attacks on cryptographic circuits
CN109726565A (en) White-box usage in leakage-resilient primitives
Liu et al. Chaos-based color image encryption using one-time keys and Choquet fuzzy integral
CN111934852A (en) AES password chip electromagnetic attack method and system based on neural network
Abd et al. Classification and identification of classical cipher type using artificial neural networks
CN109361830A (en) Image encryption method based on plaintext
CN106778520A (en) Finger vein fuzzy vault encryption method
Ghandali et al. Profiled power-analysis attacks by an efficient architectural extension of a CNN implementation
CN103888245A (en) S-box randomization method and system for smart cards
CN105827632B (en) Cloud computing CCS fine-grained data control method
CN104618092A (en) Information encryption method and system
Choi et al. EEJE: Two-step input transformation for robust DNN against adversarial examples
Ergun Privacy preserving face recognition in encrypted domain
CN114095182B (en) Dynamic response and security authentication method and system based on strong PUF
Jadon et al. Application of binary particle swarm optimization in cryptanalysis of DES
Dworak et al. Cryptanalysis of SDES using modified version of binary particle swarm optimization
Han et al. A biometric encryption approach incorporating fingerprint indexing in key generation
Tanaka et al. On the transferability of adversarial examples between encrypted models
CN109660695B (en) Color image encryption method based on genetic simulated annealing algorithm and chaotic mapping

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
Application publication date: 20181123