CN112966611A - Energy trace noise self-adaption method of DWT attention mechanism - Google Patents

Energy trace noise self-adaption method of DWT attention mechanism

Info

Publication number
CN112966611A
CN112966611A
Authority
CN
China
Prior art keywords
channel
feature
features
low
dwt
Prior art date
Legal status
Pending
Application number
CN202110256287.4A
Other languages
Chinese (zh)
Inventor
胡红钢 (Hu Honggang)
金敏慧 (Jin Minhui)
Current Assignee
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN202110256287.4A priority Critical patent/CN112966611A/en
Publication of CN112966611A publication Critical patent/CN112966611A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02 Preprocessing
    • G06F2218/04 Denoising
    • G06F2218/06 Denoising by applying a scale-space analysis, e.g. using wavelet analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 Protecting data
    • G06F21/602 Providing cryptographic facilities or services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/52 Scale-space analysis, e.g. wavelet analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08 Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12 Classification; Matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Signal Processing (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioethics (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Complex Calculations (AREA)

Abstract

The invention discloses an energy trace noise self-adaption method of a DWT attention mechanism, which uses DWT to split features into a high-frequency part and a low-frequency part and uses an attention mechanism to assign weights to the two parts, thereby reducing the learning of noise by the convolutional neural network model. The method also improves the robustness of the convolutional neural network model to noise in the energy trace during learning and reduces the influence of noise without requiring any preprocessing of the energy trace, thereby improving the adaptability of the convolutional neural network to noisy energy traces.

Description

Energy trace noise self-adaption method of DWT attention mechanism
Technical Field
The invention relates to the technical field of cryptographic security, and in particular to an energy trace noise self-adaption method of a DWT attention mechanism.
Background
Side-channel attacks were first proposed in 1996. They recover key information inside a cryptographic chip by exploiting physical leakage, such as timing, power consumption, electromagnetic emanation or even sound, produced while the cryptographic device performs its computations.
An attacker collects power-consumption or electromagnetic energy traces during the encryption process of the cryptographic device and recovers its key information through methods such as modeling or statistical analysis. With the development of side-channel attacks, a specific form of attack, the profiled side-channel attack, has gradually taken shape.
The profiled side-channel attack rests on a strong assumption: the attacker possesses a cryptographic device (Device1) identical to the device under attack (Device2) and can fully control it, including setting the key of the internal cryptographic algorithm. The profiled side-channel attack is divided into a modeling stage and an attack stage. In the modeling stage, the attacker collects a large number of energy traces on Device1 and uses them to build an attack model. In the attack stage, attack energy traces are collected on the target Device2 and the key is recovered using the model obtained in the modeling stage.
Later, convolutional neural network (CNN) models were introduced into side-channel attacks as profiled attack models. As a side-channel attack model, the whole attack is again divided into two stages: a modeling stage and an attack stage. In the modeling stage, energy traces of Device1 are collected, a label is generated for each trace, and a CNN model is trained with the traces and labels. In the attack stage, energy traces of the target Device2 are acquired and the trained CNN model is used to recover the key information.
Although the convolutional neural network model is an effective attack model, combining it with side-channel attacks raises certain problems, one of which is noise in the energy trace. When energy traces are collected during the encryption process of a cryptographic device, environmental or electronic noise is unavoidable, and traces generally contain a large amount of electronic noise. This electronic noise severely interferes with the learning of the neural network model, yet obtaining a good network model is the key to a successful attack. How to reduce the influence of noise in the energy trace on the neural network learning process is therefore an urgent problem.
In terms of noise processing, the earliest denoising method was to average the energy traces. It relies on the electronic noise in the traces following a Gaussian distribution with zero mean, so that averaging effectively removes the noise, but it consumes a large number of traces. Le et al. remove noise from the energy trace in the preprocessing stage using fourth-order cumulants, exploiting the fact that the useful signal in the trace is non-Gaussian while the noise is Gaussian: all cumulants of a Gaussian distribution above second order are zero, so the fourth-order cumulant of the useful signal plus the Gaussian noise equals the fourth-order cumulant of the useful signal alone. Wei et al. remove Gaussian noise in the preprocessing stage using low-pass filtering at 60 MHz. Other methods remove noise through feature engineering, selecting points of interest that carry useful information from the energy trace with Principal Component Analysis (PCA), Kernel Discriminant Analysis (KDA) or known-ciphertext CPA analysis, thereby reducing the noise introduced by the other sampling points. Wu et al. use an autoencoder from deep learning as a denoising model in the preprocessing stage: its input is a noisy energy trace and its output is a denoised trace. The process has two stages: the encoding stage compresses the input trace so that the compressed representation retains as much information as possible, and the decoding stage produces an output trace with the same dimensionality as the input. However, the autoencoder approach also requires a large number of traces for training, which inevitably increases the cost of energy acquisition. In summary, the existing noise processing methods mainly have the following disadvantages:
1. A large number of traces are needed in the preprocessing denoising stage, and a large number of traces are also needed to train the convolutional neural network, so the workload of the trace acquisition stage is heavy.
2. Preprocessing denoising methods can affect the correlation between energy traces, and for encryption algorithms protected by countermeasures this can weaken the attack capability of the network.
3. Preprocessing denoising takes considerable time, increasing the complexity of the whole attack process.
Disclosure of Invention
The invention aims to provide an energy trace noise self-adaption method of a DWT attention mechanism, offering a feasible scheme for improving the robustness of convolutional neural network models to energy trace noise in side-channel attacks.
The purpose of the invention is realized by the following technical scheme:
an energy trace noise adaptation method for a DWT attention mechanism, comprising:
converting the features in the energy trace extracted after the convolution operation into high-frequency feature components and low-frequency feature components through discrete wavelet transform, and merging the high-frequency feature components and the low-frequency feature components along the channel dimension; the high-frequency feature component is the detail feature component of the energy trace features, and the low-frequency feature component is the approximate feature component of the energy trace features;
and, for the merged features, enhancing the non-noise feature information through a channel attention module and enhancing the feature information at non-noise positions through a spatial attention module, so as to obtain enhanced feature information.
According to the technical scheme provided by the invention, the features are divided into a high-frequency part and a low-frequency part using DWT, and the two parts are weighted using an attention mechanism, so that the learning of noise by the convolutional neural network model is reduced. The method also improves the robustness of the convolutional neural network model to noise in the energy trace during learning and reduces the influence of noise without requiring any preprocessing of the energy trace, thereby improving the adaptability of the convolutional neural network to noisy energy traces.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic diagram of an energy trace noise adaptive method of a DWT attention mechanism according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a basic structure of a convolutional neural network model according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a DWT layer provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of a channel attention module provided in an embodiment of the present invention;
FIG. 5 is a schematic diagram of a spatial attention module according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
An embodiment of the present invention provides an energy trace noise adaptive method of a DWT attention mechanism, as shown in fig. 1, which mainly includes: converting the features in the energy trace extracted after the convolution operation into high-frequency feature components and low-frequency feature components through Discrete Wavelet Transform (DWT), and merging the high-frequency and low-frequency feature components along the channel dimension; then, for the merged features, enhancing the non-noise feature information through a channel attention module (Channel Module) and enhancing the feature information at non-noise positions through a spatial attention module (Spatial Module), so as to obtain the enhanced feature information.
In the above scheme provided by the embodiment of the present invention, the high-frequency and low-frequency feature components obtained by the DWT operation are merged along the channel dimension, and the channel attention mechanism assigns a weight to the features of each channel; this weight assignment is continuously adjusted during network learning according to the influence of the different features on the prediction result. Therefore, by using DWT to split the features into a high-frequency part and a low-frequency part and using the attention mechanism to weight the two parts, the learning of noise by the convolutional neural network model is reduced.
This improves the robustness of the convolutional neural network model to noise in the energy trace during learning and reduces the influence of noise without requiring any preprocessing of the energy trace, thereby improving the adaptability of the convolutional neural network to noisy energy traces. As for the technology itself, the above scheme can improve the attack effect of the model, but the invention does not cover the subsequent applications of the model; users can choose the application direction of the model according to their actual situation, for example a scientific research institution may use it for a specific testing task.
For ease of understanding, the present invention is described in detail below.
First, the design and implementation of the convolutional neural network model.
The above scheme of the embodiment of the present invention is applied to the learning process of the convolutional neural network model, and therefore, a relevant description is first made for the convolutional neural network model.
As shown in fig. 2, a structure of the convolutional neural network model is provided. It comprises a plurality of sequentially connected residual blocks; a flattening layer (flatten), a fully connected layer (FC) and a softmax layer (s) are arranged in sequence after the last residual block; adjacent residual blocks are connected by a base block, which consists of an activation function (σ) followed by a pooling layer (δ); each residual block contains a plurality of convolutional layers (γ) with an activation function between them, and the input and output of the residual block are connected by a shortcut connection.
In the embodiment of the invention, the softmax layer outputs the predicted probability that the energy trace belongs to each category. Illustratively, the output of the S-box can be used as the class label, giving 256 classes.
In the embodiment of the present invention, the architecture of the convolutional neural network model can be expressed as a composition formula; the formula image is not reproduced here. The formula omits the flatten layer; n1, n2 and n3 denote the number of fully connected layers, the number of base blocks and the number of residual blocks; a composition symbol denotes the connections between layers; a further symbol denotes the shortcut link; x is the input energy trace; λ denotes a fully connected layer.
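Purely as an illustrative sketch of how such a composition formula could look with the symbols defined for fig. 2 (this is an assumption about the notation, not the patent's exact expression, and the base-block count n2 is left implicit):

s ∘ λ^{n1} ∘ [ δ ∘ σ ∘ ((γ ∘ σ ∘ γ) ⊕ id) ]^{n3} (x)

where ∘ denotes the connection between layers, ⊕ id denotes the shortcut link adding the block input to the block output, and the flatten layer before λ is omitted.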
The following exemplary parameterization of the convolutional neural network model is given (a minimal code sketch follows the list):
a) The number of fully connected layers is set to 2.
b) Each residual block is composed of two convolutional layers plus an activation function; the activation function used is ReLU.
c) In the first residual block, f consists of a convolution operation and an average pooling operation; in the subsequent residual blocks, f is a convolution operation, so that the input and the output of each residual block have the same dimensions, which facilitates the shortcut connection.
d) For the convolutional layers, the filter size is set to 11 with a step size of 1.
e) For the pooling layer, average pooling is used, with a pooling step of 2 and a pooling window of 2.
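To make the configuration above concrete, the following is a minimal sketch of such a residual CNN in PyTorch. The framework choice, the channel widths, the trace length and the number of residual blocks are illustrative assumptions, not values from the patent; only the two convolutional layers with ReLU per residual block, the filter size 11 with stride 1, the average pooling with window 2 and step 2, the two fully connected layers and the 256-class softmax output follow the parameterization above.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two convolutional layers (filter size 11, stride 1) with an activation
    between them and a shortcut connection from input to output."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size=11, stride=1, padding=5)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size=11, stride=1, padding=5)
        self.relu = nn.ReLU()
        # 1x1 convolution on the shortcut so that input and output dimensions match.
        self.shortcut = nn.Conv1d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv2(self.relu(self.conv1(x))) + self.shortcut(x)

class SideChannelCNN(nn.Module):
    """Residual blocks joined by base blocks (activation + average pooling,
    window 2, step 2), followed by flatten, two fully connected layers and a
    softmax over the 256 S-box output classes."""
    def __init__(self, trace_len: int = 700, channels=(1, 32, 64, 128)):
        super().__init__()
        layers = []
        for i in range(len(channels) - 1):
            layers.append(ResidualBlock(channels[i], channels[i + 1]))
            layers.append(nn.ReLU())                              # base block: activation ...
            layers.append(nn.AvgPool1d(kernel_size=2, stride=2))  # ... followed by pooling
        self.features = nn.Sequential(*layers)
        feat_len = trace_len // 2 ** (len(channels) - 1)          # length after the poolings
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(channels[-1] * feat_len, 128),
            nn.ReLU(),
            nn.Linear(128, 256),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, trace_len); output: per-class probabilities for 256 labels.
        # (For training with nn.CrossEntropyLoss one would return the logits instead.)
        return torch.softmax(self.classifier(self.features(x)), dim=1)

model = SideChannelCNN()
probs = model(torch.randn(4, 1, 700))   # shape (4, 256)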
Second, the design and implementation of the DWT attention mechanism.
As shown in fig. 1, the feature in the energy trace processed by the discrete wavelet transform is the output of the last convolutional layer in the first residual block, and the output of the first residual block is the enhanced feature information. Specifically, the DWT attention mechanism mainly comprises three parts: a DWT layer, a channel attention module and a spatial attention module. First, the DWT layer decomposes the features output by the convolutional layer into a high-frequency feature component and a low-frequency feature component, and the two types of features are merged along the channel dimension, which makes the subsequent learning of key information easier. In the next step, the learning of noise is reduced through selective learning of the high-frequency features, i.e. the components in which the noise resides. The channel attention module that follows learns the features selectively: important features receive relatively large weights while unimportant features receive small ones, so that key feature information is learned by assigning weight information to the different features. Finally, the spatial attention module locates the spatial positions of the key information; since not every point in the feature carries useful information, weighting the spatial positions of the feature reduces the learning of irrelevant positions and further strengthens the feature. Through these three parts, the convolutional neural network model can spontaneously reduce its learning of the noise in the energy trace during training, which greatly improves the adaptability of the convolutional neural network to noisy energy traces.
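As a concrete illustration of the first part (the DWT layer), the sketch below uses the PyWavelets library mentioned later in the text to decompose a feature map into its low-frequency (approximation) and high-frequency (detail) components and to merge them along the channel dimension. The feature shape is an assumed example; the attention modules that follow it are sketched in the next subsections.

import numpy as np
import pywt

def dwt_split_and_merge(features: np.ndarray) -> np.ndarray:
    """Single-level Haar DWT of each channel along the time axis, followed by
    merging the approximation (low-frequency) and detail (high-frequency)
    components along the channel dimension."""
    low, high = pywt.dwt(features, "haar", axis=-1)    # each: (channels, length // 2)
    return np.concatenate([low, high], axis=0)         # (2 * channels, length // 2)

# Example: 64 feature channels of length 128 become 128 channels of length 64,
# which are then passed to the channel and spatial attention modules.
merged = dwt_split_and_merge(np.random.randn(64, 128))
print(merged.shape)                                    # (128, 64)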
The implementation principle of the DWT attention mechanism is described in detail below.
1. Design and implementation of DWT.
As shown in fig. 3, using the discrete wavelet transform, the features output by the convolutional layer can be divided into a detail feature component, which represents the high-frequency signal in the features and in which the noise is generally located, and an approximate feature component, which represents the low-frequency signal content of the features.
In the embodiment of the invention, the wavelet basis used in the discrete wavelet transform is Haar, and the high-frequency feature component and the low-frequency feature component are generated by a high-pass filter h_high and a low-pass filter h_low, respectively. For example, the high-pass and low-pass filters of the Haar wavelet basis can be obtained using the PyWavelets library; a high-frequency transformation matrix and a low-frequency transformation matrix are then generated from these two filters, and applying the matrices yields the high-frequency and low-frequency feature components. The spatial dimension of each of the high-frequency and low-frequency feature components is half of the spatial dimension of the input energy trace features.
The high-pass filter passes high-frequency information, which corresponds to local fluctuations that change relatively quickly and in which the noise is generally located. The low-pass filter passes low-frequency information, which corresponds to local averages that vary slowly. In the Haar wavelet basis, the high-pass filter h_high and the low-pass filter h_low are represented as:
h_low = (1/√2) · (1, 1),   h_high = (1/√2) · (1, −1)
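The following sketch illustrates the transformation-matrix construction described above: the Haar decomposition filters are read from PyWavelets and placed into low- and high-frequency transformation matrices, so that applying them to a length-T feature (T assumed even here) halves the spatial dimension. The matrix layout itself is an illustrative assumption.

import numpy as np
import pywt

def haar_transform_matrices(T: int):
    """Build (T/2, T) low- and high-frequency transformation matrices from the
    Haar decomposition filters (filtering followed by downsampling by 2)."""
    wav = pywt.Wavelet("haar")
    h_low = np.asarray(wav.dec_lo)    # low-pass decomposition filter
    h_high = np.asarray(wav.dec_hi)   # high-pass decomposition filter; note that
                                      # PyWavelets' sign convention may differ from the
                                      # representation above by an overall sign, which
                                      # only flips the sign of the detail component
    W_low = np.zeros((T // 2, T))
    W_high = np.zeros((T // 2, T))
    for i in range(T // 2):
        W_low[i, 2 * i:2 * i + 2] = h_low
        W_high[i, 2 * i:2 * i + 2] = h_high
    return W_low, W_high

W_low, W_high = haar_transform_matrices(8)
x = np.arange(8.0)
approx, detail = W_low @ x, W_high @ x   # low- and high-frequency components, length 4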
2. design and implementation of attention mechanism
In the embodiment of the present invention, the attention mechanism used is the Convolutional Block Attention Module (CBAM) proposed by Woo et al. CBAM is divided into two parts and emphasizes features along two dimensions: the channel dimension and the spatial dimension. The channel dimension is mainly used to emphasize useful feature information, while the spatial dimension is used to locate the positions of important features in the energy trace.
1) Channel attention module.
As shown in fig. 4, the operation of the channel attention module is as follows: for each channel feature F of the merged features, a Global Average Pooling operation (GAP) and a Global Maximum Pooling operation (GMP) are applied independently to obtain an average feature and a maximum feature; both are fed into a shared layer, and an activation function ReLU is then used to generate the channel weights, which are the weights the model assigns to the features during learning; the channel weights are then multiplied by the corresponding channel features to obtain the channel-weighted features. Illustratively, the shared layer may consist of an MLP (multi-layer perceptron) with one hidden layer.
The working process of the channel attention module is expressed by the following formula:
M_c(F) = σ(MLP(GAP(F)) + MLP(GMP(F))) ⊗ F
where M_c(F) denotes the channel-weighted feature obtained after the channel feature F is input into the channel attention module, GAP(·) and GMP(·) denote the global average pooling and global maximum pooling operations respectively, MLP denotes the shared layer, σ denotes the activation function, and ⊗ denotes channel-wise multiplication.
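A minimal PyTorch sketch of such a channel attention module follows, assuming 1-D features of shape (batch, channels, length); the hidden-layer reduction ratio is an illustrative choice, and the weight-generating activation follows the ReLU stated in the text (the original CBAM uses a sigmoid here).

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention for features of shape (batch, channels, length)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Shared layer: an MLP with one hidden layer, applied to both pooled vectors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.act = nn.ReLU()   # sigma in the formula above, per the text

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gap = x.mean(dim=2)                           # global average pooling -> (batch, channels)
        gmp = x.max(dim=2).values                     # global maximum pooling -> (batch, channels)
        w = self.act(self.mlp(gap) + self.mlp(gmp))   # channel weights
        return x * w.unsqueeze(-1)                    # multiply each channel by its weight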
2) Spatial attention module.
As shown in fig. 5, the operation of the spatial attention module is as follows: an average feature sample vector and a maximum feature sample vector are generated along the spatial dimension of the input feature Q using an average-pooling (AP) and a maximum-pooling (MP) operation, respectively; the two vectors are then concatenated, and a convolutional layer followed by an activation function ReLU generates the spatial weight information, which contains a weight for each spatial position in the feature; finally, each spatial position of the input feature Q is multiplied by its corresponding weight to obtain the spatially weighted feature.
The working process of the space attention module is expressed by the following formula:
M_s(Q) = σ(γ([AP(Q); MP(Q)])) ⊗ Q
where M_s(Q) denotes the spatially weighted feature obtained after the input feature Q is processed by the spatial attention module, the input feature Q being the channel-weighted feature output by the channel attention module; AP(·) and MP(·) denote the average pooling and maximum pooling operations respectively, [· ; ·] denotes concatenation, γ denotes the convolutional layer, σ denotes the activation function, and ⊗ denotes position-wise multiplication.
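A matching sketch of the spatial attention module, under the same assumptions (1-D features of shape (batch, channels, length)); the convolution kernel size is an illustrative choice and ReLU is used as the activation per the text:

import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Spatial attention for features of shape (batch, channels, length)."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        # A convolution over the concatenated average/max maps produces one weight per position.
        self.conv = nn.Conv1d(2, 1, kernel_size=kernel_size, padding=kernel_size // 2)
        self.act = nn.ReLU()   # sigma in the formula above, per the text

    def forward(self, q: torch.Tensor) -> torch.Tensor:
        avg = q.mean(dim=1, keepdim=True)         # average-pooled map -> (batch, 1, length)
        mx = q.max(dim=1, keepdim=True).values    # max-pooled map     -> (batch, 1, length)
        w = self.act(self.conv(torch.cat([avg, mx], dim=1)))   # spatial weights (batch, 1, length)
        return q * w                              # weight every spatial position

In the DWT attention block, the channel-merged DWT output would first be re-weighted by the channel attention module sketched above, and the resulting Q would then be passed through this spatial attention module.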
The above describes the principles involved in the invention; the model performance is described below in terms of the evaluation metric.
Rank is used as the metric describing model performance. Given N_a attack energy traces, for each energy trace the model outputs a probability vector p = {p_1, p_2, ..., p_|k|}, where |k| is the key guessing space, i.e. the 256-class output space of the model. If the model uses the output of the S-box as the class and the plaintext is known, the probabilities of the 256 key guesses (0-255) can be obtained. A log-form maximum likelihood method over the N_a trace probabilities then yields the key guessing vector g = {g_1, g_2, ..., g_|k|}, whose entries are sorted in decreasing order of probability, so that g_1 corresponds to the final guessed key. The key guessing probability is computed as follows:
g_i = Σ_{j=1}^{N_a} log(p_{i,j})
where p_{i,j} denotes the predicted probability of key candidate i (a key class) for the j-th attack energy trace.
Rank indicates the position of the correct key k in the guessed key vector g (i.e. the index at which the correct key appears in the guess vector). When Rank equals 0, that is, when the probability of the correct key is greater than that of every other guess, the attack is successful. The aim is to recover the correct key with the fewest energy traces.
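A short sketch of this metric, assuming probs is an (N_a, 256) NumPy array holding, for each of the N_a attack traces, the model's output probabilities already mapped to the 256 key candidates via the known plaintexts and the S-box, and true_key is the correct key byte:

import numpy as np

def key_rank(probs: np.ndarray, true_key: int, eps: float = 1e-36) -> int:
    """Log-form maximum likelihood key ranking: rank 0 means the attack succeeded."""
    g = np.log(probs + eps).sum(axis=0)        # guessing vector g, one score per key candidate
    order = np.argsort(g)[::-1]                # key candidates sorted by decreasing likelihood
    return int(np.where(order == true_key)[0][0])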
Through the above description of the embodiments, it is clear to those skilled in the art that the above embodiments can be implemented by software, and can also be implemented by software plus a necessary general hardware platform. With this understanding, the technical solutions of the embodiments can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.), and includes several instructions for enabling a computer device (which can be a personal computer, a server, or a network device, etc.) to execute the methods according to the embodiments of the present invention.
It will be clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be performed by different functional modules according to needs, that is, the internal structure of the system is divided into different functional modules to perform all or part of the above described functions.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (5)

1. An energy trace noise adaptation method for a DWT attention mechanism, comprising:
converting the features in the energy trace extracted after the convolution operation into high-frequency feature components and low-frequency feature components through discrete wavelet transform, and merging the high-frequency feature components and the low-frequency feature components along the channel dimension; the high-frequency feature component is the detail feature component of the energy trace features, and the low-frequency feature component is the approximate feature component of the energy trace features;
and, for the merged features, enhancing the non-noise feature information through a channel attention module and enhancing the feature information at non-noise positions through a spatial attention module, so as to obtain enhanced feature information.
2. The energy trace noise adaptation method for the DWT attention mechanism of claim 1, characterized in that the method is applied to the learning process of a convolutional neural network model; the convolutional neural network model comprises a plurality of sequentially connected residual blocks, a flattening layer, a fully connected layer and a softmax layer being arranged in sequence after the last residual block, and adjacent residual blocks being connected by a sequentially arranged activation function and pooling layer; each residual block is provided with a plurality of convolutional layers with an activation function between them, and the input and the output of the residual block are connected by a shortcut link;
the feature in the energy trace processed by the discrete wavelet transform is the output of the last convolution layer in the first residual block, and the output of the first residual block is the enhanced feature information.
3. The energy trace noise adaptation method for the DWT attention mechanism of claim 1, characterized in that the wavelet basis used in the discrete wavelet transform is Haar, and the high-frequency feature component and the low-frequency feature component are generated by a high-pass filter h_high and a low-pass filter h_low, respectively, the high-pass filter h_high and the low-pass filter h_low being represented as:
h_low = (1/√2) · (1, 1),   h_high = (1/√2) · (1, −1)
and the spatial dimensions of the high-frequency feature component and the low-frequency feature component are half of the spatial dimension of the input energy trace features.
4. The energy trace noise adaptation method for the DWT attention mechanism of claim 1, characterized in that the working process of the channel attention module comprises:
for each channel feature F of the merged features, independently applying a global average pooling operation and a global maximum pooling operation to obtain an average feature and a maximum feature, feeding both into a shared layer, then using an activation function to generate channel weights, and multiplying the channel weights by the corresponding channel features to obtain channel-weighted features; the working process being expressed by the formula:
M_c(F) = σ(MLP(GAP(F)) + MLP(GMP(F))) ⊗ F
where M_c(F) denotes the channel-weighted feature obtained after the channel feature F is input into the channel attention module, GAP(·) and GMP(·) denote the global average pooling operation and the global maximum pooling operation respectively, MLP denotes the shared layer, σ denotes the activation function, and ⊗ denotes channel-wise multiplication.
5. The energy trace noise adaptation method for the DWT attention mechanism of claim 1, characterized in that the working process of the spatial attention module comprises:
generating an average feature sampling vector and a maximum feature sampling vector along the spatial dimension of the input feature Q using an average pooling operation and a maximum pooling operation, respectively; then concatenating the two vectors and using a convolutional layer and an activation function to generate spatial weight information, the spatial weight information including weight information for each spatial position in the feature; and finally multiplying each spatial position of the input feature Q by the corresponding weight information to obtain a spatially weighted feature; the working process being expressed by the formula:
M_s(Q) = σ(γ([AP(Q); MP(Q)])) ⊗ Q
where M_s(Q) denotes the spatially weighted feature obtained after the input feature Q is processed by the spatial attention module, the input feature Q being the channel-weighted feature output by the channel attention module; AP(·) and MP(·) denote the average pooling operation and the maximum pooling operation respectively, [· ; ·] denotes concatenation, γ denotes the convolutional layer, σ denotes the activation function, and ⊗ denotes position-wise multiplication.
CN202110256287.4A 2021-03-09 2021-03-09 Energy trace noise self-adaption method of DWT attention mechanism Pending CN112966611A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110256287.4A CN112966611A (en) 2021-03-09 2021-03-09 Energy trace noise self-adaption method of DWT attention mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110256287.4A CN112966611A (en) 2021-03-09 2021-03-09 Energy trace noise self-adaption method of DWT attention mechanism

Publications (1)

Publication Number Publication Date
CN112966611A true CN112966611A (en) 2021-06-15

Family

ID=76277010

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110256287.4A Pending CN112966611A (en) 2021-03-09 2021-03-09 Energy trace noise self-adaption method of DWT attention mechanism

Country Status (1)

Country Link
CN (1) CN112966611A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117040722A (en) * 2023-10-08 2023-11-10 杭州海康威视数字技术股份有限公司 Side channel analysis method based on multi-loss regularized noise reduction automatic encoder
CN117076858A (en) * 2023-08-18 2023-11-17 东华理工大学 Deep learning-based low-frequency geomagnetic strong interference suppression method and system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104732076A (en) * 2015-03-12 2015-06-24 成都信息工程学院 Method for extracting energy trace characteristic of side channel
CN106709891A (en) * 2016-11-15 2017-05-24 哈尔滨理工大学 Image processing method based on combination of wavelet transform and self-adaptive transform
CN108270543A (en) * 2017-11-22 2018-07-10 北京电子科技学院 A kind of side-channel attack preprocess method based on small echo spatial domain correlation method
CN108537716A (en) * 2018-01-24 2018-09-14 重庆邮电大学 A kind of color image encryption embedding grammar based on discrete domain
CN111080541A (en) * 2019-12-06 2020-04-28 广东启迪图卫科技股份有限公司 Color image denoising method based on bit layering and attention fusion mechanism
WO2020181685A1 (en) * 2019-03-12 2020-09-17 南京邮电大学 Vehicle-mounted video target detection method based on deep learning
CN111985411A (en) * 2020-08-21 2020-11-24 中国科学技术大学 Energy trace preprocessing method based on Sinc convolution noise reduction self-encoder
US20210042887A1 (en) * 2019-08-07 2021-02-11 Electronics And Telecommunications Research Institute Method and apparatus for removing compressed poisson noise of image based on deep neural network

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104732076A (en) * 2015-03-12 2015-06-24 成都信息工程学院 Method for extracting energy trace characteristic of side channel
CN106709891A (en) * 2016-11-15 2017-05-24 哈尔滨理工大学 Image processing method based on combination of wavelet transform and self-adaptive transform
CN108270543A (en) * 2017-11-22 2018-07-10 北京电子科技学院 A kind of side-channel attack preprocess method based on small echo spatial domain correlation method
CN108537716A (en) * 2018-01-24 2018-09-14 重庆邮电大学 A kind of color image encryption embedding grammar based on discrete domain
WO2020181685A1 (en) * 2019-03-12 2020-09-17 南京邮电大学 Vehicle-mounted video target detection method based on deep learning
US20210042887A1 (en) * 2019-08-07 2021-02-11 Electronics And Telecommunications Research Institute Method and apparatus for removing compressed poisson noise of image based on deep neural network
CN111080541A (en) * 2019-12-06 2020-04-28 广东启迪图卫科技股份有限公司 Color image denoising method based on bit layering and attention fusion mechanism
CN111985411A (en) * 2020-08-21 2020-11-24 中国科学技术大学 Energy trace preprocessing method based on Sinc convolution noise reduction self-encoder

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
CAGLAR AYTEKIN et al.: "A Sub-band Approach to Deep Denoising Wavelet Networks and a Frequency-adaptive Loss for Perceptual Quality", 《ARXIV:2102.07973V1》 *
DENGYONG ZHANG et al.: "An ECG Signal De-Noising Approach Based on Wavelet Energy and Sub-Band Smoothing Filter", 《APPLIED SCIENCES》 *
GU RUIZHE et al.: "Against deep learning side-channel attacks", 《中国科学技术大学学报》 (Journal of University of Science and Technology of China) *
MINHUI JIN et al.: "An Enhanced Convolutional Neural Network in Side-Channel Attacks and Its Visualization", 《ARXIV:2009.08898V1》 *
刘赢 et al.: "LPI radar signal recognition based on multi-scale residual network and wavelet transform" (基于多尺度残差网络和小波变换的LPI雷达信号识别), 《电讯技术》 (Telecommunication Engineering) *
韦子权 et al.: "Sparse-angle CT image restoration based on multi-scale wavelet residual network" (基于多尺度小波残差网络的稀疏角度CT图像恢复), 《J SOUTH MED UNIV》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117076858A (en) * 2023-08-18 2023-11-17 东华理工大学 Deep learning-based low-frequency geomagnetic strong interference suppression method and system
CN117076858B (en) * 2023-08-18 2024-06-04 东华理工大学 Deep learning-based low-frequency geomagnetic strong interference suppression method and system
CN117040722A (en) * 2023-10-08 2023-11-10 杭州海康威视数字技术股份有限公司 Side channel analysis method based on multi-loss regularized noise reduction automatic encoder
CN117040722B (en) * 2023-10-08 2024-02-02 杭州海康威视数字技术股份有限公司 Side channel analysis method based on multi-loss regularized noise reduction automatic encoder

Similar Documents

Publication Publication Date Title
Zhou et al. Adaptive genetic algorithm-aided neural network with channel state information tensor decomposition for indoor localization
Xiang et al. Open dnn box by power side-channel attack
Zhao et al. Learning salient and discriminative descriptor for palmprint feature extraction and identification
Chen et al. Learning a wavelet-like auto-encoder to accelerate deep neural networks
CN105843919A (en) Moving object track clustering method based on multi-feature fusion and clustering ensemble
CN112966611A (en) Energy trace noise self-adaption method of DWT attention mechanism
CN114970774B (en) Intelligent transformer fault prediction method and device
CN115588226A (en) High-robustness deep-forged face detection method
Wang et al. Privacy-preserving face recognition in the frequency domain
Zhu et al. Fingergan: a constrained fingerprint generation scheme for latent fingerprint enhancement
CN111985411A (en) Energy trace preprocessing method based on Sinc convolution noise reduction self-encoder
CN112800882A (en) Mask face posture classification method based on weighted double-flow residual error network
Yao et al. A recursive denoising learning for gear fault diagnosis based on acoustic signal in real industrial noise condition
Kwon et al. Improving non-profiled side-channel attacks using autoencoder based preprocessing
CN112861066A (en) Machine learning and FFT (fast Fourier transform) -based blind source separation information source number parallel estimation method
Xie et al. A new cost function for spatial image steganography based on 2d-ssa and wmf
Hu et al. A multi-grained based attention network for semi-supervised sound event detection
Pan et al. Disentangled representation and enhancement network for vein recognition
CN113537120B (en) Complex convolution neural network target identification method based on complex coordinate attention
CN115037437A (en) Side channel attack method and system based on deep learning by using SpecAugment technology
CN115733673B (en) Data anomaly detection method based on multi-scale residual error classifier
Noushath et al. Multimodal biometric fusion of face and palmprint at various levels
CN115270891A (en) Method, device, equipment and storage medium for generating signal countermeasure sample
Wang et al. ECLIPSE: Expunging clean-label indiscriminate poisons via sparse diffusion purification
Li et al. Online alternate generator against adversarial attacks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20210615)