CN114358268B - Software and hardware combined convolutional neural network model intellectual property protection method - Google Patents


Publication number
CN114358268B
Authority
CN
China
Prior art keywords
neural network
network model
convolutional neural
accelerator
subnet
Prior art date
Legal status
Active
Application number
CN202210018007.0A
Other languages
Chinese (zh)
Other versions
CN114358268A (en)
Inventor
张吉良
廖慧芝
伍麟珺
洪庆辉
陈卓俊
关振宇
Current Assignee
Hunan University
Original Assignee
Hunan University
Priority date
Filing date
Publication date
Application filed by Hunan University filed Critical Hunan University
Priority to CN202210018007.0A priority Critical patent/CN114358268B/en
Publication of CN114358268A publication Critical patent/CN114358268A/en
Application granted granted Critical
Publication of CN114358268B publication Critical patent/CN114358268B/en

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a software and hardware combined convolutional neural network model intellectual property protection method, which constructs a subnet and a non-subnet by retraining a neural network model twice and modifies the circuit structure of the accelerator computing unit according to the distribution of the subnet and the non-subnet. A key uniquely bound to the hardware is established with a DRAM PUF, and different input signals are generated according to the correctness of the key. If the key is correct, the input signal drives the accelerator computing unit circuit to select only the subnet weights of the model for computation, and the computation result is correct; otherwise, the input signal drives the circuit to select all weights of the model, and the computation result is wrong. Weight selection requires no extra selection time, the DRAM already present in the accelerator serves as the PUF for key verification, and no separate decryption process is needed, so the hardware cost is extremely low and the method achieves efficient, low-cost, and highly secure protection of the neural network model's weight intellectual property.

Description

Software and hardware combined convolutional neural network model intellectual property protection method
Technical Field
The invention relates to the field of information technology, in particular to a software and hardware combined convolutional neural network model intellectual property protection method.
Background
CNNs are widely used in character recognition, face recognition, speech recognition, image classification, and similar fields. The success of a CNN model depends directly on high-quality data sets. Many commercial data sets are proprietary because they contain trade secrets or customer privacy, and collecting and processing them incurs significant human and material cost. Furthermore, training a high-performance CNN model often requires expensive resources: the accelerators used for training (TPUs, GPUs, FPGAs, etc.) consume substantial energy, and training itself demands human effort and time. Tuning a model additionally requires experienced engineers applying their own knowledge and experience. Model providers profit by selling usage rights to their CNNs; if the IP of a CNN model is not protected, then once a dishonest user or malicious attacker steals or buys the model's weights and biases, the attacker can copy and distribute it to unauthorized end users, reducing the enterprise's profits and market share and potentially damaging its brand reputation. The IP of CNN models therefore requires protection.
IP protection work for neural network models has three main directions: protecting the training data set, protecting the accelerator, and protecting the model parameters. The model provider and the accelerator provider are typically trusted, but the path from the model provider to the user, and the user's subsequent inference on the accelerator, are not: model weight parameters may be stolen while the provider distributes the model to the user or during inference on the accelerator, leaving the thief free to use the corresponding neural network model.
Thus, to protect the model parameters' IP, the weights must be encrypted before the model provider delivers the model to the user, and a user who wants to use the model normally must obtain the correct key from the model provider for decryption. Existing work encrypts models in two main ways. The first and most common is to encrypt the weights with a conventional encryption algorithm and recover them with the key at the model inference stage. Accelerators used for inference are generally considered protected by schemes such as SGX, so a runtime attacker cannot directly access or operate inside the accelerator. However, if all weights are decrypted before inference, the decrypted model data can still be stolen while the accelerator reads the external weight data; if instead all encrypted weights are decrypted inside the accelerator, the accelerator's computational overhead grows greatly and its performance drops. The second method obfuscates the weights, e.g. by permuting weight positions, to protect their intellectual property. Compared with conventional encryption this reduces the time and hardware overhead of encryption and decryption, but simple obfuscation algorithms are weak, while more complex ones again incur large time and space overhead.
Disclosure of Invention
To solve these problems, the invention provides a software and hardware combined convolutional neural network model intellectual property protection method: a subnet and a non-subnet are constructed by retraining the neural network model twice, and the circuit structure of part of the accelerator's computing units is modified according to the distribution of the subnet and non-subnet, so that the modified computing units determine, from an input signal, which weights or partial results participate in computation. A key uniquely bound to the hardware is established with a DRAM PUF, and different input signals are generated according to the correctness of the key. If the key is correct, the generated signal controls the modified part of the accelerator's computing-unit circuit to select only the subnet weights of the model for computation; the result is correct and the model works normally. Otherwise, the generated signal controls the modified circuit to select all weights of the model; the result is wrong and the model cannot be used normally. The method binds the neural network model to specific accelerator hardware, effectively improves the security of the convolutional neural network model's weight data, requires no separate decryption process, and has extremely low time and hardware cost, achieving efficient, low-cost, and highly secure protection of the model's weight intellectual property.
In order to achieve the technical effects, the technical scheme of the invention is as follows:
a software and hardware combined convolutional neural network model intellectual property protection method comprises the following steps:
Step one, the convolutional neural network model provider obtains a correct training data set D and an erroneous training data set D2;
Step two, the convolutional neural network model provider performs a first retraining of the convolutional neural network model with the correct training data set D, dividing the model into a subnet part and a non-subnet part, with the weights of the non-subnet part all set to 0; the subnet part is trained with the correct training data set D to obtain the weight data of the subnet part, yielding a correctly trained convolutional neural network model;
Step three, the convolutional neural network model provider performs a second retraining with the erroneous training data set D2; during training the weights of the subnet part are kept unchanged and only the weights of the non-subnet part change, giving the weight data of the non-subnet part and a wrongly trained convolutional neural network model that, by design, outputs wrong results;
Step four, a DRAM region in the accelerator is set aside as the DRAM PUF region; the accelerator is started repeatedly to power up the DRAM PUF region, and the DRAM start-up initial values of different address ranges are measured; a DRAM address range serves as an excitation C of the DRAM PUF, and the DRAM initial value obtained for excitation C is the response R; several C-R pairs, CRPs for short, are obtained and used as the key;
Step five, the accelerator computing module is configured so that when the input key is correct, the accelerator computes with the weight data of the subnet part of the correctly trained convolutional neural network model and the model's input data, and outputs a correct result; otherwise it computes with the weight data of the non-subnet and subnet parts of the wrongly trained convolutional neural network model and the model's input data, and outputs a wrong result;
Step six, a user purchases the convolutional neural network model and the corresponding accelerator from the provider and obtains the key, comprising several CRPs, from the convolutional neural network model provider;
Step seven, the user inputs any CRP from the obtained key into the accelerator and starts it; the input CRP consists of an excitation C and the corresponding response R; the excitation C addresses a DRAM region whose start-up value gives a measured response R′; the similarity between R′ and R is computed, and if it is not smaller than a preset threshold σ, the key is considered correct: the accelerator computes with the weight data of the subnet part of the correctly trained convolutional neural network model and the model's input data, and outputs a correct result; otherwise, the accelerator computes with the weight data of the subnet and non-subnet parts of the wrongly trained convolutional neural network model and the model's input data, and outputs a wrong result.
In a further improvement, the erroneous training data set D2 is obtained by re-labeling the correct training data set D with wrong labels.
In a further improvement, the correct training data set D and the erroneous training data set D2 are both image data sets.
In a further improvement, the similarity between R′ and R is computed as the Jaccard coefficient J(R, R′) = |R ∩ R′| / |R ∪ R′|, where J(R, R′) denotes the similarity between R′ and R.
Further improvement, σ=0.95.
In a further improvement, in step five, when the input key CRP is correct, the computing unit select signal of the accelerator is set to 0; otherwise, it is set to 1.
The beneficial effects of the invention are as follows:
The invention constructs a subnet and a non-subnet by retraining the neural network model twice and modifies the circuit structure of part of the accelerator's computing units according to the distribution of the subnet and non-subnet. A key uniquely bound to the hardware is established with a DRAM PUF, and different input signals are generated according to the correctness of the key. If the key is correct, the generated signal controls the modified part of the accelerator's computing-unit circuit to select only the subnet weights of the model for computation, the result is correct, and the model can be used. Otherwise, the generated signal controls the modified circuit to select all weights of the model, the result is wrong, and the model cannot be used normally. Key verification relies on a lightweight security primitive, the DRAM PUF, binding the model's security to specific accelerator hardware and thereby improving security. The circuit modification to the accelerator costs very little additional hardware, so the method achieves efficient, low-cost, and highly secure protection of the neural network model's weight intellectual property.
Drawings
FIG. 1 is a block diagram of a frame of the present invention;
FIG. 2 is a workflow diagram of the present invention;
FIG. 3 is a diagram of an example subnet training;
FIG. 4 is a diagram of a circuit modification example of an accelerator;
FIG. 5 is a diagram of an example DRAM PUF.
Detailed Description
The technical scheme of the invention is specifically described below through the specific embodiments and with reference to the accompanying drawings.
The software and hardware combined CNN model IP protection method shown in fig. 1 and fig. 2 mainly comprises two major parts:
(1) Hardware architecture: modifying the working mode of a CNN accelerator calculation unit, and performing circuit level encryption by using CRP of a DRAM PUF;
(2) Software architecture: performing sparse-dense encryption training on the CNN model to modify CNN weight distribution.
The invention combines the software and hardware architectures into a cooperative neural network IP protection framework. The model provider distributes keys to legitimate users, who run model predictions using the accelerator and neural network model supplied by the provider. A legitimate user holds the correct key and runs the model on the specific accelerator, so the whole inference pipeline runs normally and gives correct prediction results. Once an illegitimate user runs the model, or the model runs on an unauthorized accelerator, the key error triggers a preset model fault, such as degraded prediction accuracy or a specific wrong prediction, rendering the model unusable.
The specific contents are as follows:
(1) Hardware architecture: modifying the working mode of a CNN accelerator calculation unit, and performing circuit level encryption by using CRP of a DRAM PUF;
The key distributed to the user is a CRP of the DRAM PUF, and the preset model fault is designed into the circuit structure of the accelerator. To obtain CRPs of the DRAM PUF, part of the DRAM inside the accelerator is set aside as the PUF region: the stimulus C is a DRAM address range, and the response R is the start-up value of that DRAM region when the accelerator powers on. Through repeated measurements, reliable and stable DRAM PUF CRPs are chosen and used as keys. As shown in the DRAM PUF example of fig. 5, there are 8×8 DRAM arrays, each called one DRAM bank. Within one bank, three DRAM regions with address ranges C1, C2, C3, each containing four DRAM cells, are selected as three stimuli. When the accelerator starts, the DRAM cells power up to initial values; the initial-value distributions of the cells corresponding to C1, C2, C3 are denoted R1, R2, R3. The pair C1, R1 is a challenge-response pair denoted CRP1; similarly, C2, R2 is denoted CRP2 and C3, R3 is denoted CRP3. When the user runs predictions on the accelerator, the key is input: one CRP of the DRAM PUF is taken at random, its stimulus C is applied to the accelerator's DRAM PUF region, and a response R′ is measured. If J(R, R′) ≥ σ, the key is correct, an input signal 0 is generated, and this signal makes the computing unit select the model's subnet weights for computation. If J(R, R′) < σ, the key is considered wrong, an input signal 1 is generated, the computing unit selects all model weights, a wrong prediction results, and the model is unusable. Here J(R, R′) is the Jaccard coefficient: the closer J(R, R′) is to 1, the closer R and R′ are. σ is a chosen threshold in the range 0 ≤ σ ≤ 1, e.g. σ = 0.95.
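As an illustration, the key check described above can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions: each response is encoded as a bit string of DRAM start-up values and the Jaccard coefficient is taken over the sets of cells that power up to '1' (the patent specifies only the coefficient, not the encoding); `jaccard` and `select_signal` are hypothetical helper names.

```python
def jaccard(r_ref, r_meas):
    """Jaccard coefficient between two DRAM start-up patterns.

    Each response is modelled as the set of cell indices that
    power up to '1' (an assumed representation)."""
    ones_ref = {i for i, b in enumerate(r_ref) if b == "1"}
    ones_meas = {i for i, b in enumerate(r_meas) if b == "1"}
    union = ones_ref | ones_meas
    if not union:
        return 1.0
    return len(ones_ref & ones_meas) / len(union)


def select_signal(r_ref, r_meas, sigma=0.95):
    """Accelerator select signal: 0 = key correct (subnet weights
    only), 1 = key wrong (all weights participate)."""
    return 0 if jaccard(r_ref, r_meas) >= sigma else 1
```

With a perfect re-measurement `select_signal("1010", "1010")` yields 0 (subnet mode), while a mismatched response yields 1 and forces the wrong full-weight computation.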
The circuit trigger mechanism for the accelerator to select the model subnet weight for calculation or the model total weight for calculation is designed as follows:
When processing convolution operations, accelerators rely mainly on addition and multiplication. To make only part of the weights enter the additions when the correct key is present, multiplexers (MUXs) must be inserted into the accelerator's computing units to select which weights are computed. In an adder-tree computation structure, MUXs are inserted at the non-subnet weight computation paths. In a MAC computation structure, a finite state machine can count the channels to determine which belong to the subnet.
First, the acceleration mode of an ordinary adder tree must be described, as in the accelerator circuit-modification example of fig. 4. A multiply-add tree with four multipliers and three adders is denoted A-M-T; the four multipliers are labeled M1, M2, M3, M4 and the adders A1, A2, A0. An input vector IM = (i1, i2, i3, i4) is to be computed together with a convolution kernel vector KN = (k1, k2, k3, k4); the result should be W = i1×k1 + i2×k2 + i3×k3 + i4×k4, and A-M-T accelerates this computation. The original acceleration mode of the multiply-add tree A-M-T is as follows: the IM vector distributes i1, i2, i3, i4 to multipliers M1, M2, M3, M4 respectively, and likewise the kernel vector KN distributes k1, k2, k3, k4 to M1, M2, M3, M4; the four multipliers compute i1×k1, i2×k2, i3×k3, i4×k4 in parallel, giving results m1, m2, m3, m4. Next, m1 and m2 are fed into adder A1 and summed to obtain sum1, while m3 and m4 are fed into adder A2 and summed to obtain sum2. Finally, sum1 and sum2 are fed into the last adder A0 to obtain the result W.
Building on this, the hardware modification of the adder tree adds two multiplexers MUX1, MUX2 between the multipliers and the adders A1, A2. MUX1 selects whether the multiplier result m2 is passed to adder A1: if it is passed in, A1 computes the normal sum1 = m1 + m2; if not, the result is sum1 = m1. MUX2 likewise selects whether m3 is passed to adder A2. If neither MUX1 nor MUX2 passes its input through, the final result is W = m1 + m4.
The select signal of the multiplexers depends on the comparison of J(R, R′) with σ: if the Jaccard coefficient is greater than or equal to σ, the select signal is set to 0, i.e. the signal makes the selectors pass only part of the weight inputs; if the Jaccard coefficient is less than σ, the select signal is set to 1 and all weight inputs are selected.
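The gating behavior of the modified adder tree can be illustrated with a small sketch. It assumes, for illustration only, that MUX1 gates the product m2 and MUX2 gates the product m3, consistent with the example where W = m1 + m4 when neither MUX passes its input through; `adder_tree` is a hypothetical name.

```python
def adder_tree(im, kn, select):
    """Four-multiplier / three-adder tree (A-M-T) with two MUXes.

    select = 0: key correct -> MUX1/MUX2 drop the non-subnet
    products m2 and m3, so only subnet weights contribute.
    select = 1: key wrong -> all four products are summed."""
    m1, m2, m3, m4 = (i * k for i, k in zip(im, kn))
    # MUX1 gates m2 into adder A1; MUX2 gates m3 into adder A2.
    sum1 = m1 + (m2 if select else 0)
    sum2 = m4 + (m3 if select else 0)
    return sum1 + sum2  # final adder A0
```

Note that select = 1 (the wrong-key case) yields the full dot product over all weights, which is exactly what makes the wrongly trained non-subnet weights corrupt the prediction.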
(2) Sparse-dense retraining of the CNN model modifies the CNN weight distribution, yielding the software encryption architecture.
Here the CNN's subnetwork must be divided out: a portion of the original neural network's weights is drawn as a sub-network of the original model. Related work on model pruning shows that removing part of a neural network's weights does not significantly degrade model performance, which is why models are often pruned for better prediction efficiency. The invention prunes and retrains the neural network to construct the subnetwork.
The construction of the subnetwork must be designed according to the hardware structure of the neural network accelerator. Denote each input feature map as X with shape (wx, hx, cx), where wx is the width of the input image, hx its height, and cx its number of channels. Let the parameters of the CNN model be W; the shape of the CNN is then expressed as (w, h, cin, cout), where w is the width, h the height, and cin, cout the numbers of input and output channels.
The CNN model accelerator speeds up computation mainly through parallelization, in two main modes: input-channel parallelism and pixel parallelism. In input-channel parallelism, different input channels are computed in parallel, and the result at a point is finally accumulated in the same adder tree or multiply-accumulate (MAC) tree. In pixel-level parallelism, for a convolution kernel of height h and width w, the weights of the same kernel are computed in parallel in the same adder tree or MAC tree.
Constructing the CNN subnet must follow the accelerator's parallelization mode, as shown in the subnet-training example of fig. 3. If the accelerator is pixel-level parallel, convolution kernel weights are chosen as the sub-network: taking a 3×3 kernel as an example, the cross formed by the kernel's middle row and middle column, 5 weights in total, forms the subnet (gray-filled in the figure), while the rest is the non-subnet (black-filled). If the accelerator is channel-level parallel, then of every n input channels the first i are taken as the subnet and the remaining n−i as the non-subnet. The choice of i is determined by the encryption performance; i should be as small as possible while balancing performance and subnet concealment. As shown in the figure, for original weights of 6 channels, the first 1 of every 2 channels is taken as the subnet; the subnet portion is gray-filled and the non-subnet portion black-filled.
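The two mask shapes described above can be sketched as follows. This is an illustrative sketch: the 0/1 mask encoding and the helper names `cross_mask` and `channel_mask` are assumptions, not part of the patent.

```python
def cross_mask(k=3):
    """Pixel-parallel case: cross-shaped subnet mask for a k x k
    kernel -- the middle row and middle column (5 weights for 3x3)."""
    mid = k // 2
    return [[1 if (r == mid or c == mid) else 0 for c in range(k)]
            for r in range(k)]


def channel_mask(c_in, n=2, i=1):
    """Channel-parallel case: of every n input channels, the first
    i belong to the subnet (n=2, i=1 matches the 6-channel example)."""
    return [1 if (ch % n) < i else 0 for ch in range(c_in)]
```

For a 3×3 kernel, `cross_mask(3)` marks exactly the 5 cross weights, and `channel_mask(6)` marks channels 0, 2, 4 as the subnet, matching the figure's every-other-channel example.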
The training process of the subnetwork is as follows:
The subnet shape is designed according to the accelerator hardware architecture. The model is pruned down to the subnet, and the subnet is trained until it reaches the prediction performance of the original model.
The trained subnet weights are then kept fixed while only the non-subnet weights are changed, training the model to be wrong, so that prediction accuracy drops or the model misclassifies, rendering the full model unusable.
The specific method comprises the following steps:
Model provider side:
Software stage, step one: prepare the correct training data set D for the model and a backup erroneous training data set D2;
Software stage, step two: train the model for the first time with training data set D: divide out the subnet, set the non-subnet part to zero, and train the subnet weights until the subnet model reaches the expected performance. This subnet training is the first step shown in fig. 3: the subnet weight part of the original weights is trained with data set D while the remaining non-subnet weights are set to zero. After the subnet is obtained, train the model a second time with data set D2; during this training the trained subnet weights are kept unchanged and only the non-subnet weights change, so the resulting model is a wrong, unusable model. This non-subnet weight training stage is the second step shown in fig. 3: the non-subnet weight part is trained with data set D2 while the subnet weights obtained in the first step remain unchanged.
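The two training phases above reduce to a masking rule on the weights and their updates. The sketch below is a minimal illustration only (flat weight lists, hypothetical helper names, no actual training loop): phase 1 zeroes the non-subnet weights so only the subnet learns, and phase 2 freezes the subnet weights while applying the error-inducing updates only to the non-subnet ones.

```python
def apply_phase1_mask(weights, mask):
    """Phase 1 (data set D): zero every non-subnet weight
    (mask == 0) so only the subnet is trained."""
    return [w * m for w, m in zip(weights, mask)]


def apply_phase2_update(weights, updates, mask):
    """Phase 2 (data set D2): freeze subnet weights (mask == 1)
    and apply the error-inducing update only to non-subnet weights."""
    return [w if m else w + u for w, u, m in zip(weights, updates, mask)]
```

In a real framework the same effect would be achieved by multiplying the gradients by the mask (phase 1) or its complement (phase 2) before each optimizer step.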
Hardware stage, step one: set aside a DRAM region in the accelerator as the DRAM PUF region; start the accelerator several times to power up the DRAM PUF region and measure the DRAM start-up initial values of different address ranges. A DRAM address range is an excitation C of the DRAM PUF, and the DRAM initial value obtained for C is the response R; several C-R pairs, CRPs for short, are obtained and used as the key;
Hardware stage, step two: modify the circuit structure of the accelerator's computing module, as in the accelerator circuit-modification example of fig. 4. If the key is correct, the signal in the modified module is set to 0, and during computation only part of the weights participate; these are exactly the subnet weights, while the non-subnet weights and their partial results are not passed into subsequent computation steps, i.e. they do not participate. If the key is wrong, the signal is set to 1, all weights participate during computation, and the model's prediction results are wrong, so the model cannot be used normally.
Model user side:
Step one: purchase the model and the corresponding accelerator from the provider, and obtain the key (CRPs) from the model provider.
Step two: use the purchased neural network model and accelerator, e.g. for pattern classification. Start the accelerator and input any CRP into it. The excitation C corresponds to a DRAM region and determines the measured DRAM address range. On start-up, the region addressed by C yields a start-up value, which is the measured response R′ corresponding to C. The similarity of R′ and R is computed with the Jaccard coefficient; if the resulting coefficient is close to 1 (not smaller than the threshold σ), the key is considered correct and the model works normally; otherwise, the key is considered wrong and the model is unusable.
Different input signals are generated according to the key the user inputs. If the key is correct, the generated signal controls the modified part of the accelerator's computing-unit circuit to select only the subnet weights of the model for computation; the result is correct and the model works normally. Otherwise, the generated signal controls the modified circuit to select all weights of the model; the result is wrong and the model is unusable. The method effectively improves the security of the convolutional neural network model's weight data and requires neither a separate decryption process nor much hardware overhead, achieving efficient, low-cost, and highly secure protection of the model's weight intellectual property.
The foregoing is merely a specific embodiment of the present invention, but the design concept of the invention is not limited thereto; any insubstantial modification of the invention using this concept shall be construed as an infringement of the scope of protection of the invention.

Claims (6)

1. The software and hardware combined convolutional neural network model intellectual property protection method is characterized by comprising the following steps of:
step one, a convolutional neural network model provider side obtains a correct training data set D and an incorrect training data set D2;
Secondly, a convolutional neural network model provider performs first retraining on the convolutional neural network model by adopting a correct training data set D, and divides the convolutional neural network model into a subnet part and a non-subnet part, wherein the weights of the non-subnet parts are all set to be 0; training the subnet part by using the correct training data set D to obtain weight data of the subnet part, and obtaining a correct trained convolutional neural network model;
Step three, the convolutional neural network model provider carries out second retraining on the convolutional neural network model by adopting an error training data set D2, the weight of the subnet part is kept unchanged in the training process, the weight data of the non-subnet part is obtained by changing the weight of the non-subnet part, an error trained convolutional neural network model is obtained, and the error trained convolutional neural network model outputs an error result;
Dividing a DRAM region in the accelerator to serve as a DRAM PUF region, respectively starting the accelerator to electrify the DRAM PUF region, and measuring DRAM starting initial values of different address ranges, wherein the DRAM address range is used as an excitation C of the DRAM PUF, and the DRAM initial value obtained by corresponding to the excitation C is a response R; obtaining a plurality of C-R pairs, called CRP for short, as secret keys;
Setting an accelerator calculation module, so that when an input secret key is correct, the accelerator calculates and outputs a correct result by adopting weight data of a subnet part of the convolutional neural network model after correct training and input data of the convolutional neural network model; otherwise, calculating weight data of a non-subnet part and a subnet part in the error trained convolutional neural network model and input data of the convolutional neural network model, and outputting an error result;
Step six, a user purchases the convolutional neural network model and the corresponding accelerator from a provider side, and a key is obtained from the convolutional neural network model provider side, wherein the key comprises a plurality of CRPs;
Step seven, the user inputs any CRP from the obtained key into the accelerator and starts it; the input CRP consists of an excitation C and the corresponding response R; the excitation C addresses a DRAM region whose start-up value gives a measured response R′; the similarity between R′ and R is computed, and if it is not smaller than a preset threshold σ, the key is considered correct, the accelerator computes with the weight data of the subnet part of the correctly trained convolutional neural network model and the model's input data, and outputs a correct result; otherwise, the accelerator computes with the weight data of the subnet and non-subnet parts of the wrongly trained convolutional neural network model and the model's input data, and outputs a wrong result.
2. The software and hardware combined convolutional neural network model intellectual property protection method of claim 1, wherein the erroneous training data set D2 is obtained by incorrectly re-labeling the correct training data set D.
3. The software and hardware combined convolutional neural network model intellectual property protection method of claim 1, wherein the correct training data set D and the erroneous training data set D2 are both image data sets.
4. The software and hardware combined convolutional neural network model intellectual property protection method of claim 1, wherein the similarity between R' and R is calculated as J(R, R') = |R ∩ R'| / |R ∪ R'|, wherein J(R, R') represents the similarity between R' and R.
5. The software and hardware combined convolutional neural network model intellectual property protection method of claim 1, wherein σ = 0.95.
6. The software and hardware combined convolutional neural network model intellectual property protection method of claim 1, wherein in the fifth step, when the input key CRP is correct, the calculation unit selection signal of the accelerator is set to 0; otherwise, the calculation unit selection signal of the accelerator is set to 1.
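The selection-signal behavior of claims 1 and 6 amounts to a weight multiplexer. A software illustration only: the patent implements this in the accelerator's compute-unit circuit, and `effective_weights` is a hypothetical name.

```python
def effective_weights(weights, is_subnet, sel):
    # sel = 0 (key verified correct): only the correctly trained subnet
    # weights participate; non-subnet weights are gated to zero.
    # sel = 1 (key wrong or absent): all weights participate, including
    # the erroneously trained non-subnet part, so the output is wrong.
    return [w if sel == 1 or s else 0.0 for w, s in zip(weights, is_subnet)]

# Key correct -> selection signal 0 -> subnet-only computation
assert effective_weights([1.0, 2.0, 3.0], [True, False, True], 0) == [1.0, 0.0, 3.0]
# Key wrong -> selection signal 1 -> all weights, erroneous result
assert effective_weights([1.0, 2.0, 3.0], [True, False, True], 1) == [1.0, 2.0, 3.0]
```

Because the mux selects among weights already resident in the accelerator, no extra selection time or decryption step is needed, which is the low-overhead property the abstract emphasizes.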
CN202210018007.0A 2022-01-07 2022-01-07 Software and hardware combined convolutional neural network model intellectual property protection method Active CN114358268B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210018007.0A CN114358268B (en) 2022-01-07 2022-01-07 Software and hardware combined convolutional neural network model intellectual property protection method

Publications (2)

Publication Number Publication Date
CN114358268A CN114358268A (en) 2022-04-15
CN114358268B true CN114358268B (en) 2024-04-19

Family

ID=81106842

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210018007.0A Active CN114358268B (en) 2022-01-07 2022-01-07 Software and hardware combined convolutional neural network model intellectual property protection method

Country Status (1)

Country Link
CN (1) CN114358268B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018171663A1 (en) * 2017-03-24 2018-09-27 中国科学院计算技术研究所 Weight management method and system for neural network processing, and neural network processor
CN109002883A (en) * 2018-07-04 2018-12-14 中国科学院计算技术研究所 Convolutional neural networks model computing device and calculation method
WO2020012061A1 (en) * 2018-07-12 2020-01-16 Nokia Technologies Oy Watermark embedding techniques for neural networks and their use
CN112272094A (en) * 2020-10-23 2021-01-26 国网江苏省电力有限公司信息通信分公司 Internet of things equipment identity authentication method, system and storage medium based on PUF (physical unclonable function) and CPK (compact public key) algorithm
CN113361682A (en) * 2021-05-08 2021-09-07 南京理工大学 Reconfigurable neural network training with IP protection and using method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8402401B2 (en) * 2009-11-09 2013-03-19 Case Western University Protection of intellectual property cores through a design flow
US9628272B2 (en) * 2014-01-03 2017-04-18 William Marsh Rice University PUF authentication and key-exchange by substring matching
JP6882666B2 (en) * 2017-03-07 2021-06-02 富士通株式会社 Key generator and key generator
CN109685501B (en) * 2018-12-04 2023-04-07 暨南大学 Auditable privacy protection deep learning platform construction method based on block chain excitation mechanism

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Jialong Zhang et al. Protecting Intellectual Property of Deep Neural Networks with Watermarking. 《ASIACCS '18: Proceedings of the 2018 on Asia Conference on Computer and Communications Security》. 2018, full text. *
Chaos-based publicly verifiable FPGA IP core watermark detection scheme; Zhang Jiliang et al.; 《Scientia Sinica Informationis》; 2013-12-31; Vol. 43, No. 09; full text *
Design of a convolutional neural network accelerator based on a software-defined system on programmable chip; Miao Fengjuan; Wang Yiming; Tao Bairui; Science Technology and Engineering; 2019-12-08 (34); full text *

Also Published As

Publication number Publication date
CN114358268A (en) 2022-04-15

Similar Documents

Publication Publication Date Title
Bhasin et al. Mind the portability: A warriors guide through realistic profiled side-channel analysis
Wu et al. Remove some noise: On pre-processing of side-channel measurements with autoencoders
CN109787743B (en) Verifiable fully homomorphic encryption method based on matrix operation
CN112989368B (en) Method and device for processing private data by combining multiple parties
CN111400766A (en) Method and device for multi-party joint dimension reduction processing aiming at private data
CN112800444B (en) Color image encryption method based on two-dimensional chaotic mapping
CN112260818B (en) Side channel curve enhancement method, side channel attack method and side channel attack device
CN115276947B (en) Private data processing method, device, system and storage medium
Jebreel et al. Enhanced security and privacy via fragmented federated learning
Soykan et al. A survey and guideline on privacy enhancing technologies for collaborative machine learning
US20180034628A1 (en) Protecting polynomial hash functions from external monitoring attacks
JP2019153216A (en) Learning device, information processing system, method for learning, and program
Zhao et al. AEP: An error-bearing neural network accelerator for energy efficiency and model protection
CN114036581A (en) Privacy calculation method based on neural network model
CN114358268B (en) Software and hardware combined convolutional neural network model intellectual property protection method
CN111049644B (en) Rational and fair secret information sharing method based on confusion incentive mechanism
CN117134945A (en) Data processing method, system, device, computer equipment and storage medium
CN116132017B (en) Method and system for accelerating privacy protection machine learning reasoning
EP2363974A1 (en) Variable table masking for cryptographic processes
CN108632033B (en) Homomorphic encryption method based on random weighted unitary matrix in outsourcing calculation
Arora et al. Application of Artificial Neural Network in Cryptography
Wu et al. Efficient privacy-preserving federated learning for resource-constrained edge devices
He et al. IPlock: An effective hybrid encryption for neuromorphic systems IP core protection
Wang et al. A publicly verifiable outsourcing matrix computation scheme based on smart contracts
CN115834791B (en) Image encryption and decryption transmission method using matrix key and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant