CN115545154A - Convolutional neural network model intellectual property protection method based on PUF - Google Patents

Convolutional neural network model intellectual property protection method based on PUF

Info

Publication number
CN115545154A
CN115545154A (application CN202211143537.4A)
Authority
CN
China
Prior art keywords
neural network
convolutional neural
network model
model
field programmable
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211143537.4A
Other languages
Chinese (zh)
Inventor
李大伟
任阳坤
刘镝
李世中
关振宇
孙钰
崔剑
刘建伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202211143537.4A priority Critical patent/CN115545154A/en
Publication of CN115545154A publication Critical patent/CN115545154A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/18Legal services
    • G06Q50/184Intellectual property management

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Technology Law (AREA)
  • General Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Operations Research (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a PUF-based convolutional neural network model intellectual property protection method, which comprises the following steps: a physical unclonable function is implemented on a field programmable gate array, providing a unique fingerprint bound to that specific field programmable gate array. A convolutional neural network is trained separately on a true data set and a pseudo data set to obtain a true model and a pseudo model, and the response of the physical unclonable function is used, via an obfuscation algorithm, to mix the two parameter matrices of the same layer of the true model and the pseudo model. After the obfuscation is completed, a recovery convolutional layer that depends on the physical unclonable function response is added to the model, so that the convolutional neural network model can compute correct results only when it runs on the specific field programmable gate array. The physical unclonable function is thus used for IP protection of the convolutional neural network model, moving IP protection from after the model is stolen to before the model is stolen.

Description

Intellectual property protection method of convolutional neural network model based on PUF
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a convolutional neural network model intellectual property protection method based on PUF.
Background
The most straightforward way to protect the property rights of a Convolutional Neural Network (CNN) model running in a cloud environment is to embed a watermark in the model, which the trainer can verify to assert ownership when the model is stolen. However, because the parameters of a watermarked model differ from those of the original model, the presence of the watermark is easy to detect. Moreover, watermarking only takes effect when the model owner asserts rights after the model has been stolen; it cannot prevent the model from being stolen in the first place.
There are also many intellectual property protection methods for CNN models that run on Field Programmable Gate Arrays (FPGAs) acting as computation accelerators, such as a pay-per-device method based on Physical Unclonable Functions (PUFs), an efficient distributed key authentication scheme based on public key infrastructure and edge computing, and a method that embeds a key stored in a secure trusted environment into the model training process. These methods do not fully guarantee the security of the model parameters: a thief can easily obtain the correct model by colluding with the accelerator provider. Furthermore, the keys used by these methods carry some risk of being compromised.
Disclosure of Invention
The application provides a PUF-based convolutional neural network model intellectual property protection method, device, electronic equipment, and storage medium, in which the PUF is used for IP protection of a CNN model and IP protection is moved from after the model is stolen to before the model is stolen.
The embodiment of the first aspect of the application provides a PUF-based convolutional neural network model intellectual property protection method, which comprises the following steps: implementing an arbiter physical unclonable function on a field programmable gate array and acquiring a unique fingerprint of the field programmable gate array; training a convolutional neural network separately on a true data set and a pseudo data set to obtain a true convolutional neural network model and a pseudo convolutional neural network model; mixing, according to the response of the arbiter physical unclonable function, the parameters of the same layer of the true convolutional neural network model and the pseudo convolutional neural network model by using a preset obfuscation algorithm to obtain a final convolutional neural network model; and generating a recovery matrix by using a recovery algorithm according to the response of the arbiter physical unclonable function, and multiplying the recovery matrix by the matrix of the obfuscation layer in the final convolutional neural network model to obtain the calculation result of the final convolutional neural network model.
Optionally, in an embodiment of the present application, implementing the arbiter physical unclonable function on a field programmable gate array includes: designing an arbiter physical unclonable function circuit of a preset size on the field programmable gate array to provide a unique fingerprint for the field programmable gate array, so that the final convolutional neural network model outputs correct results only when running on that field programmable gate array.
Optionally, in an embodiment of the present application, the true convolutional neural network model and the pseudo convolutional neural network model have the same structure.
Optionally, in an embodiment of the present application, mixing, according to the response of the arbiter physical unclonable function, the parameters of the same layer of the true convolutional neural network model and the pseudo convolutional neural network model by using a preset obfuscation algorithm includes: when a response bit of the arbiter physical unclonable function is 1, the corresponding parameters of the obfuscation matrix in the preset obfuscation algorithm are taken from the true convolutional neural network model; otherwise, they are taken from the pseudo convolutional neural network model.
The embodiment of the second aspect of the present application provides a PUF-based convolutional neural network model intellectual property protection device, including: a design module for implementing an arbiter physical unclonable function on a field programmable gate array and acquiring a unique fingerprint of the field programmable gate array; a training module for training a convolutional neural network separately on a true data set and a pseudo data set to obtain a true convolutional neural network model and a pseudo convolutional neural network model; an obfuscation module for mixing, according to the response of the arbiter physical unclonable function, the parameters of the same layer of the true convolutional neural network model and the pseudo convolutional neural network model by using a preset obfuscation algorithm to obtain a final convolutional neural network model; and a recovery module for generating a recovery matrix by using a recovery algorithm according to the response of the arbiter physical unclonable function, and multiplying the recovery matrix by the matrix of the obfuscation layer in the final convolutional neural network model to obtain the calculation result of the final convolutional neural network model.
Optionally, in an embodiment of the present application, implementing the arbiter physical unclonable function on a field programmable gate array includes: designing an arbiter physical unclonable function circuit of a preset size on the field programmable gate array to provide a unique fingerprint for the field programmable gate array, so that the final convolutional neural network model outputs correct results only when running on that field programmable gate array.
Optionally, in an embodiment of the present application, the true convolutional neural network model and the pseudo convolutional neural network model have the same structure.
Optionally, in an embodiment of the present application, the obfuscation module is further configured to take the corresponding parameters of the obfuscation matrix in the preset obfuscation algorithm from the true convolutional neural network model when a response bit of the arbiter physical unclonable function is 1, and from the pseudo convolutional neural network model otherwise.
An embodiment of a third aspect of the present application provides an electronic device, including: memory, a processor, and a computer program stored on the memory and executable on the processor, the processor executing the program to perform a PUF-based convolutional neural network model intellectual property protection method as described in the above embodiments.
A fourth aspect of the present application provides a computer-readable storage medium on which a computer program is stored, the program being executed by a processor to perform the PUF-based convolutional neural network model intellectual property protection method of the above embodiments.
According to the PUF-based convolutional neural network model intellectual property protection method, device, electronic equipment, and storage medium of the present application, a physical unclonable function is implemented on a field programmable gate array, providing a unique fingerprint bound to that specific field programmable gate array. A convolutional neural network is trained separately on a true data set and a pseudo data set to obtain a true model and a pseudo model, and the response of the physical unclonable function is used, via an obfuscation algorithm, to mix the two parameter matrices of the same layer of the true model and the pseudo model. After the obfuscation is completed, a recovery convolutional layer that depends on the physical unclonable function response is added to the model, so that the convolutional neural network model can compute correct results only when it runs on the specific field programmable gate array. The physical unclonable function is thus used for IP protection of the convolutional neural network model, moving IP protection from after the model is stolen to before the model is stolen. Because a physical unclonable function is used, IP protection requires neither storing a secret key nor building a trusted execution environment, and with slight adjustments to the model structure the method can be applied to other convolutional neural network models embedded in field programmable gate arrays.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a flowchart of a PUF-based convolutional neural network model intellectual property protection method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an obfuscation algorithm provided in accordance with an embodiment of the present application;
fig. 3 is a schematic structural diagram of a CNN model before and after obfuscation according to an embodiment of the present application;
FIG. 4 is a schematic diagram comparing the accuracy of the original model, the obfuscated model running on the correct FPGA, and the obfuscated model running on a wrong FPGA according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a prediction accuracy result according to an embodiment of the present application;
FIG. 6 is an exemplary diagram of a PUF-based convolutional neural network model intellectual property protection device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative and intended to explain the present application and should not be construed as limiting the present application.
Specifically, fig. 1 is a flowchart of a PUF-based convolutional neural network model intellectual property protection method according to an embodiment of the present application.
As shown in fig. 1, the PUF-based convolutional neural network model intellectual property protection method comprises the following steps:
In step S101, an arbiter physical unclonable function is implemented on the field programmable gate array, and the unique fingerprint of the field programmable gate array is obtained.
The embodiment of the application provides a method for binding a CNN model with a specific FPGA (field programmable gate array) so that the CNN model can obtain a correct result only when running on the specific FPGA.
Optionally, in an embodiment of the present application, implementing the arbiter physical unclonable function on a field programmable gate array includes: designing an arbiter physical unclonable function circuit of a preset size on the field programmable gate array to provide a unique fingerprint for the field programmable gate array, so that the final convolutional neural network model outputs correct results only when running on that field programmable gate array.
Specifically, one implementation of the APUF is as follows: a 64 × 1 APUF is implemented on the FPGA, with a 64-bit challenge as input and a 1-bit response as output. It provides the FPGA's unique fingerprint, which is used to obfuscate the model parameters and bind the model to that particular FPGA; the correct result is computed only when the model runs on that FPGA.
Due to the unpredictability of PUFs, even when PUFs on FPGAs from the same production lot are queried with the same challenge, the responses from different FPGAs are completely different. Therefore, even if the PUF's input challenge is exposed, an adversary cannot obtain the correct response as long as the PUF circuit remains secret.
By duplicating individual PUF circuits and concatenating their outputs, responses of arbitrary length can be obtained.
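As an illustration only, the following Python sketch mimics this arrangement in software using the standard additive-delay model of an arbiter PUF and concatenates several instances to obtain a multi-bit response. The class, function, and parameter names are hypothetical; in the scheme described here the fingerprint comes from the physical FPGA circuit, not from a simulation.

```python
import numpy as np

class ArbiterPUFModel:
    """Behavioral model of a 64-stage arbiter PUF (software stand-in only)."""

    def __init__(self, n_stages: int = 64, seed: int = 0):
        rng = np.random.default_rng(seed)          # stands in for manufacturing variation
        self.weights = rng.normal(size=n_stages + 1)

    def response(self, challenge: np.ndarray) -> int:
        # Parity-feature transform of the challenge bits, then a signed delay sum.
        phi = np.cumprod(1 - 2 * challenge[::-1])[::-1]
        phi = np.append(phi, 1.0)
        return int(self.weights @ phi > 0)

def multi_bit_response(challenge: np.ndarray, n_bits: int) -> np.ndarray:
    """Concatenate the 1-bit outputs of n_bits independent APUF instances."""
    return np.array([ArbiterPUFModel(seed=i).response(challenge) for i in range(n_bits)])

challenge = np.random.default_rng(42).integers(0, 2, size=64)
print(multi_bit_response(challenge, n_bits=8))     # an 8-bit response for this challenge
```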
As a specific example, a classic CNN face classification model, about 274MB in size, containing 3 convolutional layers and 1 fully-connected layer, may be implemented on a PC.
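For concreteness, a model of that shape might look as follows in PyTorch; the channel counts, input resolution, and class count below are illustrative assumptions rather than the exact configuration used in the example.

```python
import torch
import torch.nn as nn

class FaceCNN(nn.Module):
    """Three convolutional layers followed by one fully-connected layer."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(128 * 8 * 8, num_classes)   # assumes 64x64 RGB inputs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))
```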
In step S102, a convolutional neural network is trained using the true data set and the pseudo data set, respectively, to obtain a true convolutional neural network model and a pseudo convolutional neural network model.
Optionally, in an embodiment of the present application, the true convolutional neural network model and the pseudo convolutional neural network model have the same structure.
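A minimal training sketch, assuming the FaceCNN definition above and ordinary PyTorch data loaders (the loader names and hyperparameters are illustrative): the same architecture is trained once on the true data set and once on the pseudo data set.

```python
import torch
import torch.nn as nn

def train_model(model: nn.Module, loader, epochs: int = 5, lr: float = 1e-3) -> nn.Module:
    """Plain supervised training; called once per data set."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model

# true_loader / fake_loader are assumed DataLoaders over the true and pseudo data sets:
# true_model = train_model(FaceCNN(), true_loader)
# fake_model = train_model(FaceCNN(), fake_loader)   # same structure, different data
```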
In step S103, the parameters of the same layer of the true convolutional neural network model and the pseudo convolutional neural network model are mixed, according to the response of the arbiter physical unclonable function, by using a preset obfuscation algorithm to obtain the final convolutional neural network model.
Optionally, in an embodiment of the present application, mixing, according to the response of the arbiter physical unclonable function, the parameters of the same layer of the true convolutional neural network model and the pseudo convolutional neural network model by using a preset obfuscation algorithm includes:
when a response bit of the arbiter physical unclonable function is 1, the corresponding parameters of the obfuscation matrix in the preset obfuscation algorithm are taken from the true convolutional neural network model; otherwise, they are taken from the pseudo convolutional neural network model.
Specifically, an obfuscation layer is added to the model, and the two parameter matrices of the same layer of the true model and the pseudo model are mixed according to the APUF response using the obfuscation algorithm shown in FIG. 2.
The obfuscation algorithm takes as input the parameters of the same layer in the true model and the pseudo model, namely w_correct, b_correct, w_fake, b_fake, together with the PUF response value response.
Initialize correct_index and fake_index to 1, let n be the length of response, and initialize w_con[n] and b_con[n] to 0.
For each i from 1 to n, repeat the following: if response_i equals 1, assign w_correct[correct_index] to w_con[i], assign b_correct[correct_index] to b_con[i], and assign correct_index + 1 to correct_index; if response_i does not equal 1, assign w_fake[fake_index] to w_con[i], assign b_fake[fake_index] to b_con[i], and assign fake_index + 1 to fake_index.
The algorithm outputs the obfuscated parameters w_con and b_con.
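A direct Python transcription of this obfuscation algorithm is sketched below (0-based indexing; w_correct and w_fake are assumed to be NumPy arrays holding one parameter row per position, with as many rows as there are 1-bits and 0-bits in the response, respectively).

```python
import numpy as np

def obfuscate_layer(w_correct, b_correct, w_fake, b_fake, response):
    """Mix the parameter rows of the true and pseudo layers according to the
    PUF response: rows where the response bit is 1 come from the true model,
    the remaining rows from the pseudo model (cf. Fig. 2)."""
    n = len(response)
    w_con = np.zeros((n,) + w_correct.shape[1:])
    b_con = np.zeros(n)
    ci = fi = 0
    for i, bit in enumerate(response):
        if bit == 1:
            w_con[i], b_con[i] = w_correct[ci], b_correct[ci]
            ci += 1
        else:
            w_con[i], b_con[i] = w_fake[fi], b_fake[fi]
            fi += 1
    return w_con, b_con

# With the response 101100 of Fig. 2, rows 1, 3 and 4 (1-based) of the obfuscated
# layer come from the true model, rows 2, 5 and 6 from the pseudo model.
resp = [1, 0, 1, 1, 0, 0]
w_true, b_true = np.arange(12.0).reshape(3, 4), np.ones(3)
w_fake, b_fake = -np.arange(12.0).reshape(3, 4), -np.ones(3)
w_con, b_con = obfuscate_layer(w_true, b_true, w_fake, b_fake, resp)
```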
When a PUF response bit is 1, the corresponding row of the obfuscation matrix is taken from the true model; otherwise it is taken from the pseudo model. As shown in fig. 2, when the PUF response is 101100, the first, third, and fourth rows of the obfuscation matrix come from the true model, and the second, fifth, and sixth rows come from the pseudo model.
It should be noted that the PUF response 101100 in this embodiment is a randomly generated value; it is not limiting and can be adjusted according to the actual situation in practical applications.
In step S104, a recovery matrix is generated by using a recovery algorithm according to the response of the arbiter physical unclonable function, and the recovery matrix is multiplied by the matrix of the obfuscation layer in the final convolutional neural network model to obtain the calculation result of the final convolutional neural network model.
A recovery layer is added to the model, and the recovery matrix is generated from the APUF response using the recovery algorithm.
Wherein the recovery algorithm comprises:
The algorithm inputs are the output y_con of the obfuscation layer in the model and the PUF response value response.
Initialize rec_index to 1, let n be the length of response, and initialize the n/2 × n matrix M_rec to 0.
For each i from 1 to n, repeat the following: if response_i equals 1, set M_rec[rec_index][i] to 1 and assign rec_index + 1 to rec_index.
Let y_rec equal M_rec × y_con.
The algorithm outputs the recovery layer output y_rec.
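A Python sketch of the recovery step under the same assumptions (0-based indexing; the correct response is taken to contain exactly n/2 ones, so the recovery matrix has n/2 rows as stated above):

```python
import numpy as np

def build_recovery_matrix(response):
    """Selector matrix M_rec with one row per 1-bit of the response; row k has a
    single 1 in the column of the k-th 1-bit, so only true-model rows are kept."""
    n = len(response)
    one_positions = [i for i, bit in enumerate(response) if bit == 1]
    m_rec = np.zeros((len(one_positions), n))
    for k, col in enumerate(one_positions):
        m_rec[k, col] = 1.0
    return m_rec

def recover(y_con, response):
    """Multiply the obfuscation-layer output by the recovery matrix."""
    return build_recovery_matrix(response) @ y_con

# Continuing the Fig. 2 example: rows 0, 2 and 3 of y_con are kept.
resp = [1, 0, 1, 1, 0, 0]
y_con = np.arange(6.0).reshape(6, 1)
print(recover(y_con, resp))     # -> the rows produced by the true model's parameters
```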
A recovery layer is added after the obfuscation layer; it is treated as a convolutional layer of the model with a kernel size of 1 × 1.
As shown in fig. 3, the output of the obfuscation layer is multiplied by the recovery matrix to obtain the correct result.
The prediction accuracy of the original model, of the obfuscated model running on the correct FPGA, and of the obfuscated model running on a wrong FPGA is evaluated. For each combination of obfuscated and non-obfuscated layers, 10 random responses are selected as inputs to the obfuscated model, the prediction accuracy is obtained, and its average is computed. At least one layer must be obfuscated, so there are only 15 combinations.
As shown in fig. 4, whichever combination of layers is obfuscated, the accuracy of the model running with the correct response is always similar to that of the original model; with an incorrect response as input, the accuracy is always below 4.0% and about 2.0% on average, at which point the model cannot work properly.
The X-axis of fig. 4 indicates which layers of the model are obfuscated; e.g., "0001" indicates that only the parameters of the fully connected layer are obfuscated, and "1010" indicates that the first and third convolutional layers are obfuscated.
It should be noted that the method applies to any CNN model; the CNN model with 3 convolutional layers and 1 fully-connected layer mentioned here is not limiting and can be adjusted according to the actual situation in practical applications.
Changes in the environment can cause the APUF response to change, and the accuracy of the model changes as the Hamming Distance (HD) between a random response and the correct response decreases. HD measures the similarity between two bit strings and is calculated as follows:
HD(R, R') = \sum_{i=1}^{n} (R_i \oplus R'_i),
where R and R' are two n-bit responses and \oplus denotes bitwise XOR.
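For instance, HD between two equal-length responses can be computed directly (illustrative snippet):

```python
def hamming_distance(r1, r2):
    """Number of bit positions at which two equal-length responses differ."""
    return sum(b1 != b2 for b1, b2 in zip(r1, r2))

print(hamming_distance([1, 0, 1, 1, 0, 0], [1, 1, 1, 0, 0, 0]))   # -> 2
```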
The behavior of the model's accuracy as HD decreases was evaluated. The fully connected layer of the test CNN model is obfuscated; the output of the fully connected layer is the probability that the input picture belongs to each class. Let n = 138 be the response length; the output of the obfuscated model is then a matrix with n rows.
It should be noted that n in this embodiment is a parameter obtained in the actual run; it is not limiting and can be adjusted according to the actual situation in practical applications.
Random responses were constructed, and the HD between the random response and the correct response was varied from 2 to n. Random responses with different HD were used as model inputs, and the models were evaluated on the same data set.
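One way such responses can be constructed, assuming the evaluator knows the correct response, is to flip exactly HD randomly chosen bits (a sketch; the helper name is hypothetical):

```python
import numpy as np

def response_at_distance(correct_response, hd, seed=None):
    """Return a copy of the correct response with exactly hd bits flipped."""
    rng = np.random.default_rng(seed)
    resp = np.array(correct_response, dtype=int)
    flip = rng.choice(len(resp), size=hd, replace=False)
    resp[flip] ^= 1
    return resp

correct = np.random.default_rng(0).integers(0, 2, size=138)
noisy = response_at_distance(correct, hd=14)   # roughly 90% of the bits still agree
```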
As shown in FIG. 5, the x-axis is the fraction of bits in which the random response agrees with the correct response, and the y-axis is the model prediction accuracy obtained when that random response is substituted into the model. When the random response has 90% of its bits identical to the correct response, the model prediction accuracy remains below 5%. When the random response has 97% of its bits identical to the correct response, the model prediction accuracy is 31.9%.
The probability that the adversary obtains a random response agreeing with the correct response in at least 97% of its bits is
P = \frac{1}{2^{n}} \sum_{k=\lceil 0.97n \rceil}^{n} \binom{n}{k},
which is negligible when n is greater than 64.
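Under the same reading of the formula above (a uniformly random n-bit string must agree with the correct response in at least 97% of positions), the probability can be checked numerically:

```python
from math import comb, ceil

def match_probability(n: int, fraction: float = 0.97) -> float:
    """Probability that a uniformly random n-bit string agrees with a fixed
    n-bit string in at least `fraction` of its positions."""
    k_min = ceil(fraction * n)
    return sum(comb(n, k) for k in range(k_min, n + 1)) / 2 ** n

print(match_probability(64))    # already on the order of 1e-18
print(match_probability(138))   # negligible for the n = 138 response used above
```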
When the random response differs from the correct response in only two bits, the model prediction accuracy is 93.1%; the obfuscated model is therefore not noticeably affected by such small response errors.
According to the PUF-based convolutional neural network model intellectual property protection method of the embodiment of the present application, a physical unclonable function is implemented on a field programmable gate array, providing a unique fingerprint bound to that specific field programmable gate array. A convolutional neural network is trained separately on a true data set and a pseudo data set to obtain a true model and a pseudo model, and the response of the physical unclonable function is used, via an obfuscation algorithm, to mix the two parameter matrices of the same layer of the true model and the pseudo model. After the obfuscation is completed, a recovery convolutional layer that depends on the physical unclonable function response is added to the model, so that the convolutional neural network model can compute correct results only when it runs on the specific field programmable gate array. The physical unclonable function is thus used for IP protection of the convolutional neural network model, moving IP protection from after the model is stolen to before the model is stolen. Because a physical unclonable function is used, IP protection requires neither storing a secret key nor building a trusted execution environment, and with slight adjustments to the model structure the method can be applied to other convolutional neural network models embedded in field programmable gate arrays.
Next, a PUF-based convolutional neural network model intellectual property protection apparatus proposed according to an embodiment of the present application is described with reference to the drawings.
Fig. 6 is an exemplary diagram of a PUF-based convolutional neural network model intellectual property protection device according to an embodiment of the present application.
As shown in fig. 6, the PUF-based convolutional neural network model intellectual property protection apparatus 10 includes: a design module 100, a training module 200, an obfuscation module 300, and a recovery module 400.
The design module 100 is configured to implement an arbiter physical unclonable function on a field programmable gate array and obtain the unique fingerprint of the field programmable gate array; the training module 200 is configured to train a convolutional neural network separately on a true data set and a pseudo data set to obtain a true convolutional neural network model and a pseudo convolutional neural network model; the obfuscation module 300 is configured to mix, according to the response of the arbiter physical unclonable function, the parameters of the same layer of the true convolutional neural network model and the pseudo convolutional neural network model by using a preset obfuscation algorithm to obtain a final convolutional neural network model; and the recovery module 400 is configured to generate a recovery matrix by using a recovery algorithm according to the response of the arbiter physical unclonable function, and to multiply the recovery matrix by the matrix of the obfuscation layer in the final convolutional neural network model to obtain the calculation result of the final convolutional neural network model.
Optionally, in an embodiment of the present application, implementing the arbiter physical unclonable function on a field programmable gate array includes: designing an arbiter physical unclonable function circuit of a preset size on the field programmable gate array to provide a unique fingerprint for the field programmable gate array, so that the final convolutional neural network model outputs correct results only when running on that field programmable gate array.
Optionally, in an embodiment of the present application, the true convolutional neural network model and the pseudo convolutional neural network model are identical in structure.
Optionally, in an embodiment of the present application, the obfuscation module 300 is further configured to take the corresponding parameters of the obfuscation matrix in the preset obfuscation algorithm from the true convolutional neural network model when a response bit of the arbiter physical unclonable function is 1, and from the pseudo convolutional neural network model otherwise.
It should be noted that the foregoing explanation of the embodiment of the PUF-based convolutional neural network model intellectual property protection method is also applicable to the PUF-based convolutional neural network model intellectual property protection apparatus of this embodiment, and is not repeated herein.
According to the PUF-based convolutional neural network model intellectual property protection device of the embodiment of the present application, a physical unclonable function is implemented on a field programmable gate array, providing a unique fingerprint bound to that specific field programmable gate array. A convolutional neural network is trained separately on a true data set and a pseudo data set to obtain a true model and a pseudo model, and the response of the physical unclonable function is used, via an obfuscation algorithm, to mix the two parameter matrices of the same layer of the true model and the pseudo model. After the obfuscation is completed, a recovery convolutional layer that depends on the physical unclonable function response is added to the model, so that the convolutional neural network model can compute correct results only when it runs on the specific field programmable gate array. The physical unclonable function is thus used for IP protection of the convolutional neural network model, moving IP protection from after the model is stolen to before the model is stolen. Because a physical unclonable function is used, IP protection requires neither storing a secret key nor building a trusted execution environment, and with slight adjustments to the model structure the approach can be applied to other convolutional neural network models embedded in field programmable gate arrays.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device may include:
memory 701, processor 702, and a computer program stored on memory 701 and executable on processor 702.
The processor 702, when executing the program, implements the PUF-based convolutional neural network model intellectual property protection method provided in the above-described embodiment.
Further, the electronic device further includes:
a communication interface 703 for communication between the memory 701 and the processor 702.
A memory 701 for storing computer programs operable on the processor 702.
Memory 701 may include high-speed RAM memory, and may also include non-volatile memory, such as at least one disk memory.
If the memory 701, the processor 702 and the communication interface 703 are implemented independently, the communication interface 703, the memory 701 and the processor 702 may be connected to each other through a bus and perform communication with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 7, but this is not intended to represent only one bus or type of bus.
Optionally, in a specific implementation, if the memory 701, the processor 702, and the communication interface 703 are integrated on a chip, the memory 701, the processor 702, and the communication interface 703 may complete mutual communication through an internal interface.
The processor 702 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits configured to implement embodiments of the present Application.
The present embodiment also provides a computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the PUF-based convolutional neural network model intellectual property protection method as described above.
In the description herein, reference to the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples and the features of the different embodiments or examples described in this specification without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiments of the present application, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or a combination of the following techniques, which are well known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps of the above method embodiments may be implemented by hardware under the control of program instructions, where the program may be stored in a computer-readable storage medium and, when executed, performs one of or a combination of the steps of the method embodiments.

Claims (10)

1. A convolutional neural network model intellectual property protection method based on PUF is characterized by comprising the following steps:
implementing an arbiter physical unclonable function on a field programmable gate array and acquiring a unique fingerprint of the field programmable gate array;
training a convolutional neural network separately on a true data set and a pseudo data set to obtain a true convolutional neural network model and a pseudo convolutional neural network model;
mixing, according to a response of the arbiter physical unclonable function, parameters of the same layer of the true convolutional neural network model and the pseudo convolutional neural network model by using a preset obfuscation algorithm to obtain a final convolutional neural network model;
and generating a recovery matrix by using a recovery algorithm according to the response of the arbiter physical unclonable function, and multiplying the recovery matrix by a matrix of an obfuscation layer in the final convolutional neural network model to obtain a calculation result of the final convolutional neural network model.
2. The method of claim 1, wherein implementing the arbiter physical unclonable function on a field programmable gate array comprises:
designing an arbiter physical unclonable function circuit of a preset size on the field programmable gate array to provide a unique fingerprint for the field programmable gate array, so that the final convolutional neural network model outputs a correct result only when running on the field programmable gate array.
3. The method of claim 1, wherein the true convolutional neural network model and the pseudo convolutional neural network model have the same structure.
4. The method according to claim 1, wherein the mixing, according to the response of the arbiter physical unclonable function, of parameters of the same layer of the true convolutional neural network model and the pseudo convolutional neural network model by using a preset obfuscation algorithm comprises:
when a response bit of the arbiter physical unclonable function is 1, taking the corresponding parameters of the obfuscation matrix in the preset obfuscation algorithm from the true convolutional neural network model, and otherwise taking them from the pseudo convolutional neural network model.
5. A PUF-based convolutional neural network model intellectual property protection device, comprising:
a design module for implementing an arbiter physical unclonable function on a field programmable gate array and acquiring a unique fingerprint of the field programmable gate array;
a training module for training a convolutional neural network separately on a true data set and a pseudo data set to obtain a true convolutional neural network model and a pseudo convolutional neural network model;
an obfuscation module for mixing, according to a response of the arbiter physical unclonable function, parameters of the same layer of the true convolutional neural network model and the pseudo convolutional neural network model by using a preset obfuscation algorithm to obtain a final convolutional neural network model;
and a recovery module for generating a recovery matrix by using a recovery algorithm according to the response of the arbiter physical unclonable function, and multiplying the recovery matrix by a matrix of the obfuscation layer in the final convolutional neural network model to obtain a calculation result of the final convolutional neural network model.
6. The apparatus of claim 5, wherein implementing the arbiter physical unclonable function on a field programmable gate array comprises:
designing an arbiter physical unclonable function circuit of a preset size on the field programmable gate array to provide a unique fingerprint for the field programmable gate array, so that the final convolutional neural network model outputs a correct result only when running on the field programmable gate array.
7. The apparatus of claim 5, wherein the true convolutional neural network model and the pseudo convolutional neural network model have the same structure.
8. The apparatus of claim 5, wherein the obfuscation module is further configured to,
when a response bit of the arbiter physical unclonable function is 1, take the corresponding parameters of the obfuscation matrix in the preset obfuscation algorithm from the true convolutional neural network model, and otherwise take them from the pseudo convolutional neural network model.
9. An electronic device, comprising: memory, a processor and a computer program stored on the memory and executable on the processor, the processor executing the program to implement the PUF-based convolutional neural network model intellectual property protection method of any one of claims 1 to 4.
10. A computer-readable storage medium, on which a computer program is stored, the program being executed by a processor for implementing the PUF-based convolutional neural network model intellectual property protection method of any one of claims 1 to 4.
CN202211143537.4A 2022-09-20 2022-09-20 Convolutional neural network model intellectual property protection method based on PUF Pending CN115545154A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211143537.4A CN115545154A (en) 2022-09-20 2022-09-20 Convolutional neural network model intellectual property protection method based on PUF

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211143537.4A CN115545154A (en) 2022-09-20 2022-09-20 Convolutional neural network model intellectual property protection method based on PUF

Publications (1)

Publication Number Publication Date
CN115545154A true CN115545154A (en) 2022-12-30

Family

ID=84728561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211143537.4A Pending CN115545154A (en) 2022-09-20 2022-09-20 Convolutional neural network model intellectual property protection method based on PUF

Country Status (1)

Country Link
CN (1) CN115545154A (en)

Similar Documents

Publication Publication Date Title
Yu et al. Incremental SAT-based reverse engineering of camouflaged logic circuits
CN108898028B (en) Neural network model encryption protection system and method related to iteration and random encryption
US8386990B1 (en) Unique identifier derived from an intrinsic characteristic of an integrated circuit
US7017043B1 (en) Methods and systems for the identification of circuits and circuit designs
CN108920981B (en) Neural network model encryption protection system and method related to data iterative encryption
CN109002883B (en) Convolutional neural network model calculation device and calculation method
US11824967B2 (en) Electronic device using homomorphic encryption and encrypted data processing method thereof
US20200044872A1 (en) Apparatus and method for generating physically unclonable functions
CN113438134B (en) Request message processing method, device, server and medium
Chen et al. Novel strong-PUF-based authentication protocols leveraging Shamir’s secret sharing
CN113055431A (en) Block chain-based industrial big data file efficient chaining method and device
DE102020121075A1 (en) Establishment and procedure for the authentication of software
JP5831203B2 (en) Individual information generation apparatus, encryption apparatus, authentication system, and individual information generation method
CN112948895A (en) Data watermark embedding method, watermark tracing method and device
CN114386058A (en) Model file encryption and decryption method and device
CN112632564B (en) Threat assessment method and device
Anshul et al. PSO based exploration of multi-phase encryption based secured image processing filter hardware IP core datapath during high level synthesis
CN115545154A (en) Convolutional neural network model intellectual property protection method based on PUF
Dobrovolsky et al. Development of a hash algorithm based on cellular automata and chaos theory
Li et al. PUF-based intellectual property protection for CNN model
Müelich Channel coding for hardware-intrinsic security
CN114244517A (en) Data encryption and signature method and device, computer equipment and storage medium
CN113408012A (en) Fault detection
CN114846473A (en) Data processing circuit, data processing method and electronic equipment
Yuan et al. Secure integrated circuit design via hybrid cloud

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination