Detailed Description
To make the objects, features and advantages of the present invention clearer and easier to understand, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, and not all, of the embodiments of the present invention. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The prior art lacks, in the field of big-data cryptanalysis, an algorithm model capable of cracking random phase encryption.
To solve this technical problem, the invention provides a method and a device for constructing a deep neural network model. Because the original data is subjected to random phase encryption, the encrypted training data is input into the deep neural network model, and the resulting output is compared with the original data, the trained model is a decryption model capable of cracking random phase encryption. This solves the technical problem that an algorithm model capable of cracking random phase encryption is lacking.
Fig. 1 is a schematic flow chart illustrating a method for constructing a deep neural network model according to a first embodiment of the present invention. The method specifically comprises the following steps:
step A, performing random phase encryption on multiple sets of original data to obtain training data;
step B, training the (i-1)th deep neural network model with the training data to obtain the ith deep neural network model, inputting the training data into the ith deep neural network model to obtain an ith output result, and comparing the ith output result with the original data corresponding to the training data to obtain an ith comparison result, wherein the initial value of i is 1 and the 0th deep neural network model is an initial model;
step C, when the ith comparison result satisfies a preset convergence condition, determining the ith deep neural network model as the constructed deep neural network model;
step D, when the ith comparison result does not satisfy the preset convergence condition, setting i = i + 1 and returning to step B.
It should be noted that, in this method, preferably 60,000 sets of original data are subjected to random phase encryption to obtain 60,000 sets of training data. The deep neural network model is trained with these 60,000 sets of training data, and after approximately 500 training iterations the constructed deep neural network model is obtained. That is, the 60,000 sets of training data are input into the ith deep neural network model to obtain an ith output result; the ith output result is compared with the original data corresponding to the training data to obtain an ith comparison result; and when the ith comparison result satisfies the preset convergence condition, the ith deep neural network model is determined as the constructed deep neural network model, the value of i at that point being around 500.
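The iterative loop of steps A-D can be sketched numerically. The following is a minimal illustration, assuming mean squared error as the comparison and a plain gradient step as the training update; the one-parameter linear "model" and the learning rate are illustrative assumptions, not the patent's network.

```python
import numpy as np

def train_until_converged(w, training_data, original_data,
                          lr=0.02, tolerance=1e-6, max_iters=500):
    """Steps B-D for a toy linear decryption model: prediction = w * ciphertext."""
    for i in range(1, max_iters + 1):
        pred = w * training_data                      # i-th output result (step B)
        err = np.mean((pred - original_data) ** 2)    # i-th comparison result
        if err <= tolerance:                          # convergence reached (step C)
            return w, i
        grad = 2 * np.mean((pred - original_data) * training_data)
        w -= lr * grad                                # i = i + 1, retrain (step D)
    return w, max_iters

plaintext = np.array([1.0, 2.0, 3.0])
ciphertext = 2.0 * plaintext      # stand-in for random phase encryption (step A)
w, iters = train_until_converged(0.0, ciphertext, plaintext)
```

Here the correct "decryption" is w = 0.5, which the loop reaches in a handful of iterations; the embodiment's full network needs roughly 500 iterations over 60,000 pairs.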
In the embodiment of the invention, because the original data is subjected to random phase encryption, the encrypted training data is input into the deep neural network model, and the resulting output is compared with the original data, the trained model is a decryption model capable of cracking random phase encryption, which solves the technical problem that an algorithm model capable of cracking random phase encryption is lacking.
Please refer to fig. 2, which is a flowchart of the detailed sub-steps of step B according to the first embodiment of the present invention. Specifically, step B comprises the following steps:
step E, inputting the training data into the (i-1)th deep neural network model, so that the training data undergoes array reshaping in a first reshaping layer and first reshaped data is output, wherein the (i-1)th deep neural network model comprises the first reshaping layer, three hidden layers, an output layer and a second reshaping layer;
step F, inputting the first reshaped data into the three hidden layers, each composed of a plurality of neurons, and then into the output layer, which outputs processed data; the processed data is input into the second reshaping layer for array reshaping, and second reshaped data is output, wherein the activation function of the neurons is the rectified linear unit (ReLU) function, the number of neurons in each hidden layer corresponds to the format of the first reshaped data, the second reshaped data is the (i-1)th output result of inputting the training data into the (i-1)th deep neural network model, and the format of the second reshaped data is the same as that of the training data;
step G, comparing the second reshaped data with the original data corresponding to the training data based on a mean square error function and stochastic gradient descent to obtain a comparison result, and optimizing and updating the (i-1)th deep neural network model with the comparison result to obtain the ith deep neural network model.
It should be noted that the training data is input into the (i-1)th deep neural network model and, after passing through the first reshaping layer, the three hidden layers, the output layer and the second reshaping layer, the resulting second reshaped data is the (i-1)th output result of the (i-1)th deep neural network model.
Specifically, please refer to fig. 3, which is a schematic composition diagram of a deep neural network model according to the first embodiment of the present invention. Preferably, 60,000 sets of training data are input into the (i-1)th deep neural network model, which comprises a first reshaping layer, three hidden layers (hidden layer 1, hidden layer 2 and hidden layer 3), an output layer and a second reshaping layer. The 60,000 sets of training data are encrypted data of 28 × 28 pixels; the first reshaping layer reshapes them into encrypted data of 1 × 784 pixels, which is the first reshaped data. The first reshaped data is input into the three hidden layers and then the output layer, which outputs processed data; each hidden layer and the output layer contain 784 neurons each, the layers form a fully connected neural network, the activation function of each neuron is the ReLU function, and the processed data is decrypted data of 1 × 784 pixels. The second reshaping layer reshapes the processed data into decrypted data of 28 × 28 pixels, which is the second reshaped data. The second reshaped data is compared with the original data corresponding to the training data based on a mean square error function and stochastic gradient descent to obtain a comparison result, and the (i-1)th deep neural network model is optimized and updated with the comparison result to obtain the ith deep neural network model, wherein stochastic gradient descent serves to speed up training of the deep neural network model.
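The forward pass just described can be sketched in a few lines of numpy: a 28 × 28 ciphertext image is reshaped to 1 × 784, passed through three fully connected ReLU hidden layers and a linear output layer of 784 neurons each, and reshaped back to 28 × 28. The weight initialisation below is an illustrative assumption, not specified by the embodiment.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

# Four 784 x 784 weight matrices: hidden layers 1-3 plus the output layer.
weights = [rng.normal(0.0, 784 ** -0.5, (784, 784)) for _ in range(4)]
biases = [np.zeros(784) for _ in range(4)]

def forward(cipher_image):
    x = cipher_image.reshape(1, 784)          # first reshaping layer
    for W, b in zip(weights[:3], biases[:3]):
        x = relu(x @ W + b)                   # hidden layers 1-3 (ReLU)
    x = x @ weights[3] + biases[3]            # output layer (linear)
    return x.reshape(28, 28)                  # second reshaping layer

decrypted = forward(rng.random((28, 28)))
```

The output format matches the input format, as step F requires, so the decrypted image can be compared pixel-by-pixel with the plaintext.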
It is emphasized that, when the (i-1)th deep neural network model is optimized and updated with the comparison result, it is mainly the three hidden layers (hidden layer 1, hidden layer 2 and hidden layer 3) and the output layer that are optimized and updated, that is, the weight parameters in the neural network (the parameters of the neurons), so that the ith output result of the ith deep neural network model is closer to the original data corresponding to the training data than the (i-1)th output result of the (i-1)th deep neural network model. In other words, the decryption of the training data takes place mainly in the three hidden layers and the output layer.
In the embodiment of the invention, the training data passes through the first reshaping layer, the three hidden layers, the output layer and the second reshaping layer to yield the second reshaped data (i.e., the (i-1)th output result); the second reshaped data is compared with the original data corresponding to the training data to obtain a comparison result, and the (i-1)th deep neural network model is optimized and updated with the comparison result to obtain the ith deep neural network model, which is closer to the required decryption-model standard. Adopting stochastic gradient descent increases the training speed of the deep neural network model.
Please refer to fig. 4, which is a flowchart illustrating an additional step after step C in the first embodiment of the present invention. Specifically, the method comprises the following steps:
step H, performing random phase encryption on multiple sets of original data to obtain test data;
step I, inputting the test data into the constructed deep neural network model to obtain a test output result, and calculating the correlation between the test output result and original data corresponding to the test data;
step J, when the correlation degree is larger than or equal to a preset correlation coefficient, determining the deep neural network model as a correct decryption model;
step K, when the correlation degree is smaller than the preset correlation coefficient, returning to step A.
Please refer to fig. 3. After the deep neural network model has been trained for approximately 500 iterations, the training data is input into the ith deep neural network model to obtain an ith output result, the ith output result is compared with the original data corresponding to the training data to obtain an ith comparison result, and when the ith comparison result satisfies the preset convergence condition, the ith deep neural network model is determined as the constructed deep neural network model. Then another 10,000 sets of original data are subjected to random phase encryption to obtain 10,000 sets of test data, the 10,000 sets of test data are input into the constructed deep neural network model to obtain a test output result, and the correlation degree between the test output result and the original data corresponding to the test data is calculated. When the correlation degree is greater than or equal to a preset correlation coefficient, the deep neural network model is determined to be a correct decryption model; otherwise, when the correlation degree is smaller than the preset correlation coefficient, the construction of the deep neural network model contains errors and must be restarted, i.e., the method returns to step A. Preferably, the preset correlation coefficient is 0.8.
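The acceptance test of steps H-K can be sketched as follows, assuming the "correlation degree" is the Pearson correlation coefficient between the flattened test output and the flattened plaintext, with the 0.8 threshold stated above.

```python
import numpy as np

def validate(model, test_data, test_plaintext, threshold=0.8):
    output = model(test_data)                      # test output result (step I)
    corr = np.corrcoef(output.ravel(), test_plaintext.ravel())[0, 1]
    return corr >= threshold, corr                 # accept (step J) or reject (step K)

# Toy check: an identity "decryption model" applied to already-decrypted data
# is perfectly correlated with the plaintext, so it is accepted.
plain = np.arange(100.0).reshape(10, 10)
accepted, corr = validate(lambda x: x, plain, plain)
```

In the embodiment, `model` would be the trained network and `test_data` the 10,000 held-out encrypted sets; rejection sends the construction back to step A.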
In the embodiment of the invention, the correctness of the constructed deep neural network model is evaluated with the test data, ensuring the correctness of the constructed decryption model.
Please refer to fig. 5, which is a flowchart illustrating a detailed procedure of step I in the first embodiment of the present invention. Specifically, the method comprises the following steps:
step L, inputting the test data into the constructed deep neural network model, so that the test data undergoes array reshaping in the first reshaping layer and first reshaped data is output, wherein the deep neural network model comprises the first reshaping layer, three hidden layers, an output layer and a second reshaping layer;
step M, inputting the first reshaped data into the three hidden layers composed of a plurality of neurons and then into the output layer, which outputs processed data; the processed data is input into the second reshaping layer for array reshaping, and second reshaped data is output, the second reshaped data being the test output result of inputting the test data into the constructed deep neural network model;
step N, calculating the correlation degree between the second reshaped data and the original data corresponding to the test data by using a correlation coefficient function.
It should be noted that the test data input into the constructed deep neural network model yields the second reshaped data through the first reshaping layer, the three hidden layers (hidden layer 1, hidden layer 2 and hidden layer 3), the output layer and the second reshaping layer. The second reshaped data is the test output result of inputting the test data into the constructed deep neural network model.
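One plausible reading of the "correlation coefficient function" in step N is the Pearson correlation between the flattened second reshaped data and the flattened plaintext; written out explicitly, it is the normalized inner product of the two mean-centred arrays, and `np.corrcoef` computes the same quantity.

```python
import numpy as np

def correlation(decrypted, plaintext):
    """Pearson correlation coefficient between two equally shaped arrays."""
    a = decrypted.ravel() - decrypted.mean()
    b = plaintext.ravel() - plaintext.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

x = np.array([[1.0, 2.0], [3.0, 4.0]])
rho = correlation(x, 2.0 * x + 1.0)   # any positive linear relation gives 1
```

A value near 1 indicates the decrypted image reproduces the plaintext up to an affine intensity change, which is why the embodiment accepts models at a threshold of 0.8.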
In the embodiment of the invention, the correlation coefficient function is used to calculate the correlation degree between the second reshaped data and the original data corresponding to the test data, which makes it convenient to judge whether the constructed deep neural network model is correct.
In addition, in the first embodiment of the present invention, step A performs random phase encryption on multiple sets of original data to obtain training data, and step H performs random phase encryption on multiple sets of original data to obtain test data. Step A and step H can be combined into one step: random phase encryption is performed on multiple sets of original data, and the resulting encrypted data is divided into two parts, the training data and the test data. Either way, the training data and the test data are encrypted identically, both by random phase encryption. The calculation formula of random phase encryption is as follows:
E=LCT(LCT(LCT(P×M1)×M2)×…×Mn)
where E represents the training data or the test data, LCT represents the linear canonical transform, P represents the original data, M1, M2, …, Mn represent random phase masks, and n is a positive integer.
The following takes double random phase optical encryption, triple random phase optical encryption and multiple random phase optical encryption, respectively, as examples:
please refer to fig. 6, which is a diagram illustrating a dual random phase optical encryption according to a first embodiment of the present invention. The encryption formula is expressed as:
E=ift(ft(P×M1)×M2)
where P denotes the original data, ft denotes the Fourier transform, ift denotes the inverse Fourier transform, E denotes the encrypted data (comprising the training data and the test data), and M1 and M2 denote random phase masks. The encryption method is implemented with a 4f optical system (i.e., two lenses of focal length f spaced 2f apart, with an object distance of f and an image distance of f), where P is a real-valued image, namely the original data, and E is the encrypted image, namely the encrypted data. The phase angle information of M1 and M2 is given by two-dimensional normally distributed random arrays whose values are randomly distributed in [0,1]; the cross-correlation and mean values of the two arrays are both 0, i.e., they are two mutually independent random white noises. M1 and M2 can therefore generate random phases distributed over [0, 2π]. In the encryption process, the M1 random phase mask is placed tightly against the real-valued image on the front focal plane of the first lens, the M2 random phase mask is placed on the Fourier transform plane, the second lens performs the inverse Fourier transform, and the encrypted image E is finally obtained, the encrypted data being wide-sense stationary white noise.
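The double random phase scheme can be simulated numerically, using the discrete Fourier transform as a stand-in for the 4f optical system: E = ift(ft(P × M1) × M2), each mask being exp(2jπr) with r uniform on [0, 1) so that its phase is uniform on [0, 2π). The decryption shown here assumes both masks are known, in which case the scheme is linear and exactly invertible.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_phase_mask(shape, rng):
    return np.exp(2j * np.pi * rng.random(shape))   # unit-modulus random phase

def drpe_encrypt(P, M1, M2):
    return np.fft.ifft2(np.fft.fft2(P * M1) * M2)

def drpe_decrypt(E, M1, M2):
    # Invert each step in reverse order: undo M2 in the Fourier plane,
    # transform back, then undo M1 in the image plane.
    return np.fft.ifft2(np.fft.fft2(E) / M2) / M1

P = rng.random((28, 28))                 # real-valued plaintext image
M1 = random_phase_mask(P.shape, rng)
M2 = random_phase_mask(P.shape, rng)
E = drpe_encrypt(P, M1, M2)
recovered = drpe_decrypt(E, M1, M2).real
```

The deep neural network model of the invention aims to approximate this inverse mapping without explicit knowledge of M1 and M2, from encrypted/plaintext training pairs alone.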
Please refer to fig. 7, which is a schematic diagram of a three-random-phase optical encryption method according to a first embodiment of the present invention. The encryption formula is expressed as:
E=ift(ft(P×M1)×M2)×M3
where P denotes the original data, ft denotes the Fourier transform, ift denotes the inverse Fourier transform, E denotes the encrypted data (comprising the training data and the test data), and M1, M2 and M3 denote random phase masks. The encryption method is implemented with a 4f optical system (i.e., two lenses of focal length f spaced 2f apart, with an object distance of f and an image distance of f), where P is a real-valued image, namely the original data, and E is the encrypted image, namely the encrypted data. The phase angle information of M1, M2 and M3 is given by two-dimensional normally distributed random arrays whose values are randomly distributed in [0,1]. M1, M2 and M3 can therefore generate random phases distributed over [0, 2π]. In the encryption process, the M1 random phase mask is placed tightly against the real-valued image on the front focal plane of the first lens, the M2 random phase mask is placed on the Fourier transform plane, the second lens performs the inverse Fourier transform, the M3 random phase mask is placed on the back focal plane of the second lens, and the encrypted image E is finally obtained, the encrypted data being approximately wide-sense stationary white noise.
Fig. 8 is a schematic diagram of a multi-random phase optical encryption method according to a first embodiment of the present invention. The encryption formula is expressed as:
E=ift(ft(ift(ft(P×M1)×M2)×M3)×…)×Mn
where P denotes the original data, ft denotes the Fourier transform, ift denotes the inverse Fourier transform, E denotes the encrypted data (comprising the training data and the test data), and M1, M2, M3, …, Mn denote random phase masks, n being a positive integer greater than 3. The encryption method is implemented with a cascaded optical system of the same kind as the 4f system (each lens of focal length f, adjacent lenses spaced 2f apart, with an object distance of f and an image distance of f), where P is a real-valued image, namely the original data, and E is the encrypted image, namely the encrypted data. The phase angle information of M1, M2, M3, …, Mn is given by two-dimensional normally distributed random arrays whose values are randomly distributed in [0,1]. M1, M2, M3, …, Mn can therefore generate random phases distributed over [0, 2π]. In the encryption process, the M1 random phase mask is placed tightly against the real-valued image on the front focal plane of the first lens, the M2 random phase mask is placed on the Fourier transform plane, the second lens performs the inverse Fourier transform, the M3 random phase mask is placed on the back focal plane of the second lens, and so on, with Mn placed on the focal plane of the last lens; the encrypted image E is finally obtained, the encrypted data being approximately wide-sense stationary white noise.
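A hypothetical numerical generalisation to n masks places each mask on a successive focal plane, with the transforms alternating between ft and inverse ft; for n = 3 this reproduces E = ift(ft(P × M1) × M2) × M3. The exact alternation pattern for larger n is an assumption about the cascaded system, and the decryption again assumes all masks are known.

```python
import numpy as np

def multi_phase_encrypt(P, masks):
    """Masks M1..Mn on successive focal planes, alternating ft / inverse ft."""
    x = P * masks[0]
    for k, M in enumerate(masks[1:], start=1):
        x = (np.fft.fft2 if k % 2 == 1 else np.fft.ifft2)(x) * M
    return x

def multi_phase_decrypt(E, masks):
    # Undo each mask and transform in reverse order; with every mask known,
    # the whole cascade is linear and exactly invertible.
    x = E
    for k in range(len(masks) - 1, 0, -1):
        x = (np.fft.ifft2 if k % 2 == 1 else np.fft.fft2)(x / masks[k])
    return x / masks[0]

rng = np.random.default_rng(2)
P = rng.random((16, 16))
masks = [np.exp(2j * np.pi * rng.random(P.shape)) for _ in range(4)]
recovered = multi_phase_decrypt(multi_phase_encrypt(P, masks), masks).real
```

Whatever the number of masks, the trained deep neural network model is asked to learn this inverse mapping directly from encrypted/plaintext pairs.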
In the embodiment of the invention, random phase encryption is performed on the original data. Although random phase encryption can take many specific forms, the construction method of the deep neural network model can be used to build decryption models that crack the various types of random phase encryption, which improves the practicability of the construction method.
Fig. 9 is a schematic structural diagram of a device for constructing a deep neural network model according to a second embodiment of the present invention. Specifically, the device comprises the following modules:
the first encryption module 10 is configured to perform random phase encryption on multiple sets of original data to obtain training data;
the training comparison module 20 is used for training the (i-1)th deep neural network model with the training data to obtain the ith deep neural network model, inputting the training data into the ith deep neural network model to obtain an ith output result, and comparing the ith output result with the original data corresponding to the training data to obtain an ith comparison result, wherein the initial value of i is 1 and the 0th deep neural network model is an initial model;
the first determining module 30 is configured to determine the ith deep neural network model as the constructed deep neural network model when the ith comparison result meets a preset convergence condition;
the first returning module 40 is configured to set i = i + 1 and return to the training comparison module 20 when the ith comparison result does not satisfy the preset convergence condition.
For the related description of the embodiments of the present invention, please refer to the related description of the first embodiment of the present invention, which is not repeated herein.
In the embodiment of the invention, because the original data is subjected to random phase encryption, the encrypted training data is input into the deep neural network model, and the resulting output is compared with the original data, the trained model is a decryption model capable of cracking random phase encryption, which solves the technical problem that an algorithm model capable of cracking random phase encryption is lacking.
Please refer to fig. 10, which is a schematic structural diagram of the refinement of the training comparison module 20 according to the second embodiment of the present invention. Specifically, it comprises the following modules:
the first reshaping module 201 is used for inputting the training data into the (i-1)th deep neural network model, so that the training data undergoes array reshaping in the first reshaping layer and first reshaped data is output, wherein the (i-1)th deep neural network model comprises the first reshaping layer, three hidden layers, an output layer and a second reshaping layer;
the second reshaping module 202 is used for inputting the first reshaped data into the three hidden layers composed of a plurality of neurons and then into the output layer, which outputs processed data, and for inputting the processed data into the second reshaping layer for array reshaping, second reshaped data being output, wherein the activation function of the neurons is the rectified linear unit (ReLU) function, the number of neurons in each hidden layer corresponds to the format of the first reshaped data, the second reshaped data is the (i-1)th output result of inputting the training data into the (i-1)th deep neural network model, and the format of the second reshaped data is the same as that of the training data;
the calculation updating module 203 is used for comparing the second reshaped data with the original data corresponding to the training data based on a mean square error function and stochastic gradient descent to obtain a comparison result, and for optimizing and updating the (i-1)th deep neural network model with the comparison result to obtain the ith deep neural network model.
For the related description of the embodiments of the present invention, please refer to the related description of the first embodiment of the present invention, which is not repeated herein.
In the embodiment of the invention, the training data passes through the first reshaping layer, the three hidden layers, the output layer and the second reshaping layer to yield the second reshaped data (i.e., the (i-1)th output result); the second reshaped data is compared with the original data corresponding to the training data to obtain a comparison result, and the (i-1)th deep neural network model is optimized and updated with the comparison result to obtain the ith deep neural network model, which is closer to the required decryption-model standard. Adopting stochastic gradient descent increases the training speed of the deep neural network model.
Fig. 11 is a schematic structural diagram of a device for constructing a deep neural network model according to a third embodiment of the present invention. In addition to the first encryption module 10, the training comparison module 20, the first determining module 30 and the first returning module 40 of the second embodiment, the device further comprises:
the second encryption module 50 is used for carrying out random phase encryption on multiple groups of original data to obtain test data;
an input calculation module 60, configured to input the test data into the constructed deep neural network model, obtain a test output result, and calculate a correlation between the test output result and original data corresponding to the test data;
a second determining module 70, configured to determine that the deep neural network model is a correct decryption model when the correlation degree is greater than or equal to a preset correlation coefficient;
and a second returning module 80, configured to return to the second encryption module 50 when the correlation degree is smaller than the preset correlation coefficient.
For the related description of the embodiments of the present invention, please refer to the related description of the first embodiment of the present invention and the related description of the second embodiment of the present invention, which will not be described herein again.
In the embodiment of the invention, because the original data is subjected to random phase encryption, the encrypted training data is input into the deep neural network model, and the resulting output is compared with the original data, the trained model is a decryption model capable of cracking random phase encryption, which solves the technical problem that an algorithm model capable of cracking random phase encryption is lacking. In addition, the correctness of the constructed deep neural network model is verified with the test data, ensuring the correctness of the constructed decryption model.
Please refer to fig. 12, which is a schematic structural diagram of the refinement of the input calculation module 60 according to the third embodiment of the present invention. Specifically, it comprises the following modules:
the third reshaping module 601 is used for inputting the test data into the constructed deep neural network model, so that the test data undergoes array reshaping in the first reshaping layer and first reshaped data is output, wherein the deep neural network model comprises the first reshaping layer, three hidden layers, an output layer and a second reshaping layer;
the fourth reshaping module 602 is used for inputting the first reshaped data into the three hidden layers composed of a plurality of neurons and then into the output layer, which outputs processed data, and for inputting the processed data into the second reshaping layer for array reshaping, the output second reshaped data being the test output result of inputting the test data into the constructed deep neural network model;
the calculating module 603 is configured to calculate the correlation degree between the second reshaped data and the original data corresponding to the test data by using a correlation coefficient function.
For the related description of the embodiments of the present invention, please refer to the related description of the first embodiment and the second embodiment of the present invention, which will not be described herein again.
In the embodiment of the invention, the correlation coefficient function is used to calculate the correlation degree between the second reshaped data and the original data corresponding to the test data, which makes it convenient to judge whether the constructed deep neural network model is correct.
In addition, the calculation formula of the random phase encryption in the first encryption module 10 and the second encryption module 50 is:
E=LCT(LCT(LCT(P×M1)×M2)×…×Mn)
where E represents the training data or the test data, LCT represents the linear canonical transform, P represents the original data, M1, M2, …, Mn represent random phase masks, and n is a positive integer.
For a description of random phase encryption, please refer to the description of the first embodiment of the present invention, which is not repeated herein.
In the embodiment of the invention, the random phase encryption is carried out on the original data, although the specific mode of the random phase encryption has diversity, the construction method of the deep neural network model can be adopted to construct decryption models for cracking various types of random phase encryption, and the practicability of the construction method of the deep neural network model is improved.
It should be noted that for simplicity and convenience of description, the above-described method embodiments are shown as a series of combinations of acts, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art will appreciate that the embodiments described in this specification are presently considered to be preferred embodiments and that no single act or module is essential to the invention.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The method and apparatus for constructing a deep neural network model provided by the present invention have been described above. Those skilled in the art may, in accordance with the idea of the embodiments of the present invention, make changes to the specific implementation and the scope of application. In summary, the content of this specification should not be construed as limiting the present invention.