Specific Embodiments
To make the purpose, features and advantages of the present invention more apparent and easier to understand, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those skilled in the art without creative work shall fall within the protection scope of the present invention.
In the prior art, big-data cryptanalysis lacks an algorithm model capable of cracking random phase encryption; this is the technical problem addressed here.
To solve the above technical problem, the present invention proposes a construction method and a construction device for a deep neural network model. Because random phase encryption is applied to the original data, the training data obtained after encryption is input into the deep neural network model, and the resulting output is compared with the original data, the constructed model is a decryption model capable of cracking random phase encryption, thereby solving the technical problem of lacking such an algorithm model.
Referring to Fig. 1, which is a flow diagram of a construction method of a deep neural network model in the first embodiment of the present invention, the method specifically includes:
Step A: perform random phase encryption on multiple groups of original data to obtain training data;
Step B: train the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model, input the training data into the i-th deep neural network model to obtain the i-th output result, and compare the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, where the initial value of i is 1 and the 0th deep neural network model is the initial model;
Step C: when the i-th comparison result meets a preset convergence condition, determine that the i-th deep neural network model is the constructed deep neural network model;
Step D: when the i-th comparison result does not meet the preset convergence condition, let i = i + 1 and return to Step B.
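The iterative loop of Steps B-D may be sketched as follows. This is an illustrative outline only; the names `update`, `forward`, `tolerance` and `max_iters` are hypothetical placeholders, and the patent does not prescribe this code:

```python
import numpy as np

def train_until_converged(update, forward, params, training_data, originals,
                          tolerance=1e-3, max_iters=1000):
    """Iterate Steps B-D: update the model (Step B), compare its output with the
    original data, stop when the i-th comparison result meets the preset
    convergence condition (Step C), otherwise let i = i + 1 (Step D)."""
    for i in range(1, max_iters + 1):
        params = update(params, training_data, originals)   # Step B: train once
        outputs = forward(params, training_data)            # i-th output result
        comparison = np.mean((outputs - originals) ** 2)    # i-th comparison result
        if comparison < tolerance:                          # Step C: converged
            return params, i
    return params, max_iters                                # loop exhausted
```

A toy one-parameter model converges in a handful of iterations; in the preferred embodiment described below, convergence is reached after roughly 500 iterations.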
It should be noted that, in this construction method, it is preferable to perform random phase encryption on 60,000 groups of original data to obtain 60,000 groups of training data. The deep neural network model is trained with the 60,000 groups of training data for approximately 500 iterations, and the resulting deep neural network model is the constructed deep neural network model. That is, the 60,000 groups of training data are input into the i-th deep neural network model to obtain the i-th output result, the i-th output result is compared with the original data corresponding to the training data, and when the resulting i-th comparison result meets the preset convergence condition, the i-th deep neural network model that meets the preset convergence condition is determined to be the constructed deep neural network model; the value of i at convergence fluctuates around 500.
In this embodiment of the present invention, because random phase encryption is applied to the original data, the training data obtained after encryption is input into the deep neural network model, and the resulting output is compared with the original data, the model is a decryption model capable of cracking random phase encryption, which solves the technical problem of lacking an algorithm model capable of cracking random phase encryption.
Referring to Fig. 2, which is a flow diagram of the refinement of Step B in the first embodiment of the present invention, the refinement specifically includes:
Step E: input the training data into the (i-1)-th deep neural network model, so that the training data undergoes array reshaping in a first reshape layer, which outputs first reshape data; the (i-1)-th deep neural network model comprises the first reshape layer, three hidden layers, an output layer and a second reshape layer;
Step F: input the first reshape data into the three hidden layers composed of several neurons, output processed data from the output layer, and input the processed data into the second reshape layer for array reshaping to output second reshape data, where the activation function of each neuron is the linear rectification function (ReLU), the number of neurons in each hidden layer corresponds to the format of the first reshape data, the second reshape data is the (i-1)-th output result obtained after the training data is input into the (i-1)-th deep neural network model, and the format of the second reshape data is identical to the format of the training data;
Step G: based on a mean squared error function and stochastic gradient descent, compare the second reshape data with the original data corresponding to the training data to obtain a comparison result, and use the comparison result to optimize and update the (i-1)-th deep neural network model, obtaining the i-th deep neural network model.
It should be noted that the training data is input into the (i-1)-th deep neural network model and passes through the first reshape layer, the three hidden layers, the output layer and the second reshape layer; the resulting second reshape data is the (i-1)-th output result of inputting the training data into the (i-1)-th deep neural network model.
Specifically, referring to Fig. 3, which is a schematic diagram of the composition of the deep neural network model in the first embodiment of the present invention: preferably, 60,000 groups of training data are input into the (i-1)-th deep neural network model, which comprises the first reshape layer, three hidden layers (hidden layer 1, hidden layer 2 and hidden layer 3), the output layer and the second reshape layer. Each group of training data is encrypted data of 28*28 pixels, which the first reshape layer reshapes into encrypted data of 1*784 pixels; this 1*784 encrypted data is the first reshape data. The first reshape data passes through the three hidden layers, and the output layer outputs the processed data, where each hidden layer and the output layer contain 784 neurons forming a fully connected network, the activation function of each neuron is the linear rectification function, and the processed data is decrypted data in 1*784 format. The second reshape layer reshapes the processed data into decrypted data of 28*28 pixels. Based on the mean squared error function and stochastic gradient descent, the second reshape data is compared with the original data corresponding to the training data to obtain a comparison result, which is used to optimize and update the (i-1)-th deep neural network model and obtain the i-th deep neural network model; stochastic gradient descent is used to accelerate the training of the deep neural network model.
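Under the architecture of Fig. 3 as described above (28*28 input, a first reshape layer, three fully connected 784-unit ReLU hidden layers, a 784-unit output layer and a second reshape layer), a forward pass can be sketched in NumPy. The random weight initialization and variable names are illustrative only, not values taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: three 784-unit hidden layers plus a 784-unit
# output layer, matching the 28*28 = 784-pixel reshaped input.
dims = 784
weights = [rng.normal(0, 0.05, (dims, dims)) for _ in range(4)]
biases = [np.zeros(dims) for _ in range(4)]

def relu(x):
    return np.maximum(0.0, x)  # the linear rectification (ReLU) activation

def forward(encrypted_28x28):
    """First reshape layer -> 3 hidden layers -> output layer -> second reshape layer."""
    x = encrypted_28x28.reshape(-1)             # first reshape: 28*28 -> 1*784
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ W + b)                     # hidden layers 1-3
    x = x @ weights[-1] + biases[-1]            # output layer: decrypted 1*784 data
    return x.reshape(28, 28)                    # second reshape: 1*784 -> 28*28
```

The second reshape layer returns the output to the same 28*28 format as the training data, which is what allows the direct comparison with the original data in Step G.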
It should be emphasized that when the (i-1)-th deep neural network model is optimized and updated using the comparison result, it is mainly the three hidden layers (hidden layer 1, hidden layer 2 and hidden layer 3) and the output layer that are updated; that is, the weight parameters of the neural network, the parameters within the neurons, are optimized so that the i-th output result of the i-th deep neural network model is closer to the original data corresponding to the training data than the (i-1)-th output result of the (i-1)-th deep neural network model. In other words, the decryption of the training data takes place mainly in the three hidden layers and the output layer.
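The weight update described above (stochastic gradient descent on the mean squared error, applied to the hidden and output layer parameters) can be illustrated for a single linear layer. This is a simplified stand-in with hypothetical names, not the patented training procedure itself:

```python
import numpy as np

def sgd_update_linear(W, b, x, target, lr=0.1):
    """One stochastic gradient step on a single linear layer under the
    mean squared error loss, a simplified stand-in for updating the
    hidden-layer and output-layer weight parameters."""
    pred = x @ W + b
    err = pred - target                       # residual driving d(MSE)/d(pred)
    grad_W = np.outer(x, err) * 2 / err.size  # gradient of mean((pred-target)^2) w.r.t. W
    grad_b = 2 * err / err.size               # gradient w.r.t. the bias
    return W - lr * grad_W, b - lr * grad_b
```

Repeating such steps drives the layer output toward the target, which is the sense in which each i-th output result comes closer to the original data than the (i-1)-th.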
In this embodiment of the present invention, the training data is input through the first reshape layer, the three hidden layers, the output layer and the second reshape layer to obtain the second reshape data (which is the (i-1)-th output result); the second reshape data is compared with the original data corresponding to the training data to obtain a comparison result, which is used to optimize and update the (i-1)-th deep neural network model and obtain the i-th deep neural network model. In this way, the deep neural network model comes ever closer to a satisfactory decryption model, and stochastic gradient descent accelerates the training of the deep neural network model, improving the training rate.
Referring to Fig. 4, which is a flow diagram of the steps added after Step C in the first embodiment of the present invention, these steps specifically include:
Step H: perform random phase encryption on multiple groups of original data to obtain test data;
Step I: input the test data into the constructed deep neural network model to obtain a test output result, and calculate the degree of correlation between the test output result and the original data corresponding to the test data;
Step J: when the degree of correlation is greater than or equal to a preset correlation coefficient, determine that the deep neural network model is a correct decryption model;
Step K: when the degree of correlation is less than the preset correlation coefficient, return to Step A.
It should be noted that, referring to Fig. 3, after the deep neural network model has been trained for about 500 iterations, the training data is input into the i-th deep neural network model to obtain the i-th output result, which is compared with the original data corresponding to the training data to obtain the i-th comparison result; when this comparison result meets the preset convergence condition, the i-th deep neural network model is determined to be the constructed deep neural network model. Random phase encryption is then performed on another 10,000 groups of original data to obtain 10,000 groups of test data, which are input into the deep neural network model to obtain a test output result, and the degree of correlation between the test output result and the original data corresponding to the test data is calculated. When the degree of correlation is greater than or equal to the preset correlation coefficient, the deep neural network model is determined to be a correct decryption model; otherwise, when the degree of correlation is less than the preset correlation coefficient, the construction of the deep neural network model is deemed erroneous and must be restarted, i.e. the method returns to Step A. Preferably, the preset correlation coefficient is 0.8.
In this embodiment of the present invention, the correctness of the constructed deep neural network model is assessed with the test data, ensuring the correctness of the constructed decryption model.
Referring to Fig. 5, which is a flow diagram of the refinement of Step I in the first embodiment of the present invention, the refinement specifically includes:
Step L: input the test data into the constructed deep neural network model, so that the test data undergoes array reshaping in the first reshape layer, which outputs first reshape data; the deep neural network model comprises the first reshape layer, three hidden layers, an output layer and a second reshape layer;
Step M: input the first reshape data into the three hidden layers composed of several neurons, output processed data from the output layer, and input the processed data into the second reshape layer for array reshaping to output second reshape data, where the second reshape data is the test output result obtained after the test data is input into the constructed deep neural network model;
Step N: calculate the degree of correlation between the second reshape data and the original data corresponding to the test data using a correlation coefficient function.
It should be noted that the test data input into the constructed deep neural network model passes through the first reshape layer, the three hidden layers (hidden layer 1, hidden layer 2 and hidden layer 3), the output layer and the second reshape layer to yield the second reshape data; this second reshape data is the test output result of inputting the test data into the constructed deep neural network model.
In this embodiment of the present invention, the degree of correlation between the second reshape data and the original data corresponding to the test data is calculated with a correlation coefficient function, making it convenient to judge whether the constructed deep neural network model is correct.
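The correlation check of Steps L-N can be sketched with the Pearson correlation coefficient as the correlation coefficient function; the 0.8 threshold follows the preferred embodiment stated above, and the function names are illustrative:

```python
import numpy as np

def correlation(decrypted, original):
    """Pearson correlation coefficient between the second reshape data
    (the test output result) and the corresponding original data."""
    a = decrypted.reshape(-1)
    b = original.reshape(-1)
    return np.corrcoef(a, b)[0, 1]

def is_correct_decryption_model(decrypted, original, threshold=0.8):
    # Steps J/K: accept the model when the degree of correlation reaches
    # the preset correlation coefficient (0.8 in the preferred embodiment).
    return correlation(decrypted, original) >= threshold
```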
In addition, in the first embodiment of the present invention, Step A (performing random phase encryption on multiple groups of original data to obtain training data) and Step H (performing random phase encryption on multiple groups of original data to obtain test data) may be merged into a single step: random phase encryption is performed on multiple groups of original data, and the resulting encrypted data is divided into two parts, the training data and the test data. The training data and the test data are therefore encrypted in the same way, namely by random phase encryption, whose calculation formula is:
E = LCT(LCT(LCT(P × M1) × M2) × … × Mn)
where E denotes the training data or test data, LCT denotes the linear canonical transform, P denotes the original data, M1, M2, …, Mn denote random phase masks, and n is a positive integer.
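Taking the Fourier transform as one concrete instance of the linear canonical transform LCT, the n-mask encryption formula above, together with its exact inverse (the mapping the deep neural network model learns to approximate), can be sketched as:

```python
import numpy as np

def random_phase_encrypt(P, masks, lct=np.fft.fft2):
    """E = LCT(...LCT(LCT(P x M1) x M2)... x Mn), with fft2 standing in
    for the linear canonical transform."""
    E = P
    for M in masks:
        E = lct(E * M)
    return E

def random_phase_decrypt(E, masks, inverse_lct=np.fft.ifft2):
    """Exact inverse: undo each transform and divide out each mask
    in reverse order."""
    P = E
    for M in reversed(masks):
        P = inverse_lct(P) / M
    return P
```

With a single all-ones mask this reduces to a plain Fourier transform, and with unit-modulus phase masks the decryption recovers P exactly; the deep neural network model approximates this inverse without knowing the masks.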
Double random phase optical encryption, triple random phase optical encryption and multiple random phase optical encryption are taken as examples in turn below.
Referring to Fig. 6, which is the double random phase optical encryption schematic diagram provided in the first embodiment of the present invention, the encryption formula is expressed as:
E = ift(ft(P × M1) × M2)
where P denotes the original data, ft denotes the Fourier transform, ift denotes the inverse Fourier transform, E denotes the encrypted data (comprising the training data and the test data), and M1 and M2 denote random phase masks. The encryption is realized with a 4f optical system (two lenses of focal length f, spaced 2f apart, with an object distance of f and an image distance of f); P is the real-valued image, i.e. the original data, and E is the encrypted image, i.e. the encrypted data. The phase angle information of M1 and M2 is given by two-dimensional normally distributed random arrays whose values are randomly distributed in [0,1]; the cross-correlation and the means of the two arrays are all zero, i.e. they are two mutually independent random white noises. M1 and M2 can therefore generate random phases in [0, 2π]. In the encryption process, the random phase mask M1 is placed against the real-valued image on the front focal plane of the first lens, the random phase mask M2 is then placed on the Fourier transform plane, and an inverse Fourier transform is performed by the second lens, finally yielding the encrypted image E, which is wide-sense stationary white noise.
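The double random phase encoding formula above admits a direct NumPy sketch, with fft2/ifft2 standing in for the optical transforms; the mask construction, mapping uniformly random values in [0,1) to phases in [0, 2π), follows the description:

```python
import numpy as np

def random_phase_mask(shape, rng):
    # Random values in [0, 1) mapped to phase angles in [0, 2*pi).
    return np.exp(2j * np.pi * rng.random(shape))

def drpe_encrypt(P, M1, M2):
    """Double random phase encoding: E = ift(ft(P x M1) x M2)."""
    return np.fft.ifft2(np.fft.fft2(P * M1) * M2)
```

Because both masks have unit modulus, the encryption is exactly invertible when M1 and M2 are known (P = ift(ft(E) / M2) / M1), while the encrypted image itself resembles white noise; this is what makes learning a decryption model from data nontrivial.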
Referring to Fig. 7, which is the triple random phase optical encryption schematic diagram provided in the first embodiment of the present invention, the encryption formula is expressed as:
E = ift(ft(P × M1) × M2) × M3
where P denotes the original data, ft denotes the Fourier transform, ift denotes the inverse Fourier transform, E denotes the encrypted data (comprising the training data and the test data), and M1, M2 and M3 denote random phase masks. The encryption is realized with a 4f optical system (i.e. two lenses of focal length f, spaced 2f apart, with an object distance of f and an image distance of f); P is the real-valued image, i.e. the original data, and E is the encrypted image, i.e. the encrypted data. The phase angle information of M1, M2 and M3 is given by two-dimensional normally distributed random arrays whose values are randomly distributed in [0,1], so M1, M2 and M3 can generate random phases in [0, 2π]. In the encryption process, the random phase mask M1 is placed against the real-valued image on the front focal plane of the first lens, the random phase mask M2 is then placed on the Fourier transform plane, an inverse Fourier transform is performed by the second lens, and the random phase mask M3 is placed on the back focal plane, finally yielding the encrypted image E, which is approximately wide-sense stationary white noise.
Referring to Fig. 8, which is the multiple random phase optical encryption schematic diagram provided in the first embodiment of the present invention, the encryption formula is expressed as:
E = ift(ft(ift(ft(P × M1) × M2) × M3) × …) × Mn
where P denotes the original data, ft denotes the Fourier transform, ift denotes the inverse Fourier transform, E denotes the encrypted data (comprising the training data and the test data), and M1, M2, M3, …, Mn denote random phase masks, where n is a positive integer greater than 3. The encryption is realized with an i·f optical system (i.e. i/2 lenses of focal length f, spaced 2f apart, with an object distance of f and an image distance of f); P is the real-valued image, i.e. the original data, and E is the encrypted image, i.e. the encrypted data. The phase angle information of M1, M2, M3, …, Mn is given by two-dimensional normally distributed random arrays whose values are randomly distributed in [0,1], so M1, M2, M3, …, Mn can generate random phases in [0, 2π]. In the encryption process, the random phase mask M1 is placed against the real-valued image on the front focal plane of the first lens, the random phase mask M2 is then placed on the Fourier transform plane, an inverse Fourier transform is performed by the second lens, the random phase mask M3 is placed on the back focal plane, and similarly Mn is placed on the focal plane of the last lens, finally yielding the encrypted image E, which is approximately wide-sense stationary white noise.
In this embodiment of the present invention, random phase encryption is applied to the original data. Although the specific forms of random phase encryption are diverse, the construction method of the deep neural network model in the present invention can construct decryption models that crack the various types of random phase encryption, which increases the practicability of the construction method of the deep neural network model in the present invention.
Referring to Fig. 9, which is a structural schematic diagram of a construction device of a deep neural network model in the second embodiment of the present invention, the device specifically comprises:
a first encryption module 10, configured to perform random phase encryption on multiple groups of original data to obtain training data;
a training comparison module 20, configured to train the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model, obtain the i-th output result after inputting the training data into the i-th deep neural network model, and compare the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, where the initial value of i is 1 and the 0th deep neural network model is the initial model;
a first determination module 30, configured to determine, when the i-th comparison result meets the preset convergence condition, that the i-th deep neural network model is the constructed deep neural network model;
a first return module 40, configured to let i = i + 1 and return to the training comparison module 20 when the i-th comparison result does not meet the preset convergence condition.
For related descriptions of this embodiment of the present invention, please refer to the related descriptions of the first embodiment of the present invention, which are not repeated here.
In this embodiment of the present invention, because random phase encryption is applied to the original data, the training data obtained after encryption is input into the deep neural network model, and the resulting output is compared with the original data, the model is a decryption model capable of cracking random phase encryption, which solves the technical problem of lacking an algorithm model capable of cracking random phase encryption.
Referring to Fig. 10, which is a structural schematic diagram of the refinement modules of the training comparison module 20 in the second embodiment of the present invention, these specifically comprise:
a first reshape module 201, configured to input the training data into the (i-1)-th deep neural network model so that the training data undergoes array reshaping in the first reshape layer, which outputs first reshape data, where the (i-1)-th deep neural network model comprises the first reshape layer, three hidden layers, an output layer and a second reshape layer;
a second reshape module 202, configured to input the first reshape data into the three hidden layers composed of several neurons, output processed data from the output layer, and input the processed data into the second reshape layer for array reshaping to output second reshape data, where the activation function of each neuron is the linear rectification function, the number of neurons in each hidden layer corresponds to the format of the first reshape data, the second reshape data is the (i-1)-th output result after the training data is input into the (i-1)-th deep neural network model, and the format of the second reshape data is identical to that of the training data;
a calculation and update module 203, configured to compare, based on the mean squared error function and stochastic gradient descent, the second reshape data with the original data corresponding to the training data to obtain a comparison result, and to use the comparison result to optimize and update the (i-1)-th deep neural network model to obtain the i-th deep neural network model.
For related descriptions of this embodiment of the present invention, please refer to the related descriptions of the first embodiment of the present invention, which are not repeated here.
In this embodiment of the present invention, the training data is input through the first reshape layer, the three hidden layers, the output layer and the second reshape layer to obtain the second reshape data (which is the (i-1)-th output result); the second reshape data is compared with the original data corresponding to the training data to obtain a comparison result, which is used to optimize and update the (i-1)-th deep neural network model and obtain the i-th deep neural network model. In this way, the deep neural network model comes ever closer to a satisfactory decryption model, and stochastic gradient descent accelerates the training of the deep neural network model, improving the training rate.
Referring to Fig. 11, which is a structural schematic diagram of a construction device of a deep neural network model in the third embodiment of the present invention: in addition to the first encryption module 10, the training comparison module 20, the first determination module 30 and the first return module 40 of the second embodiment of the present invention, the device further comprises:
a second encryption module 50, configured to perform random phase encryption on multiple groups of original data to obtain test data;
an input calculation module 60, configured to input the test data into the constructed deep neural network model to obtain a test output result, and to calculate the degree of correlation between the test output result and the original data corresponding to the test data;
a second determination module 70, configured to determine, when the degree of correlation is greater than or equal to the preset correlation coefficient, that the deep neural network model is a correct decryption model;
a second return module 80, configured to return to the first encryption module 10 when the degree of correlation is less than the preset correlation coefficient.
For related descriptions of this embodiment of the present invention, please refer to the related descriptions of the first and second embodiments of the present invention, which are not repeated here.
In this embodiment of the present invention, because random phase encryption is applied to the original data, the training data obtained after encryption is input into the deep neural network model, and the resulting output is compared with the original data, the model is a decryption model capable of cracking random phase encryption, which solves the technical problem of lacking an algorithm model capable of cracking random phase encryption. In addition, the correctness of the constructed deep neural network model is assessed with the test data, ensuring the correctness of the constructed decryption model.
Referring to Fig. 12, which is a structural schematic diagram of the refinement modules of the input calculation module 60 in the third embodiment of the present invention, these specifically comprise:
a third reshape module 601, configured to input the test data into the constructed deep neural network model so that the test data undergoes array reshaping in the first reshape layer, which outputs first reshape data, where the deep neural network model comprises the first reshape layer, three hidden layers, an output layer and a second reshape layer;
a fourth reshape module 602, configured to input the first reshape data into the three hidden layers composed of several neurons, output processed data from the output layer, and input the processed data into the second reshape layer for array reshaping to output second reshape data, where the second reshape data is the test output result obtained after the test data is input into the constructed deep neural network model;
a calculation module 603, configured to calculate the degree of correlation between the second reshape data and the original data corresponding to the test data using a correlation coefficient function.
For related descriptions of this embodiment of the present invention, please refer to the related descriptions of the first and second embodiments of the present invention, which are not repeated here.
In this embodiment of the present invention, the degree of correlation between the second reshape data and the original data corresponding to the test data is calculated with a correlation coefficient function, making it convenient to judge whether the constructed deep neural network model is correct.
In addition, the calculation formula of the random phase encryption in the first encryption module 10 and the second encryption module 50 is:
E = LCT(LCT(LCT(P × M1) × M2) × … × Mn)
where E denotes the training data or test data, LCT denotes the linear canonical transform, P denotes the original data, M1, M2, …, Mn denote random phase masks, and n is a positive integer.
For related descriptions of random phase encryption, please refer to the related descriptions of the first embodiment of the present invention, which are not repeated here.
In this embodiment of the present invention, random phase encryption is applied to the original data. Although the specific forms of random phase encryption are diverse, the construction method of the deep neural network model in the present invention can construct decryption models that crack the various types of random phase encryption, which increases the practicability of the construction method of the deep neural network model in the present invention.
It should be noted that, for the sake of simple description, each of the foregoing method embodiments is expressed as a series of combined actions; however, those skilled in the art should understand that the present invention is not limited by the described sequence of actions, because according to the present invention certain steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in a certain embodiment, reference may be made to the related descriptions of the other embodiments.
The above is a description of the construction method and device of a deep neural network model provided by the present invention. For those skilled in the art, there may be changes in the specific implementations and application scope according to the ideas of the embodiments of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.