CN108921282B - Construction method and device of deep neural network model

Info

Publication number
CN108921282B
CN108921282B (application CN201810465595.6A)
Authority
CN
China
Prior art keywords
data
neural network
network model
deep neural
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810465595.6A
Other languages
Chinese (zh)
Other versions
CN108921282A (en)
Inventor
何文奇
海涵
彭翔
刘晓利
廖美华
卢大江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Qizhizhi Intellectual Property Operation Co.,Ltd.
Sichuan Hisai Digital Technology Group Co ltd
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University
Priority to CN201810465595.6A
Publication of CN108921282A
Application granted
Publication of CN108921282B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods


Abstract

The invention discloses a method and a device for constructing a deep neural network model. The method comprises: performing random phase encryption on original data to obtain training data; training the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model; inputting the training data into the i-th deep neural network model to obtain the i-th output result; comparing the i-th output result with the original data corresponding to the training data; and judging whether the comparison result meets a preset convergence condition. If it does, the i-th deep neural network model is determined to be the constructed deep neural network model; if not, i is set to i + 1 and the training is repeated from the (i-1)-th model. Because the training data is input into the deep neural network model and the resulting output is compared with the original data, the constructed model is a decryption model capable of breaking random phase encryption, which solves the technical problem that an algorithm model capable of breaking random phase encryption was lacking.

Description

Construction method and device of deep neural network model
Technical Field
The invention relates to the field of image processing, in particular to a method and a device for constructing a deep neural network model.
Background
Deep learning is a relatively new field within machine learning research. Its motivation is to build neural networks that simulate the way the human brain analyzes, learns, and interprets data. It is widely applied to image recognition, big-data classification, and similar tasks. In big-data cryptanalysis, however, an algorithm model capable of breaking random phase encryption has been lacking.
Disclosure of Invention
The invention mainly aims to provide a method and a device for constructing a deep neural network model, so as to solve the technical problem that big-data cryptanalysis lacks an algorithm model capable of breaking random phase encryption.
In order to achieve the above object, a first aspect of the present invention provides a method for constructing a deep neural network model, where the method includes:
step A, performing random phase encryption on multiple sets of original data to obtain training data;
step B, training the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model, inputting the training data into the i-th deep neural network model to obtain the i-th output result, and comparing the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, wherein the initial value of i is 1 and the 0-th deep neural network model is an initial model;
step C, when the i-th comparison result meets a preset convergence condition, determining the i-th deep neural network model to be the constructed deep neural network model;
and step D, when the i-th comparison result does not meet the preset convergence condition, setting i = i + 1 and returning to step B.
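Purely as an illustration of the control flow of steps A to D, a minimal Python sketch follows (it is not part of the patent text; the names train_step and converged are hypothetical placeholders for the training procedure of step B and the convergence test of step C):

```python
# Hypothetical sketch of steps A-D; train_step and converged are
# caller-supplied placeholders, not functions defined by the patent.
def build_decryption_model(model, train_step, converged,
                           training_data, originals, max_rounds=1000):
    for i in range(1, max_rounds + 1):
        # Step B: train the (i-1)-th model to obtain the i-th model,
        # then feed the encrypted training data through it.
        model = train_step(model, training_data, originals)
        outputs = model(training_data)  # the i-th output result
        # Step C: compare the i-th output result with the original data;
        # stop once the preset convergence condition is met.
        if converged(outputs, originals):
            return model  # the constructed deep neural network model
        # Step D: otherwise set i = i + 1 and return to step B (the loop).
    return model
```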
In order to achieve the above object, a second aspect of the present invention provides an apparatus for constructing a deep neural network model, the apparatus comprising:
the first encryption module, used for performing random phase encryption on multiple sets of original data to obtain training data;
the training comparison module, used for training the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model, inputting the training data into the i-th deep neural network model to obtain the i-th output result, and comparing the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, wherein the initial value of i is 1 and the 0-th deep neural network model is an initial model;
the first determining module, used for determining the i-th deep neural network model to be the constructed deep neural network model when the i-th comparison result meets a preset convergence condition;
and the first returning module, used for setting i = i + 1 and returning to the training comparison module when the i-th comparison result does not meet the preset convergence condition.
The invention provides a method and a device for constructing a deep neural network model. Because the original data is random-phase-encrypted, the encrypted training data is input into the deep neural network model, and the resulting output is compared with the original data, the constructed model is a decryption model capable of breaking random phase encryption; this solves the technical problem that an algorithm model capable of breaking random phase encryption was lacking.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention; for those skilled in the art, other drawings can be derived from these drawings without creative effort.
FIG. 1 is a schematic flow chart of a method for constructing a deep neural network model according to a first embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating a refinement step of step B in the first embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating the components of a deep neural network model according to a first embodiment of the present invention;
FIG. 4 is a flow chart illustrating additional steps after step C in the first embodiment of the present invention;
FIG. 5 is a schematic flow chart illustrating a refinement step of step I in the first embodiment of the present invention;
FIG. 6 is a diagram illustrating a dual random phase optical encryption scheme according to a first embodiment of the present invention;
FIG. 7 is a diagram of a three-random-phase optical encryption scheme according to a first embodiment of the present invention;
FIG. 8 is a diagram illustrating a multi-random phase optical encryption scheme according to a first embodiment of the present invention;
FIG. 9 is a schematic structural diagram of an apparatus for constructing a deep neural network model according to a second embodiment of the present invention;
FIG. 10 is a schematic structural diagram of the refinement modules of the training comparison module 20 according to the second embodiment of the present invention;
FIG. 11 is a schematic structural diagram of an apparatus for constructing a deep neural network model according to a third embodiment of the present invention;
FIG. 12 is a schematic structural diagram of the refinement modules of the input calculation module 60 in the third embodiment of the present invention.
Detailed Description
In order to make the objects, features, and advantages of the present invention more apparent and understandable, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
In the prior art, big-data cryptanalysis lacks an algorithm model capable of breaking random phase encryption.
In order to solve this technical problem, the invention provides a method and a device for constructing a deep neural network model. Because the original data is random-phase-encrypted, the encrypted training data is input into the deep neural network model, and the resulting output is compared with the original data, the constructed model is a decryption model capable of breaking random phase encryption; this solves the technical problem that an algorithm model capable of breaking random phase encryption was lacking.
Fig. 1 is a schematic flow chart illustrating a method for constructing a deep neural network model according to a first embodiment of the present invention. The method specifically comprises the following steps:
step A, performing random phase encryption on multiple sets of original data to obtain training data;
step B, training the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model, inputting the training data into the i-th deep neural network model to obtain the i-th output result, and comparing the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, wherein the initial value of i is 1 and the 0-th deep neural network model is an initial model;
step C, when the i-th comparison result meets a preset convergence condition, determining the i-th deep neural network model to be the constructed deep neural network model;
and step D, when the i-th comparison result does not meet the preset convergence condition, setting i = i + 1 and returning to step B.
It should be noted that, in this method for constructing a deep neural network model, preferably 60,000 sets of original data are random-phase-encrypted to obtain 60,000 sets of training data. The deep neural network model is trained with the 60,000 sets of training data, and after roughly 500 training iterations the resulting model is the constructed deep neural network model. That is, the 60,000 sets of training data are input into the i-th deep neural network model to obtain the i-th output result, the i-th output result is compared with the original data corresponding to the training data to obtain the i-th comparison result, and once the i-th comparison result meets the preset convergence condition, the i-th deep neural network model is determined to be the constructed deep neural network model, with the final value of i fluctuating around 500.
In the embodiment of the invention, because the original data is random-phase-encrypted, the encrypted training data is input into the deep neural network model, and the resulting output is compared with the original data, the constructed model is a decryption model capable of breaking random phase encryption, which solves the technical problem that an algorithm model capable of breaking random phase encryption was lacking.
Please refer to fig. 2, which is a schematic flowchart of the refinement steps of step B in the first embodiment of the present invention. Specifically, the method comprises the following steps:
step E, inputting the training data into the (i-1)-th deep neural network model, performing array reshaping on the training data in the first reshaping layer, and outputting first reshaped data, wherein the (i-1)-th deep neural network model comprises a first reshaping layer, three hidden layers, an output layer, and a second reshaping layer;
step F, inputting the first reshaped data into the three hidden layers composed of a plurality of neurons and then into the output layer, outputting processed data, inputting the processed data into the second reshaping layer for array reshaping, and outputting second reshaped data, wherein the activation function of the neurons is a linear rectification function, the number of neurons in the hidden layers corresponds to the format of the first reshaped data, the second reshaped data is the (i-1)-th output result of inputting the training data into the (i-1)-th deep neural network model, and the format of the second reshaped data is the same as that of the training data;
and step G, comparing the second reshaped data with the original data corresponding to the training data based on a mean square error function and a stochastic gradient descent function to obtain a comparison result, and optimizing and updating the (i-1)-th deep neural network model with the comparison result to obtain the i-th deep neural network model.
It should be noted that the training data is input into the (i-1)-th deep neural network model, and after passing through the first reshaping layer, the three hidden layers, the output layer, and the second reshaping layer, the resulting second reshaped data is the (i-1)-th output result of inputting the training data into the (i-1)-th deep neural network model.
Specifically, please refer to fig. 3, which is a schematic composition diagram of a deep neural network model according to the first embodiment of the present invention. Preferably, 60,000 sets of training data are input into the (i-1)-th deep neural network model, which comprises a first reshaping layer, three hidden layers (hidden layer 1, hidden layer 2, and hidden layer 3), an output layer, and a second reshaping layer. The 60,000 sets of training data are encrypted images of 28 × 28 pixels; the first reshaping layer reshapes each into encrypted data of 1 × 784 pixels, which is the first reshaped data. The first reshaped data is fed through the three hidden layers and into the output layer, which outputs processed data; each hidden layer and the output layer contains 784 neurons, these neurons form a fully connected neural network, the activation function of each neuron is a linear rectification function, and the format of the processed data is decrypted data of 1 × 784 pixels. The second reshaping layer reshapes the processed data into decrypted data of 28 × 28 pixels. The second reshaped data is compared with the original data corresponding to the training data based on a mean square error function and a stochastic gradient descent function to obtain a comparison result, and the comparison result is used to optimize and update the (i-1)-th deep neural network model to obtain the i-th deep neural network model; the stochastic gradient descent function serves to accelerate the training of the deep neural network model.
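As a concrete illustration of the embodiment just described — and only as an illustration, since the patent does not prescribe a software framework (Keras is assumed here) — the network (reshape 28 × 28 to 1 × 784, three fully connected hidden layers and an output layer of 784 ReLU neurons each, reshape back to 28 × 28, mean square error loss with stochastic gradient descent) might be sketched as:

```python
# Hypothetical Keras sketch of the preferred embodiment; the patent
# describes the architecture, not this particular implementation.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),                  # 28 x 28 encrypted image
    tf.keras.layers.Reshape((784,)),                 # first reshaping layer
    tf.keras.layers.Dense(784, activation="relu"),   # hidden layer 1
    tf.keras.layers.Dense(784, activation="relu"),   # hidden layer 2
    tf.keras.layers.Dense(784, activation="relu"),   # hidden layer 3
    tf.keras.layers.Dense(784, activation="relu"),   # output layer (784 ReLU neurons)
    tf.keras.layers.Reshape((28, 28)),               # second reshaping layer
])

# Mean square error loss with stochastic gradient descent, as in step G.
model.compile(optimizer=tf.keras.optimizers.SGD(), loss="mse")
# model.fit(training_data, original_data, epochs=500)  # ~500 iterations per the text
```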
It is emphasized that when the comparison result is used to optimize and update the (i-1)-th deep neural network model, it is mainly the three hidden layers (hidden layer 1, hidden layer 2, and hidden layer 3) and the output layer that are optimized and updated, i.e., the weight parameters of the neural network, the parameters within the neurons. As a result, the i-th output result of the i-th deep neural network model is closer to the original data corresponding to the training data than the (i-1)-th output result of the (i-1)-th deep neural network model. In other words, the decryption of the training data takes place mainly in the three hidden layers and the output layer.
In the embodiment of the invention, the training data is passed through the first reshaping layer, the three hidden layers, the output layer, and the second reshaping layer to obtain the second reshaped data (i.e., the (i-1)-th output result); the second reshaped data is compared with the original data corresponding to the training data to obtain a comparison result; and the comparison result is used to optimize and update the (i-1)-th deep neural network model into the i-th deep neural network model. Each update brings the deep neural network model closer to a decryption model that meets the requirements, and the stochastic gradient descent function speeds up the training of the model.
Please refer to fig. 4, which is a flowchart illustrating an additional step after step C in the first embodiment of the present invention. Specifically, the method comprises the following steps:
step H, performing random phase encryption on multiple sets of original data to obtain test data;
step I, inputting the test data into the constructed deep neural network model to obtain a test output result, and calculating the correlation between the test output result and the original data corresponding to the test data;
step J, when the correlation is greater than or equal to a preset correlation coefficient, determining the deep neural network model to be a correct decryption model;
and step K, when the correlation is less than the preset correlation coefficient, returning to step A.
Please refer to fig. 3. After the deep neural network model has been trained about 500 times, the training data is input into the i-th deep neural network model to obtain the i-th output result, the i-th output result is compared with the original data corresponding to the training data to obtain the i-th comparison result, and when the i-th comparison result meets the preset convergence condition, the i-th deep neural network model is determined to be the constructed deep neural network model. Then another 10,000 sets of original data are random-phase-encrypted to obtain 10,000 sets of test data; the 10,000 sets of test data are input into the constructed deep neural network model to obtain a test output result, and the correlation between the test output result and the original data corresponding to the test data is calculated. When the correlation is greater than or equal to a preset correlation coefficient, the deep neural network model is determined to be a correct decryption model; otherwise, when the correlation is less than the preset correlation coefficient, the construction of the deep neural network model is in error and must be restarted, i.e., the process returns to step A. Preferably, the preset correlation coefficient is 0.8.
In the embodiment of the invention, the correctness of the constructed deep neural network model is evaluated by adopting the test data, so that the correctness of the constructed decryption model is ensured.
Please refer to fig. 5, which is a flowchart illustrating a detailed procedure of step I in the first embodiment of the present invention. Specifically, the method comprises the following steps:
step L, inputting the test data into the constructed deep neural network model, performing array reshaping on the test data in the first reshaping layer, and outputting first reshaped data, wherein the deep neural network model comprises a first reshaping layer, three hidden layers, an output layer, and a second reshaping layer;
step M, inputting the first reshaped data into the three hidden layers composed of a plurality of neurons and then into the output layer, outputting processed data, inputting the processed data into the second reshaping layer for array reshaping, and outputting second reshaped data, which is the test output result of inputting the test data into the constructed deep neural network model;
and step N, calculating the correlation between the second reshaped data and the original data corresponding to the test data using a correlation coefficient function.
It should be noted that the test data, once input into the constructed deep neural network model, passes through the first reshaping layer, the three hidden layers (hidden layer 1, hidden layer 2, and hidden layer 3), the output layer, and the second reshaping layer to yield the second reshaped data. The second reshaped data is the test output result of inputting the test data into the constructed deep neural network model.
In the embodiment of the invention, a correlation coefficient function is used to calculate the correlation between the second reshaped data and the original data corresponding to the test data, which makes it convenient to judge whether the constructed deep neural network model is correct.
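For illustration only, assuming a numpy-based implementation (the patent does not name a specific correlation coefficient function), the correlation of step N and the threshold test of steps J and K with the preferred coefficient of 0.8 could be sketched as:

```python
import numpy as np

def correlation_degree(test_output, original):
    """Pearson correlation coefficient between the flattened second
    reshaped data (test output) and the corresponding original image."""
    return np.corrcoef(test_output.ravel(), original.ravel())[0, 1]

def is_correct_decryption_model(test_output, original, threshold=0.8):
    # Steps J/K: compare against the preset correlation coefficient
    # (0.8 in the preferred embodiment).
    return correlation_degree(test_output, original) >= threshold
```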
In addition, in the first embodiment of the present invention, step A random-phase-encrypts multiple sets of original data to obtain training data, and step H random-phase-encrypts multiple sets of original data to obtain test data. Steps A and H can be merged into a single step: perform random phase encryption on multiple sets of original data and split the resulting encrypted data into two parts, training data and test data. The training data and the test data are therefore encrypted in the same way, both by random phase encryption. The calculation formula of random phase encryption is:
E=LCT(LCT(LCT(P×M1)×M2)×…×Mn)
where E represents the training data or the test data, LCT represents the linear canonical transform, P represents the original data, M1, M2, …, Mn represent random phase masks, and n is a positive integer.
The following takes dual random phase optical encryption, triple random phase optical encryption, and multiple random phase optical encryption as examples:
please refer to fig. 6, which is a diagram illustrating a dual random phase optical encryption according to a first embodiment of the present invention. The encryption formula is expressed as:
E=ift(ft(P×M1)×M2)
where P denotes the original data, ft denotes the Fourier transform, ift denotes the inverse Fourier transform, E denotes the encrypted data (including training data and test data), and M1 and M2 denote random phase masks. The encryption is implemented with a 4f optical system (two lenses of focal length f separated by a distance 2f, with the object plane a distance f in front of the first lens and the output plane a distance f behind the second lens), where P is a real-valued image, i.e., the original data, and E is the encrypted image, i.e., the encrypted data. The phase angle information of M1 and M2 is a two-dimensional normally distributed random array whose values are randomly distributed in [0,1]; the correlation and mean of the two arrays are both 0, i.e., they are two mutually independent random white noises. M1 and M2 can therefore generate random phases in [0, 2π]. During encryption, the M1 random phase mask is placed tightly against the real-valued image on the front focal plane of the first lens, the M2 random phase mask is placed on the Fourier transform plane, and the second lens performs the inverse Fourier transform; the encrypted image E is finally obtained, and the encrypted data is generalized stationary white noise.
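A minimal numerical sketch of this dual random phase encryption follows, assuming a numpy implementation in which the two lens transforms of the 4f system are modeled by a Fourier transform pair (the patent describes the optical realization; the digital analogue below is only illustrative):

```python
# Hypothetical digital analogue of E = ift(ft(P x M1) x M2); numpy assumed.
import numpy as np

def random_phase_mask(shape, rng):
    # Phase-angle array with values in [0, 1] (drawn uniformly here for
    # simplicity), producing random phases in [0, 2*pi].
    return np.exp(2j * np.pi * rng.random(shape))

def drpe_encrypt(P, rng=None):
    """Double random phase encryption of a real-valued image P."""
    rng = rng or np.random.default_rng(0)
    M1 = random_phase_mask(P.shape, rng)   # mask against the input plane
    M2 = random_phase_mask(P.shape, rng)   # mask on the Fourier plane
    return np.fft.ifft2(np.fft.fft2(P * M1) * M2)

# Example: encrypt a 28 x 28 real-valued image into noise-like data.
E = drpe_encrypt(np.random.default_rng(1).random((28, 28)))
```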
Please refer to fig. 7, which is a schematic diagram of a three-random-phase optical encryption method according to a first embodiment of the present invention. The encryption formula is expressed as:
E=ift(ft(P×M1)×M2)×M3
where P denotes the original data, ft denotes the Fourier transform, ift denotes the inverse Fourier transform, E denotes the encrypted data (including training data and test data), and M1, M2, and M3 denote random phase masks. The encryption is implemented with a 4f optical system (two lenses of focal length f separated by a distance 2f, with the object plane a distance f in front of the first lens and the output plane a distance f behind the second lens), where P is a real-valued image, i.e., the original data, and E is the encrypted image, i.e., the encrypted data. The phase angle information of M1, M2, and M3 is a two-dimensional normally distributed random array whose values are randomly distributed in [0,1]; M1, M2, and M3 can therefore generate random phases in [0, 2π]. During encryption, the M1 random phase mask is placed tightly against the real-valued image on the front focal plane of the first lens, the M2 random phase mask is placed on the Fourier transform plane, the second lens performs the inverse Fourier transform, and the M3 random phase mask is placed on the back focal plane of the second lens; the encrypted image E is finally obtained, and the encrypted data is approximately generalized stationary white noise.
Fig. 8 is a schematic diagram of a multi-random phase optical encryption method according to a first embodiment of the present invention. The encryption formula is expressed as:
E=ift(ft(ift(ft(P×M1)×M2)×M3)×…)×Mn
where P denotes the original data, ft denotes the Fourier transform, ift denotes the inverse Fourier transform, E denotes the encrypted data (including training data and test data), and M1, M2, M3, …, Mn denote random phase masks, where n is a positive integer greater than 3. The encryption is implemented with a cascade of 4f optical stages (a sequence of lenses of focal length f, adjacent lenses separated by a distance 2f, with the object plane a distance f in front of the first lens and the output plane a distance f behind the last lens), where P is a real-valued image, i.e., the original data, and E is the encrypted image, i.e., the encrypted data. The phase angle information of M1, M2, M3, …, Mn is a two-dimensional normally distributed random array whose values are randomly distributed in [0,1]; M1, M2, M3, …, Mn can therefore generate random phases in [0, 2π]. During encryption, the M1 random phase mask is placed tightly against the real-valued image on the front focal plane of the first lens, the M2 random phase mask is placed on the Fourier transform plane, the second lens performs the inverse Fourier transform, the M3 random phase mask is placed on the back focal plane of the second lens, and so on, with Mn placed on the focal plane of the last lens; the encrypted image E is finally obtained, and the encrypted data is approximately generalized stationary white noise.
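Generalizing the three schemes above, a hedged numpy sketch of an n-mask cascade follows; the alternation of spatial-plane and Fourier-plane masks (masks[0], masks[2], … in spatial planes; masks[1], masks[3], … in the following Fourier planes) is an assumption chosen so that two masks reproduce the dual formula and three masks the triple formula:

```python
import numpy as np

def multi_random_phase_encrypt(P, masks):
    """Cascade of random phase masks. With two masks this reduces to
    E = ift(ft(P x M1) x M2), and with three masks to
    E = ift(ft(P x M1) x M2) x M3, matching the formulas above."""
    E = P
    for k in range(0, len(masks) - 1, 2):
        # Spatial-plane mask, forward transform, Fourier-plane mask,
        # inverse transform: one 4f stage of the cascade.
        E = np.fft.ifft2(np.fft.fft2(E * masks[k]) * masks[k + 1])
    if len(masks) % 2 == 1:
        E = E * masks[-1]  # an odd final mask sits in the output plane
    return E
```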
In the embodiment of the invention, random phase encryption is performed on the original data. Although the specific forms of random phase encryption vary, the construction method of the deep neural network model can be used to build decryption models that break the various types of random phase encryption, which improves the practicality of the construction method of the deep neural network model.
Fig. 9 is a schematic structural diagram of an apparatus for constructing a deep neural network model according to a second embodiment of the present invention. Specifically, the apparatus comprises the following modules:
The first encryption module 10 is configured to perform random phase encryption on multiple sets of original data to obtain training data;
the training comparison module 20 is used for training the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model, inputting the training data into the i-th deep neural network model to obtain the i-th output result, and comparing the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, wherein the initial value of i is 1 and the 0-th deep neural network model is an initial model;
the first determining module 30 is configured to determine the i-th deep neural network model to be the constructed deep neural network model when the i-th comparison result meets a preset convergence condition;
and the first returning module 40 is configured to set i = i + 1 and return to the training comparison module 20 when the i-th comparison result does not meet the preset convergence condition.
For the related description of the embodiments of the present invention, please refer to the related description of the first embodiment of the present invention, which is not repeated herein.
In the embodiment of the invention, because the original data is random-phase-encrypted, the encrypted training data is input into the deep neural network model, and the resulting output is compared with the original data, the constructed model is a decryption model capable of breaking random phase encryption, which solves the technical problem that an algorithm model capable of breaking random phase encryption was lacking.
Please refer to fig. 10, which is a schematic structural diagram of the refinement modules of the training comparison module 20 according to the second embodiment of the present invention. Specifically, the module comprises:
The first reshaping module 201 is used for inputting the training data into the (i-1)-th deep neural network model, performing array reshaping on the training data in the first reshaping layer, and outputting first reshaped data, wherein the (i-1)-th deep neural network model comprises the first reshaping layer, three hidden layers, an output layer, and a second reshaping layer;
the second reshaping module 202 is used for inputting the first reshaped data into the three hidden layers composed of a plurality of neurons and then into the output layer, outputting processed data, inputting the processed data into the second reshaping layer for array reshaping, and outputting second reshaped data, wherein the activation function of the neurons is a linear rectification function, the number of neurons in the hidden layers corresponds to the format of the first reshaped data, the second reshaped data is the (i-1)-th output result of inputting the training data into the (i-1)-th deep neural network model, and the format of the second reshaped data is the same as that of the training data;
and the calculation updating module 203 is used for comparing the second reshaped data with the original data corresponding to the training data based on a mean square error function and a stochastic gradient descent function to obtain a comparison result, and optimizing and updating the (i-1)-th deep neural network model with the comparison result to obtain the i-th deep neural network model.
For the related description of the embodiments of the present invention, please refer to the related description of the first embodiment of the present invention, which is not repeated herein.
In the embodiment of the invention, the training data is passed through the first reshaping layer, the three hidden layers, the output layer, and the second reshaping layer to obtain the second reshaped data (i.e., the (i-1)-th output result); the second reshaped data is compared with the original data corresponding to the training data to obtain a comparison result; and the comparison result is used to optimize and update the (i-1)-th deep neural network model into the i-th deep neural network model. Each update brings the deep neural network model closer to a decryption model that meets the requirements, and the stochastic gradient descent function speeds up the training of the model.
Fig. 11 is a schematic structural diagram of an apparatus for constructing a deep neural network model according to a third embodiment of the present invention. In addition to the first encryption module 10, the training comparison module 20, the first determining module 30, and the first returning module 40 of the second embodiment, the apparatus further comprises:
The second encryption module 50 is used for performing random phase encryption on multiple sets of original data to obtain test data;
the input calculation module 60 is configured to input the test data into the constructed deep neural network model to obtain a test output result, and to calculate the correlation between the test output result and the original data corresponding to the test data;
the second determining module 70 is configured to determine the deep neural network model to be a correct decryption model when the correlation is greater than or equal to a preset correlation coefficient;
and the second returning module 80 is configured to return to the first encryption module 10 when the correlation is less than the preset correlation coefficient.
For the related description of the embodiments of the present invention, please refer to the related description of the first embodiment of the present invention and the related description of the second embodiment of the present invention, which will not be described herein again.
In the embodiment of the invention, because the original data is random-phase-encrypted, the encrypted training data is input into the deep neural network model, and the resulting output is compared with the original data, the constructed model is a decryption model capable of breaking random phase encryption, which solves the technical problem that an algorithm model capable of breaking random phase encryption was lacking. In addition, the correctness of the constructed deep neural network model is measured with test data, which ensures the correctness of the constructed decryption model.
Please refer to fig. 12, which is a schematic structural diagram of the refinement modules of the input calculation module 60 according to the third embodiment of the present invention. Specifically, the module comprises:
The third reshaping module 601 is used for inputting the test data into the constructed deep neural network model, performing array reshaping on the test data in the first reshaping layer, and outputting first reshaped data, wherein the deep neural network model comprises the first reshaping layer, three hidden layers, an output layer, and a second reshaping layer;
the fourth reshaping module 602 is used for inputting the first reshaped data into the three hidden layers composed of a plurality of neurons and then into the output layer, outputting processed data, inputting the processed data into the second reshaping layer for array reshaping, and outputting second reshaped data, which is the test output result of inputting the test data into the constructed deep neural network model;
and the calculating module 603 is used for calculating the correlation between the second reshaped data and the original data corresponding to the test data using a correlation coefficient function.
For the related description of the embodiments of the present invention, please refer to the related description of the first embodiment and the second embodiment of the present invention, which will not be described herein again.
In the embodiment of the invention, a correlation coefficient function is used to calculate the correlation between the second reshaped data and the original data corresponding to the test data, which makes it convenient to judge whether the constructed deep neural network model is correct.
In addition, the calculation formula of the random phase encryption in the first encryption module 10 and the second encryption module 50 is:
E=LCT(LCT(LCT(P×M1)×M2)×…×Mn)
where E represents the training data or the test data, LCT represents the linear canonical transform, P represents the original data, M1, M2, …, Mn represent random phase masks, and n is a positive integer.
For a description of random phase encryption, please refer to the description of the first embodiment of the present invention, which is not repeated herein.
In the embodiment of the invention, random phase encryption is performed on the original data. Although the specific forms of random phase encryption vary, the construction method of the deep neural network model can be used to build decryption models that break the various types of random phase encryption, which improves the practicality of the construction method and device of the deep neural network model.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of combined actions; however, those skilled in the art will understand that the present invention is not limited by the order of the actions described, since according to the present invention some steps may be performed in other orders or concurrently. Furthermore, those skilled in the art will appreciate that the embodiments described in this specification are preferred embodiments, and that the actions and modules involved are not necessarily all required by the present invention.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The method and device for constructing a deep neural network model provided by the present invention have been described above. Those skilled in the art may, following the ideas of the embodiments of the present invention, make changes to the specific implementation and the scope of application. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (6)

1. A method for constructing a deep neural network model, the method comprising:
step A, performing random phase encryption on multiple sets of original data to obtain training data, wherein the original data is a real-valued image and the random phase encryption process comprises: placing an M1 random phase mask tightly against the real-valued image on the front focal plane of a first lens, placing an M2 random phase mask on the Fourier transform plane, and performing an inverse Fourier transform through a second lens;
step B, training the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model, inputting the training data into the i-th deep neural network model to obtain the i-th output result, and comparing the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, wherein the initial value of i is 1 and the 0-th deep neural network model is an initial model;
step C, when the i-th comparison result meets a preset convergence condition, determining the i-th deep neural network model to be the constructed deep neural network model;
step H, performing random phase encryption on multiple sets of original data to obtain test data;
step L, inputting the test data into the constructed deep neural network model, performing array reshaping on the test data in the first reshaping layer, and outputting first reshaped data, wherein the deep neural network model comprises the first reshaping layer, three hidden layers, an output layer and a second reshaping layer;
step M, inputting the first reshaped data into the three hidden layers composed of a plurality of neurons and then into the output layer, outputting processed data, inputting the processed data into the second reshaping layer for array reshaping, and outputting second reshaped data, the second reshaped data being the test output result of inputting the test data into the constructed deep neural network model;
step N, calculating the correlation between the second reshaped data and the original data corresponding to the test data using a correlation coefficient function;
step J, when the correlation is greater than or equal to a preset correlation coefficient, determining the deep neural network model to be a correct decryption model;
step K, when the correlation is less than the preset correlation coefficient, returning to step A;
and step D, when the i-th comparison result does not meet the preset convergence condition, setting i = i + 1 and returning to step B.
2. The method according to claim 1, wherein step B specifically comprises the following steps:
step E, inputting the training data into the (i-1)-th deep neural network model, performing array reshaping on the training data in the first reshaping layer, and outputting first reshaped data, wherein the (i-1)-th deep neural network model comprises the first reshaping layer, three hidden layers, an output layer and a second reshaping layer;
step F, inputting the first reshaped data into the three hidden layers composed of a plurality of neurons and then into the output layer, outputting processed data, inputting the processed data into the second reshaping layer for array reshaping, and outputting second reshaped data, wherein the activation function of the neurons is a linear rectification function, the number of neurons in the three hidden layers corresponds to the format of the first reshaped data, the second reshaped data is the (i-1)-th output result of inputting the training data into the (i-1)-th deep neural network model, and the format of the second reshaped data is the same as that of the training data;
and step G, comparing the second reshaped data with the original data corresponding to the training data based on a mean square error function and a stochastic gradient descent function to obtain a comparison result, and optimizing and updating the (i-1)-th deep neural network model with the comparison result to obtain the i-th deep neural network model.
3. The method of claim 1, wherein the random phase encryption is calculated by the formula:
E=LCT(LCT(LCT(P×M1)×M2)×…×Mn)
wherein E represents the training data or the test data, LCT represents a linear canonical transform, P represents the original data, M1, M2, …, Mn represent random phase masks, and n is a positive integer.
4. An apparatus for constructing a deep neural network model, the apparatus comprising:
the first encryption module, used for performing random phase encryption on multiple sets of original data to obtain training data; the original data is a real-valued image, and the random phase encryption process comprises: placing an M1 random phase mask tightly against the real-valued image on the front focal plane of a first lens, placing an M2 random phase mask on the Fourier transform plane, and performing an inverse Fourier transform through a second lens;
the training comparison module, used for training the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model, inputting the training data into the i-th deep neural network model to obtain the i-th output result, and comparing the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, wherein the initial value of i is 1 and the 0-th deep neural network model is an initial model;
the first determining module, used for determining the i-th deep neural network model to be the constructed deep neural network model when the i-th comparison result meets a preset convergence condition;
the second encryption module, used for performing random phase encryption on multiple sets of original data to obtain test data;
an input calculation module comprising a third reshaping module, a fourth reshaping module, and a calculation module:
the third reshaping module, used for inputting the test data into the constructed deep neural network model, performing array reshaping on the test data in the first reshaping layer, and outputting first reshaped data, wherein the deep neural network model comprises the first reshaping layer, three hidden layers, an output layer and a second reshaping layer;
the fourth reshaping module, used for inputting the first reshaped data into the three hidden layers composed of a plurality of neurons and then into the output layer, outputting processed data, inputting the processed data into the second reshaping layer for array reshaping, and outputting second reshaped data, the second reshaped data being the test output result of inputting the test data into the constructed deep neural network model;
the calculation module, used for calculating the correlation between the second reshaped data and the original data corresponding to the test data using a correlation coefficient function;
the second determining module, used for determining the deep neural network model to be a correct decryption model when the correlation is greater than or equal to a preset correlation coefficient;
the second returning module, used for returning to the first encryption module when the correlation is less than the preset correlation coefficient;
and the first returning module, used for setting i = i + 1 and returning to the training comparison module when the i-th comparison result does not meet the preset convergence condition.
5. The apparatus of claim 4, wherein the training comparison module comprises:
the first reshaping module, used for inputting the training data into the (i-1)-th deep neural network model, performing array reshaping on the training data in the first reshaping layer, and outputting first reshaped data, wherein the (i-1)-th deep neural network model comprises the first reshaping layer, three hidden layers, an output layer and a second reshaping layer;
the second reshaping module, used for inputting the first reshaped data into the three hidden layers composed of a plurality of neurons and then into the output layer, outputting processed data, inputting the processed data into the second reshaping layer for array reshaping, and outputting second reshaped data, wherein the activation function of the neurons is a linear rectification function, the number of neurons in the three hidden layers corresponds to the format of the first reshaped data, the second reshaped data is the (i-1)-th output result of inputting the training data into the (i-1)-th deep neural network model, and the format of the second reshaped data is the same as that of the training data;
and the calculation updating module, used for comparing the second reshaped data with the original data corresponding to the training data based on a mean square error function and a stochastic gradient descent function to obtain a comparison result, and optimizing and updating the (i-1)-th deep neural network model with the comparison result to obtain the i-th deep neural network model.
6. The apparatus of claim 4, wherein the random phase encryption is calculated by:
E=LCT(LCT(LCT(P×M1)×M2)×…×Mn)
wherein E represents the training data or the test data, LCT represents a linear canonical transform, P represents the original data, M1, M2, …, Mn represent random phase masks, and n is a positive integer.
CN201810465595.6A 2018-05-16 2018-05-16 Construction method and device of deep neural network model Active CN108921282B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810465595.6A CN108921282B (en) 2018-05-16 2018-05-16 Construction method and device of deep neural network model


Publications (2)

Publication Number Publication Date
CN108921282A CN108921282A (en) 2018-11-30
CN108921282B (en) 2022-05-31

Family

ID=64404069

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810465595.6A Active CN108921282B (en) 2018-05-16 2018-05-16 Construction method and device of deep neural network model

Country Status (1)

Country Link
CN (1) CN108921282B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110071798B (en) * 2019-03-21 2022-03-04 深圳大学 Equivalent key obtaining method and device and computer readable storage medium
CN110008914A (en) * 2019-04-11 2019-07-12 杨勇 A kind of pattern recognition system neural network based and recognition methods
CN110428873B (en) * 2019-06-11 2021-07-23 西安电子科技大学 Chromosome fold abnormality detection method and detection system
CN113723604B (en) * 2020-05-26 2024-03-26 杭州海康威视数字技术股份有限公司 Neural network training method and device, electronic equipment and readable storage medium
CN112697821B (en) * 2020-12-02 2022-12-02 赛诺威盛科技(北京)股份有限公司 Multi-energy spectrum CT scanning method and device, electronic equipment and CT equipment
CN112603345B (en) * 2020-12-02 2021-10-15 赛诺威盛科技(北京)股份有限公司 Model training method, multi-energy spectrum CT scanning method, device and electronic equipment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800042A (en) * 2012-06-14 2012-11-28 南昌大学 Multi-image encryption method based on log-polar transform

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015130928A1 (en) * 2014-02-26 2015-09-03 Nancy Packes, Inc. Real estate evaluating platform methods, apparatuses, and media
CN104009836B (en) * 2014-05-26 2018-06-22 中国人民解放军理工大学 Encryption data detection method and system
US20170364794A1 (en) * 2016-06-20 2017-12-21 Telefonaktiebolaget Lm Ericsson (Publ) Method for classifying the payload of encrypted traffic flows
CN107358293B (en) * 2017-06-15 2021-04-02 北京图森智途科技有限公司 Neural network training method and device
CN107506822B (en) * 2017-07-26 2021-02-19 天津大学 Deep neural network method based on space fusion pooling

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800042A (en) * 2012-06-14 2012-11-28 南昌大学 Multi-image encryption method based on log-polar transform

Also Published As

Publication number Publication date
CN108921282A (en) 2018-11-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220930

Address after: 620000 unit 1, building 3, Tianfu Yuncheng district a, south of the fast track around Tianfu new area, Shigao street, Renshou County, Meishan City, Sichuan Province

Patentee after: Sichuan Hisai Digital Technology Group Co.,Ltd.

Address before: No. 5, Floor 2, Building 16, No. 69, North Section of East Yangliu Road, Liucheng Town, Wenjiang District, Chengdu, Sichuan 610000

Patentee before: Chengdu Qizhizhi Intellectual Property Operation Co.,Ltd.

Effective date of registration: 20220930

Address after: No. 5, Floor 2, Building 16, No. 69, North Section of East Yangliu Road, Liucheng Town, Wenjiang District, Chengdu, Sichuan 610000

Patentee after: Chengdu Qizhizhi Intellectual Property Operation Co.,Ltd.

Address before: 518060 No. 3688 Nanhai Road, Shenzhen, Guangdong, Nanshan District

Patentee before: SHENZHEN University