WO2019218243A1 - Method and apparatus for constructing a deep neural network model - Google Patents

Method and apparatus for constructing a deep neural network model

Info

Publication number
WO2019218243A1
WO2019218243A1 (application PCT/CN2018/087012, CN2018087012W)
Authority
WO
WIPO (PCT)
Prior art keywords
data
neural network
network model
layer
remodeling
Prior art date
Application number
PCT/CN2018/087012
Other languages
English (en)
French (fr)
Inventor
何文奇
海涵
彭翔
刘晓利
廖美华
卢大江
Original Assignee
深圳大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳大学
Priority to PCT/CN2018/087012 priority Critical patent/WO2019218243A1/zh
Publication of WO2019218243A1 publication Critical patent/WO2019218243A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing

Definitions

  • the present invention relates to the field of image processing, and in particular, to a method and apparatus for constructing a deep neural network model.
  • Deep learning is a new field of machine learning research. Its motivation is to build and simulate neural networks that analyze and learn the way the human brain does, mimicking the mechanisms of the human brain to interpret data. It is widely used in image recognition, big-data classification, and the like. In the cryptanalysis of big data, however, there is a lack of an algorithm model that can crack random phase encryption.
  • The main object of the present invention is to provide a method and a device for constructing a deep neural network model, which can solve the technical problem that, in the cryptanalysis of big data, there is a lack of an algorithm model capable of cracking random phase encryption.
  • a first aspect of the present invention provides a method for constructing a deep neural network model, the method comprising:
  • Step A: perform random phase encryption on multiple sets of original data to obtain training data;
  • Step B: train the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model and the i-th output result after the training data are input into the i-th deep neural network model, and compare the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, where the initial value of i is 1 and the 0-th deep neural network model is an initial model;
  • Step C: when the i-th comparison result satisfies a preset convergence condition, determine that the i-th deep neural network model is the constructed deep neural network model;
  • Step D: when the i-th comparison result does not satisfy the preset convergence condition, set i = i + 1 and return to step B.
  • a second aspect of the present invention provides a device for constructing a deep neural network model, the device comprising:
  • a first encryption module configured to perform random phase encryption on multiple sets of original data to obtain training data
  • a training comparison module configured to train the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model and the i-th output result after the training data are input into the i-th deep neural network model, and to compare the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, where the initial value of i is 1 and the 0-th deep neural network model is an initial model;
  • a first determining module configured to determine, when the i-th comparison result satisfies a preset convergence condition, that the i-th deep neural network model is the constructed deep neural network model;
  • a first returning module configured to set i = i + 1 and return to the training comparison module when the i-th comparison result does not satisfy the preset convergence condition.
  • The invention provides a method and a device for constructing a deep neural network model. Because the original data are random-phase encrypted and the training data obtained after encryption are fed into the deep neural network model, whose output is compared against the original data, the model is a decryption model capable of cracking random phase encryption. This solves the technical problem of the lack of an algorithm model capable of cracking random phase encryption.
  • FIG. 1 is a schematic flow chart of a method for constructing a deep neural network model according to a first embodiment of the present invention
  • FIG. 2 is a schematic flowchart of the refinement steps of step B in the first embodiment of the present invention;
  • FIG. 3 is a schematic diagram showing the composition of a deep neural network model in the first embodiment of the present invention;
  • FIG. 4 is a schematic flowchart of the additional steps after step C in the first embodiment of the present invention;
  • FIG. 5 is a schematic flowchart of a refinement step of step I in the first embodiment of the present invention.
  • FIG. 6 is a schematic diagram of dual random phase optical encryption according to a first embodiment of the present invention.
  • FIG. 7 is a schematic diagram of three random phase optical encryption according to a first embodiment of the present invention.
  • FIG. 8 is a schematic diagram of multiple random phase optical encryption according to a first embodiment of the present invention.
  • FIG. 9 is a schematic structural diagram of a device for constructing a deep neural network model according to a second embodiment of the present invention.
  • FIG. 10 is a schematic structural diagram of a refinement module of the training comparison module 20 according to the second embodiment of the present invention.
  • FIG. 11 is a schematic structural diagram of a device for constructing a deep neural network model according to a third embodiment of the present invention.
  • FIG. 12 is a schematic structural diagram of a refinement module of the input calculation module 60 according to the third embodiment of the present invention.
  • The present invention proposes a method and apparatus for constructing a deep neural network model. Because the original data are random-phase encrypted and the training data obtained after encryption are fed into the deep neural network model, whose output is compared against the original data, the model is a decryption model capable of cracking random phase encryption. This solves the technical problem of the lack of an algorithm model capable of cracking random phase encryption.
  • FIG. 1 is a schematic flowchart diagram of a method for constructing a deep neural network model according to a first embodiment of the present invention.
  • the method specifically includes:
  • Step A: perform random phase encryption on multiple sets of original data to obtain training data;
  • Step B: train the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model and the i-th output result after the training data are input into the i-th deep neural network model, and compare the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result; the initial value of i is 1, and the 0-th deep neural network model is the initial model;
  • Step C: when the i-th comparison result satisfies the preset convergence condition, determine the i-th deep neural network model to be the constructed deep neural network model;
  • Step D: when the i-th comparison result does not satisfy the preset convergence condition, set i = i + 1 and return to step B.
  • It should be noted that, in the method for constructing the deep neural network model, it is preferable to perform random phase encryption on 60,000 sets of original data to obtain 60,000 sets of training data. The deep neural network model is trained with the 60,000 sets of training data for roughly 500 iterations, and the resulting deep neural network model is the constructed deep neural network model. That is, the i-th output result obtained after the 60,000 sets of training data are input into the i-th deep neural network model is compared with the original data corresponding to the training data; when the resulting i-th comparison result satisfies the preset convergence condition, the i-th deep neural network model satisfying the preset convergence condition is determined to be the constructed deep neural network model, and the value of i fluctuates around 500.
  • In this embodiment of the invention, because the original data are random-phase encrypted and the resulting training data are fed into the deep neural network model, whose output is compared against the original data, the model is a decryption model capable of cracking random phase encryption, which solves the technical problem of the lack of an algorithm model that can crack random phase encryption.
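  • As a concrete illustration of steps A through D, the following Python sketch (a toy under stated assumptions, not the patent's implementation) encrypts a small batch of random images with two random phase masks, then iterates a stand-in linear model until a mean-squared-error convergence condition is met or 500 iterations elapse; the 200 sets of 8*8 data, the batch size, the learning rate, and the tolerance are all illustrative stand-ins for the 60,000 sets of 28*28 data of the embodiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step A: random phase encryption of the original data (toy scale).
P = rng.random((200, 8, 8))                           # original data
M1, M2 = np.exp(2j * np.pi * rng.random((2, 8, 8)))   # two random phase masks
E = np.fft.ifft2(np.fft.fft2(P * M1) * M2)            # training data

X = np.abs(E).reshape(200, -1)                        # "first reshaping layer"
Y = P.reshape(200, -1)                                # targets: the original data

# A single linear layer stands in for the deep neural network so that the
# train/compare/converge control flow of steps B-D stays visible.
W = np.zeros((X.shape[1], Y.shape[1]))
lr, tol = 1e-2, 1e-3
for i in range(1, 501):                               # i fluctuates around 500
    batch = rng.choice(len(X), 32, replace=False)     # stochastic mini-batch
    err = X[batch] @ W - Y[batch]                     # i-th output vs. original data
    if np.mean(err ** 2) < tol:                       # preset convergence condition
        break                                         # step C: model is constructed
    W -= lr * X[batch].T @ err / len(batch)           # step D: update, i = i + 1
```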
  • FIG. 2 is a schematic flowchart of the refinement steps of step B in the first embodiment of the present invention. Specifically:
  • Step E: input the training data into the (i-1)-th deep neural network model so that the training data are array-reshaped in the first reshaping layer, which outputs the first reshaped data; the (i-1)-th deep neural network model comprises the first reshaping layer, three hidden layers, an output layer, and a second reshaping layer;
  • Step F: the first reshaped data are fed into the three hidden layers, each composed of a number of neurons, and then into the output layer, which outputs the processed data; the processed data are fed into the second reshaping layer for array reshaping, which outputs the second reshaped data; the activation function of the neurons is the linear rectification function, the number of neurons in each hidden layer corresponds to the format of the first reshaped data, the second reshaped data are the (i-1)-th output result after the training data are input into the (i-1)-th deep neural network model, and the format of the second reshaped data is the same as the format of the training data;
  • Step G: based on the mean squared error function and the stochastic gradient descent function, compare the second reshaped data with the original data corresponding to the training data to obtain a comparison result, and use the comparison result to optimize and update the (i-1)-th deep neural network model to obtain the i-th deep neural network model.
  • It should be noted that after the training data are input into the (i-1)-th deep neural network model and pass through the first reshaping layer, the three hidden layers, the output layer, and the second reshaping layer, the resulting second reshaped data are the (i-1)-th output result of inputting the training data into the (i-1)-th deep neural network model.
  • FIG. 3 is a schematic diagram of a composition of a deep neural network model according to a first embodiment of the present invention.
  • Preferably, 60,000 sets of training data are input into the (i-1)-th deep neural network model, which comprises a first reshaping layer, three hidden layers (hidden layer 1, hidden layer 2, and hidden layer 3), an output layer, and a second reshaping layer. Each of the 60,000 sets of training data is a 28*28-pixel array of encrypted data, which the first reshaping layer reshapes into a 1*784-pixel array of encrypted data; this 1*784-pixel array is the first reshaped data.
  • The first reshaped data pass through the three hidden layers and then the output layer, which outputs the processed data. Each hidden layer and the output layer contains 784 neurons, the 784 neurons form a fully connected neural network, the activation function of every neuron is the linear rectification function, and the processed data are a 1*784-pixel array of decrypted data.
  • The processed data are reshaped by the second reshaping layer into a 28*28-pixel array of decrypted data.
  • Based on the mean squared error function and the stochastic gradient descent function, the second reshaped data are compared with the original data corresponding to the training data to obtain the comparison result, and the comparison result is used to optimize and update the (i-1)-th deep neural network model to obtain the i-th deep neural network model, where the stochastic gradient descent function serves to speed up the training of the deep neural network model.
  • It should be emphasized that optimizing and updating the (i-1)-th deep neural network model with the comparison result mainly optimizes and updates the three hidden layers (hidden layer 1, hidden layer 2, and hidden layer 3) and the output layer, i.e., the weight parameters of the neural network (the parameters inside the neurons), so that the i-th output result produced by the i-th deep neural network model is closer to the original data corresponding to the training data than the (i-1)-th output result produced by the (i-1)-th deep neural network model. In other words, the decryption of the training data takes place mainly in the three hidden layers and the output layer.
  • In this embodiment of the invention, the second reshaped data (i.e., the (i-1)-th output result) are obtained by passing the training data through the first reshaping layer, the three hidden layers, the output layer, and the second reshaping layer. The second reshaped data are compared with the original data corresponding to the training data to obtain the comparison result, and the comparison result is used to optimize and update the (i-1)-th deep neural network model to obtain the i-th deep neural network model. The deep neural network model thus moves ever closer to the required decryption-model standard, and the stochastic gradient descent function speeds up the training of the deep neural network model and improves the training rate.
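  • For readers who want the described layer stack in code, one possible rendering in PyTorch follows (the patent names no framework, so PyTorch, the learning rate, and real-valued 28*28 inputs are all assumptions; the text does not say how the complex cipher image is fed to the network):

```python
import torch
from torch import nn

# 28*28 -> 1*784 -> three 784-neuron hidden layers -> 784-neuron output layer
# -> 28*28, with linear rectification on every neuron, as FIG. 3 is described.
model = nn.Sequential(
    nn.Flatten(),                       # first reshaping layer
    nn.Linear(784, 784), nn.ReLU(),     # hidden layer 1
    nn.Linear(784, 784), nn.ReLU(),     # hidden layer 2
    nn.Linear(784, 784), nn.ReLU(),     # hidden layer 3
    nn.Linear(784, 784), nn.ReLU(),     # output layer
    nn.Unflatten(1, (28, 28)),          # second reshaping layer
)
loss_fn = nn.MSELoss()                                    # mean squared error
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # stochastic gradient descent

def train_step(encrypted: torch.Tensor, original: torch.Tensor) -> float:
    """One optimize-and-update pass from the (i-1)-th model toward the i-th."""
    optimizer.zero_grad()
    loss = loss_fn(model(encrypted), original)  # compare with the original data
    loss.backward()
    optimizer.step()
    return loss.item()
```

  • A call such as train_step(torch.rand(32, 28, 28), torch.rand(32, 28, 28)) performs one update; only the layer shapes and the choice of mean squared error plus stochastic gradient descent come from the text.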
  • FIG. 4 is a schematic flowchart of the additional steps after step C in the first embodiment of the present invention. Specifically:
  • Step H: perform random phase encryption on multiple sets of original data to obtain test data;
  • Step I: input the test data into the constructed deep neural network model to obtain the test output result, and calculate the degree of correlation between the test output result and the original data corresponding to the test data;
  • Step J: when the degree of correlation is greater than or equal to a preset correlation coefficient, determine that the deep neural network model is a correct decryption model;
  • Step K: when the degree of correlation is less than the preset correlation coefficient, return to step A.
  • It should be noted that, referring to FIG. 3, after the deep neural network model has been trained roughly 500 times, the i-th output result of inputting the training data into the i-th deep neural network model is compared with the original data corresponding to the training data to obtain the i-th comparison result; when the i-th comparison result satisfies the preset convergence condition, the i-th deep neural network model is determined to be the constructed deep neural network model. Random phase encryption is then performed on another 10,000 sets of original data to obtain 10,000 sets of test data, the 10,000 sets of test data are input into the deep neural network model to obtain test output results, and the degree of correlation between the test output results and the original data corresponding to the test data is calculated. When the degree of correlation is greater than or equal to the preset correlation coefficient, the deep neural network model is determined to be a correct decryption model; otherwise, when the degree of correlation is less than the preset correlation coefficient, the construction of the deep neural network model is faulty and must be restarted, i.e., the procedure returns to step A. Preferably, the preset correlation coefficient is 0.8.
  • test data is used to estimate the correctness of the constructed deep neural network model, and the correctness of the constructed decryption model is ensured.
  • FIG. 5 is a schematic flowchart of the refinement steps of step I in the first embodiment of the present invention. Specifically:
  • Step L: input the test data into the constructed deep neural network model so that the test data are array-reshaped in the first reshaping layer, which outputs the first reshaped data; the deep neural network model comprises the first reshaping layer, three hidden layers, an output layer, and a second reshaping layer;
  • Step M: the first reshaped data are fed into the three hidden layers, each composed of a number of neurons, and then into the output layer, which outputs the processed data; the processed data are fed into the second reshaping layer for array reshaping, which outputs the second reshaped data, and the second reshaped data are the test output result obtained after the test data are input into the constructed deep neural network model;
  • Step N: use a correlation coefficient function to calculate the degree of correlation between the second reshaped data and the original data corresponding to the test data.
  • It should be noted that the test data input into the constructed deep neural network model pass through the first reshaping layer, the three hidden layers (hidden layer 1, hidden layer 2, and hidden layer 3), the output layer, and the second reshaping layer to yield the second reshaped data. The second reshaped data are the test output result of inputting the test data into the constructed deep neural network model.
  • In this embodiment of the invention, the correlation coefficient function is used to calculate the degree of correlation between the second reshaped data and the original data corresponding to the test data, which makes it easy to judge whether the constructed deep neural network model is correct.
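  • A minimal sketch of the step-N check, assuming the degree of correlation is Pearson's correlation coefficient over flattened images and that it is averaged over the test set (the text specifies neither the exact correlation coefficient function nor the aggregation):

```python
import numpy as np

def degree_of_correlation(output: np.ndarray, original: np.ndarray) -> float:
    """Pearson correlation coefficient between a test output and its original."""
    return float(np.corrcoef(output.ravel(), original.ravel())[0, 1])

def is_correct_decryption_model(outputs, originals, preset=0.8) -> bool:
    """Steps J/K decision, with the preferred preset correlation coefficient 0.8."""
    mean_corr = np.mean([degree_of_correlation(o, p)
                         for o, p in zip(outputs, originals)])
    return bool(mean_corr >= preset)
```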
  • In addition, in the first embodiment of the present invention, step A (performing random phase encryption on multiple sets of original data to obtain training data) and step H (performing random phase encryption on multiple sets of original data to obtain test data) can be merged into one step: random phase encryption is performed on multiple sets of original data, and the resulting encrypted data are divided into two parts, training data and test data. The training data are therefore encrypted in the same way as the test data, namely by random phase encryption.
  • The calculation formula for random phase encryption is:
  • E = LCT(LCT(LCT(P×M1)×M2)×…×Mn)
  • where E denotes the training data or the test data, LCT denotes the linear canonical transform, P denotes the original data, M1, M2, …, Mn denote random phase masks, and n is a positive integer.
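  • Read as code, the formula is a cascade in which each random phase mask multiplication is followed by a linear canonical transform. The sketch below makes the transform pluggable and substitutes the ordinary Fourier transform for the LCT, since a parameterized LCT implementation is beyond this text; both choices are assumptions:

```python
import numpy as np

def cascade_encrypt(P, masks, lct=np.fft.fft2):
    """E = LCT(LCT(LCT(P x M1) x M2) x ... x Mn) for any chosen transform."""
    E = P
    for M in masks:
        E = lct(E * M)   # multiply by the next random phase mask, then transform
    return E
```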
  • FIG. 6 is a schematic diagram of double random phase optical encryption according to the first embodiment of the present invention. Its encryption formula is expressed as:
  • E = ift(ft(P×M1)×M2)
  • where P denotes the original data, ft denotes the Fourier transform, ift denotes the inverse Fourier transform, E denotes the encrypted data (including the training data and the test data), and M1 and M2 denote random phase masks.
  • This encryption method is implemented with a 4f optical system (i.e., two lenses of focal length f, separated by 2f, with an object distance of f and an image distance of f), where P is a real-valued image, i.e., the original data, and E is the encrypted image, i.e., the encrypted data.
  • The phase-angle information of M1 and M2 consists of two-dimensional normally distributed random arrays whose values are randomly distributed in [0, 1]; the convolution and the mean of the two arrays are both 0, i.e., they are two mutually independent random white noises. M1 and M2 can therefore generate random phases lying in [0, 2π].
  • During encryption, the M1 random phase mask is placed against the real-valued image on the front focal plane of the first lens, the M2 random phase mask is then placed on the Fourier-transform plane, and the second lens performs an inverse Fourier transform, finally yielding the encrypted image E, which is generalized stationary white noise.
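  • The double random phase encryption just described translates directly into NumPy. Uniform phase angles in [0, 1] scaled to [0, 2π] are assumed here, since the text calls the arrays both normally distributed and distributed between [0, 1]:

```python
import numpy as np

def random_phase_mask(shape, rng):
    """Unit-modulus mask whose phase lies in [0, 2*pi]."""
    return np.exp(2j * np.pi * rng.random(shape))

def drpe(P, M1, M2):
    """E = ift(ft(P x M1) x M2): 4f double random phase encryption."""
    return np.fft.ifft2(np.fft.fft2(P * M1) * M2)

rng = np.random.default_rng(1)
P = rng.random((28, 28))                 # real-valued image (original data)
M1 = random_phase_mask(P.shape, rng)     # mask on the front focal plane
M2 = random_phase_mask(P.shape, rng)     # mask on the Fourier-transform plane
E = drpe(P, M1, M2)                      # white-noise-like encrypted image
```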
  • FIG. 7 is a schematic diagram of triple random phase optical encryption according to the first embodiment of the present invention. Its encryption formula is expressed as:
  • E = ift(ft(P×M1)×M2)×M3
  • where P denotes the original data, ft denotes the Fourier transform, ift denotes the inverse Fourier transform, E denotes the encrypted data (including the training data and the test data), and M1, M2, and M3 denote random phase masks.
  • This encryption method is implemented with a 4f optical system (i.e., two lenses of focal length f, separated by 2f, with an object distance of f and an image distance of f), where P is a real-valued image, i.e., the original data, and E is the encrypted image, i.e., the encrypted data.
  • The phase-angle information of M1, M2, and M3 consists of two-dimensional normally distributed random arrays whose values are randomly distributed in [0, 1]; M1, M2, and M3 can therefore generate random phases lying in [0, 2π].
  • During encryption, the M1 random phase mask is placed against the real-valued image on the front focal plane of the first lens, the M2 random phase mask is then placed on the Fourier-transform plane, the second lens performs an inverse Fourier transform, and the M3 random phase mask is placed on its back focal plane, finally yielding the encrypted image E, which is approximately generalized stationary white noise.
  • FIG. 8 is a schematic diagram of multiple random phase optical encryption according to the first embodiment of the present invention. Its encryption formula is expressed as:
  • E = ift(ft(ift(ft(P×M1)×M2)×M3)×…)×Mn
  • where P denotes the original data, ft denotes the Fourier transform, ift denotes the inverse Fourier transform, E denotes the encrypted data (including the training data and the test data), and M1, M2, M3, …, Mn denote random phase masks, n being a positive integer greater than 3.
  • This encryption method is implemented with an i-f optical system (i.e., i/2 lenses of focal length f, separated by 2f, with an object distance of f and an image distance of f), where P is a real-valued image, i.e., the original data, and E is the encrypted image, i.e., the encrypted data.
  • The phase-angle information of M1, M2, M3, …, Mn consists of two-dimensional normally distributed random arrays whose values are randomly distributed in [0, 1]; M1, M2, M3, …, Mn can therefore generate random phases lying in [0, 2π].
  • During encryption, the M1 random phase mask is placed against the real-valued image on the front focal plane of the first lens, the M2 random phase mask is then placed on the Fourier-transform plane, the second lens performs an inverse Fourier transform, and the M3 random phase mask is placed on its back focal plane; in the same way, Mn is placed on the focal plane of the last lens, finally yielding the encrypted image E, which is approximately generalized stationary white noise.
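  • Assuming the alternation of ft and ift in the printed formula continues for every lens pair, the n-mask cascade can be sketched as follows; with three masks it reduces exactly to the triple random phase formula ift(ft(P×M1)×M2)×M3 above:

```python
import numpy as np

def multi_phase_encrypt(P, masks):
    """E = ift(ft(ift(ft(P x M1) x M2) x M3) x ...) x Mn, for n >= 3 masks."""
    E = P
    for k, M in enumerate(masks[:-1]):
        E = E * M
        E = np.fft.fft2(E) if k % 2 == 0 else np.fft.ifft2(E)  # alternate ft/ift
    return E * masks[-1]          # Mn sits on the focal plane of the last lens
```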
  • In this embodiment of the invention, random phase encryption is performed on the original data. Although the specific form of random phase encryption varies, the construction method of the deep neural network model in the present invention can build decryption models that crack all of these types of random phase encryption, which increases the practicability of the construction method of the deep neural network model in the present invention.
  • FIG. 9 is a schematic structural diagram of a device for constructing a deep neural network model according to the second embodiment of the present invention. Specifically:
  • the first encryption module 10 is configured to perform random phase encryption on multiple sets of original data to obtain training data;
  • the training comparison module 20 is configured to train the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model and the i-th output result after the training data are input into the i-th deep neural network model, and to compare the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, where the initial value of i is 1 and the 0-th deep neural network model is the initial model;
  • the first determining module 30 is configured to determine, when the i-th comparison result satisfies the preset convergence condition, that the i-th deep neural network model is the constructed deep neural network model;
  • the first returning module 40 is configured to set i = i + 1 and return to the training comparison module 20 when the i-th comparison result does not satisfy the preset convergence condition.
  • In this embodiment of the invention, because the original data are random-phase encrypted and the resulting training data are fed into the deep neural network model, whose output is compared against the original data, the model is a decryption model capable of cracking random phase encryption, which solves the technical problem of the lack of an algorithm model that can crack random phase encryption.
  • FIG. 10 is a schematic structural diagram of the refinement modules of the training comparison module 20 according to the second embodiment of the present invention. Specifically:
  • the first reshaping module 201 is configured to input the training data into the (i-1)-th deep neural network model so that the training data are array-reshaped in the first reshaping layer, which outputs the first reshaped data; the (i-1)-th deep neural network model comprises the first reshaping layer, three hidden layers, an output layer, and a second reshaping layer;
  • the second reshaping module 202 is configured to feed the first reshaped data into the three hidden layers, each composed of a number of neurons, and then into the output layer, which outputs the processed data; the processed data are fed into the second reshaping layer for array reshaping, which outputs the second reshaped data; the activation function of the neurons is the linear rectification function, the number of neurons in each hidden layer corresponds to the format of the first reshaped data, the second reshaped data are the (i-1)-th output result after the training data are input into the (i-1)-th deep neural network model, and the format of the second reshaped data is the same as the format of the training data;
  • the calculation and update module 203 is configured to compare, based on the mean squared error function and the stochastic gradient descent function, the second reshaped data with the original data corresponding to the training data to obtain a comparison result, and to use the comparison result to optimize and update the (i-1)-th deep neural network model to obtain the i-th deep neural network model.
  • In this embodiment of the invention, the second reshaped data (i.e., the (i-1)-th output result) are obtained by passing the training data through the first reshaping layer, the three hidden layers, the output layer, and the second reshaping layer.
  • The second reshaped data are compared with the original data corresponding to the training data to obtain the comparison result, and the comparison result is used to optimize and update the (i-1)-th deep neural network model to obtain the i-th deep neural network model. The deep neural network model thus moves ever closer to the required decryption-model standard, and the stochastic gradient descent function speeds up the training of the deep neural network model and improves the training rate.
  • FIG. 11 is a schematic structural diagram of a device for constructing a deep neural network model according to the third embodiment of the present invention. In addition to the first encryption module 10, the training comparison module 20, the first determining module 30, and the first returning module 40 of the second embodiment, the device further includes:
  • a second encryption module 50 configured to perform random phase encryption on multiple sets of original data to obtain test data
  • the input calculation module 60 is configured to input the test data into the constructed deep neural network model to obtain the test output result, and to calculate the degree of correlation between the test output result and the original data corresponding to the test data;
  • the second determining module 70 is configured to determine that the deep neural network model is a correct decryption model when the degree of correlation is greater than or equal to the preset correlation coefficient;
  • the second returning module 80 is configured to return to the first encryption module 10 when the degree of correlation is less than the preset correlation coefficient.
  • In this embodiment of the invention, because the original data are random-phase encrypted and the resulting training data are fed into the deep neural network model, whose output is compared against the original data, the model is a decryption model capable of cracking random phase encryption, which solves the technical problem of the lack of an algorithm model that can crack random phase encryption.
  • In addition, the test data are used to assess the correctness of the constructed deep neural network model, which ensures the correctness of the constructed decryption model.
  • FIG. 12 is a schematic structural diagram of the refinement modules of the input calculation module 60 according to the third embodiment of the present invention. Specifically:
  • the third reshaping module 601 is configured to input the test data into the constructed deep neural network model so that the test data are array-reshaped in the first reshaping layer, which outputs the first reshaped data; the deep neural network model comprises the first reshaping layer, three hidden layers, an output layer, and a second reshaping layer;
  • the fourth reshaping module 602 is configured to feed the first reshaped data into the three hidden layers, each composed of a number of neurons, and then into the output layer, which outputs the processed data; the processed data are fed into the second reshaping layer for array reshaping, which outputs the second reshaped data, and the second reshaped data are the test output result obtained after the test data are input into the constructed deep neural network model;
  • the calculation module 603 is configured to use a correlation coefficient function to calculate the degree of correlation between the second reshaped data and the original data corresponding to the test data.
  • In this embodiment of the invention, the correlation coefficient function is used to calculate the degree of correlation between the second reshaped data and the original data corresponding to the test data, which makes it easy to judge whether the constructed deep neural network model is correct.
  • The calculation formula for the random phase encryption in the first encryption module 10 and the second encryption module 50 is:
  • E = LCT(LCT(LCT(P×M1)×M2)×…×Mn)
  • where E denotes the training data or the test data, LCT denotes the linear canonical transform, P denotes the original data, M1, M2, …, Mn denote random phase masks, and n is a positive integer.
  • In this embodiment of the invention, random phase encryption is performed on the original data. Although the specific form of random phase encryption varies, the construction method of the deep neural network model in the present invention can build decryption models that crack all of these types of random phase encryption, which increases the practicability of the construction method of the deep neural network model in the present invention.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed are a method and apparatus for constructing a deep neural network model. Random phase encryption is performed on original data to obtain training data; the (i-1)-th deep neural network model is trained with the training data to obtain the i-th deep neural network model; the training data are input into the i-th deep neural network model to obtain the i-th output result, which is compared with the original data corresponding to the training data; whether the comparison result satisfies a preset convergence condition is then judged; if it does, the i-th deep neural network model is determined to be the constructed deep neural network model; if it does not, i is set to i + 1 and the (i-1)-th deep neural network model is trained again with the training data. Because the training data fed into the deep neural network model yield an output that is compared against the original data, the model is a decryption model capable of cracking random phase encryption, which solves the technical problem of the lack of an algorithm model able to crack random phase encryption.

Description

Method and apparatus for constructing a deep neural network model

Technical Field
The present invention relates to the field of image processing, and in particular to a method and apparatus for constructing a deep neural network model.
Background
Deep learning is a new field of machine learning research. Its motivation is to build and simulate neural networks that analyze and learn the way the human brain does; it mimics the mechanisms of the human brain to interpret data. It is widely used in image recognition, big-data classification, and the like. In the cryptanalysis of big data, however, there is a lack of an algorithm model that can crack random phase encryption.
Summary of the Invention
The main object of the present invention is to provide a method and apparatus for constructing a deep neural network model, which can solve the technical problem that, in the cryptanalysis of big data, there is a lack of an algorithm model capable of cracking random phase encryption.
To achieve the above object, a first aspect of the present invention provides a method for constructing a deep neural network model, characterized in that the method comprises:
Step A: performing random phase encryption on multiple sets of original data to obtain training data;
Step B: training the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model and the i-th output result after the training data are input into the i-th deep neural network model, and comparing the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, where the initial value of i is 1 and the 0-th deep neural network model is an initial model;
Step C: when the i-th comparison result satisfies a preset convergence condition, determining that the i-th deep neural network model is the constructed deep neural network model;
Step D: when the i-th comparison result does not satisfy the preset convergence condition, setting i = i + 1 and returning to step B.
To achieve the above object, a second aspect of the present invention provides a device for constructing a deep neural network model, characterized in that the device comprises:
a first encryption module, configured to perform random phase encryption on multiple sets of original data to obtain training data;
a training comparison module, configured to train the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model and the i-th output result after the training data are input into the i-th deep neural network model, and to compare the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, where the initial value of i is 1 and the 0-th deep neural network model is an initial model;
a first determining module, configured to determine, when the i-th comparison result satisfies a preset convergence condition, that the i-th deep neural network model is the constructed deep neural network model;
a first returning module, configured to set i = i + 1 and return to the training comparison module when the i-th comparison result does not satisfy the preset convergence condition.
The present invention provides a method and apparatus for constructing a deep neural network model. Because the original data are random-phase encrypted and the training data obtained after encryption are fed into the deep neural network model, whose output is compared against the original data, the model is a decryption model capable of cracking random phase encryption, which solves the technical problem of the lack of an algorithm model capable of cracking random phase encryption.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Evidently, the drawings described below are only some embodiments of the present invention, and a person skilled in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a method for constructing a deep neural network model in the first embodiment of the present invention;
FIG. 2 is a schematic flowchart of the refinement steps of step B in the first embodiment of the present invention;
FIG. 3 is a schematic diagram of the composition of the deep neural network model in the first embodiment of the present invention;
FIG. 4 is a schematic flowchart of the additional steps after step C in the first embodiment of the present invention;
FIG. 5 is a schematic flowchart of the refinement steps of step I in the first embodiment of the present invention;
FIG. 6 is a schematic diagram of double random phase optical encryption provided by the first embodiment of the present invention;
FIG. 7 is a schematic diagram of triple random phase optical encryption provided by the first embodiment of the present invention;
FIG. 8 is a schematic diagram of multiple random phase optical encryption provided by the first embodiment of the present invention;
FIG. 9 is a schematic structural diagram of a device for constructing a deep neural network model in the second embodiment of the present invention;
FIG. 10 is a schematic structural diagram of the refinement modules of the training comparison module 20 in the second embodiment of the present invention;
FIG. 11 is a schematic structural diagram of a device for constructing a deep neural network model in the third embodiment of the present invention;
FIG. 12 is a schematic structural diagram of the refinement modules of the input calculation module 60 in the third embodiment of the present invention.
Detailed Description
To make the objects, features, and advantages of the present invention more apparent and understandable, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments of the present invention. Evidently, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The prior art suffers from the technical problem that, in the cryptanalysis of big data, there is a lack of an algorithm model capable of cracking random phase encryption.
To solve this technical problem, the present invention proposes a method and apparatus for constructing a deep neural network model. Because the original data are random-phase encrypted and the training data obtained after encryption are fed into the deep neural network model, whose output is compared against the original data, the model is a decryption model capable of cracking random phase encryption, which solves the technical problem of the lack of an algorithm model capable of cracking random phase encryption.
Please refer to FIG. 1, a schematic flowchart of a method for constructing a deep neural network model in the first embodiment of the present invention. The method specifically includes:
Step A: perform random phase encryption on multiple sets of original data to obtain training data;
Step B: train the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model and the i-th output result after the training data are input into the i-th deep neural network model, and compare the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, where the initial value of i is 1 and the 0-th deep neural network model is the initial model;
Step C: when the i-th comparison result satisfies the preset convergence condition, determine that the i-th deep neural network model is the constructed deep neural network model;
Step D: when the i-th comparison result does not satisfy the preset convergence condition, set i = i + 1 and return to step B.
It should be noted that, in the method for constructing the deep neural network model, it is preferable to perform random phase encryption on 60,000 sets of original data to obtain 60,000 sets of training data. The deep neural network model is trained with the 60,000 sets of training data for roughly 500 iterations, and the resulting deep neural network model is the constructed deep neural network model. That is, the i-th output result obtained after the 60,000 sets of training data are input into the i-th deep neural network model is compared with the original data corresponding to the training data; when the resulting i-th comparison result satisfies the preset convergence condition, the i-th deep neural network model satisfying the preset convergence condition is determined to be the constructed deep neural network model, and the value of i fluctuates around 500.
In this embodiment of the present invention, because the original data are random-phase encrypted and the training data obtained after encryption are fed into the deep neural network model, whose output is compared against the original data, the model is a decryption model capable of cracking random phase encryption, which solves the technical problem of the lack of an algorithm model that can crack random phase encryption.
Please refer to FIG. 2, a schematic flowchart of the refinement steps of step B in the first embodiment of the present invention. Specifically:
Step E: input the training data into the (i-1)-th deep neural network model so that the training data are array-reshaped in the first reshaping layer, which outputs the first reshaped data; the (i-1)-th deep neural network model comprises the first reshaping layer, three hidden layers, an output layer, and a second reshaping layer;
Step F: the first reshaped data are fed into the three hidden layers, each composed of a number of neurons, and then into the output layer, which outputs the processed data; the processed data are fed into the second reshaping layer for array reshaping, which outputs the second reshaped data; the activation function of the neurons is the linear rectification function, the number of neurons in each hidden layer corresponds to the format of the first reshaped data, the second reshaped data are the (i-1)-th output result after the training data are input into the (i-1)-th deep neural network model, and the format of the second reshaped data is the same as the format of the training data;
Step G: based on the mean squared error function and the stochastic gradient descent function, compare the second reshaped data with the original data corresponding to the training data to obtain a comparison result, and use the comparison result to optimize and update the (i-1)-th deep neural network model to obtain the i-th deep neural network model.
It should be noted that after the training data are input into the (i-1)-th deep neural network model and pass through the first reshaping layer, the three hidden layers, the output layer, and the second reshaping layer, the resulting second reshaped data are the (i-1)-th output result of inputting the training data into the (i-1)-th deep neural network model.
Specifically, please refer to FIG. 3, a schematic diagram of the composition of the deep neural network model in the first embodiment of the present invention. Preferably, 60,000 sets of training data are input into the (i-1)-th deep neural network model, which comprises a first reshaping layer, three hidden layers (hidden layer 1, hidden layer 2, and hidden layer 3), an output layer, and a second reshaping layer. Each of the 60,000 sets of training data is a 28*28-pixel array of encrypted data, which the first reshaping layer reshapes into a 1*784-pixel array of encrypted data; this 1*784-pixel array is the first reshaped data. The first reshaped data pass through the three hidden layers and then the output layer, which outputs the processed data; each hidden layer and the output layer contains 784 neurons, the 784 neurons form a fully connected neural network, the activation function of every neuron is the linear rectification function, and the processed data are a 1*784-pixel array of decrypted data. The processed data are reshaped by the second reshaping layer into a 28*28-pixel array of decrypted data. Based on the mean squared error function and the stochastic gradient descent function, the second reshaped data are compared with the original data corresponding to the training data to obtain the comparison result, and the comparison result is used to optimize and update the (i-1)-th deep neural network model to obtain the i-th deep neural network model, where the stochastic gradient descent function serves to speed up the training of the deep neural network model.
It should be emphasized that optimizing and updating the (i-1)-th deep neural network model with the comparison result mainly optimizes and updates the three hidden layers (hidden layer 1, hidden layer 2, and hidden layer 3) and the output layer, i.e., the weight parameters of the neural network (the parameters inside the neurons), so that the i-th output result produced by the i-th deep neural network model is closer to the original data corresponding to the training data than the (i-1)-th output result produced by the (i-1)-th deep neural network model. That is, the decryption of the training data takes place mainly in the three hidden layers and the output layer.
In this embodiment of the present invention, the training data are passed through the first reshaping layer, the three hidden layers, the output layer, and the second reshaping layer to obtain the second reshaped data (i.e., the (i-1)-th output result); the second reshaped data are compared with the original data corresponding to the training data to obtain the comparison result, and the comparison result is used to optimize and update the (i-1)-th deep neural network model to obtain the i-th deep neural network model. The deep neural network model thus moves ever closer to the required decryption-model standard, and the stochastic gradient descent function speeds up the training of the deep neural network model and improves the training rate.
Please refer to FIG. 4, a schematic flowchart of the additional steps after step C in the first embodiment of the present invention. Specifically:
Step H: perform random phase encryption on multiple sets of original data to obtain test data;
Step I: input the test data into the constructed deep neural network model to obtain the test output result, and calculate the degree of correlation between the test output result and the original data corresponding to the test data;
Step J: when the degree of correlation is greater than or equal to a preset correlation coefficient, determine that the deep neural network model is a correct decryption model;
Step K: when the degree of correlation is less than the preset correlation coefficient, return to step A.
It should be noted that, referring to FIG. 3, after the deep neural network model has been trained roughly 500 times, the i-th output result of inputting the training data into the i-th deep neural network model is compared with the original data corresponding to the training data to obtain the i-th comparison result; when the i-th comparison result satisfies the preset convergence condition, the i-th deep neural network model is determined to be the constructed deep neural network model. Random phase encryption is then performed on another 10,000 sets of original data to obtain 10,000 sets of test data, the 10,000 sets of test data are input into the deep neural network model to obtain test output results, and the degree of correlation between the test output results and the original data corresponding to the test data is calculated. When the degree of correlation is greater than or equal to the preset correlation coefficient, the deep neural network model is determined to be a correct decryption model; otherwise, when the degree of correlation is less than the preset correlation coefficient, the construction of the deep neural network model is faulty and must be restarted, i.e., the procedure returns to step A. Preferably, the preset correlation coefficient is 0.8.
In this embodiment of the present invention, the test data are used to assess the correctness of the constructed deep neural network model, which ensures the correctness of the constructed decryption model.
Please refer to FIG. 5, a schematic flowchart of the refinement steps of step I in the first embodiment of the present invention. Specifically:
Step L: input the test data into the constructed deep neural network model so that the test data are array-reshaped in the first reshaping layer, which outputs the first reshaped data; the deep neural network model comprises the first reshaping layer, three hidden layers, an output layer, and a second reshaping layer;
Step M: the first reshaped data are fed into the three hidden layers, each composed of a number of neurons, and then into the output layer, which outputs the processed data; the processed data are fed into the second reshaping layer for array reshaping, which outputs the second reshaped data, and the second reshaped data are the test output result obtained after the test data are input into the constructed deep neural network model;
Step N: use a correlation coefficient function to calculate the degree of correlation between the second reshaped data and the original data corresponding to the test data.
It should be noted that the test data input into the constructed deep neural network model pass through the first reshaping layer, the three hidden layers (hidden layer 1, hidden layer 2, and hidden layer 3), the output layer, and the second reshaping layer to obtain the second reshaped data. The second reshaped data are the test output result of inputting the test data into the constructed deep neural network model.
In this embodiment of the present invention, the correlation coefficient function is used to calculate the degree of correlation between the second reshaped data and the original data corresponding to the test data, which makes it easy to judge whether the constructed deep neural network model is correct.
In addition, in the first embodiment of the present invention, step A (performing random phase encryption on multiple sets of original data to obtain training data) and step H (performing random phase encryption on multiple sets of original data to obtain test data) can be merged into one step: random phase encryption is performed on multiple sets of original data, and the resulting encrypted data are divided into two parts, training data and test data. The training data are therefore encrypted in the same way as the test data, namely by random phase encryption. The calculation formula for random phase encryption is:
E = LCT(LCT(LCT(P×M1)×M2)×…×Mn)
where E denotes the training data or the test data, LCT denotes the linear canonical transform, P denotes the original data, M1, M2, …, Mn denote random phase masks, and n is a positive integer.
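As a sketch of the merged step A/step H pipeline, the snippet below encrypts every set once and then splits the cipher images into training and test parts; the counts are scaled down from the 60,000 training and 10,000 test sets of the embodiment, and double random phase encryption stands in for the general LCT cascade:

```python
import numpy as np

rng = np.random.default_rng(0)
originals = rng.random((700, 28, 28))   # stand-in for 70,000 sets of original data

# One pass of random phase encryption over all sets...
M1 = np.exp(2j * np.pi * rng.random((28, 28)))
M2 = np.exp(2j * np.pi * rng.random((28, 28)))
encrypted = np.fft.ifft2(np.fft.fft2(originals * M1) * M2)

# ...then the encrypted data are divided into training data and test data
# (600/100 here; 60,000/10,000 in the embodiment).
train_x, test_x = encrypted[:600], encrypted[600:]
train_y, test_y = originals[:600], originals[600:]
```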
Double random phase optical encryption, triple random phase optical encryption, and multiple random phase optical encryption are taken as examples below:
Please refer to FIG. 6, a schematic diagram of double random phase optical encryption provided by the first embodiment of the present invention. Its encryption formula is expressed as:
E = ift(ft(P×M1)×M2)
where P denotes the original data, ft denotes the Fourier transform, ift denotes the inverse Fourier transform, E denotes the encrypted data (including the training data and the test data), and M1 and M2 denote random phase masks. This encryption method is implemented with a 4f optical system (i.e., two lenses of focal length f, separated by 2f, with an object distance of f and an image distance of f), where P is a real-valued image, i.e., the original data, and E is the encrypted image, i.e., the encrypted data. The phase-angle information of M1 and M2 consists of two-dimensional normally distributed random arrays whose values are randomly distributed in [0, 1]; the convolution and the mean of the two arrays are both 0, i.e., they are two mutually independent random white noises. M1 and M2 can therefore generate random phases lying in [0, 2π]. During encryption, the M1 random phase mask is placed against the real-valued image on the front focal plane of the first lens, the M2 random phase mask is then placed on the Fourier-transform plane, and the second lens performs an inverse Fourier transform, finally yielding the encrypted image E; the encrypted data are generalized stationary white noise.
Please refer to FIG. 7, a schematic diagram of triple random phase optical encryption provided by the first embodiment of the present invention. Its encryption formula is expressed as:
E = ift(ft(P×M1)×M2)×M3
where P denotes the original data, ft denotes the Fourier transform, ift denotes the inverse Fourier transform, E denotes the encrypted data (including the training data and the test data), and M1, M2, and M3 denote random phase masks. This encryption method is implemented with a 4f optical system (i.e., two lenses of focal length f, separated by 2f, with an object distance of f and an image distance of f), where P is a real-valued image, i.e., the original data, and E is the encrypted image, i.e., the encrypted data. The phase-angle information of M1, M2, and M3 consists of two-dimensional normally distributed random arrays whose values are randomly distributed in [0, 1]; M1, M2, and M3 can therefore generate random phases lying in [0, 2π]. During encryption, the M1 random phase mask is placed against the real-valued image on the front focal plane of the first lens, the M2 random phase mask is then placed on the Fourier-transform plane, the second lens performs an inverse Fourier transform, and the M3 random phase mask is placed on its back focal plane, finally yielding the encrypted image E; the encrypted data are approximately generalized stationary white noise.
Please refer to FIG. 8, a schematic diagram of multiple random phase optical encryption provided by the first embodiment of the present invention. Its encryption formula is expressed as:
E = ift(ft(ift(ft(P×M1)×M2)×M3)×…)×Mn
where P denotes the original data, ft denotes the Fourier transform, ift denotes the inverse Fourier transform, E denotes the encrypted data (including the training data and the test data), and M1, M2, M3, …, Mn denote random phase masks, n being a positive integer greater than 3. This encryption method is implemented with an i-f optical system (i.e., i/2 lenses of focal length f, separated by 2f, with an object distance of f and an image distance of f), where P is a real-valued image, i.e., the original data, and E is the encrypted image, i.e., the encrypted data. The phase-angle information of M1, M2, M3, …, Mn consists of two-dimensional normally distributed random arrays whose values are randomly distributed in [0, 1]; M1, M2, M3, …, Mn can therefore generate random phases lying in [0, 2π]. During encryption, the M1 random phase mask is placed against the real-valued image on the front focal plane of the first lens, the M2 random phase mask is then placed on the Fourier-transform plane, the second lens performs an inverse Fourier transform, and the M3 random phase mask is placed on its back focal plane; in the same way, Mn is placed on the focal plane of the last lens, finally yielding the encrypted image E; the encrypted data are approximately generalized stationary white noise.
In this embodiment of the present invention, random phase encryption is performed on the original data. Although the specific form of random phase encryption varies, the construction method of the deep neural network model in the present invention can build decryption models that crack all of these types of random phase encryption, which increases the practicability of the construction method of the deep neural network model in the present invention.
Please refer to FIG. 9, a schematic structural diagram of a device for constructing a deep neural network model in the second embodiment of the present invention. Specifically:
the first encryption module 10, configured to perform random phase encryption on multiple sets of original data to obtain training data;
the training comparison module 20, configured to train the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model and the i-th output result after the training data are input into the i-th deep neural network model, and to compare the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, where the initial value of i is 1 and the 0-th deep neural network model is the initial model;
the first determining module 30, configured to determine, when the i-th comparison result satisfies the preset convergence condition, that the i-th deep neural network model is the constructed deep neural network model;
the first returning module 40, configured to set i = i + 1 and return to the training comparison module 20 when the i-th comparison result does not satisfy the preset convergence condition.
For the relevant description of this embodiment of the present invention, please refer to the relevant description of the first embodiment of the present invention, which is not repeated here.
In this embodiment of the present invention, because the original data are random-phase encrypted and the training data obtained after encryption are fed into the deep neural network model, whose output is compared against the original data, the model is a decryption model capable of cracking random phase encryption, which solves the technical problem of the lack of an algorithm model that can crack random phase encryption.
Please refer to FIG. 10, a schematic structural diagram of the refinement modules of the training comparison module 20 in the second embodiment of the present invention. Specifically:
the first reshaping module 201, configured to input the training data into the (i-1)-th deep neural network model so that the training data are array-reshaped in the first reshaping layer, which outputs the first reshaped data; the (i-1)-th deep neural network model comprises the first reshaping layer, three hidden layers, an output layer, and a second reshaping layer;
the second reshaping module 202, configured to feed the first reshaped data into the three hidden layers, each composed of a number of neurons, and then into the output layer, which outputs the processed data; the processed data are fed into the second reshaping layer for array reshaping, which outputs the second reshaped data; the activation function of the neurons is the linear rectification function, the number of neurons in each hidden layer corresponds to the format of the first reshaped data, the second reshaped data are the (i-1)-th output result after the training data are input into the (i-1)-th deep neural network model, and the format of the second reshaped data is the same as the format of the training data;
the calculation and update module 203, configured to compare, based on the mean squared error function and the stochastic gradient descent function, the second reshaped data with the original data corresponding to the training data to obtain a comparison result, and to use the comparison result to optimize and update the (i-1)-th deep neural network model to obtain the i-th deep neural network model.
For the relevant description of this embodiment of the present invention, please refer to the relevant description of the first embodiment of the present invention, which is not repeated here.
In this embodiment of the present invention, the training data are passed through the first reshaping layer, the three hidden layers, the output layer, and the second reshaping layer to obtain the second reshaped data (i.e., the (i-1)-th output result); the second reshaped data are compared with the original data corresponding to the training data to obtain the comparison result, and the comparison result is used to optimize and update the (i-1)-th deep neural network model to obtain the i-th deep neural network model. The deep neural network model thus moves ever closer to the required decryption-model standard, and the stochastic gradient descent function speeds up the training of the deep neural network model and improves the training rate.
Please refer to FIG. 11, a schematic structural diagram of a device for constructing a deep neural network model in the third embodiment of the present invention. In addition to the first encryption module 10, the training comparison module 20, the first determining module 30, and the first returning module 40 of the second embodiment of the present invention, the device further includes:
the second encryption module 50, configured to perform random phase encryption on multiple sets of original data to obtain test data;
the input calculation module 60, configured to input the test data into the constructed deep neural network model to obtain the test output result, and to calculate the degree of correlation between the test output result and the original data corresponding to the test data;
the second determining module 70, configured to determine, when the degree of correlation is greater than or equal to the preset correlation coefficient, that the deep neural network model is a correct decryption model;
the second returning module 80, configured to return to the first encryption module 10 when the degree of correlation is less than the preset correlation coefficient.
For the relevant description of this embodiment of the present invention, please refer to the relevant descriptions of the first and second embodiments of the present invention, which are not repeated here.
In this embodiment of the present invention, because the original data are random-phase encrypted and the training data obtained after encryption are fed into the deep neural network model, whose output is compared against the original data, the model is a decryption model capable of cracking random phase encryption, which solves the technical problem of the lack of an algorithm model that can crack random phase encryption. In addition, the test data are used to assess the correctness of the constructed deep neural network model, which ensures the correctness of the constructed decryption model.
Please refer to FIG. 12, a schematic structural diagram of the refinement modules of the input calculation module 60 in the third embodiment of the present invention. Specifically:
the third reshaping module 601, configured to input the test data into the constructed deep neural network model so that the test data are array-reshaped in the first reshaping layer, which outputs the first reshaped data; the deep neural network model comprises the first reshaping layer, three hidden layers, an output layer, and a second reshaping layer;
the fourth reshaping module 602, configured to feed the first reshaped data into the three hidden layers, each composed of a number of neurons, and then into the output layer, which outputs the processed data; the processed data are fed into the second reshaping layer for array reshaping, which outputs the second reshaped data, and the second reshaped data are the test output result obtained after the test data are input into the constructed deep neural network model;
the calculation module 603, configured to use a correlation coefficient function to calculate the degree of correlation between the second reshaped data and the original data corresponding to the test data.
For the relevant description of this embodiment of the present invention, please refer to the relevant descriptions of the first and second embodiments of the present invention, which are not repeated here.
In this embodiment of the present invention, the correlation coefficient function is used to calculate the degree of correlation between the second reshaped data and the original data corresponding to the test data, which makes it easy to judge whether the constructed deep neural network model is correct.
In addition, the calculation formula for the random phase encryption in the first encryption module 10 and the second encryption module 50 is:
E = LCT(LCT(LCT(P×M1)×M2)×…×Mn)
where E denotes the training data or the test data, LCT denotes the linear canonical transform, P denotes the original data, M1, M2, …, Mn denote random phase masks, and n is a positive integer.
For the relevant description of random phase encryption, please refer to the relevant description of the first embodiment of the present invention, which is not repeated here.
In this embodiment of the present invention, random phase encryption is performed on the original data. Although the specific form of random phase encryption varies, the construction method of the deep neural network model in the present invention can build decryption models that crack all of these types of random phase encryption, which increases the practicability of the construction method of the deep neural network model in the present invention.
It should be noted that, for brevity of description, each of the foregoing method embodiments is expressed as a series of action combinations; however, a person skilled in the art should know that the present invention is not limited by the described order of actions, because according to the present invention some steps may be performed in other orders or simultaneously. Furthermore, a person skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily all required by the present invention.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the relevant descriptions of other embodiments.
The above is a description of the method and apparatus for constructing a deep neural network model provided by the present invention. A person skilled in the art may, according to the ideas of the embodiments of the present invention, make changes in the specific implementation and scope of application. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

  1. A method for constructing a deep neural network model, characterized in that the method comprises:
    Step A: performing random phase encryption on multiple sets of original data to obtain training data;
    Step B: training the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model and the i-th output result after the training data are input into the i-th deep neural network model, and comparing the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, the initial value of i being 1 and the 0-th deep neural network model being an initial model;
    Step C: when the i-th comparison result satisfies a preset convergence condition, determining that the i-th deep neural network model is the constructed deep neural network model;
    Step D: when the i-th comparison result does not satisfy the preset convergence condition, setting i = i + 1 and returning to step B.
  2. The method according to claim 1, characterized in that step B specifically comprises the following steps:
    Step E: inputting the training data into the (i-1)-th deep neural network model so that the training data are array-reshaped in the first reshaping layer, which outputs the first reshaped data, the (i-1)-th deep neural network model comprising the first reshaping layer, three hidden layers, an output layer, and a second reshaping layer;
    Step F: feeding the first reshaped data into the three hidden layers, each composed of a number of neurons, and then into the output layer, which outputs processed data; the processed data are fed into the second reshaping layer for array reshaping, which outputs the second reshaped data; the activation function of the neurons is the linear rectification function, the number of neurons in each hidden layer corresponds to the format of the first reshaped data, the second reshaped data are the (i-1)-th output result after the training data are input into the (i-1)-th deep neural network model, and the format of the second reshaped data is the same as the format of the training data;
    Step G: based on the mean squared error function and the stochastic gradient descent function, comparing the second reshaped data with the original data corresponding to the training data to obtain a comparison result, and using the comparison result to optimize and update the (i-1)-th deep neural network model to obtain the i-th deep neural network model.
  3. The method according to claim 1, characterized in that the method further comprises, after step C, the following steps:
    Step H: performing random phase encryption on multiple sets of original data to obtain test data;
    Step I: inputting the test data into the constructed deep neural network model to obtain a test output result, and calculating the degree of correlation between the test output result and the original data corresponding to the test data;
    Step J: when the degree of correlation is greater than or equal to a preset correlation coefficient, determining that the deep neural network model is a correct decryption model;
    Step K: when the degree of correlation is less than the preset correlation coefficient, returning to step A.
  4. The method according to claim 3, characterized in that step I specifically comprises the following steps:
    Step L: inputting the test data into the constructed deep neural network model so that the test data are array-reshaped in the first reshaping layer, which outputs the first reshaped data, the deep neural network model comprising the first reshaping layer, three hidden layers, an output layer, and a second reshaping layer;
    Step M: feeding the first reshaped data into the three hidden layers, each composed of a number of neurons, and then into the output layer, which outputs processed data; the processed data are fed into the second reshaping layer for array reshaping, which outputs the second reshaped data, the second reshaped data being the test output result obtained after the test data are input into the constructed deep neural network model;
    Step N: using a correlation coefficient function to calculate the degree of correlation between the second reshaped data and the original data corresponding to the test data.
  5. The method according to claim 4, characterized in that the calculation formula for the random phase encryption is:
    E = LCT(LCT(LCT(P×M1)×M2)×…×Mn)
    where E denotes the training data or the test data, LCT denotes the linear canonical transform, P denotes the original data, M1, M2, …, Mn denote random phase masks, and n is a positive integer.
  6. A device for constructing a deep neural network model, characterized in that the device comprises:
    a first encryption module, configured to perform random phase encryption on multiple sets of original data to obtain training data;
    a training comparison module, configured to train the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model and the i-th output result after the training data are input into the i-th deep neural network model, and to compare the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, the initial value of i being 1 and the 0-th deep neural network model being an initial model;
    a first determining module, configured to determine, when the i-th comparison result satisfies a preset convergence condition, that the i-th deep neural network model is the constructed deep neural network model;
    a first returning module, configured to set i = i + 1 and return to the training comparison module when the i-th comparison result does not satisfy the preset convergence condition.
  7. The device according to claim 6, characterized in that the training comparison module specifically comprises the following modules:
    a first reshaping module, configured to input the training data into the (i-1)-th deep neural network model so that the training data are array-reshaped in the first reshaping layer, which outputs the first reshaped data, the (i-1)-th deep neural network model comprising the first reshaping layer, three hidden layers, an output layer, and a second reshaping layer;
    a second reshaping module, configured to feed the first reshaped data into the three hidden layers, each composed of a number of neurons, and then into the output layer, which outputs processed data, the processed data being fed into the second reshaping layer for array reshaping, which outputs the second reshaped data; the activation function of the neurons is the linear rectification function, the number of neurons in each hidden layer corresponds to the format of the first reshaped data, the second reshaped data are the (i-1)-th output result after the training data are input into the (i-1)-th deep neural network model, and the format of the second reshaped data is the same as the format of the training data;
    a calculation and update module, configured to compare, based on the mean squared error function and the stochastic gradient descent function, the second reshaped data with the original data corresponding to the training data to obtain a comparison result, and to use the comparison result to optimize and update the (i-1)-th deep neural network model to obtain the i-th deep neural network model.
  8. The device according to claim 6, characterized in that the device further comprises, after the first determining module, the following modules:
    a second encryption module, configured to perform random phase encryption on multiple sets of original data to obtain test data;
    an input calculation module, configured to input the test data into the constructed deep neural network model to obtain a test output result, and to calculate the degree of correlation between the test output result and the original data corresponding to the test data;
    a second determining module, configured to determine, when the degree of correlation is greater than or equal to a preset correlation coefficient, that the deep neural network model is a correct decryption model;
    a second returning module, configured to return to the first encryption module when the degree of correlation is less than the preset correlation coefficient.
  9. The device according to claim 8, characterized in that the input calculation module specifically comprises the following modules:
    a third reshaping module, configured to input the test data into the constructed deep neural network model so that the test data are array-reshaped in the first reshaping layer, which outputs the first reshaped data, the deep neural network model comprising the first reshaping layer, three hidden layers, an output layer, and a second reshaping layer;
    a fourth reshaping module, configured to feed the first reshaped data into the three hidden layers, each composed of a number of neurons, and then into the output layer, which outputs processed data, the processed data being fed into the second reshaping layer for array reshaping, which outputs the second reshaped data, the second reshaped data being the test output result obtained after the test data are input into the constructed deep neural network model;
    a calculation module, configured to use a correlation coefficient function to calculate the degree of correlation between the second reshaped data and the original data corresponding to the test data.
  10. The device according to claim 9, characterized in that the calculation formula for the random phase encryption is:
    E = LCT(LCT(LCT(P×M1)×M2)×…×Mn)
    where E denotes the training data or the test data, LCT denotes the linear canonical transform, P denotes the original data, M1, M2, …, Mn denote random phase masks, and n is a positive integer.
PCT/CN2018/087012 2018-05-16 2018-05-16 Method and apparatus for constructing a deep neural network model WO2019218243A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/087012 WO2019218243A1 (zh) 2018-05-16 2018-05-16 Method and apparatus for constructing a deep neural network model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/087012 WO2019218243A1 (zh) 2018-05-16 2018-05-16 Method and apparatus for constructing a deep neural network model

Publications (1)

Publication Number Publication Date
WO2019218243A1 true WO2019218243A1 (zh) 2019-11-21

Family

ID=68539335

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/087012 WO2019218243A1 (zh) 2018-05-16 2018-05-16 一种深度神经网络模型的构建方法和装置

Country Status (1)

Country Link
WO (1) WO2019218243A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106485201A * 2016-09-09 2017-03-08 首都师范大学 Color face recognition method in the hypercomplex-number encryption domain
US20170090418A1 (en) * 2015-09-25 2017-03-30 City University Of Hong Kong Holographic encryption of multi-dimensional images and decryption of encrypted multi-dimensional images
CN107437019A * 2017-07-31 2017-12-05 广东欧珀移动通信有限公司 Identity verification method and apparatus using lip-language recognition
CN107886551A * 2017-11-12 2018-04-06 四川大学 Optical image encryption method with double-cylindrical-surface random phase encoding


Similar Documents

Publication Publication Date Title
US9390373B2 (en) Neural network and method of neural network training
US10963817B2 (en) Training tree-based machine-learning modeling algorithms for predicting outputs and generating explanatory data
CN105224984B Data category recognition method and apparatus based on a deep neural network
EP3467724A1 (en) Device and method for generating artificial neural network-based prediction model
CN108921282A Method and apparatus for constructing a deep neural network model
US20200143137A1 (en) Neural networks for biometric recognition
TWI655587B Neural network and method of neural network training
CN109120652A Network security situation prediction based on differential WGAN
JP2016532953A5 (zh)
KR102061935B1 Information transfer method using a deep neural network, and apparatus therefor
JP2022513858A Data processing method, data processing device, computer program, and computer device for generating facial images
Dong et al. Dropping activation outputs with localized first-layer deep network for enhancing user privacy and data security
JP7140317B2 Method for training a data embedding network that synthesizes original data and mark data to generate marked data, method for testing it, and learning device using the same
CN114363043B Asynchronous federated learning method based on verifiable aggregation and differential privacy in peer-to-peer networks
JP7411758B2 Prediction of molecular properties of molecular variants using residue-specific molecular structure features
CN106203628A Optimization method and system for enhancing the robustness of deep learning algorithms
JP2019197311A Learning method, learning program, and learning device
Valdez et al. A framework for interactive structural design exploration
Xiao Eigenspace restructuring: a principle of space and frequency in neural networks
WO2019244803A1 Answer learning device, answer learning method, answer generation device, answer generation method, and program
WO2019218243A1 Method and apparatus for constructing a deep neural network model
CN116562366A Federated learning method based on feature selection and feature alignment
CN116503320A Hyperspectral image anomaly detection method, apparatus, device, and readable storage medium
TWI524307B Method and system for estimating depth values of two-dimensional images
KR102340387B1 Method for learning brain connectivity and system therefor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18918750

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19.03.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18918750

Country of ref document: EP

Kind code of ref document: A1