CN108921282A - Construction method and device of a deep neural network model - Google Patents

Construction method and device of a deep neural network model

Info

Publication number
CN108921282A
Authority
CN
China
Prior art keywords
data
remodeling
network model
neural network
deep neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810465595.6A
Other languages
Chinese (zh)
Other versions
CN108921282B (en)
Inventor
何文奇
海涵
彭翔
刘晓利
廖美华
卢大江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Qizhizhi Intellectual Property Operation Co.,Ltd.
Sichuan Hisai Digital Technology Group Co., Ltd.
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University
Priority to CN201810465595.6A
Publication of CN108921282A
Application granted
Publication of CN108921282B
Legal status: Active, Current
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Complex Calculations (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a construction method and device of a deep neural network model. Random phase encryption is performed on original data to obtain training data; the (i-1)-th deep neural network model is trained with the training data to obtain the i-th deep neural network model; the training data is input into the i-th deep neural network model to obtain the i-th output result, which is compared with the original data corresponding to the training data; it is then judged whether the comparison result meets a preset convergence condition. If it does, the i-th deep neural network model is determined to be the constructed deep neural network model; if it does not, i = i + 1 is set and the (i-1)-th deep neural network model is trained again with the training data. Since the training data is input into the deep neural network model and the resulting output is compared with the original data, the model is a decryption model capable of cracking random phase encryption, which solves the technical problem of lacking an algorithm model capable of cracking random phase encryption.

Description

Construction method and device of a deep neural network model
Technical field
The present invention relates to the field of image processing, and more particularly to a construction method and device of a deep neural network model.
Background technique
Deep learning is a new field in machine learning research. Its motivation is to build neural networks that simulate the analytical learning of the human brain, imitating its mechanisms to interpret data. It is widely used in image recognition, big-data classification, and so on. However, in the field of big-data cryptanalysis, there is a lack of algorithm models capable of cracking random phase encryption.
Summary of the invention
The main purpose of the present invention is to provide a construction method and device of a deep neural network model, which can solve the technical problem that, in the field of big-data cryptanalysis, there is a lack of algorithm models capable of cracking random phase encryption.
To achieve the above object, a first aspect of the present invention provides a construction method of a deep neural network model, the method comprising:
Step A: performing random phase encryption on multiple groups of original data to obtain training data;
Step B: training the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model and the i-th output result produced by inputting the training data into the i-th deep neural network model, and comparing the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, where the initial value of i is 1 and the 0th deep neural network model is the initial model;
Step C: when the i-th comparison result meets a preset convergence condition, determining that the i-th deep neural network model is the constructed deep neural network model;
Step D: when the i-th comparison result does not meet the preset convergence condition, setting i = i + 1 and returning to execute Step B.
To achieve the above object, a second aspect of the present invention provides a construction device of a deep neural network model, the device comprising:
a first encrypting module, configured to perform random phase encryption on multiple groups of original data to obtain training data;
a training comparison module, configured to train the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model and the i-th output result produced by inputting the training data into the i-th deep neural network model, and to compare the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, where the initial value of i is 1 and the 0th deep neural network model is the initial model;
a first determining module, configured to determine, when the i-th comparison result meets a preset convergence condition, that the i-th deep neural network model is the constructed deep neural network model;
a first return module, configured to set i = i + 1 and return to the training comparison module when the i-th comparison result does not meet the preset convergence condition.
The present invention provides a construction method and device of a deep neural network model. Since random phase encryption is performed on the original data, the training data obtained after encryption is input into the deep neural network model, and the resulting output is compared with the original data, the model is a decryption model capable of cracking random phase encryption, which solves the technical problem of lacking an algorithm model capable of cracking random phase encryption.
Detailed description of the invention
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a construction method of a deep neural network model in a first embodiment of the present invention;
Fig. 2 is a schematic flowchart of the refinement steps of Step B in the first embodiment of the present invention;
Fig. 3 is a schematic diagram of the composition of the deep neural network model in the first embodiment of the present invention;
Fig. 4 is a schematic flowchart of the steps added after Step C in the first embodiment of the present invention;
Fig. 5 is a schematic flowchart of the refinement steps of Step I in the first embodiment of the present invention;
Fig. 6 is a schematic diagram of double random phase optical encryption provided by the first embodiment of the present invention;
Fig. 7 is a schematic diagram of triple random phase optical encryption provided by the first embodiment of the present invention;
Fig. 8 is a schematic diagram of multiple random phase optical encryption provided by the first embodiment of the present invention;
Fig. 9 is a schematic structural diagram of a construction device of a deep neural network model in a second embodiment of the present invention;
Fig. 10 is a schematic structural diagram of the refinement modules of the training comparison module 20 in the second embodiment of the present invention;
Fig. 11 is a schematic structural diagram of a construction device of a deep neural network model in a third embodiment of the present invention;
Fig. 12 is a schematic structural diagram of the refinement modules of the input computing module 60 in the third embodiment of the present invention.
Specific embodiment
To make the purpose, features, and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention are described clearly and completely below in conjunction with the drawings in those embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
In the prior art, there exists the technical problem that, in the field of big-data cryptanalysis, there is a lack of algorithm models capable of cracking random phase encryption.
To solve the above technical problem, the present invention proposes a construction method and device of a deep neural network model. Since random phase encryption is performed on the original data, the training data obtained after encryption is input into the deep neural network model, and the resulting output is compared with the original data, the model is a decryption model capable of cracking random phase encryption, which solves the technical problem of lacking an algorithm model capable of cracking random phase encryption.
Referring to Fig. 1, which is a schematic flowchart of a construction method of a deep neural network model in a first embodiment of the present invention, the method specifically includes:
Step A: performing random phase encryption on multiple groups of original data to obtain training data;
Step B: training the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model and the i-th output result produced by inputting the training data into the i-th deep neural network model, and comparing the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, where the initial value of i is 1 and the 0th deep neural network model is the initial model;
Step C: when the i-th comparison result meets the preset convergence condition, determining that the i-th deep neural network model is the constructed deep neural network model;
Step D: when the i-th comparison result does not meet the preset convergence condition, setting i = i + 1 and returning to Step B.
It should be noted that, in this construction method, a preferred choice is to perform random phase encryption on 60,000 groups of original data to obtain 60,000 groups of training data. The deep neural network model is trained with the 60,000 groups of training data for about 500 iterations, and the resulting deep neural network model is the constructed deep neural network model. That is, the 60,000 groups of training data are input into the i-th deep neural network model to obtain the i-th output result, the i-th output result is compared with the original data corresponding to the training data, the resulting i-th comparison result meets the preset convergence condition, and the i-th deep neural network model that meets the preset convergence condition is determined to be the constructed deep neural network model, with the value of i fluctuating around 500.
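To make the Step B to Step D loop concrete, a minimal Python sketch is given below. The framework (tf.keras), the placeholder arrays standing in for the 60,000 encrypted/original image pairs, the build_model() helper (sketched after the Step B refinement below), and the MSE threshold standing in for the preset convergence condition are all illustrative assumptions, not details fixed by the patent.

```python
import numpy as np

# Placeholder data standing in for the patent's example: 60,000 groups of
# random-phase-encrypted 28x28 images (inputs) and their originals (targets).
x_train = np.random.rand(60000, 28, 28).astype("float32")  # encrypted training data
y_train = np.random.rand(60000, 28, 28).astype("float32")  # corresponding original data

model = build_model()  # the 0th deep neural network model (initial model); see sketch below

epsilon = 1e-3  # illustrative stand-in for the preset convergence condition
for i in range(1, 1001):
    # Step B: train the (i-1)-th model for one round to obtain the i-th model
    model.fit(x_train, y_train, batch_size=128, epochs=1, verbose=0)
    # compare the i-th output result with the originals -> the i-th comparison result
    mse = float(np.mean((model.predict(x_train, verbose=0) - y_train) ** 2))
    if mse < epsilon:
        break  # Step C: convergence condition met; the i-th model is the constructed model
    # Step D: otherwise i = i + 1 and the loop returns to Step B
# per the text, i settles at around 500 iterations in the 60,000-sample example
```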
In the embodiment of the present invention, since random phase encryption is performed on the original data, the training data obtained after encryption is input into the deep neural network model, and the resulting output is compared with the original data, the model is a decryption model capable of cracking random phase encryption, which solves the technical problem of lacking an algorithm model capable of cracking random phase encryption.
Referring to Fig. 2, which is a schematic flowchart of the refinement steps of Step B in the first embodiment of the present invention. Specifically:
Step E: inputting the training data into the (i-1)-th deep neural network model, performing array remodeling on the training data in a first remodeling layer, and outputting first remodeling data, where the (i-1)-th deep neural network model includes the first remodeling layer, three hidden layers, an output layer, and a second remodeling layer;
Step F: inputting the first remodeling data into the three hidden layers composed of several neurons, inputting the result into the output layer to output processing data, inputting the processing data into the second remodeling layer for array remodeling, and outputting second remodeling data, where the activation function of the neurons is the rectified linear unit (ReLU) function, the number of neurons in each hidden layer corresponds to the format of the first remodeling data, the second remodeling data is the (i-1)-th output result produced by inputting the training data into the (i-1)-th deep neural network model, and the format of the second remodeling data is identical to that of the training data;
Step G: based on a mean square deviation function and a stochastic gradient descent function, comparing the second remodeling data with the original data corresponding to the training data to obtain a comparison result, and using the comparison result to optimize and update the (i-1)-th deep neural network model to obtain the i-th deep neural network model.
It should be noted that the training data is input into the (i-1)-th deep neural network model, and after passing through the first remodeling layer, the three hidden layers, the output layer, and the second remodeling layer, the resulting second remodeling data is the (i-1)-th output result of inputting the training data into the (i-1)-th deep neural network model.
Specifically, referring to Fig. 3, which is a schematic diagram of the composition of the deep neural network model in the first embodiment of the present invention. Preferably, the 60,000 groups of training data are input into the (i-1)-th deep neural network model, which includes the first remodeling layer, three hidden layers (hidden layer 1, hidden layer 2, and hidden layer 3), the output layer, and the second remodeling layer. The format of the 60,000 groups of training data is encrypted data of 28*28 pixels, which the first remodeling layer remodels into encrypted data of 1*784 pixels; this 1*784-pixel encrypted data is the first remodeling data. The first remodeling data passes through the three hidden layers, and the output layer outputs processing data, where each hidden layer and the output layer contain 784 neurons, the 784 neurons form a fully connected neural network, the activation function of each neuron is the rectified linear unit function, and the format of the processing data is decrypted data of 1*784 pixels. The second remodeling layer then remodels the processing data into decrypted data of 28*28 pixels. Based on the mean square deviation function and the stochastic gradient descent function, the second remodeling data is compared with the original data corresponding to the training data to obtain a comparison result, and the comparison result is used to optimize and update the (i-1)-th deep neural network model to obtain the i-th deep neural network model, where the stochastic gradient descent function serves to accelerate the training of the deep neural network model.
It should be emphasized that, when the (i-1)-th deep neural network model is optimized and updated with the comparison result, the optimization mainly updates the three hidden layers (hidden layer 1, hidden layer 2, and hidden layer 3) and the output layer, i.e., it updates the weight parameters of the neural network (the parameters in the neurons), so that the i-th output result of the i-th deep neural network model is closer to the original data corresponding to the training data than the (i-1)-th output result of the (i-1)-th deep neural network model. In other words, the decryption of the training data takes place mainly in the three hidden layers and the output layer.
In the embodiment of the present invention, the training data is input into the first remodeling layer, the three hidden layers, the output layer, and the second remodeling layer to obtain the second remodeling data (i.e., the (i-1)-th output result); the second remodeling data is compared with the original data corresponding to the training data to obtain a comparison result, and the comparison result is used to optimize and update the (i-1)-th deep neural network model to obtain the i-th deep neural network model. The deep neural network model thus gets ever closer to a satisfactory decryption model, and the stochastic gradient descent function accelerates the training of the deep neural network model and improves the training rate.
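A minimal sketch of the Fig. 3 architecture is given below, assuming the 28*28 example format described above. The choice of tf.keras and its layer API is an assumption of the sketch; only the layer structure (two remodeling/reshape layers around three hidden layers and an output layer of 784 ReLU neurons, trained with MSE loss and stochastic gradient descent) follows the text.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_model() -> tf.keras.Model:
    """Hypothetical rendering of the Fig. 3 network."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28)),        # encrypted data of 28*28 pixels
        layers.Reshape((784,)),                # first remodeling layer: 28*28 -> 1*784
        layers.Dense(784, activation="relu"),  # hidden layer 1 (784 ReLU neurons)
        layers.Dense(784, activation="relu"),  # hidden layer 2
        layers.Dense(784, activation="relu"),  # hidden layer 3
        layers.Dense(784, activation="relu"),  # output layer (784 ReLU neurons)
        layers.Reshape((28, 28)),              # second remodeling layer: 1*784 -> 28*28
    ])
    # Step G: mean square deviation (MSE) loss optimized by stochastic gradient descent
    model.compile(optimizer="sgd", loss="mse")
    return model
```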
Referring to Fig. 4, which is a schematic flowchart of the steps added after Step C in the first embodiment of the present invention. Specifically:
Step H: performing random phase encryption on multiple groups of original data to obtain test data;
Step I: inputting the test data into the constructed deep neural network model to obtain a test output result, and calculating the degree of correlation between the test output result and the original data corresponding to the test data;
Step J: when the degree of correlation is greater than or equal to a preset correlation coefficient, determining that the deep neural network model is a correct decryption model;
Step K: when the degree of correlation is less than the preset correlation coefficient, returning to Step A.
It should be noted that, with reference to Fig. 3, after the deep neural network model has been trained about 500 times, the training data is input into the i-th deep neural network model to obtain the i-th output result, the i-th output result is compared with the original data corresponding to the training data to obtain the i-th comparison result, and when the i-th comparison result meets the preset convergence condition, the i-th deep neural network model is determined to be the constructed deep neural network model. Random phase encryption is then performed on another 10,000 groups of original data to obtain 10,000 groups of test data, which are input into the deep neural network model to obtain test output results. The degree of correlation between the test output results and the original data corresponding to the test data is calculated; when the degree of correlation is greater than or equal to the preset correlation coefficient, the deep neural network model is determined to be a correct decryption model. Otherwise, when the degree of correlation is less than the preset correlation coefficient, the construction of the deep neural network model contains errors and must be restarted, i.e., the method returns to Step A. Preferably, the preset correlation coefficient is 0.8.
In the embodiment of the present invention, the correctness of the constructed deep neural network model is evaluated with the test data, ensuring the correctness of the constructed decryption model.
Referring to Fig. 5, which is a schematic flowchart of the refinement steps of Step I in the first embodiment of the present invention. Specifically:
Step L: inputting the test data into the constructed deep neural network model, performing array remodeling on the test data in the first remodeling layer, and outputting first remodeling data, where the deep neural network model includes the first remodeling layer, three hidden layers, an output layer, and a second remodeling layer;
Step M: inputting the first remodeling data into the three hidden layers composed of several neurons, inputting the result into the output layer to output processing data, inputting the processing data into the second remodeling layer for array remodeling, and outputting second remodeling data, where the second remodeling data is the test output result obtained after the test data is input into the constructed deep neural network model;
Step N: calculating, with a correlation coefficient function, the degree of correlation between the second remodeling data and the original data corresponding to the test data.
It should be noted that the test data is input into the constructed deep neural network model and passes through the first remodeling layer, the three hidden layers (hidden layer 1, hidden layer 2, and hidden layer 3), the output layer, and the second remodeling layer to obtain the second remodeling data. This second remodeling data is the test output result of inputting the test data into the constructed deep neural network model.
In the embodiment of the present invention, the correlation coefficient function is used to calculate the degree of correlation between the second remodeling data and the original data corresponding to the test data, which facilitates judging whether the constructed deep neural network model is correct.
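The text does not name a specific correlation coefficient function; the sketch below assumes the Pearson coefficient computed with numpy, together with the preset coefficient of 0.8 mentioned above.

```python
import numpy as np

def degree_of_correlation(test_output: np.ndarray, original: np.ndarray) -> float:
    """Pearson correlation coefficient between a test output result (second
    remodeling data) and the original data corresponding to the test data."""
    return float(np.corrcoef(test_output.ravel(), original.ravel())[0, 1])

# Steps J/K (illustrative): accept the model as a correct decryption model when
# the correlation over the 10,000-group test set reaches the preset coefficient 0.8.
# scores = [degree_of_correlation(model.predict(x[None], verbose=0)[0], y)
#           for x, y in zip(x_test, y_test)]
# is_correct_decryption_model = np.mean(scores) >= 0.8
```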
In addition, in the first embodiment of the present invention, Step A (performing random phase encryption on multiple groups of original data to obtain training data) and Step H (performing random phase encryption on multiple groups of original data to obtain test data) can be merged into one step: random phase encryption is performed on multiple groups of original data, and the resulting encrypted data is divided into two parts, training data and test data. The training data and the test data are therefore encrypted in the same way, namely by random phase encryption, whose calculation formula is:
E = LCT(…LCT(LCT(P × M1) × M2)… × Mn)
where E denotes the training data or the test data, LCT denotes the linear canonical transform, P denotes the original data, M1, M2, …, Mn denote random phase masks, and n is a positive integer.
Double random phase optical encryption, triple random phase optical encryption, and multiple random phase optical encryption are taken as examples below:
Referring to Fig. 6, which is the schematic diagram of double random phase optical encryption provided by the first embodiment of the present invention. The encryption formula is expressed as:
E = ift(ft(P × M1) × M2)
where P denotes the original data, ft denotes the Fourier transform, ift denotes the inverse Fourier transform, E denotes the encrypted data (including training data and test data), and M1 and M2 denote random phase masks. The encryption method is implemented with a 4f optical system (two lenses of focal length f, separated by 2f, with object distance f and image distance f); P is the real-valued image, i.e., the original data, and E is the encrypted image, i.e., the encrypted data. The phase angle information of M1 and M2 is a two-dimensional normally distributed random array whose values are randomly distributed in [0, 1]; the two arrays are mutually independent, with zero cross-correlation and zero mean, i.e., they are two mutually independent random white noises. M1 and M2 can therefore generate random phases in [0, 2π]. In the encryption process, the random phase mask M1 is placed against the real-valued image on the front focal plane of the first lens, the random phase mask M2 is placed on the Fourier transform plane, and an inverse Fourier transform is performed by the second lens, finally yielding the encrypted image E, which is a generalized stationary white noise.
Referring to Fig. 7, which is the schematic diagram of triple random phase optical encryption provided by the first embodiment of the present invention. The encryption formula is expressed as:
E = ift(ft(P × M1) × M2) × M3
where P denotes the original data, ft denotes the Fourier transform, ift denotes the inverse Fourier transform, E denotes the encrypted data (including training data and test data), and M1, M2, and M3 denote random phase masks. The encryption method is implemented with a 4f optical system (two lenses of focal length f, separated by 2f, with object distance f and image distance f); P is the real-valued image, i.e., the original data, and E is the encrypted image, i.e., the encrypted data. The phase angle information of M1, M2, and M3 is a two-dimensional normally distributed random array whose values are randomly distributed in [0, 1]; M1, M2, and M3 can therefore generate random phases in [0, 2π]. In the encryption process, the random phase mask M1 is placed against the real-valued image on the front focal plane of the first lens, the random phase mask M2 is placed on the Fourier transform plane, an inverse Fourier transform is performed by the second lens, and the random phase mask M3 is placed on the back focal plane, finally yielding the encrypted image E, which is approximately a generalized stationary white noise.
Referring to Fig. 8, which is the schematic diagram of multiple random phase optical encryption provided by the first embodiment of the present invention. The encryption formula is expressed as:
E = ift(ft(ift(ft(P × M1) × M2) × M3) × …) × Mn
where P denotes the original data, ft denotes the Fourier transform, ift denotes the inverse Fourier transform, E denotes the encrypted data (including training data and test data), and M1, M2, M3, …, Mn denote random phase masks, with n a positive integer greater than 3. The encryption method is implemented with an extended multi-lens optical system (several lenses of focal length f, spaced 2f apart, with object distance f and image distance f); P is the real-valued image, i.e., the original data, and E is the encrypted image, i.e., the encrypted data. The phase angle information of M1, M2, M3, …, Mn is a two-dimensional normally distributed random array whose values are randomly distributed in [0, 1]; M1, M2, M3, …, Mn can therefore generate random phases in [0, 2π]. In the encryption process, the random phase mask M1 is placed against the real-valued image on the front focal plane of the first lens, the random phase mask M2 is placed on the Fourier transform plane, an inverse Fourier transform is performed by the second lens, the random phase mask M3 is placed on the back focal plane, and so on, with Mn placed on the focal plane of the last lens, finally yielding the encrypted image E, which is approximately a generalized stationary white noise.
In the embodiment of the present invention, random phase encryption is performed on the original data. Although the specific forms of random phase encryption are diverse, the construction method of the deep neural network model in the present invention can construct decryption models that crack various types of random phase encryption, which increases the practicability of the construction method of the deep neural network model in the present invention.
Referring to Fig. 9, which is a schematic structural diagram of a construction device of a deep neural network model in a second embodiment of the present invention. Specifically:
the first encrypting module 10 is configured to perform random phase encryption on multiple groups of original data to obtain training data;
the training comparison module 20 is configured to train the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model and the i-th output result produced by inputting the training data into the i-th deep neural network model, and to compare the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, where the initial value of i is 1 and the 0th deep neural network model is the initial model;
the first determining module 30 is configured to determine, when the i-th comparison result meets the preset convergence condition, that the i-th deep neural network model is the constructed deep neural network model;
the first return module 40 is configured to set i = i + 1 and return to the training comparison module 20 when the i-th comparison result does not meet the preset convergence condition.
For related descriptions of this embodiment of the present invention, please refer to the related descriptions of the first embodiment of the present invention; they are not repeated here.
In the embodiment of the present invention, since random phase encryption is performed on the original data, the training data obtained after encryption is input into the deep neural network model, and the resulting output is compared with the original data, the model is a decryption model capable of cracking random phase encryption, which solves the technical problem of lacking an algorithm model capable of cracking random phase encryption.
Referring to Fig. 10, which is a schematic structural diagram of the refinement modules of the training comparison module 20 in the second embodiment of the present invention. Specifically:
the first remodeling module 201 is configured to input the training data into the (i-1)-th deep neural network model, perform array remodeling on the training data in the first remodeling layer, and output first remodeling data, where the (i-1)-th deep neural network model includes the first remodeling layer, three hidden layers, an output layer, and a second remodeling layer;
the second remodeling module 202 is configured to input the first remodeling data into the three hidden layers composed of several neurons, input the result into the output layer to output processing data, input the processing data into the second remodeling layer for array remodeling, and output second remodeling data, where the activation function of the neurons is the rectified linear unit function, the number of neurons in each hidden layer corresponds to the format of the first remodeling data, the second remodeling data is the (i-1)-th output result produced by inputting the training data into the (i-1)-th deep neural network model, and the format of the second remodeling data is identical to that of the training data;
the calculation update module 203 is configured to compare, based on the mean square deviation function and the stochastic gradient descent function, the second remodeling data with the original data corresponding to the training data to obtain a comparison result, and to use the comparison result to optimize and update the (i-1)-th deep neural network model to obtain the i-th deep neural network model.
For related descriptions of this embodiment of the present invention, please refer to the related descriptions of the first embodiment of the present invention; they are not repeated here.
In the embodiment of the present invention, the training data is input into the first remodeling layer, the three hidden layers, the output layer, and the second remodeling layer to obtain the second remodeling data (i.e., the (i-1)-th output result); the second remodeling data is compared with the original data corresponding to the training data to obtain a comparison result, and the comparison result is used to optimize and update the (i-1)-th deep neural network model to obtain the i-th deep neural network model. The deep neural network model thus gets ever closer to a satisfactory decryption model, and the stochastic gradient descent function accelerates the training of the deep neural network model and improves the training rate.
Referring to Fig. 11, which is a schematic structural diagram of a construction device of a deep neural network model in a third embodiment of the present invention. In addition to the first encrypting module 10, the training comparison module 20, the first determining module 30, and the first return module 40 of the second embodiment of the present invention, the device further includes:
the second encrypting module 50 is configured to perform random phase encryption on multiple groups of original data to obtain test data;
the input computing module 60 is configured to input the test data into the constructed deep neural network model to obtain a test output result, and to calculate the degree of correlation between the test output result and the original data corresponding to the test data;
the second determining module 70 is configured to determine, when the degree of correlation is greater than or equal to the preset correlation coefficient, that the deep neural network model is a correct decryption model;
the second return module 80 is configured to return to the first encrypting module when the degree of correlation is less than the preset correlation coefficient.
For related descriptions of this embodiment of the present invention, please refer to the related descriptions of the first and second embodiments of the present invention; they are not repeated here.
In the embodiment of the present invention, since random phase encryption is performed on the original data, the training data obtained after encryption is input into the deep neural network model, and the resulting output is compared with the original data, the model is a decryption model capable of cracking random phase encryption, which solves the technical problem of lacking an algorithm model capable of cracking random phase encryption. In addition, the correctness of the constructed deep neural network model is evaluated with the test data, ensuring the correctness of the constructed decryption model.
Referring to Fig. 12, which is a schematic structural diagram of the refinement modules of the input computing module 60 in the third embodiment of the present invention. Specifically:
the third remodeling module 601 is configured to input the test data into the constructed deep neural network model, perform array remodeling on the test data in the first remodeling layer, and output first remodeling data, where the deep neural network model includes the first remodeling layer, three hidden layers, an output layer, and a second remodeling layer;
the fourth remodeling module 602 is configured to input the first remodeling data into the three hidden layers composed of several neurons, input the result into the output layer to output processing data, input the processing data into the second remodeling layer for array remodeling, and output second remodeling data, where the second remodeling data is the test output result obtained after the test data is input into the constructed deep neural network model;
the computing module 603 is configured to calculate, with the correlation coefficient function, the degree of correlation between the second remodeling data and the original data corresponding to the test data.
For related descriptions of this embodiment of the present invention, please refer to the related descriptions of the first and second embodiments of the present invention; they are not repeated here.
In the embodiment of the present invention, the correlation coefficient function is used to calculate the degree of correlation between the second remodeling data and the original data corresponding to the test data, which facilitates judging whether the constructed deep neural network model is correct.
In addition, the calculation formula of the random phase encryption in the first encrypting module 10 and the second encrypting module 50 is:
E = LCT(…LCT(LCT(P × M1) × M2)… × Mn)
where E denotes the training data or the test data, LCT denotes the linear canonical transform, P denotes the original data, M1, M2, …, Mn denote random phase masks, and n is a positive integer.
For related descriptions of random phase encryption, please refer to the related descriptions of the first embodiment of the present invention; they are not repeated here.
In the embodiment of the present invention, random phase encryption is performed on the original data. Although the specific forms of random phase encryption are diverse, the construction method of the deep neural network model in the present invention can construct decryption models that crack various types of random phase encryption, which increases the practicability of the construction method of the deep neural network model in the present invention.
It should be noted that, for simplicity of description, the foregoing method embodiments are expressed as a series of action combinations; however, those skilled in the art should understand that the present invention is not limited by the described sequence of actions, because according to the present invention, certain steps can be performed in other orders or simultaneously. Secondly, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily all required by the present invention.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of the other embodiments.
The above is a description of the construction method and device of a deep neural network model provided by the present invention. For those skilled in the art, there will be changes in the specific implementations and the application scope according to the ideas of the embodiments of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (10)

1. A construction method of a deep neural network model, characterized in that the method comprises:
Step A: performing random phase encryption on multiple groups of original data to obtain training data;
Step B: training the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model and the i-th output result produced by inputting the training data into the i-th deep neural network model, and comparing the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, wherein the initial value of i is 1 and the 0th deep neural network model is the initial model;
Step C: when the i-th comparison result meets a preset convergence condition, determining that the i-th deep neural network model is the constructed deep neural network model;
Step D: when the i-th comparison result does not meet the preset convergence condition, setting i = i + 1 and returning to execute Step B.
2. The method according to claim 1, characterized in that Step B specifically comprises the following steps:
Step E: inputting the training data into the (i-1)-th deep neural network model, performing array remodeling on the training data in a first remodeling layer, and outputting first remodeling data, wherein the (i-1)-th deep neural network model comprises the first remodeling layer, three hidden layers, an output layer, and a second remodeling layer;
Step F: inputting the first remodeling data into the three hidden layers composed of several neurons, inputting the result into the output layer to output processing data, inputting the processing data into the second remodeling layer for array remodeling, and outputting second remodeling data, wherein the activation function of the neurons is the rectified linear unit function, the number of neurons in each hidden layer corresponds to the format of the first remodeling data, the second remodeling data is the (i-1)-th output result produced by inputting the training data into the (i-1)-th deep neural network model, and the format of the second remodeling data is identical to that of the training data;
Step G: based on a mean square deviation function and a stochastic gradient descent function, comparing the second remodeling data with the original data corresponding to the training data to obtain a comparison result, and using the comparison result to optimize and update the (i-1)-th deep neural network model to obtain the i-th deep neural network model.
3. The method according to claim 1, characterized in that the following steps are further included after Step C:
Step H: performing random phase encryption on multiple groups of original data to obtain test data;
Step I: inputting the test data into the constructed deep neural network model to obtain a test output result, and calculating the degree of correlation between the test output result and the original data corresponding to the test data;
Step J: when the degree of correlation is greater than or equal to a preset correlation coefficient, determining that the deep neural network model is a correct decryption model;
Step K: when the degree of correlation is less than the preset correlation coefficient, returning to execute Step A.
4. The method according to claim 3, characterized in that Step I specifically comprises the following steps:
Step L: inputting the test data into the constructed deep neural network model, performing array remodeling on the test data in the first remodeling layer, and outputting first remodeling data, wherein the deep neural network model comprises the first remodeling layer, three hidden layers, an output layer, and a second remodeling layer;
Step M: inputting the first remodeling data into the three hidden layers composed of several neurons, inputting the result into the output layer to output processing data, inputting the processing data into the second remodeling layer for array remodeling, and outputting second remodeling data, wherein the second remodeling data is the test output result obtained after the test data is input into the constructed deep neural network model;
Step N: calculating, with a correlation coefficient function, the degree of correlation between the second remodeling data and the original data corresponding to the test data.
5. The method according to claim 4, characterized in that the calculation formula of the random phase encryption is:
E = LCT(…LCT(LCT(P × M1) × M2)… × Mn)
wherein E denotes the training data or the test data, LCT denotes the linear canonical transform, P denotes the original data, M1, M2, …, Mn denote random phase masks, and n is a positive integer.
6. A construction device of a deep neural network model, characterized in that the device comprises:
a first encrypting module, configured to perform random phase encryption on multiple groups of original data to obtain training data;
a training comparison module, configured to train the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model and the i-th output result produced by inputting the training data into the i-th deep neural network model, and to compare the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, wherein the initial value of i is 1 and the 0th deep neural network model is the initial model;
a first determining module, configured to determine, when the i-th comparison result meets a preset convergence condition, that the i-th deep neural network model is the constructed deep neural network model;
a first return module, configured to set i = i + 1 and return to the training comparison module when the i-th comparison result does not meet the preset convergence condition.
7. The device according to claim 6, characterized in that the training comparison module specifically comprises the following modules:
a first remodeling module, configured to input the training data into the (i-1)-th deep neural network model, perform array remodeling on the training data in a first remodeling layer, and output first remodeling data, wherein the (i-1)-th deep neural network model comprises the first remodeling layer, three hidden layers, an output layer, and a second remodeling layer;
a second remodeling module, configured to input the first remodeling data into the three hidden layers composed of several neurons, input the result into the output layer to output processing data, input the processing data into the second remodeling layer for array remodeling, and output second remodeling data, wherein the activation function of the neurons is the rectified linear unit function, the number of neurons in each hidden layer corresponds to the format of the first remodeling data, the second remodeling data is the (i-1)-th output result produced by inputting the training data into the (i-1)-th deep neural network model, and the format of the second remodeling data is identical to that of the training data;
a calculation update module, configured to compare, based on a mean square deviation function and a stochastic gradient descent function, the second remodeling data with the original data corresponding to the training data to obtain a comparison result, and to use the comparison result to optimize and update the (i-1)-th deep neural network model to obtain the i-th deep neural network model.
8. The device according to claim 6, characterized in that the following modules are further included after the first determining module:
a second encrypting module, configured to perform random phase encryption on multiple groups of original data to obtain test data;
an input computing module, configured to input the test data into the constructed deep neural network model to obtain a test output result, and to calculate the degree of correlation between the test output result and the original data corresponding to the test data;
a second determining module, configured to determine, when the degree of correlation is greater than or equal to a preset correlation coefficient, that the deep neural network model is a correct decryption model;
a second return module, configured to return to the first encrypting module when the degree of correlation is less than the preset correlation coefficient.
9. The device according to claim 8, characterized in that the input computing module specifically comprises the following modules:
a third remodeling module, configured to input the test data into the constructed deep neural network model, perform array remodeling on the test data in the first remodeling layer, and output first remodeling data, wherein the deep neural network model comprises the first remodeling layer, three hidden layers, an output layer, and a second remodeling layer;
a fourth remodeling module, configured to input the first remodeling data into the three hidden layers composed of several neurons, input the result into the output layer to output processing data, input the processing data into the second remodeling layer for array remodeling, and output second remodeling data, wherein the second remodeling data is the test output result obtained after the test data is input into the constructed deep neural network model;
a computing module, configured to calculate, with a correlation coefficient function, the degree of correlation between the second remodeling data and the original data corresponding to the test data.
10. The device according to claim 9, characterized in that the calculation formula of the random phase encryption is:
E = LCT(…LCT(LCT(P × M1) × M2)… × Mn)
wherein E denotes the training data or the test data, LCT denotes the linear canonical transform, P denotes the original data, M1, M2, …, Mn denote random phase masks, and n is a positive integer.
CN201810465595.6A 2018-05-16 2018-05-16 Construction method and device of deep neural network model Active CN108921282B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810465595.6A CN108921282B (en) 2018-05-16 2018-05-16 Construction method and device of deep neural network model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810465595.6A CN108921282B (en) 2018-05-16 2018-05-16 Construction method and device of deep neural network model

Publications (2)

Publication Number Publication Date
CN108921282A true CN108921282A (en) 2018-11-30
CN108921282B CN108921282B (en) 2022-05-31

Family

ID=64404069

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810465595.6A Active CN108921282B (en) 2018-05-16 2018-05-16 Construction method and device of deep neural network model

Country Status (1)

Country Link
CN (1) CN108921282B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110008914A (en) * 2019-04-11 2019-07-12 杨勇 A kind of pattern recognition system neural network based and recognition methods
CN110071798A (en) * 2019-03-21 2019-07-30 深圳大学 A kind of equivalent key acquisition methods, device and computer readable storage medium
CN110428873A (en) * 2019-06-11 2019-11-08 西安电子科技大学 A kind of chromosome G banding method for detecting abnormality and detection system
CN112603345A (en) * 2020-12-02 2021-04-06 赛诺威盛科技(北京)有限公司 Model training method, multi-energy spectrum CT scanning method, device and electronic equipment
CN112697821A (en) * 2020-12-02 2021-04-23 赛诺威盛科技(北京)有限公司 Multi-energy spectrum CT scanning method and device, electronic equipment and CT equipment
CN113723604A (en) * 2020-05-26 2021-11-30 杭州海康威视数字技术股份有限公司 Neural network training method and device, electronic equipment and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800042A (en) * 2012-06-14 2012-11-28 南昌大学 Multi-image encryption method based on log-polar transform
CN104009836A (en) * 2014-05-26 2014-08-27 南京泰锐斯通信科技有限公司 Encrypted data detection method and system
US20150242747A1 (en) * 2014-02-26 2015-08-27 Nancy Packes, Inc. Real estate evaluating platform methods, apparatuses, and media
CN107358293A (en) * 2017-06-15 2017-11-17 北京图森未来科技有限公司 A kind of neural network training method and device
CN107506822A (en) * 2017-07-26 2017-12-22 天津大学 A kind of deep neural network method based on Space integration pond
WO2017221152A1 (en) * 2016-06-20 2017-12-28 Telefonaktiebolaget Lm Ericsson (Publ) Method for classifying the payload of encrypted traffic flows

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800042A (en) * 2012-06-14 2012-11-28 南昌大学 Multi-image encryption method based on log-polar transform
US20150242747A1 (en) * 2014-02-26 2015-08-27 Nancy Packes, Inc. Real estate evaluating platform methods, apparatuses, and media
CN104009836A (en) * 2014-05-26 2014-08-27 南京泰锐斯通信科技有限公司 Encrypted data detection method and system
WO2017221152A1 (en) * 2016-06-20 2017-12-28 Telefonaktiebolaget Lm Ericsson (Publ) Method for classifying the payload of encrypted traffic flows
CN107358293A (en) * 2017-06-15 2017-11-17 北京图森未来科技有限公司 A kind of neural network training method and device
CN107506822A (en) * 2017-07-26 2017-12-22 天津大学 A kind of deep neural network method based on Space integration pond

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Pan Weizhou (潘伟洲) et al.: "Baidu Map coordinate decryption method based on artificial neural network" (基于人工神经网络的百度地图坐标解密方法), Computer Engineering and Applications (《计算机工程与应用》) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110071798A (en) * 2019-03-21 2019-07-30 深圳大学 A kind of equivalent key acquisition methods, device and computer readable storage medium
CN110071798B (en) * 2019-03-21 2022-03-04 深圳大学 Equivalent key obtaining method and device and computer readable storage medium
CN110008914A (en) * 2019-04-11 2019-07-12 杨勇 A kind of pattern recognition system neural network based and recognition methods
CN110428873A (en) * 2019-06-11 2019-11-08 西安电子科技大学 A kind of chromosome G banding method for detecting abnormality and detection system
CN110428873B (en) * 2019-06-11 2021-07-23 西安电子科技大学 Chromosome fold abnormality detection method and detection system
CN113723604A (en) * 2020-05-26 2021-11-30 杭州海康威视数字技术股份有限公司 Neural network training method and device, electronic equipment and readable storage medium
CN113723604B (en) * 2020-05-26 2024-03-26 杭州海康威视数字技术股份有限公司 Neural network training method and device, electronic equipment and readable storage medium
CN112603345A (en) * 2020-12-02 2021-04-06 赛诺威盛科技(北京)有限公司 Model training method, multi-energy spectrum CT scanning method, device and electronic equipment
CN112697821A (en) * 2020-12-02 2021-04-23 赛诺威盛科技(北京)有限公司 Multi-energy spectrum CT scanning method and device, electronic equipment and CT equipment
CN112697821B (en) * 2020-12-02 2022-12-02 赛诺威盛科技(北京)股份有限公司 Multi-energy spectrum CT scanning method and device, electronic equipment and CT equipment

Also Published As

Publication number Publication date
CN108921282B (en) 2022-05-31

Similar Documents

Publication Publication Date Title
CN108921282A (en) A kind of construction method and device of deep neural network model
CN110460600B (en) Joint deep learning method capable of resisting generation of counterattack network attacks
CN111259443B (en) PSI (program specific information) technology-based method for protecting privacy of federal learning prediction stage
CN110490128B (en) Handwriting recognition method based on encryption neural network
CN110572253A (en) Method and system for enhancing privacy of federated learning training data
CN110399742A (en) A kind of training, prediction technique and the device of federation's transfer learning model
CN109165515A (en) Model parameter acquisition methods, system and readable storage medium storing program for executing based on federation's study
CN104380245B (en) random number generator and stream cipher
CN113761557A (en) Multi-party deep learning privacy protection method based on fully homomorphic encryption algorithm
CN106790303B (en) The data integrity verification method completed in cloud storage by third party
CN114363043B (en) Asynchronous federal learning method based on verifiable aggregation and differential privacy in peer-to-peer network
CN108647525A (en) The secret protection single layer perceptron batch training method that can verify that
CN110324147A (en) GAN game based on chaotic model fights encryption system (method)
CN109460536A (en) The safely outsourced algorithm of extensive matrix operation
CN108650269A (en) A kind of graded encryption method and system based on intensified learning
CN112949865A (en) Sigma protocol-based federal learning contribution degree evaluation method
CN107862329A (en) A kind of true and false target identification method of Radar range profile's based on depth confidence network
Jin et al. 3D CUBE algorithm for the key generation method: Applying deep neural network learning-based
CN116467736A (en) Verifiable privacy protection federal learning method and system
CN116777294A (en) Crowd-sourced quality safety assessment method based on federal learning under assistance of blockchain
CN114443754A (en) Block chain-based federated learning processing method, device, system and medium
CN108259180B (en) Method for quantum specifying verifier signature
CN117521102A (en) Model training method and device based on federal learning
CN111769945B (en) Auction processing method based on block chain and block chain link point
CN116032636B (en) Internet of vehicles data encryption method based on neural network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220930

Address after: 620000 unit 1, building 3, Tianfu Yuncheng district a, south of the fast track around Tianfu new area, Shigao street, Renshou County, Meishan City, Sichuan Province

Patentee after: Sichuan Hisai Digital Technology Group Co.,Ltd.

Address before: No. 5, Floor 2, Building 16, No. 69, North Section of East Yangliu Road, Liucheng Town, Wenjiang District, Chengdu, Sichuan 610000

Patentee before: Chengdu Qizhizhi Intellectual Property Operation Co.,Ltd.

Effective date of registration: 20220930

Address after: No. 5, Floor 2, Building 16, No. 69, North Section of East Yangliu Road, Liucheng Town, Wenjiang District, Chengdu, Sichuan 610000

Patentee after: Chengdu Qizhizhi Intellectual Property Operation Co.,Ltd.

Address before: 518060 No. 3688 Nanhai Road, Shenzhen, Guangdong, Nanshan District

Patentee before: SHENZHEN University