CN108921282A - Construction method and device for a deep neural network model - Google Patents

Construction method and device for a deep neural network model

Info

Publication number
CN108921282A
CN108921282A (application CN201810465595.6A; granted publication CN108921282B)
Authority
CN
China
Prior art keywords
data
neural network
network model
deep neural
remodeling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810465595.6A
Other languages
Chinese (zh)
Other versions
CN108921282B (en)
Inventor
何文奇
海涵
彭翔
刘晓利
廖美华
卢大江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Qizhizhi Intellectual Property Operation Co ltd
Sichuan Hisai Digital Technology Group Co ltd
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University
Priority to CN201810465595.6A
Publication of CN108921282A
Application granted
Publication of CN108921282B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Complex Calculations (AREA)

Abstract

The invention discloses a construction method and device for a deep neural network model. Random phase encryption is performed on original data to obtain training data; the training data are used to train the (i-1)-th deep neural network model to obtain the i-th deep neural network model; the training data are input into the i-th deep neural network model to obtain the i-th output result, which is compared with the original data corresponding to the training data; and whether the comparison result meets a preset convergence condition is judged. If it does, the i-th deep neural network model is determined to be the constructed deep neural network model; if not, i is set to i+1 and the (i-1)-th deep neural network model is trained again with the training data. Because the training data are input into the deep neural network model and the resulting output is compared with the original data, the model is a decryption model capable of cracking random phase encryption, which solves the technical problem of lacking an algorithm model that can crack random phase encryption.

Description

Construction method and device for a deep neural network model
Technical field
The present invention relates to the field of image processing, and more particularly to a construction method and device for a deep neural network model.
Background art
Deep learning is a new field in machine learning research. Its motivation is to establish neural networks that simulate the analytical learning of the human brain, imitating the mechanisms of the human brain to interpret data. It is widely used in image recognition, big-data classification, and similar tasks. In the cryptanalysis of big data, however, there is no algorithm model capable of cracking random phase encryption.
Summary of the invention
The main purpose of the present invention is to provide a construction method and device for a deep neural network model, which can solve the technical problem that, in the cryptanalysis of big data, there is no algorithm model capable of cracking random phase encryption.
To achieve the above object, a first aspect of the present invention provides a construction method for a deep neural network model, characterized in that the method includes:
Step A: performing random phase encryption on multiple groups of original data to obtain training data;
Step B: training the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model and the i-th output result produced after the training data are input into the i-th deep neural network model, and comparing the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, where the initial value of i is 1 and the 0th deep neural network model is an initial model;
Step C: when the i-th comparison result meets a preset convergence condition, determining the i-th deep neural network model to be the constructed deep neural network model;
Step D: when the i-th comparison result does not meet the preset convergence condition, setting i=i+1 and returning to Step B.
To achieve the above object, a second aspect of the present invention provides a construction device for a deep neural network model, characterized in that the device includes:
a first encryption module, configured to perform random phase encryption on multiple groups of original data to obtain training data;
a training comparison module, configured to train the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model and the i-th output result produced after the training data are input into the i-th deep neural network model, and to compare the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, where the initial value of i is 1 and the 0th deep neural network model is an initial model;
a first determination module, configured to determine, when the i-th comparison result meets a preset convergence condition, that the i-th deep neural network model is the constructed deep neural network model;
a first return module, configured to set i=i+1 and return to the training comparison module when the i-th comparison result does not meet the preset convergence condition.
The present invention provides a construction method and device for a deep neural network model. Because random phase encryption is applied to the original data, and the training data obtained after encryption are input into the deep neural network model with the output compared against the original data, the model is a decryption model capable of cracking random phase encryption, solving the technical problem of lacking an algorithm model that can crack random phase encryption.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a construction method for a deep neural network model in a first embodiment of the present invention;
Fig. 2 is a schematic flowchart of the refinement steps of Step B in the first embodiment of the present invention;
Fig. 3 is a schematic diagram of the composition of the deep neural network model in the first embodiment of the present invention;
Fig. 4 is a schematic flowchart of the steps added after Step C in the first embodiment of the present invention;
Fig. 5 is a schematic flowchart of the refinement steps of Step I in the first embodiment of the present invention;
Fig. 6 is a schematic diagram of double random phase optical encryption provided by the first embodiment of the present invention;
Fig. 7 is a schematic diagram of triple random phase optical encryption provided by the first embodiment of the present invention;
Fig. 8 is a schematic diagram of multiple random phase optical encryption provided by the first embodiment of the present invention;
Fig. 9 is a schematic structural diagram of a construction device for a deep neural network model in a second embodiment of the present invention;
Fig. 10 is a schematic structural diagram of the refinement modules of the training comparison module 20 in the second embodiment of the present invention;
Fig. 11 is a schematic structural diagram of a construction device for a deep neural network model in a third embodiment of the present invention;
Fig. 12 is a schematic structural diagram of the refinement modules of the input calculation module 60 in the third embodiment of the present invention.
Detailed description of the embodiments
To make the purpose, features, and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The prior art suffers from the technical problem that, in the cryptanalysis of big data, there is no algorithm model capable of cracking random phase encryption.
To solve the above technical problem, the present invention proposes a construction method and device for a deep neural network model. Because random phase encryption is applied to the original data, and the training data obtained after encryption are input into the deep neural network model with the output compared against the original data, the model is a decryption model capable of cracking random phase encryption, solving the technical problem of lacking an algorithm model that can crack random phase encryption.
Referring to Fig. 1, which is a schematic flowchart of a construction method for a deep neural network model in a first embodiment of the present invention. The method specifically includes:
Step A: performing random phase encryption on multiple groups of original data to obtain training data;
Step B: training the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model and the i-th output result produced after the training data are input into the i-th deep neural network model, and comparing the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, where the initial value of i is 1 and the 0th deep neural network model is an initial model;
Step C: when the i-th comparison result meets a preset convergence condition, determining the i-th deep neural network model to be the constructed deep neural network model;
Step D: when the i-th comparison result does not meet the preset convergence condition, setting i=i+1 and returning to Step B.
It should be noted that in this construction method, preferably, random phase encryption is performed on 60,000 groups of original data to obtain 60,000 groups of training data. The deep neural network model is trained with these 60,000 groups of training data for roughly 500 iterations, and the resulting deep neural network model is the constructed deep neural network model. That is, the 60,000 groups of training data are input into the i-th deep neural network model to obtain the i-th output result, the i-th output result is compared with the original data corresponding to the training data, and when the resulting i-th comparison result meets the preset convergence condition, the i-th deep neural network model is determined to be the constructed deep neural network model, with the value of i fluctuating around 500.
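To make the iteration concrete, the loop below sketches Steps A to D in Python. The callables train_step and predict, the mean-squared-error tolerance, and the iteration cap are hypothetical stand-ins introduced for illustration only; the patent fixes neither the convergence metric nor a maximum number of iterations.

```python
def build_decryption_model(train_step, predict, train_enc, train_orig,
                           tol=1e-3, max_iter=1000):
    """Iterate Steps B-D: train_enc are random-phase-encrypted data (Step A),
    train_orig the matching originals. train_step and predict are hypothetical
    callables that update model_{i-1} into model_i and run inference."""
    for i in range(1, max_iter + 1):              # i starts at 1; model_0 is the initial model
        train_step(train_enc, train_orig)         # Step B: model_{i-1} -> model_i
        out_i = predict(train_enc)                # i-th output result
        mse = ((out_i - train_orig) ** 2).mean()  # i-th comparison result
        if mse < tol:                             # Step C: preset convergence condition met
            return i                              # in the preferred setup i fluctuates around 500
    raise RuntimeError("Step D exhausted: convergence condition never met")
```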
In the embodiments of the present invention, because random phase encryption is applied to the original data, and the training data obtained after encryption are input into the deep neural network model with the output compared against the original data, the model is a decryption model capable of cracking random phase encryption, solving the technical problem of lacking an algorithm model that can crack random phase encryption.
Referring to Fig. 2, which is a schematic flowchart of the refinement steps of Step B in the first embodiment of the present invention. Specifically:
Step E: inputting the training data into the (i-1)-th deep neural network model so that the training data undergo array reshaping in a first reshaping layer, which outputs first reshaped data; the (i-1)-th deep neural network model includes the first reshaping layer, three hidden layers, an output layer, and a second reshaping layer;
Step F: the first reshaped data are input into the three hidden layers, each composed of several neurons, and then into the output layer, which outputs processed data; the processed data are input into the second reshaping layer for array reshaping, which outputs second reshaped data; the activation function of each neuron is the rectified linear unit (ReLU), the number of neurons in each hidden layer corresponds to the format of the first reshaped data, the second reshaped data are the (i-1)-th output result produced after the training data are input into the (i-1)-th deep neural network model, and the format of the second reshaped data is identical to that of the training data;
Step G: based on a mean square error function and stochastic gradient descent, comparing the second reshaped data with the original data corresponding to the training data to obtain a comparison result, and using the comparison result to optimize and update the (i-1)-th deep neural network model, obtaining the i-th deep neural network model.
It should be noted that the training data are input into the (i-1)-th deep neural network model and pass through the first reshaping layer, the three hidden layers, the output layer, and the second reshaping layer; the resulting second reshaped data are the (i-1)-th output result produced after the training data are input into the (i-1)-th deep neural network model.
Specifically, referring to Fig. 3, which is a schematic diagram of the composition of the deep neural network model in the first embodiment of the present invention. Preferably, 60,000 groups of training data are input into the (i-1)-th deep neural network model, which includes the first reshaping layer, three hidden layers (hidden layer 1, hidden layer 2, and hidden layer 3), the output layer, and the second reshaping layer. Each group of training data is encrypted data of 28×28 pixels and is reshaped by the first reshaping layer into encrypted data of 1×784 pixels; these 1×784-pixel encrypted data are the first reshaped data. The first reshaped data pass through the three hidden layers, and the output layer outputs processed data, where each hidden layer and the output layer contain 784 neurons, the 784 neurons form a fully connected neural network, the activation function of each neuron is ReLU, and the processed data are decrypted data of 1×784 pixels. The processed data are reshaped by the second reshaping layer into decrypted data of 28×28 pixels. Based on a mean square error function and stochastic gradient descent, the second reshaped data are compared with the original data corresponding to the training data to obtain a comparison result, and the comparison result is used to optimize and update the (i-1)-th deep neural network model, obtaining the i-th deep neural network model; stochastic gradient descent is used to accelerate the training of the deep neural network model.
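As a concrete illustration of this architecture, the following PyTorch sketch assembles the first reshaping layer, three 784-neuron ReLU hidden layers, a 784-neuron output layer, and the second reshaping layer, and performs one mean-square-error/SGD update. PyTorch itself, the learning rate, and the batch convention are choices of this sketch; the patent specifies the layer sizes, ReLU activations, MSE loss, and stochastic gradient descent, but not a framework.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),                     # first reshaping layer: 28x28 -> 1x784
    nn.Linear(784, 784), nn.ReLU(),   # hidden layer 1 (784 neurons, ReLU)
    nn.Linear(784, 784), nn.ReLU(),   # hidden layer 2
    nn.Linear(784, 784), nn.ReLU(),   # hidden layer 3
    nn.Linear(784, 784), nn.ReLU(),   # output layer (the text applies ReLU in every layer)
)
loss_fn = nn.MSELoss()                               # mean square error function
opt = torch.optim.SGD(model.parameters(), lr=0.01)   # stochastic gradient descent

def training_step(enc_batch, orig_batch):
    """One optimization update: both batches have shape (N, 28, 28)."""
    opt.zero_grad()
    decrypted = model(enc_batch).view(-1, 28, 28)    # second reshaping layer: 784 -> 28x28
    loss = loss_fn(decrypted, orig_batch)            # compare with the original data
    loss.backward()
    opt.step()                                       # update hidden/output layer weights
    return loss.item()
```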
It should be emphasized that when the comparison result is used to optimize and update the (i-1)-th deep neural network model, it is mainly the three hidden layers (hidden layer 1, hidden layer 2, and hidden layer 3) and the output layer that are optimized and updated. The optimization updates the weight parameters of the neural network, i.e., the parameters within the neurons, so that the i-th output result of the i-th deep neural network model is closer to the original data corresponding to the training data than the (i-1)-th output result of the (i-1)-th deep neural network model. In other words, the decryption of the training data takes place mainly in the three hidden layers and the output layer.
In the embodiments of the present invention, the training data are input into the first reshaping layer, the three hidden layers, the output layer, and the second reshaping layer to obtain the second reshaped data (i.e., the (i-1)-th output result); the second reshaped data are compared with the original data corresponding to the training data to obtain a comparison result, and the comparison result is used to optimize and update the (i-1)-th deep neural network model, obtaining the i-th deep neural network model. The deep neural network model thus comes ever closer to a satisfactory decryption model, and the use of stochastic gradient descent accelerates the training of the deep neural network model, improving the training rate.
Referring to Fig. 4, which is a schematic flowchart of the steps added after Step C in the first embodiment of the present invention. Specifically:
Step H: performing random phase encryption on multiple groups of original data to obtain test data;
Step I: inputting the test data into the constructed deep neural network model to obtain a test output result, and calculating the correlation between the test output result and the original data corresponding to the test data;
Step J: when the correlation is greater than or equal to a preset correlation coefficient, determining that the deep neural network model is a correct decryption model;
Step K: when the correlation is less than the preset correlation coefficient, returning to Step A.
It should be noted that, referring to Fig. 3, after the deep neural network model has been trained for roughly 500 iterations, the training data are input into the i-th deep neural network model to obtain the i-th output result, the i-th output result is compared with the original data corresponding to the training data to obtain the i-th comparison result, and when this comparison result meets the preset convergence condition, the i-th deep neural network model is determined to be the constructed deep neural network model. Random phase encryption is then performed on another 10,000 groups of original data to obtain 10,000 groups of test data, which are input into the deep neural network model to obtain a test output result. The correlation between the test output result and the original data corresponding to the test data is calculated; when the correlation is greater than or equal to the preset correlation coefficient, the deep neural network model is determined to be a correct decryption model. Otherwise, when the correlation is less than the preset correlation coefficient, the construction of the deep neural network model is erroneous and must be restarted, i.e., the procedure returns to Step A. Preferably, the preset correlation coefficient is 0.8.
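A minimal sketch of this correctness check follows, using NumPy's Pearson correlation coefficient as the correlation measure together with the preset threshold of 0.8; the specific correlation coefficient function is not fixed by the patent, so corrcoef here is an assumption.

```python
import numpy as np

def is_correct_decryption_model(test_out, test_orig, threshold=0.8):
    """test_out: model outputs for the 10,000 groups of test data;
    test_orig: the corresponding original data. Returns True when the
    correlation meets the preset correlation coefficient (Step J),
    False when construction must restart from Step A (Step K)."""
    r = np.corrcoef(test_out.ravel(), test_orig.ravel())[0, 1]
    return r >= threshold
```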
In the embodiments of the present invention, the correctness of the constructed deep neural network model is evaluated with test data, ensuring the correctness of the constructed decryption model.
Referring to Fig. 5, which is a schematic flowchart of the refinement steps of Step I in the first embodiment of the present invention. Specifically:
Step L: inputting the test data into the constructed deep neural network model so that the test data undergo array reshaping in the first reshaping layer, which outputs first reshaped data; the deep neural network model includes the first reshaping layer, three hidden layers, an output layer, and a second reshaping layer;
Step M: the first reshaped data are input into the three hidden layers, each composed of several neurons, and then into the output layer, which outputs processed data; the processed data are input into the second reshaping layer for array reshaping, which outputs second reshaped data; the second reshaped data are the test output result obtained after the test data are input into the constructed deep neural network model;
Step N: calculating the correlation between the second reshaped data and the original data corresponding to the test data using a correlation coefficient function.
It should be noted that the test data input into the constructed deep neural network model pass through the first reshaping layer, the three hidden layers (hidden layer 1, hidden layer 2, and hidden layer 3), the output layer, and the second reshaping layer to obtain the second reshaped data. These second reshaped data are the test output result produced by inputting the test data into the constructed deep neural network model.
In the embodiments of the present invention, the correlation between the second reshaped data and the original data corresponding to the test data is calculated with a correlation coefficient function, making it convenient to judge whether the constructed deep neural network model is correct.
In addition, in the first embodiment of the present invention, Step A (performing random phase encryption on multiple groups of original data to obtain training data) and Step H (performing random phase encryption on multiple groups of original data to obtain test data) can be merged into a single step: random phase encryption is performed on multiple groups of original data, and the resulting encrypted data are divided into two parts, training data and test data. The encryption mode of the training data and the test data is therefore the same, namely random phase encryption. The calculation formula of random phase encryption is:
E = LCT(…LCT(LCT(P × M1) × M2)… × Mn)
where E denotes the training data or the test data, LCT denotes the linear canonical transform, P denotes the original data, M1, M2, …, Mn denote random phase masks, and n is a positive integer.
The following takes double random phase optical encryption, triple random phase optical encryption, and multiple random phase optical encryption as examples.
Referring to Fig. 6, which is a schematic diagram of double random phase optical encryption provided by the first embodiment of the present invention. The encryption formula is:
E = ift(ft(P × M1) × M2)
where P denotes the original data, ft denotes the Fourier transform, ift denotes the inverse Fourier transform, E denotes the encrypted data (comprising the training data and the test data), and M1 and M2 denote random phase masks. The encryption method is realized with a 4f optical system (two lenses of focal length f, spaced 2f apart, with object distance f and image distance f); P is the real-valued image, i.e., the original data, and E is the encrypted image, i.e., the encrypted data. The phase angle information of M1 and M2 is a two-dimensional, normally distributed random array with values randomly distributed in [0,1]; the convolution of the two arrays and their means are all 0, i.e., they are two mutually independent random white noises. M1 and M2 can therefore generate random phases between [0, 2π]. In the encryption process, the random phase mask M1 is placed against the real-valued image on the front focal plane of the first lens, the random phase mask M2 is placed on the Fourier transform plane, and an inverse Fourier transform is performed by the second lens, finally yielding the encrypted image E, which is wide-sense stationary white noise.
Referring to Fig. 7, which is a schematic diagram of triple random phase optical encryption provided by the first embodiment of the present invention. The encryption formula is:
E = ift(ft(P × M1) × M2) × M3
where P denotes the original data, ft denotes the Fourier transform, ift denotes the inverse Fourier transform, E denotes the encrypted data (comprising the training data and the test data), and M1, M2, and M3 denote random phase masks. The encryption method is realized with a 4f optical system (two lenses of focal length f, spaced 2f apart, with object distance f and image distance f); P is the real-valued image, i.e., the original data, and E is the encrypted image, i.e., the encrypted data. The phase angle information of M1, M2, and M3 is a two-dimensional, normally distributed random array with values randomly distributed in [0,1], so M1, M2, and M3 can generate random phases between [0, 2π]. In the encryption process, the random phase mask M1 is placed against the real-valued image on the front focal plane of the first lens, the random phase mask M2 is placed on the Fourier transform plane, an inverse Fourier transform is performed by the second lens, and the random phase mask M3 is placed on the back focal plane, finally yielding the encrypted image E, which is approximately wide-sense stationary white noise.
Referring to Fig. 8, which is a schematic diagram of multiple random phase optical encryption provided by the first embodiment of the present invention. The encryption formula is:
E = ift(ft(ift(ft(P × M1) × M2) × M3) × …) × Mn
where P denotes the original data, ft denotes the Fourier transform, ift denotes the inverse Fourier transform, E denotes the encrypted data (comprising the training data and the test data), and M1, M2, M3, …, Mn denote random phase masks, where n is a positive integer greater than 3. The encryption method is realized with an i-f optical system (i/2 lenses of focal length f, spaced 2f apart, with object distance f and image distance f); P is the real-valued image, i.e., the original data, and E is the encrypted image, i.e., the encrypted data. The phase angle information of M1, M2, M3, …, Mn is a two-dimensional, normally distributed random array with values randomly distributed in [0,1], so M1, M2, M3, …, Mn can generate random phases between [0, 2π]. In the encryption process, the random phase mask M1 is placed against the real-valued image on the front focal plane of the first lens, the random phase mask M2 is placed on the Fourier transform plane, an inverse Fourier transform is performed by the second lens, the random phase mask M3 is placed on the back focal plane, and similarly Mn is placed on the focal plane of the last lens, finally yielding the encrypted image E, which is approximately wide-sense stationary white noise.
In the embodiments of the present invention, random phase encryption is performed on the original data. Although the specific forms of random phase encryption are diverse, the construction method for a deep neural network model in the present invention can build decryption models that crack various types of random phase encryption, which increases the practicability of the construction method.
Referring to Fig. 9, which is a schematic structural diagram of a construction device for a deep neural network model in a second embodiment of the present invention. Specifically:
A first encryption module 10 is configured to perform random phase encryption on multiple groups of original data to obtain training data.
A training comparison module 20 is configured to train the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model and the i-th output result produced after the training data are input into the i-th deep neural network model, and to compare the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, where the initial value of i is 1 and the 0th deep neural network model is an initial model.
A first determination module 30 is configured to determine, when the i-th comparison result meets a preset convergence condition, that the i-th deep neural network model is the constructed deep neural network model.
A first return module 40 is configured to set i=i+1 and return to the training comparison module 20 when the i-th comparison result does not meet the preset convergence condition.
For a related description of this embodiment of the present invention, please refer to the related description of the first embodiment of the present invention, which is not repeated here.
In the embodiments of the present invention, because random phase encryption is applied to the original data, and the training data obtained after encryption are input into the deep neural network model with the output compared against the original data, the model is a decryption model capable of cracking random phase encryption, solving the technical problem of lacking an algorithm model that can crack random phase encryption.
Referring to Fig. 10, which is a schematic structural diagram of the refinement modules of the training comparison module 20 in the second embodiment of the present invention. Specifically:
A first reshaping module 201 is configured to input the training data into the (i-1)-th deep neural network model so that the training data undergo array reshaping in the first reshaping layer, which outputs first reshaped data; the (i-1)-th deep neural network model includes the first reshaping layer, three hidden layers, an output layer, and a second reshaping layer.
A second reshaping module 202 is configured to input the first reshaped data into the three hidden layers, each composed of several neurons, and then into the output layer, which outputs processed data, and to input the processed data into the second reshaping layer for array reshaping, which outputs second reshaped data; the activation function of each neuron is ReLU, the number of neurons in each hidden layer corresponds to the format of the first reshaped data, the second reshaped data are the (i-1)-th output result produced after the training data are input into the (i-1)-th deep neural network model, and the format of the second reshaped data is identical to that of the training data.
A calculation and update module 203 is configured to compare, based on a mean square error function and stochastic gradient descent, the second reshaped data with the original data corresponding to the training data to obtain a comparison result, and to use the comparison result to optimize and update the (i-1)-th deep neural network model, obtaining the i-th deep neural network model.
For a related description of this embodiment of the present invention, please refer to the related description of the first embodiment of the present invention, which is not repeated here.
In the embodiments of the present invention, the training data are input into the first reshaping layer, the three hidden layers, the output layer, and the second reshaping layer to obtain the second reshaped data (i.e., the (i-1)-th output result); the second reshaped data are compared with the original data corresponding to the training data to obtain a comparison result, and the comparison result is used to optimize and update the (i-1)-th deep neural network model, obtaining the i-th deep neural network model. The deep neural network model thus comes ever closer to a satisfactory decryption model, and the use of stochastic gradient descent accelerates the training of the deep neural network model, improving the training rate.
Referring to Fig. 11, which is a schematic structural diagram of a construction device for a deep neural network model in a third embodiment of the present invention. Besides the first encryption module 10, the training comparison module 20, the first determination module 30, and the first return module 40 of the second embodiment of the present invention, the device further includes:
a second encryption module 50, configured to perform random phase encryption on multiple groups of original data to obtain test data;
an input calculation module 60, configured to input the test data into the constructed deep neural network model to obtain a test output result, and to calculate the correlation between the test output result and the original data corresponding to the test data;
a second determination module 70, configured to determine that the deep neural network model is a correct decryption model when the correlation is greater than or equal to a preset correlation coefficient;
a second return module 80, configured to return to the first encryption module 10 when the correlation is less than the preset correlation coefficient.
For related descriptions of this embodiment of the present invention, please refer to the related descriptions of the first and second embodiments of the present invention, which are not repeated here.
In the embodiments of the present invention, because random phase encryption is applied to the original data, and the training data obtained after encryption are input into the deep neural network model with the output compared against the original data, the model is a decryption model capable of cracking random phase encryption, solving the technical problem of lacking an algorithm model that can crack random phase encryption. In addition, the correctness of the constructed deep neural network model is evaluated with test data, ensuring the correctness of the constructed decryption model.
Referring to Fig. 12, which is a schematic structural diagram of the refinement modules of the input calculation module 60 in the third embodiment of the present invention. Specifically:
A third reshaping module 601 is configured to input the test data into the constructed deep neural network model so that the test data undergo array reshaping in the first reshaping layer, which outputs first reshaped data; the deep neural network model includes the first reshaping layer, three hidden layers, an output layer, and a second reshaping layer.
A fourth reshaping module 602 is configured to input the first reshaped data into the three hidden layers, each composed of several neurons, and then into the output layer, which outputs processed data, and to input the processed data into the second reshaping layer for array reshaping, which outputs second reshaped data; the second reshaped data are the test output result obtained after the test data are input into the constructed deep neural network model.
A calculation module 603 is configured to calculate the correlation between the second reshaped data and the original data corresponding to the test data using a correlation coefficient function.
For related descriptions of this embodiment of the present invention, please refer to the related descriptions of the first and second embodiments of the present invention, which are not repeated here.
In the embodiments of the present invention, the correlation between the second reshaped data and the original data corresponding to the test data is calculated with a correlation coefficient function, making it convenient to judge whether the constructed deep neural network model is correct.
In addition, the calculation formula of the random phase encryption in the first encryption module 10 and the second encryption module 50 is:
E = LCT(…LCT(LCT(P × M1) × M2)… × Mn)
where E denotes the training data or the test data, LCT denotes the linear canonical transform, P denotes the original data, M1, M2, …, Mn denote random phase masks, and n is a positive integer.
For a related description of random phase encryption, please refer to the related description of the first embodiment of the present invention, which is not repeated here.
In the embodiments of the present invention, random phase encryption is performed on the original data. Although the specific forms of random phase encryption are diverse, the construction method for a deep neural network model in the present invention can build decryption models that crack various types of random phase encryption, which increases the practicability of the construction method.
It should be noted that, for the sake of simple description, the foregoing method embodiments are expressed as a series of action combinations, but those skilled in the art should understand that the present invention is not limited by the described order of actions, because according to the present invention, certain steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
In the above embodiments, the description of each embodiment has its own emphasis. For a part not described in detail in one embodiment, refer to the related descriptions of the other embodiments.
The above is a description of the construction method and device for a deep neural network model provided by the present invention. For those skilled in the art, there will be variations in the specific implementations and application scope according to the ideas of the embodiments of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (10)

1. A construction method for a deep neural network model, characterized in that the method comprises:
Step A: performing random phase encryption on multiple groups of original data to obtain training data;
Step B: training the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model and the i-th output result produced after the training data are input into the i-th deep neural network model, and comparing the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, wherein the initial value of i is 1 and the 0th deep neural network model is an initial model;
Step C: when the i-th comparison result meets a preset convergence condition, determining the i-th deep neural network model to be the constructed deep neural network model;
Step D: when the i-th comparison result does not meet the preset convergence condition, setting i=i+1 and returning to Step B.

2. The method according to claim 1, characterized in that Step B specifically comprises the following steps:
Step E: inputting the training data into the (i-1)-th deep neural network model so that the training data undergo array reshaping in a first reshaping layer, which outputs first reshaped data, wherein the (i-1)-th deep neural network model comprises the first reshaping layer, three hidden layers, an output layer, and a second reshaping layer;
Step F: inputting the first reshaped data into the three hidden layers, each composed of several neurons, and then into the output layer, which outputs processed data, and inputting the processed data into the second reshaping layer for array reshaping, which outputs second reshaped data, wherein the activation function of each neuron is a rectified linear unit, the number of neurons in each hidden layer corresponds to the format of the first reshaped data, the second reshaped data are the (i-1)-th output result produced after the training data are input into the (i-1)-th deep neural network model, and the format of the second reshaped data is identical to that of the training data;
Step G: based on a mean square error function and a stochastic gradient descent function, comparing the second reshaped data with the original data corresponding to the training data to obtain a comparison result, and using the comparison result to optimize and update the (i-1)-th deep neural network model, obtaining the i-th deep neural network model.

3. The method according to claim 1, characterized in that the following steps are further included after Step C:
Step H: performing random phase encryption on multiple groups of original data to obtain test data;
Step I: inputting the test data into the constructed deep neural network model to obtain a test output result, and calculating the correlation between the test output result and the original data corresponding to the test data;
Step J: when the correlation is greater than or equal to a preset correlation coefficient, determining that the deep neural network model is a correct decryption model;
Step K: when the correlation is less than the preset correlation coefficient, returning to Step A.

4. The method according to claim 3, characterized in that Step I specifically comprises the following steps:
Step L: inputting the test data into the constructed deep neural network model so that the test data undergo array reshaping in the first reshaping layer, which outputs first reshaped data, wherein the deep neural network model comprises the first reshaping layer, three hidden layers, an output layer, and a second reshaping layer;
Step M: inputting the first reshaped data into the three hidden layers, each composed of several neurons, and then into the output layer, which outputs processed data, and inputting the processed data into the second reshaping layer for array reshaping, which outputs second reshaped data, wherein the second reshaped data are the test output result obtained after the test data are input into the constructed deep neural network model;
Step N: calculating the correlation between the second reshaped data and the original data corresponding to the test data using a correlation coefficient function.

5. The method according to claim 4, characterized in that the calculation formula of the random phase encryption is:
E = LCT(…LCT(LCT(P × M1) × M2)… × Mn)
wherein E denotes the training data or the test data, LCT denotes the linear canonical transform, P denotes the original data, M1, M2, …, Mn denote random phase masks, and n is a positive integer.

6. A construction device for a deep neural network model, characterized in that the device comprises:
a first encryption module, configured to perform random phase encryption on multiple groups of original data to obtain training data;
a training comparison module, configured to train the (i-1)-th deep neural network model with the training data to obtain the i-th deep neural network model and the i-th output result produced after the training data are input into the i-th deep neural network model, and to compare the i-th output result with the original data corresponding to the training data to obtain the i-th comparison result, wherein the initial value of i is 1 and the 0th deep neural network model is an initial model;
a first determination module, configured to determine, when the i-th comparison result meets a preset convergence condition, that the i-th deep neural network model is the constructed deep neural network model;
a first return module, configured to set i=i+1 and return to the training comparison module when the i-th comparison result does not meet the preset convergence condition.

7. The device according to claim 6, characterized in that the training comparison module specifically comprises:
a first reshaping module, configured to input the training data into the (i-1)-th deep neural network model so that the training data undergo array reshaping in the first reshaping layer, which outputs first reshaped data, wherein the (i-1)-th deep neural network model comprises the first reshaping layer, three hidden layers, an output layer, and a second reshaping layer;
a second reshaping module, configured to input the first reshaped data into the three hidden layers, each composed of several neurons, and then into the output layer, which outputs processed data, and to input the processed data into the second reshaping layer for array reshaping, which outputs second reshaped data, wherein the activation function of each neuron is a rectified linear unit, the number of neurons in each hidden layer corresponds to the format of the first reshaped data, the second reshaped data are the (i-1)-th output result produced after the training data are input into the (i-1)-th deep neural network model, and the format of the second reshaped data is identical to that of the training data;
a calculation and update module, configured to compare, based on a mean square error function and a stochastic gradient descent function, the second reshaped data with the original data corresponding to the training data to obtain a comparison result, and to use the comparison result to optimize and update the (i-1)-th deep neural network model, obtaining the i-th deep neural network model.

8. The device according to claim 6, characterized in that the following modules are further included after the first determination module:
a second encryption module, configured to perform random phase encryption on multiple groups of original data to obtain test data;
an input calculation module, configured to input the test data into the constructed deep neural network model to obtain a test output result, and to calculate the correlation between the test output result and the original data corresponding to the test data;
a second determination module, configured to determine that the deep neural network model is a correct decryption model when the correlation is greater than or equal to a preset correlation coefficient;
a second return module, configured to return to the first encryption module when the correlation is less than the preset correlation coefficient.

9. The device according to claim 8, characterized in that the input calculation module specifically comprises:
a third reshaping module, configured to input the test data into the constructed deep neural network model so that the test data undergo array reshaping in the first reshaping layer, which outputs first reshaped data, wherein the deep neural network model comprises the first reshaping layer, three hidden layers, an output layer, and a second reshaping layer;
a fourth reshaping module, configured to input the first reshaped data into the three hidden layers, each composed of several neurons, and then into the output layer, which outputs processed data, and to input the processed data into the second reshaping layer for array reshaping, which outputs second reshaped data, wherein the second reshaped data are the test output result obtained after the test data are input into the constructed deep neural network model;
a calculation module, configured to calculate the correlation between the second reshaped data and the original data corresponding to the test data using a correlation coefficient function.

10. The device according to claim 9, characterized in that the calculation formula of the random phase encryption is:
E = LCT(…LCT(LCT(P × M1) × M2)… × Mn)
wherein E denotes the training data or the test data, LCT denotes the linear canonical transform, P denotes the original data, M1, M2, …, Mn denote random phase masks, and n is a positive integer.
CN201810465595.6A 2018-05-16 2018-05-16 A method and device for constructing a deep neural network model Active CN108921282B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810465595.6A CN108921282B (en) 2018-05-16 2018-05-16 A method and device for constructing a deep neural network model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810465595.6A CN108921282B (en) 2018-05-16 2018-05-16 A method and device for constructing a deep neural network model

Publications (2)

Publication Number Publication Date
CN108921282A true CN108921282A (en) 2018-11-30
CN108921282B CN108921282B (en) 2022-05-31

Family

ID=64404069

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810465595.6A Active CN108921282B (en) 2018-05-16 2018-05-16 A method and device for constructing a deep neural network model

Country Status (1)

Country Link
CN (1) CN108921282B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110008914A (en) * 2019-04-11 2019-07-12 杨勇 A kind of pattern recognition system neural network based and recognition methods
CN110071798A (en) * 2019-03-21 2019-07-30 深圳大学 A kind of equivalent key acquisition methods, device and computer readable storage medium
CN110428873A (en) * 2019-06-11 2019-11-08 西安电子科技大学 A kind of chromosome G banding method for detecting abnormality and detection system
CN112603345A (en) * 2020-12-02 2021-04-06 赛诺威盛科技(北京)有限公司 Model training method, multi-energy spectrum CT scanning method, device and electronic equipment
CN112697821A (en) * 2020-12-02 2021-04-23 赛诺威盛科技(北京)有限公司 Multi-energy spectrum CT scanning method and device, electronic equipment and CT equipment
CN113723604A (en) * 2020-05-26 2021-11-30 杭州海康威视数字技术股份有限公司 Neural network training method and device, electronic equipment and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800042A (en) * 2012-06-14 2012-11-28 南昌大学 Multi-image encryption method based on log-polar transform
CN104009836A (en) * 2014-05-26 2014-08-27 南京泰锐斯通信科技有限公司 Encrypted data detection method and system
US20150242747A1 (en) * 2014-02-26 2015-08-27 Nancy Packes, Inc. Real estate evaluating platform methods, apparatuses, and media
CN107358293A (en) * 2017-06-15 2017-11-17 北京图森未来科技有限公司 A kind of neural network training method and device
CN107506822A (en) * 2017-07-26 2017-12-22 天津大学 A kind of deep neural network method based on Space integration pond
WO2017221152A1 (en) * 2016-06-20 2017-12-28 Telefonaktiebolaget Lm Ericsson (Publ) Method for classifying the payload of encrypted traffic flows

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800042A (en) * 2012-06-14 2012-11-28 南昌大学 Multi-image encryption method based on log-polar transform
US20150242747A1 (en) * 2014-02-26 2015-08-27 Nancy Packes, Inc. Real estate evaluating platform methods, apparatuses, and media
CN104009836A (en) * 2014-05-26 2014-08-27 南京泰锐斯通信科技有限公司 Encrypted data detection method and system
WO2017221152A1 (en) * 2016-06-20 2017-12-28 Telefonaktiebolaget Lm Ericsson (Publ) Method for classifying the payload of encrypted traffic flows
CN107358293A (en) * 2017-06-15 2017-11-17 北京图森未来科技有限公司 A kind of neural network training method and device
CN107506822A (en) * 2017-07-26 2017-12-22 天津大学 A kind of deep neural network method based on Space integration pond

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Pan Weizhou et al., "Baidu Map coordinate decryption method based on artificial neural networks", Computer Engineering and Applications *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110071798A (en) * 2019-03-21 2019-07-30 深圳大学 A kind of equivalent key acquisition methods, device and computer readable storage medium
CN110071798B (en) * 2019-03-21 2022-03-04 深圳大学 An equivalent key acquisition method, device and computer-readable storage medium
CN110008914A (en) * 2019-04-11 2019-07-12 杨勇 A kind of pattern recognition system neural network based and recognition methods
CN110428873A (en) * 2019-06-11 2019-11-08 西安电子科技大学 A kind of chromosome G banding method for detecting abnormality and detection system
CN110428873B (en) * 2019-06-11 2021-07-23 西安电子科技大学 A kind of chromosome multiple abnormality detection method and detection system
CN113723604A (en) * 2020-05-26 2021-11-30 杭州海康威视数字技术股份有限公司 Neural network training method and device, electronic equipment and readable storage medium
CN113723604B (en) * 2020-05-26 2024-03-26 杭州海康威视数字技术股份有限公司 Neural network training method and device, electronic equipment and readable storage medium
CN112603345A (en) * 2020-12-02 2021-04-06 赛诺威盛科技(北京)有限公司 Model training method, multi-energy spectrum CT scanning method, device and electronic equipment
CN112697821A (en) * 2020-12-02 2021-04-23 赛诺威盛科技(北京)有限公司 Multi-energy spectrum CT scanning method and device, electronic equipment and CT equipment
CN112697821B (en) * 2020-12-02 2022-12-02 赛诺威盛科技(北京)股份有限公司 Multi-energy spectrum CT scanning method and device, electronic equipment and CT equipment

Also Published As

Publication number Publication date
CN108921282B (en) 2022-05-31

Similar Documents

Publication Publication Date Title
CN108921282A (en) A kind of construction method and device of deep neural network model
CN110460600B (en) Joint deep learning method capable of resisting generation of counterattack network attacks
CN111259443B (en) PSI (program specific information) technology-based method for protecting privacy of federal learning prediction stage
CN109165515A (en) Model parameter acquisition methods, system and readable storage medium storing program for executing based on federation's study
CN110490128A (en) A kind of hand-written recognition method based on encryption neural network
CN113761557A (en) Multi-party deep learning privacy protection method based on fully homomorphic encryption algorithm
CN109120652A (en) It is predicted based on difference WGAN network safety situation
JP2007087379A (en) Method of classifying data by computer and method of classifying by computer
CN115001651B (en) A multi-party computing method based on fully homomorphic encryption applicable to semi-honest models
CN114841363B (en) A privacy-preserving and verifiable federated learning method based on zero-knowledge proof
CN111832074A (en) Collaborative learning method and system for secure verification based on SPDZ secure multi-party computation
CN112949865A (en) Sigma protocol-based federal learning contribution degree evaluation method
CN113761217A (en) Artificial intelligence-based question set data processing method and device and computer equipment
CN116467736A (en) A verifiable privacy-preserving federated learning method and system
CN117150547A (en) A blockchain-based federated learning method suitable for privacy data protection in the medical industry
Pathak et al. Privacy Preserving Speaker Verification Using Adapted GMMs.
CN114363043A (en) Asynchronous federated learning method based on verifiable aggregation and differential privacy in peer-to-peer network
CN114386071A (en) Decentered federal clustering method and device, electronic equipment and storage medium
CN117094412A (en) Federal learning method and device aiming at non-independent co-distributed medical scene
CN117493877A (en) Hospital privacy data noise adding and optimizing protection method based on federal learning
CN115333746A (en) A GPU-based multi-party secure computing method, system and electronic equipment
CN108259180B (en) Method for quantum specifying verifier signature
CN118074917A (en) Mixed side channel attack method of Dilithium signature algorithm
CN117521102A (en) Model training method and device based on federal learning
WO2019218243A1 (en) Method and device for constructing deep neural network model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220930
Address after: 620000 unit 1, building 3, Tianfu Yuncheng district a, south of the fast track around Tianfu new area, Shigao street, Renshou County, Meishan City, Sichuan Province
Patentee after: Sichuan Hisai Digital Technology Group Co.,Ltd.
Address before: No. 5, Floor 2, Building 16, No. 69, North Section of East Yangliu Road, Liucheng Town, Wenjiang District, Chengdu, Sichuan 610000
Patentee before: Chengdu Qizhizhi Intellectual Property Operation Co.,Ltd.

Effective date of registration: 20220930
Address after: No. 5, Floor 2, Building 16, No. 69, North Section of East Yangliu Road, Liucheng Town, Wenjiang District, Chengdu, Sichuan 610000
Patentee after: Chengdu Qizhizhi Intellectual Property Operation Co.,Ltd.
Address before: 518060 No. 3688 Nanhai Road, Shenzhen, Guangdong, Nanshan District
Patentee before: SHENZHEN University