CN111882029A - Data processing method and device - Google Patents

Data processing method and device

Info

Publication number
CN111882029A
CN111882029A (application number CN202010575625.6A)
Authority
CN
China
Prior art keywords
data
expanded
processed
convolution kernel
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010575625.6A
Other languages
Chinese (zh)
Inventor
陈琨
顾欣然
王蜀洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huakong Tsingjiao Information Technology Beijing Co Ltd
Original Assignee
Huakong Tsingjiao Information Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huakong Tsingjiao Information Technology Beijing Co Ltd filed Critical Huakong Tsingjiao Information Technology Beijing Co Ltd
Priority to CN202010575625.6A
Publication of CN111882029A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/602Providing cryptographic facilities or services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Abstract

The embodiment of the application provides a data processing method and apparatus, where the apparatus is suitable for a ciphertext computing environment of a convolution operator and comprises an expansion module and an operation module. The expansion module is configured to expand the data to be processed and the convolution kernel data respectively to obtain expanded data to be processed and expanded convolution kernel data, where the size of the expanded data to be processed is the same as that of the expanded convolution kernel data. The operation module is configured to operate on the elements at every pair of corresponding positions in the expanded data to be processed and the expanded convolution kernel data to obtain output data. The application replaces the traditional convolutional neural network with an additive neural network; by substituting faster addition operations for the large number of multiplication operations in the convolutional neural network, the operation speed of the model is significantly accelerated, which in turn increases the speed at which the data processing apparatus processes data.

Description

Data processing method and device
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a data processing method and apparatus.
Background
Convolutional neural networks are important models in deep learning. The convolution layer of the core structure of the convolution neural network can realize effective feature extraction by sliding different convolution kernels on an input image and carrying out certain operation to obtain a feature map of the input image. At present, the convolutional neural network has wide application in the fields of image classification, object detection and the like.
In the related art, in order to make better use of artificial intelligence and data mining techniques, a large amount of data is often acquired from multiple data sources, and a data source provider generally encrypts the data it provides to avoid the risk of data leakage. Because the convolutional neural network involves a large number of parameters and complex operations, directly training an existing convolutional neural network with encrypted data is difficult. Therefore, how to train a convolutional neural network in a ciphertext environment has become an urgent problem to be solved.
Disclosure of Invention
The embodiment of the application provides a data processing method and device, which can directly train a convolutional neural network by using encrypted data in a ciphertext environment and can obviously improve the model training speed.
A first aspect of the embodiments of the present application provides a data processing apparatus, where the apparatus is applied to a ciphertext computing environment of a convolution operator, and the apparatus includes: the device comprises an expansion module and an operation module;
the expansion module is configured to: expand the data to be processed and the convolution kernel data respectively to obtain expanded data to be processed and expanded convolution kernel data, where the size of the expanded data to be processed is the same as that of the expanded convolution kernel data;
the operation module is configured to: operate on the elements at every pair of corresponding positions in the expanded data to be processed and the expanded convolution kernel data to obtain output data.
Optionally, the operation module includes: the device comprises a subtracter, an absolute value arithmetic unit, an adder, a negation device and a dimension adjusting module;
the subtractor is configured to: subtracting the elements at every two corresponding positions in the expanded to-be-processed data and the expanded convolution kernel data to obtain corresponding difference data;
the absolute value operator is to: calculating an absolute value of each element of the difference data to obtain corresponding absolute value data;
the adder is configured to: adding each element of the absolute value data in the same preset dimension to obtain corresponding sum value data;
the inverter is used for: performing negation operation on each element of the sum value data to obtain corresponding opposite number data;
the dimension adjustment module is configured to: and carrying out dimension adjustment on the phase inversion data to obtain the output data.
Optionally, the extension module is configured to:
copying the data to be processed for corresponding times according to the number of convolution kernels to obtain the expanded data to be processed; and
and copying the convolution kernel data for corresponding times according to the sample data size and the size of the convolution kernel to obtain the expanded convolution kernel data.
Optionally, the apparatus further comprises: a pre-processing module to:
intercepting original data to be processed according to the size of a convolution kernel, sequentially arranging a plurality of intercepted elements into a line, and forming the data to be processed by a plurality of lines of elements; and
and sequentially arranging all elements in the original convolution kernel data into a line to obtain the convolution kernel data.
Optionally, the apparatus further comprises: a configuration module and a division module;
the configuration module is configured to: configuring the number of groups included in a single training batch;
the dividing module is configured to: dividing the expanded data to be processed and the expanded convolution kernel data into a plurality of groups of expanded sub data to be processed and a plurality of groups of expanded convolution kernel data respectively according to the number of training batches and the number of groups;
the operation module is used for: and calculating the single expanded to-be-processed subdata and the single expanded convolution kernel subdata corresponding to the group number, and summarizing the operation results of the group number to obtain the output data.
Optionally, the apparatus further comprises: a memory;
the memory is to: storing the plurality of groups of expanded to-be-processed subdata and the plurality of groups of expanded convolution kernel subdata;
the configuration module is configured to: configuring the number of groups included in a single training batch according to the operation memory capacity of equipment for operating the data processing device;
the operation module is further configured to read the set of expanded to-be-processed sub data and the at least one set of expanded convolution kernel data into an operation memory of the device.
Optionally, the apparatus further comprises: an adjustment module;
the adjusting module is used for adjusting the number of the training batches and adjusting the number of groups included in a single training batch according to the adjusted number of the training batches.
Optionally, the apparatus further comprises: a transceiver module;
the receiving and transmitting module is used for receiving data in a ciphertext form to be processed by the convolution arithmetic unit and transmitting the data in the ciphertext form to the outside.
A second aspect of the embodiments of the present application provides a data processing method, which is applied to a data processing apparatus, where the apparatus is suitable for a ciphertext computing environment of a convolution operator, and the method includes:
respectively expanding the data to be processed and the convolution kernel data to obtain expanded data to be processed and expanded convolution kernel data, wherein the size of the expanded data to be processed is the same as that of the expanded convolution kernel data;
and operating on the elements at every pair of corresponding positions in the expanded data to be processed and the expanded convolution kernel data to obtain output data.
Optionally, the calculating the elements at every two corresponding positions in the extended to-be-processed data and the extended convolution kernel data to obtain output data includes:
subtracting the elements at every two corresponding positions in the expanded to-be-processed data and the expanded convolution kernel data to obtain corresponding difference data;
calculating an absolute value of each element of the difference data to obtain corresponding absolute value data;
adding each element of the absolute value data in the same preset dimension to obtain corresponding sum value data;
performing negation operation on each element of the sum value data to obtain corresponding opposite number data;
and carrying out dimension adjustment on the phase inversion data to obtain the output data.
Optionally, the expanding the data to be processed and the convolution kernel data respectively to obtain expanded data to be processed and expanded convolution kernel data includes:
copying the data to be processed for corresponding times according to the number of convolution kernels to obtain the expanded data to be processed; and
and copying the convolution kernel data for corresponding times according to the sample data size and the size of the convolution kernel to obtain the expanded convolution kernel data.
Optionally, the method further comprises:
configuring the number of groups included in a single training batch;
dividing the expanded data to be processed and the expanded convolution kernel data into a plurality of groups of expanded sub data to be processed and a plurality of groups of expanded convolution kernel data respectively according to the number of training batches and the number of groups;
calculating the elements at every two corresponding positions in the extended to-be-processed data and the extended convolution kernel data to obtain output data, wherein the calculating comprises the following steps:
and calculating the single expanded to-be-processed subdata and the single expanded convolution kernel subdata corresponding to the group number, and summarizing the operation results of the group number to obtain the output data.
Optionally, configuring the number of groups included in a single training batch includes:
configuring the number of groups included in a single training batch according to the operation memory capacity of equipment for operating the data processing device;
after dividing the expanded data to be processed and the expanded convolution kernel data into a plurality of groups of expanded sub data to be processed and a plurality of groups of expanded convolution kernel data, the method further comprises:
and storing the plurality of groups of expanded to-be-processed subdata and the plurality of groups of expanded convolution kernel subdata, and reading one group of expanded to-be-processed subdata and at least one group of expanded convolution kernel subdata into an operation memory of the equipment.
Optionally, the method further comprises:
and adjusting the number of the training batches, and adjusting the number of groups included in a single training batch according to the adjusted number of the training batches.
Optionally, the method further comprises:
and receiving data in a ciphertext form to be processed by the convolution arithmetic unit, and sending the data in the ciphertext form to the outside.
According to the data processing method implemented by the data processing apparatus provided by the embodiment of the application, the data to be processed and the convolution kernel data are expanded respectively to obtain expanded data to be processed and expanded convolution kernel data, where the size of the expanded data to be processed is the same as that of the expanded convolution kernel data. Then, the elements at every pair of corresponding positions in the expanded data to be processed and the expanded convolution kernel data are operated on to obtain output data. On one hand, the application replaces the traditional convolutional neural network with an additive neural network, substituting faster addition operations for the large number of multiplication operations in the convolutional neural network, which significantly accelerates the operation speed of the model and thus increases the data processing speed of the data processing apparatus. On the other hand, the application improves the additive neural network and trains the improved network with encrypted sample data; by performing the multiple rounds of absolute value operations of the additive neural network in a single round of operations, it further accelerates the operation speed of the model and the data processing speed of the apparatus, so that fast operation is achieved even when the processed data is ciphertext data.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the description of the embodiments of the present application will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive exercise.
FIG. 1 is a flow chart illustrating a method of data processing according to an embodiment of the present application;
FIG. 2 is a flow diagram illustrating a method of expanding pending data and convolution kernel data according to an embodiment of the present application;
FIG. 3 is a flow chart illustrating a method of obtaining output data according to one embodiment of the present application;
fig. 4 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In some joint training (training of a model is done by multiple different users together) scenarios, sample data often needs to be obtained from multiple data sources. Some users encrypt the provided sample data to avoid the risk of data leakage, so that other users need to perform model training by using the encrypted sample data during training. For example, a user a, a user B, and a user C perform joint training on a model X, the user B may use sample data provided by the user a and the user C when training a part of the model for which the user B is responsible, and the user a and the user C may provide encrypted sample data to the user B to avoid a risk of data leakage, so that the user B must train the model based on the encrypted sample data. The traditional convolutional neural network involves a large number of parameters and complex operations, and the traditional convolutional neural network is trained by directly utilizing encrypted data, so that the traditional convolutional neural network is difficult and needs to spend large expenses.
In order to overcome the existing problems, the method adopts an addition neural network to replace the traditional convolution neural network, and realizes the training of the model under the ciphertext environment by utilizing the characteristics of low energy consumption and high operation speed of the addition neural network. The core idea of the addition neural network is to replace multiplication in the traditional convolution operation with addition, and in secret sharing, semi-homomorphic encryption and other multi-party secure computing protocols, the cost of addition is obviously less than that of multiplication, so that in a ciphertext environment, compared with the traditional convolution neural network, the addition neural network has lower energy consumption and higher training speed. A brief description of the additive neural network will be given below.
Addition, subtraction, multiplication, and division are the four most basic operations in mathematics, and multiplication is well known to be slower than addition. The convolution operation of a conventional deep convolutional neural network, however, computes the cross-correlation between the input features and the convolution kernel, a process that involves a large number of multiplications, making the operation cost of the neural network high. An additive neural network is a neural network that contains almost no multiplication. Unlike a convolutional neural network, an additive neural network uses the L1 distance to measure the correlation between the input features and the convolution kernels in the neural network. Because the L1 distance involves only addition and subtraction, the large number of multiplication operations in the traditional convolutional neural network are replaced with addition and subtraction operations, greatly reducing the calculation cost of the neural network. In addition, the additive neural network also designs an improved gradient calculation scheme with an adaptive learning rate, so as to ensure the optimization speed of the convolution kernels and better network convergence.
Assume the given neural network's convolution kernel is F, with size c_in × c_out × d × d, where d is the side length of the convolution kernel, c_in is the number of input channels, and c_out is the number of output channels. Assume the input feature is X, with size c_in × H × W, where H is the height of the input feature, W is its width, and c_in is its number of channels. Then, according to the conventional convolutional neural network, the value of the output feature Y at the t-th channel, row m, column n should be:

$$Y(m,n,t)=\sum_{i=0}^{d}\sum_{j=0}^{d}\sum_{k=0}^{c_{in}} S\big(X(m+i,\,n+j,\,k),\,F(i,\,j,\,k,\,t)\big)$$

where S(X, F) is a distance metric function; when cross-correlation is used as the distance metric, S(X, F) = X × F.

According to the additive neural network, instead of the cross-correlation, a metric function involving only addition, namely the L1 distance, replaces the convolution calculation of the convolutional neural network. Using the L1 distance, the value of the output feature Y at the t-th channel, row m, column n should be:

$$Y(m,n,t)=-\sum_{i=0}^{d}\sum_{j=0}^{d}\sum_{k=0}^{c_{in}} \big|\,X(m+i,\,n+j,\,k)-F(i,\,j,\,k,\,t)\,\big|$$
obviously, the addition neural network measures the similarity between the input features and the convolution kernels by using the L1 distance, which can achieve the effect of extracting features in the neural network by using addition only.
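The contrast between the two similarity measures can be seen in a toy sketch (illustrative Python/NumPy, not part of the patent text): cross-correlation scores a window against a kernel with multiplications, while the additive network's negative L1 distance uses only subtraction, absolute value, and addition.

```python
import numpy as np

# One flattened convolution window and one flattened kernel (toy values).
window = np.array([1.0, 2.0, -1.0])
kernel = np.array([0.5, 1.0, 1.0])

# Conventional convolution: cross-correlation, built from multiplications.
cross_corr = np.sum(window * kernel)

# Additive network: negative L1 distance, built from subtraction/addition only.
neg_l1 = -np.sum(np.abs(window - kernel))

print(cross_corr, neg_l1)  # 1.5 -3.5
```

In both cases a larger value means the window is more similar to the kernel, which is why the L1 distance is negated.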
In the additive neural network, the back-propagated gradients and learning rates are adjusted accordingly. Specifically, the partial derivative of the output feature Y with respect to the convolution kernel F is:

$$\frac{\partial Y(m,n,t)}{\partial F(i,j,k,t)} = X(m+i,\,n+j,\,k)-F(i,\,j,\,k,\,t)$$
the partial derivative of the output characteristic Y to the input characteristic X is:
Figure BDA0002551259390000081
where the HT(·) function is:

$$\mathrm{HT}(x)=\begin{cases} x, & -1<x<1\\ 1, & x\ge 1\\ -1, & x\le -1 \end{cases}$$
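The HT(·) function is a hard-tanh clipping of its argument to the interval [-1, 1]; a minimal sketch (illustrative, assuming NumPy):

```python
import numpy as np

def ht(x):
    """HT(x): pass x through unchanged inside (-1, 1), clip to +/-1 outside."""
    return np.clip(x, -1.0, 1.0)

print(ht(np.array([-2.5, -0.3, 0.0, 0.7, 4.0])))  # [-1.  -0.3  0.   0.7  1. ]
```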
Let γ be the global learning rate of the whole neural network, ΔL(F_l) the gradient of the convolution kernel of layer l, and α_l the learning rate of layer l. Then the learning rate of layer l is:

$$\alpha_l=\frac{\eta\sqrt{k}}{\|\Delta L(F_l)\|_2}$$

where k is the number of parameters in that layer, and η is a user-set training hyper-parameter.
It can be seen that the additive neural network measures the similarity between the input features and the convolution kernel using the L1 distance. In neural networks, most multiplication operations come from convolution operations. If the size of the output features is H′ × W′ × c_out, then in a single convolutional layer the additive convolution operation replaces as many as H′ × W′ × c_out × d × d × c_in multiplication operations with addition operations, which significantly reduces the training cost of the neural network in a ciphertext environment.
When training the additive neural network, let the data size of the sample data be n_X and the number of convolution kernels be n_F. Then the input feature X has size n_X × c_in × H × W and the convolution kernel F has size n_F × c_in × d × d. Applying the im2col algorithm yields X_col and F_col, where X_col has size n_X × (H′W′) × (d²c_in) and F_col has size n_F × (d²c_in). Computing, pairwise, the negative of the L1 distance between the third dimension of X_col and the second dimension of F_col produces the output tensor Y. That is, if the output tensor Y has size n_X × n_F × H′ × W′, then Y[n, c, i, j] is the negative of the L1 distance between X_col[n, W′i + j, :] and F_col[c, :] (n, c, i, j are all counted from 0).
Here, im2col is a matrix-processing algorithm commonly used in convolution operations. When convolving a matrix, convolution regions of a given size must be cut out of the matrix at a specific stride. The algorithm expands each window processed by the convolution kernel into one row of a new matrix, whose number of rows equals the number of convolution operations.
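The im2col expansion described above can be sketched as follows (an illustrative NumPy version; the function name, stride handling, and absence of padding are assumptions, not the patent's exact implementation):

```python
import numpy as np

def im2col(x, d, stride=1):
    """Unfold a (c_in, H, W) input into a matrix whose rows are flattened
    d x d windows, so a convolution becomes a row-wise operation against
    a flattened kernel."""
    c_in, H, W = x.shape
    H_out = (H - d) // stride + 1
    W_out = (W - d) // stride + 1
    rows = []
    for i in range(H_out):
        for j in range(W_out):
            patch = x[:, i * stride:i * stride + d, j * stride:j * stride + d]
            rows.append(patch.reshape(-1))  # one window -> one row
    return np.stack(rows)  # shape: (H_out * W_out, d * d * c_in)

x = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)  # c_in=2, H=W=4
cols = im2col(x, d=3)
print(cols.shape)  # (4, 18): 2x2 output positions, 3*3*2 elements per row
```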
Therefore, under the ciphertext environment, the addition neural network can obviously improve the operation speed, and further reduce the training cost of the neural network.
In a practical implementation, the n_F × H′ × W′ rounds of L1 distance calculations still take a long time, so this application further improves the conventional additive neural network. The specific idea is to perform all absolute value operations of the calculation in a single round of operations so as to increase the operation speed. The improved additive neural network is described in detail below.
Assume the input feature X has size n_X × c_in × H × W, where n_X is the data size of the sample data, and determine that the convolution kernel F to be used has size n_F × c_in × d × d. First, apply the im2col operation to the input feature X and the convolution kernel F respectively to obtain X_col and F_col, where X_col has size n_X × (H′W′) × (d²c_in) and F_col has size n_F × (d²c_in). Next, copy X_col and F_col: replicate X_col n_F times to obtain X_repeat, of size n_X × n_F × (H′W′) × (d²c_in), and replicate F_col n_X × H′W′ times to obtain F_repeat, of the same size n_X × n_F × (H′W′) × (d²c_in). Then subtract the corresponding elements of X_repeat and F_repeat, take absolute values, sum along rows, and negate, obtaining the output Y of size n_X × n_F × (H′W′). Since the output Y is the result of an im2col calculation, the final output is obtained by rearranging Y back into the arrangement it had before the im2col calculation. Finally, the trained parameters are adjusted according to the back-propagation gradient and learning rate of the conventional additive neural network.
After the introduction of the conventional adder neural network and the improved adder neural network, the data processing method proposed in the present application will be described in detail below. The data processing method is applied to a data processing device, the data processing device is obtained by training an improved addition neural network by using encrypted sample data, and the data processing device is suitable for a ciphertext computing environment of a convolution arithmetic unit, and can realize quick operation on ciphertext data after the ciphertext data is input.
Fig. 1 is a flowchart illustrating a data processing method according to an embodiment of the present application. Referring to fig. 1, the data processing method provided by the present application may include the following steps:
step S11: and respectively expanding the data to be processed and the convolution kernel data to obtain expanded data to be processed and expanded convolution kernel data, wherein the size of the expanded data to be processed is the same as that of the expanded convolution kernel data.
In this embodiment, the data to be processed and the convolution kernel data are data obtained after the im2col operation. For convenience in the subsequent description of the embodiments, this application uses X_col to denote the data to be processed and F_col to denote the convolution kernel data. The purpose of expanding the data to be processed and the convolution kernel data to the same size is to ensure that every pair of elements at corresponding positions in the expanded data to be processed and the expanded convolution kernel data can be operated on; in other words, when the data to be processed and the convolution kernel data have the same size, the elements at every pair of corresponding positions can be operated on.
Illustratively, when the data to be processed X_col has size n_X × (H′W′) × (d²c_in) and the convolution kernel data F_col has size n_F × (d²c_in), expanding X_col yields data of size n_X × n_F × (H′W′) × (d²c_in), and expanding F_col likewise yields data of size n_X × n_F × (H′W′) × (d²c_in); the expanded data to be processed and the expanded convolution kernel data are therefore the same size.
In one embodiment, step S11 may be implemented by taking the steps shown in fig. 2, where fig. 2 is a flowchart illustrating a method for expanding pending data and convolution kernel data according to an embodiment of the present application. Referring to fig. 2, step S11 may specifically include:
step S111: copying the data to be processed for corresponding times according to the number of convolution kernels to obtain the expanded data to be processed; and
step S112: and copying the convolution kernel data for corresponding times according to the sample data size and the size of the convolution kernel to obtain the expanded convolution kernel data.
Illustratively, when the data to be processed X_col has size n_X × (H′W′) × (d²c_in) and the number of convolution kernels is n_F, replicating X_col n_F times yields the copied data to be processed, of size n_X × n_F × (H′W′) × (d²c_in).

When the convolution kernel data F_col has size n_F × (d²c_in), the data size of the sample data is n_X, and the output size determined by the convolution kernel is H′W′, replicating F_col n_X × H′W′ times yields the copied convolution kernel data, of size n_X × n_F × (H′W′) × (d²c_in).
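The two replication steps, together with the subtract/abs/sum/negate operation that follows them, can be sketched on toy sizes with NumPy's repeat and tile (the sizes and variable names here are illustrative assumptions):

```python
import numpy as np

n_X, n_F, HW, d2c = 2, 3, 4, 5  # toy: samples, kernels, H'W', d^2 * c_in
X_col = np.arange(n_X * HW * d2c, dtype=float).reshape(n_X, HW, d2c)
F_col = np.arange(n_F * d2c, dtype=float).reshape(n_F, d2c)

# Copy X_col once per convolution kernel -> (n_X, n_F, H'W', d^2*c_in).
X_repeat = np.repeat(X_col[:, None, :, :], n_F, axis=1)
# Copy F_col n_X * H'W' times -> the same shape.
F_repeat = np.tile(F_col[None, :, None, :], (n_X, 1, HW, 1))

# Subtract, take absolute values, sum along the last axis, negate.
Y = -np.abs(X_repeat - F_repeat).sum(axis=-1)
print(X_repeat.shape, F_repeat.shape, Y.shape)  # (2, 3, 4, 5) (2, 3, 4, 5) (2, 3, 4)
```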
Step S12: operating on the elements at every pair of corresponding positions in the expanded data to be processed and the expanded convolution kernel data to obtain output data.
In this embodiment, X_repeat denotes the expanded data to be processed and F_repeat denotes the expanded convolution kernel data. When X_repeat and F_repeat are of the same size m × n, each position in X_repeat corresponds to exactly one position in F_repeat; that is, row i, column j of X_repeat corresponds to row i, column j of F_repeat. For example, when X_repeat and F_repeat are both 4 × 4 matrices, the first row, first column of X_repeat corresponds to the first row, first column of F_repeat, and the second row, first column of X_repeat corresponds to the second row, first column of F_repeat.
In one embodiment, step S12 may be implemented by taking the steps shown in fig. 3, and fig. 3 is a flowchart illustrating a method for obtaining output data according to an embodiment of the present application. Referring to fig. 3, step S12 may include:
step S121: and subtracting the elements at every two corresponding positions in the expanded to-be-processed data and the expanded convolution kernel data to obtain corresponding difference data.
Step S122: and calculating an absolute value of each element of the difference data to obtain corresponding absolute value data.
Step S123: and performing addition operation on each element of the absolute value data in the same preset dimension to obtain corresponding sum value data.
Step S124: and performing negation operation on each element of the sum value data to obtain corresponding opposite number data.
Step S125: and carrying out dimension adjustment on the inverse data to obtain the output data.
For example, let XrepeatAs follows:
2 1 0 1
1 0 2 0
0 2 3 1
1 0 2 1
let FrepeatAs follows:
-1 0 1 1
-1 0 1 1
-1 0 1 1
-1 0 1 1
for XrepeatAnd FrepeatThe corresponding position in (1) can be calculated as follows according to the above steps S121 to S125:
First, taking as an example the element 2 at the first row and first column of X_repeat and the element -1 at the first row and first column of F_repeat, the difference between 2 and -1 is calculated to obtain 3. Computing the difference between the elements at every two corresponding positions in turn yields the following difference table:
3 1 -1 0
2 0 1 -1
1 2 2 0
2 0 1 0
then, an absolute value is obtained for each element in the obtained difference table, and the following absolute value table is obtained:
3 1 1 0
2 0 1 1
1 2 2 0
2 0 1 0
then, the elements in the absolute value table are summed by row to obtain the following sum table:
5
4
5
3
then, each element in the sum table is subjected to negation operation, and the following inverse data table is obtained:
-5
-4
-5
-3
finally, rearranging the inverse data table to obtain the following output data:
-5 -4
-5 -3
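The worked example above (steps S121 to S125) can be reproduced with a short NumPy sketch; the array values are taken from the 4 × 4 example in this embodiment, and the final reshape to 2 × 2 corresponds to the dimension adjustment of step S125 (NumPy is used here purely for illustration and is not part of the described apparatus):

```python
import numpy as np

# 4x4 example data from the embodiment above
X_repeat = np.array([[2, 1, 0, 1],
                     [1, 0, 2, 0],
                     [0, 2, 3, 1],
                     [1, 0, 2, 1]])
F_repeat = np.array([[-1, 0, 1, 1]] * 4)

diff = X_repeat - F_repeat        # step S121: element-wise subtraction
abs_data = np.abs(diff)           # step S122: absolute value
sum_data = abs_data.sum(axis=1)   # step S123: add along the preset (row) dimension
neg_data = -sum_data              # step S124: negation
output = neg_data.reshape(2, 2)   # step S125: dimension adjustment
print(output)                     # [[-5 -4]
                                  #  [-5 -3]]
```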
according to the data processing method provided by the embodiment of the application, firstly, the data to be processed and the convolution kernel data are respectively expanded to obtain the expanded data to be processed and the expanded convolution kernel data, wherein the size of the expanded data to be processed is the same as that of the expanded convolution kernel data. And then, calculating elements at every two corresponding positions on the expanded convolution kernel data in the expanded data to be processed to obtain output data. The method replaces the traditional convolutional neural network with the additive neural network, and replaces a large amount of multiplication operations in the convolutional neural network with addition operations with higher speed, so that the operation speed of the model can be obviously increased, and the data processing speed of the data processing device is further improved. Secondly, the method improves the addition neural network, trains the improved addition neural network by using the encrypted sample data, and further accelerates the operation speed of the model and the data processing speed of the data processing device by putting multiple rounds of absolute value operations in the addition neural network into one round of operations, so that the quick operation of the ciphertext data can be realized even if the processed data is the ciphertext data.
With reference to the foregoing embodiments, in an implementation manner, when training the additive neural network, in order to accelerate the model training and reduce the memory usage, the sample data may be randomly divided into a plurality of fixed-size batches (batch), and the model may be trained in batches.
When training the additive neural network, the size of each batch is typically set to a power of 2. For example, if the data size of the total sample data is 100000 and each batch holds 256 samples, the sample data may be divided into 391 batches, where the last batch holds the remaining 160 samples (without affecting the training effect). The number of training batches can be set according to actual requirements, and this embodiment does not specifically limit this.
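The batch arithmetic in this example comes down to a ceiling division plus a remainder; a minimal sketch, using the 100000-sample figures above:

```python
import math

total_samples = 100000
batch_size = 256  # typically a power of 2

# Ceiling division gives the number of batches; the last batch
# holds whatever remains after the full-size batches.
n_batches = math.ceil(total_samples / batch_size)
last_batch = total_samples - (n_batches - 1) * batch_size

print(n_batches, last_batch)  # 391 160
```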
In combination with the above embodiments, in an implementation, after dividing the sample data into a plurality of batches, the neural network may be trained using a single batch of sample data. In this process, if the data size of a single batch is large, the sample data of that batch may be further divided into multiple groups, and the groups are trained in turn. For example, when the data size of the sample data of a single batch is n_X, the sample data may be divided into n_split groups, with n_mini_size sample data in each group.
Specifically, the data processing method of the present application may further include the steps of:
the number of groups included in a single training batch is configured.
And dividing the expanded data to be processed and the expanded convolution kernel data into a plurality of groups of expanded sub data to be processed and a plurality of groups of expanded convolution kernel data respectively according to the number of the training batches and the number of the groups.
Correspondingly, the operating the elements at every two corresponding positions in the extended to-be-processed data and the extended convolution kernel data to obtain output data may include:
and calculating the single expanded to-be-processed subdata and the single expanded convolution kernel subdata corresponding to the group number, and summarizing the operation results of the group number to obtain the output data.
In this embodiment, the expanded data to be processed X_repeat and the expanded convolution kernel data F_repeat may be divided according to the data size of the total sample data, the number of training batches, and the number of groups included in a single training batch.
For example, if the data size of the total sample data is 512, the number of training batches is 2, and the number of groups included in a single training batch is 2, then the number of sample data of the first batch obtained by dividing is 256, and the number of sample data of the second batch is 256. In the sample data of the first batch, the further divided groups may further include a first batch first group and a first batch second group, where the data size of the sample data of the first batch first group is 128, and the data size of the sample data of the first batch second group is 128. In the second batch of sample data, the further divided groups may further include a second batch first group and a second batch second group, where the data size of the sample data of the second batch first group is 128, and the data size of the sample data of the second batch second group is 128.
In this embodiment, after the expanded data to be processed X_repeat and the expanded convolution kernel data F_repeat have been calculated, X_repeat and F_repeat may each be divided into n_split groups, with n_mini_size data items in each group. The size of each group of X_repeat and F_repeat thus uniformly becomes n_mini_size × n_F × (H′W′) × (d²·c_in).
Illustratively, when the value of n_split is 2, the expanded data to be processed X_repeat is grouped to obtain the first group of expanded sub-data to be processed X_repeat1 and the second group X_repeat2, and F_repeat is grouped to obtain the first group of expanded convolution kernel sub-data F_repeat1 and the second group F_repeat2. In the calculation, X_repeat1 is operated on with F_repeat1, and X_repeat2 is operated on with F_repeat2. Since operating on X_repeat1 and F_repeat1 yields a first result, and operating on X_repeat2 and F_repeat2 yields a second result, the first result and the second result are summarized to obtain the total output data.
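A minimal NumPy sketch of this grouping-and-summarizing scheme, reusing the 4 × 4 example data from the earlier embodiment (the function name grouped_adder_conv is illustrative, not from the specification):

```python
import numpy as np

def grouped_adder_conv(X_repeat, F_repeat, n_split):
    # Split both operands into n_split groups along the row axis,
    # operate on each pair of groups, then summarize the results.
    results = []
    for Xg, Fg in zip(np.array_split(X_repeat, n_split),
                      np.array_split(F_repeat, n_split)):
        results.append(-np.abs(Xg - Fg).sum(axis=1))
    return np.concatenate(results)

# The 4x4 example data from above, with n_split = 2
Xr = np.array([[2, 1, 0, 1], [1, 0, 2, 0], [0, 2, 3, 1], [1, 0, 2, 1]])
Fr = np.array([[-1, 0, 1, 1]] * 4)
result = grouped_adder_conv(Xr, Fr, 2)
print(result)  # [-5 -4 -5 -3]
```

Grouping changes only how the work is staged, not the result: concatenating the per-group outputs reproduces the ungrouped computation exactly.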
In actual implementation, the value of n_split may be set according to the computing power of the hardware device running the data processing apparatus. In addition, because the number of parameters and the input size differ from layer to layer of the neural network, the value of n_split may be dynamically adjusted according to the structure of the neural network. This embodiment does not specifically limit the setting of the value of n_split.
In this embodiment, the computing performance of the hardware device is taken into account: the sample data of a single batch is divided into multiple groups and trained group by group, so that the data processing method can be implemented on hardware devices of different performance levels, improving the flexibility of the method.
In one embodiment, configuring the number of groups that a single training batch comprises may include:
and configuring the number of groups included in a single training batch according to the operation memory capacity of the equipment for operating the data processing device.
Correspondingly, after dividing the expanded data to be processed and the expanded convolution kernel data into a plurality of groups of expanded sub-data to be processed and a plurality of groups of expanded convolution kernel data, the method further includes:
and storing the plurality of groups of expanded to-be-processed subdata and the plurality of groups of expanded convolution kernel subdata, and reading one group of expanded to-be-processed subdata and at least one group of expanded convolution kernel subdata into an operation memory of the equipment.
In this embodiment, when setting the value of n_split according to the computing power of the hardware device, the value of n_split may be configured according to the size of the running memory of the hardware device. In general, when a hardware device is running, part of its memory may be provided for temporarily caching the expanded data to be processed and the expanded convolution kernel data. When the provided memory is insufficient to cache all of this data, the expanded data to be processed and the expanded convolution kernel data may be divided into multiple groups, so that the provided memory can hold one group of expanded data to be processed and expanded convolution kernel data, while the remaining groups are stored in storage areas other than the current running memory. When the hardware device finishes operating on the group currently in its running memory, it reads the next group of expanded data to be processed and expanded convolution kernel data into the running memory to participate in the operation.
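One plausible way to derive n_split from the running-memory budget is sketched below; the power-of-two heuristic and the function name are assumptions made for illustration, not mandated by the specification:

```python
def choose_n_split(total_bytes, memory_budget_bytes):
    """Pick the smallest power-of-two group count whose per-group
    footprint fits in the available running memory (hypothetical
    heuristic; a real implementation could use any search policy)."""
    n_split = 1
    while total_bytes / n_split > memory_budget_bytes:
        n_split *= 2
    return n_split

# 1024 bytes of expanded data against a 300-byte budget:
# halving twice gives 256-byte groups, so n_split = 4.
print(choose_n_split(1024, 300))  # 4
```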
With reference to the foregoing embodiment, in an implementation manner, the data processing method of the present application may further include:
and adjusting the number of the training batches, and adjusting the number of groups included in a single training batch according to the adjusted number of the training batches.
In this embodiment, the number of training batches can be flexibly adjusted according to actual requirements. Since the total amount of sample data is not changed, when the number of training batches is adjusted, the data amount of the sample data in a single batch is changed, and thus, the number of groups included in a single training batch should also be adaptively adjusted.
With reference to the foregoing embodiment, in an implementation manner, the data processing method of the present application may further include:
and receiving data in a ciphertext form to be processed by the convolution arithmetic unit, and sending the data in the ciphertext form to the outside.
In this embodiment, the data processing apparatus may directly process the data in the form of the ciphertext and transmit the data in the form of the ciphertext to the outside, that is, the data processing apparatus does not need to perform decryption processing on the ciphertext data in the process of processing the ciphertext data.
In the present application, the ciphertext data may be text data, picture data, or the like, and the present application is not limited thereto. The following describes the data processing method of the present application in detail with a specific embodiment, taking the ciphertext data as the picture data as an example.
Assume that the input picture data X is as follows:
2 1 0 2 3
9 5 4 2 0
2 3 4 5 6
1 2 3 1 0
0 4 4 2 8
assume that feature extraction is performed on picture data X using two convolution kernels F1 and F2, where the adopted convolution kernel F1 is as follows:
-1 0 1
-1 0 1
-1 0 1
the convolution kernel F2 used is as follows:
-1 1 1
-1 2 1
-1 3 1
Then, im2col calculation is performed on the feature matrix of the picture data X, the convolution kernel F1, and the convolution kernel F2 to obtain X_col, F_col1, and F_col2, where X_col is as follows:
2 1 0 9 5 4 2 3 4
1 0 2 5 4 2 3 4 5
0 2 3 4 2 0 4 5 6
9 5 4 2 3 4 1 2 3
5 4 2 3 4 5 2 3 1
4 2 0 4 5 6 3 1 0
2 3 4 1 2 3 0 4 4
3 4 5 2 3 1 4 4 2
4 5 6 3 1 0 4 2 8
F_col1 is as follows:
-1 0 1 -1 0 1 -1 0 1
F_col2 is as follows:
-1 1 1 -1 2 1 -1 3 1
Expansion processing is then performed on X_col, F_col1, and F_col2. Specifically, X_col is replicated into X_repeat1 and X_repeat2, and the values of X_repeat1 and X_repeat2 are both as follows:
2 1 0 9 5 4 2 3 4
1 0 2 5 4 2 3 4 5
0 2 3 4 2 0 4 5 6
9 5 4 2 3 4 1 2 3
5 4 2 3 4 5 2 3 1
4 2 0 4 5 6 3 1 0
2 3 4 1 2 3 0 4 4
3 4 5 2 3 1 4 4 2
4 5 6 3 1 0 4 2 8
F_col1 is replicated into F_repeat1, as follows:
-1 0 1 -1 0 1 -1 0 1
-1 0 1 -1 0 1 -1 0 1
-1 0 1 -1 0 1 -1 0 1
-1 0 1 -1 0 1 -1 0 1
-1 0 1 -1 0 1 -1 0 1
-1 0 1 -1 0 1 -1 0 1
-1 0 1 -1 0 1 -1 0 1
-1 0 1 -1 0 1 -1 0 1
-1 0 1 -1 0 1 -1 0 1
F_col2 is replicated into F_repeat2, as follows:
-1 1 1 -1 2 1 -1 3 1
-1 1 1 -1 2 1 -1 3 1
-1 1 1 -1 2 1 -1 3 1
-1 1 1 -1 2 1 -1 3 1
-1 1 1 -1 2 1 -1 3 1
-1 1 1 -1 2 1 -1 3 1
-1 1 1 -1 2 1 -1 3 1
-1 1 1 -1 2 1 -1 3 1
-1 1 1 -1 2 1 -1 3 1
Then, X_repeat1 and F_repeat1 are taken as a group of operation data. The specific operation process is as follows: the elements at every two corresponding positions of X_repeat1 and F_repeat1 are subtracted to obtain corresponding difference data; the absolute value of each element of the difference data is calculated to obtain the corresponding absolute value data, as follows:
3 1 1 10 5 3 3 3 3
2 0 1 6 4 1 4 4 4
1 2 2 5 2 1 5 5 5
10 5 3 3 3 3 2 2 2
6 4 1 4 4 4 3 3 0
5 2 1 5 5 5 4 1 1
3 3 3 2 2 2 1 4 3
4 4 4 3 3 0 5 4 1
5 5 5 4 1 1 5 2 7
Then, the elements of the absolute value data along the same preset dimension are added to obtain the corresponding sum value data, and a negation operation is performed on each element of the sum value data to obtain the corresponding opposite number data, as follows:
-32
-26
-28
-33
-29
-29
-23
-28
-35
Finally, dimension adjustment is performed on the opposite number data to obtain the output data, as follows:
-32 -26 -28
-33 -29 -29
-23 -28 -35
the output data represents a feature map extracted by the convolution kernel F1 from the input image data X.
Similarly, X_repeat2 and F_repeat2 are taken as a group of operation data. The specific operation process is as follows: the elements at every two corresponding positions of X_repeat2 and F_repeat2 are subtracted to obtain corresponding difference data; the absolute value of each element of the difference data is calculated to obtain the corresponding absolute value data, as follows:
3 0 1 10 3 3 3 0 3
2 1 1 6 2 1 4 1 4
1 1 2 5 0 1 5 2 5
10 4 3 3 1 3 2 1 2
6 3 1 4 2 4 3 0 0
5 1 1 5 3 5 4 2 1
3 2 3 2 0 2 1 1 3
4 3 4 3 1 0 5 1 1
5 4 5 4 1 1 5 1 7
Then, the elements of the absolute value data along the same preset dimension are added to obtain the corresponding sum value data, and a negation operation is performed on each element of the sum value data to obtain the corresponding opposite number data, as follows:
-26
-22
-22
-29
-23
-27
-17
-22
-33
Finally, dimension adjustment is performed on the opposite number data to obtain the output data, as follows:
-26 -22 -22
-29 -23 -27
-17 -22 -33
the output data represents a feature map extracted by the convolution kernel F2 from the input image data X.
After the feature map is extracted by the convolutional layer, the neural network may further process the feature map, for example, continue to perform convolution operation by using other convolution kernels or send the feature map to a subsequent neural network layer to process the feature map, which is not limited in this application.
The present application also provides a data processing apparatus suitable for use in a ciphertext computing environment of a convolution operator, as shown in fig. 4. Fig. 4 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application. Referring to fig. 4, the data processing apparatus includes: the device comprises an expansion module and an operation module.
The extension module is configured to: respectively expanding the data to be processed and the convolution kernel data to obtain expanded data to be processed and expanded convolution kernel data, wherein the parameter value of the expanded data to be processed is the same as that of the expanded convolution kernel data;
the operation module is used for: operating on the elements at every two corresponding positions in the expanded data to be processed and the expanded convolution kernel data to obtain output data.
Optionally, the operation module includes: the device comprises a subtracter, an absolute value arithmetic unit, an adder, a negation device and a dimension adjusting module;
the subtractor is configured to: subtracting the elements at every two corresponding positions in the expanded to-be-processed data and the expanded convolution kernel data to obtain corresponding difference data;
the absolute value operator is to: calculating an absolute value of each element of the difference data to obtain corresponding absolute value data;
the adder is configured to: adding each element of the absolute value data in the same preset dimension to obtain corresponding sum value data;
the negation device is used for: performing a negation operation on each element of the sum value data to obtain corresponding opposite number data;
the dimension adjustment module is configured to: perform dimension adjustment on the opposite number data to obtain the output data.
Optionally, the extension module is configured to:
copying the data to be processed for corresponding times according to the number of convolution kernels to obtain the expanded data to be processed; and
and copying the convolution kernel data for corresponding times according to the sample data size and the size of the convolution kernel to obtain the expanded convolution kernel data.
Optionally, the data processing apparatus further includes: a pre-processing module to:
intercepting original data to be processed according to the size of a convolution kernel, sequentially arranging a plurality of intercepted elements into a line, and forming the data to be processed by a plurality of lines of elements; and
and sequentially arranging all elements in the original convolution kernel data into a line to obtain the convolution kernel data.
Optionally, the data processing apparatus further includes:
a configuration module and a division module;
the configuration module is configured to: configuring the number of groups included in a single training batch;
the dividing module is configured to: dividing the expanded data to be processed and the expanded convolution kernel data into a plurality of groups of expanded sub data to be processed and a plurality of groups of expanded convolution kernel data respectively according to the number of training batches and the number of groups;
the operation module is used for: and calculating the single expanded to-be-processed subdata and the single expanded convolution kernel subdata corresponding to the group number, and summarizing the operation results of the group number to obtain the output data.
Optionally, the data processing apparatus further includes:
a memory;
the memory is to: storing the plurality of groups of expanded to-be-processed subdata and the plurality of groups of expanded convolution kernel subdata;
the configuration module is configured to: configuring the number of groups included in a single training batch according to the operation memory capacity of equipment for operating the data processing device;
the operation module is further configured to read one group of expanded sub-data to be processed and at least one group of expanded convolution kernel sub-data into the running memory of the device.
Optionally, the data processing apparatus further includes: an adjustment module;
the adjusting module is used for adjusting the number of the training batches and adjusting the number of groups included in a single training batch according to the adjusted number of the training batches.
Optionally, the data processing apparatus further includes: a transceiver module;
the receiving and transmitting module is used for receiving data in a ciphertext form to be processed by the convolution arithmetic unit and transmitting the data in the ciphertext form to the outside.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one of skill in the art, embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the true scope of the embodiments of the application.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The data processing method and apparatus provided by the present application are introduced in detail, and a specific example is applied in the present application to explain the principle and the implementation of the present application, and the description of the above embodiment is only used to help understand the method and the core idea of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A data processing apparatus adapted for use in a ciphertext computing environment of a convolution operator, comprising: the device comprises an expansion module and an operation module;
the extension module is configured to: respectively expanding the data to be processed and the convolution kernel data to obtain expanded data to be processed and expanded convolution kernel data, wherein the parameter value of the expanded data to be processed is the same as that of the expanded convolution kernel data;
the operation module is used for: and calculating elements at every two corresponding positions in the expanded input to-be-processed data and the expanded convolution kernel data to obtain output data.
2. The data processing apparatus of claim 1, wherein the arithmetic module comprises: the device comprises a subtracter, an absolute value arithmetic unit, an adder, a negation device and a dimension adjusting module;
the subtractor is configured to: subtracting the elements at every two corresponding positions in the expanded to-be-processed data and the expanded convolution kernel data to obtain corresponding difference data;
the absolute value operator is to: calculating an absolute value of each element of the difference data to obtain corresponding absolute value data;
the adder is configured to: adding each element of the absolute value data in the same preset dimension to obtain corresponding sum value data;
the inverter is used for: performing negation operation on each element of the sum value data to obtain corresponding opposite number data;
the dimension adjustment module is configured to: perform dimension adjustment on the opposite number data to obtain the output data.
3. The data processing apparatus of claim 1, wherein the expansion module is configured to:
copying the data to be processed for corresponding times according to the number of convolution kernels to obtain the expanded data to be processed; and
and copying the convolution kernel data for corresponding times according to the sample data size and the size of the convolution kernel to obtain the expanded convolution kernel data.
4. A data processing apparatus according to any one of claims 1 to 3, further comprising: a pre-processing module to:
intercepting original data to be processed according to the size of a convolution kernel, sequentially arranging a plurality of intercepted elements into a line, and forming the data to be processed by a plurality of lines of elements; and
and sequentially arranging all elements in the original convolution kernel data into a line to obtain the convolution kernel data.
5. A data processing apparatus according to any one of claims 1 to 3, further comprising: a configuration module and a division module;
the configuration module is configured to: configuring the number of groups included in a single training batch;
the dividing module is configured to: dividing the expanded data to be processed and the expanded convolution kernel data into a plurality of groups of expanded sub data to be processed and a plurality of groups of expanded convolution kernel data respectively according to the number of training batches and the number of groups;
the operation module is used for: and calculating the single expanded to-be-processed subdata and the single expanded convolution kernel subdata corresponding to the group number, and summarizing the operation results of the group number to obtain the output data.
6. A data processing method applied to a data processing apparatus adapted to a ciphertext computing environment of a convolution operator, the method comprising:
respectively expanding the data to be processed and the convolution kernel data to obtain expanded data to be processed and expanded convolution kernel data, wherein the size of the expanded data to be processed is the same as that of the expanded convolution kernel data;
and calculating elements at every two corresponding positions on the extended convolution kernel data in the extended data to be processed to obtain output data.
7. The method according to claim 6, wherein the operating on the elements at every two corresponding positions in the extended to-be-processed data and the extended convolution kernel data to obtain output data comprises:
subtracting the elements at every two corresponding positions in the expanded to-be-processed data and the expanded convolution kernel data to obtain corresponding difference data;
calculating an absolute value of each element of the difference data to obtain corresponding absolute value data;
adding each element of the absolute value data in the same preset dimension to obtain corresponding sum value data;
performing negation operation on each element of the sum value data to obtain corresponding opposite number data;
and carrying out dimension adjustment on the opposite number data to obtain the output data.
8. The method according to claim 6, wherein expanding the to-be-processed data and the convolution kernel data respectively to obtain expanded to-be-processed data and expanded convolution kernel data comprises:
copying the to-be-processed data a corresponding number of times according to the number of convolution kernels to obtain the expanded to-be-processed data; and
copying the convolution kernel data a corresponding number of times according to the amount of sample data and the size of the convolution kernels to obtain the expanded convolution kernel data.
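A hypothetical sketch of the claim-8 expansion under one common layout assumption (input already unfolded into patches, kernels flattened): the to-be-processed data is replicated once per kernel, and the kernel data is replicated across samples and patch positions, so both operands end up the same size. All names and shapes below are illustrative, not taken from the patent:

```python
import numpy as np

def expand(x, k):
    """x: (batch, patches, ksize) unfolded input; k: (num_kernels, ksize) kernels.
    Returns two arrays of identical shape (batch, patches, num_kernels, ksize)."""
    num_kernels = k.shape[0]
    # copy the to-be-processed data once per convolution kernel
    ex = np.repeat(x[:, :, None, :], num_kernels, axis=2)
    # copy the kernel data across every sample and patch position
    ek = np.broadcast_to(k[None, None, :, :], ex.shape).copy()
    return ex, ek
```

After this expansion, the element-wise pipeline of claim 7 can be applied directly, with the kernel-size axis as the preset summation dimension.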
9. The method according to any one of claims 6-8, further comprising:
configuring the number of groups included in a single training batch; and
dividing the expanded to-be-processed data and the expanded convolution kernel data into a plurality of groups of expanded to-be-processed sub-data and a plurality of groups of expanded convolution kernel sub-data, respectively, according to the training batch size and the configured number of groups;
wherein operating on the elements at each pair of corresponding positions in the expanded to-be-processed data and the expanded convolution kernel data to obtain output data comprises:
operating on each single group of expanded to-be-processed sub-data and the corresponding single group of expanded convolution kernel sub-data, and aggregating the operation results of all the groups to obtain the output data.
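A sketch of the claim-9 grouping, under the assumption that the groups are split along the leading (batch) axis and that "aggregating" means concatenating the per-group results back into one output. Names are illustrative:

```python
import numpy as np

def grouped_operate(ex, ek, num_groups):
    """Split both expanded operands into num_groups groups along axis 0,
    run the claim-7 element-wise operation per group, then aggregate."""
    results = []
    for gx, gk in zip(np.array_split(ex, num_groups, axis=0),
                      np.array_split(ek, num_groups, axis=0)):
        results.append(-np.abs(gx - gk).sum(axis=-1))  # per-group operation
    return np.concatenate(results, axis=0)             # aggregate all groups
```

Processing the batch group by group bounds the size of any single ciphertext operation, which is the usual motivation for such partitioning.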
10. The method of claim 9, further comprising:
receiving the to-be-processed data in ciphertext form for the convolution operator, and sending data in ciphertext form to the outside.
CN202010575625.6A 2020-06-22 2020-06-22 Data processing method and device Pending CN111882029A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010575625.6A CN111882029A (en) 2020-06-22 2020-06-22 Data processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010575625.6A CN111882029A (en) 2020-06-22 2020-06-22 Data processing method and device

Publications (1)

Publication Number Publication Date
CN111882029A 2020-11-03

Family

ID=73157870

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010575625.6A Pending CN111882029A (en) 2020-06-22 2020-06-22 Data processing method and device

Country Status (1)

Country Link
CN (1) CN111882029A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106951395A (en) * 2017-02-13 2017-07-14 上海客鹭信息技术有限公司 Towards the parallel convolution operations method and device of compression convolutional neural networks
CN108171262A (en) * 2017-12-22 2018-06-15 珠海习悦信息技术有限公司 The recognition methods of ciphertext picture/mb-type, device, storage medium and processor
CN109146061A (en) * 2018-08-09 2019-01-04 北京航空航天大学 The treating method and apparatus of neural network model
CN109190758A (en) * 2018-09-04 2019-01-11 地平线(上海)人工智能技术有限公司 Method and apparatus for the tensor data of convolutional neural networks to be unfolded
CN110399591A (en) * 2019-06-28 2019-11-01 苏州浪潮智能科技有限公司 Data processing method and device based on convolutional neural networks
CN110543901A (en) * 2019-08-22 2019-12-06 阿里巴巴集团控股有限公司 image recognition method, device and equipment
CN111047025A (en) * 2018-10-15 2020-04-21 华为技术有限公司 Convolution calculation method and device
CN111242289A (en) * 2020-01-19 2020-06-05 清华大学 Convolutional neural network acceleration system and method with expandable scale
CN111310891A (en) * 2020-01-20 2020-06-19 苏州浪潮智能科技有限公司 Convolution operation method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
EDISON_G: "AdderNet (adder network), best object detection at CVPR2020, with links to the paper and source code", Computer Vision Research Institute, page 1 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114168991A (en) * 2022-02-10 2022-03-11 北京鹰瞳科技发展股份有限公司 Method, circuit and related product for processing encrypted data
CN114168991B (en) * 2022-02-10 2022-05-20 北京鹰瞳科技发展股份有限公司 Method, circuit and related product for processing encrypted data

Similar Documents

Publication Publication Date Title
JP7272363B2 (en) Precision privacy-preserving real-valued function evaluation
CN107340993B (en) Arithmetic device and method
Miller et al. Universal security for randomness expansion from the spot-checking protocol
US20220014355A1 (en) Oblivious Comparisons and Quicksort of Secret Shared Arithmetic Values in a Multi-Party Computing Setting
JP6629466B2 (en) Security calculation system, security calculation device, security calculation method, program
CN112200713A (en) Business data processing method, device and equipment in federated learning
CN111882029A (en) Data processing method and device
CN114095149B (en) Information encryption method, device, equipment and storage medium
Kulkarni et al. Hardware topologies for decentralized large-scale MIMO detection using newton method
US20220292362A1 (en) Secret softmax function calculation system, secret softmax calculation apparatus, secret softmax calculation method, secret neural network calculation system, secret neural network learning system, and program
US20230254115A1 (en) Protection of transformations by intermediate randomization in cryptographic operations
Abdellatef et al. Low-area and accurate inner product and digital filters based on stochastic computing
CN116633526B (en) Data processing method, device, equipment and medium
CN114036581A (en) Privacy calculation method based on neural network model
CN108804933A (en) A kind of system conversion method for big data
Pang et al. BOLT: Privacy-Preserving, Accurate and Efficient Inference for Transformers
Wang et al. Popcorn: Paillier meets compression for efficient oblivious neural network inference
WO2020037512A1 (en) Neural network calculation method and device
US20240061955A1 (en) Method and system for privacy-preserving logistic regression training based on homomorphically encrypted ciphertexts
CN111356151A (en) Data processing method and device and computer readable storage medium
EP4099609A1 (en) Computational network conversion for fully homomorphic evaluation
Lv et al. High-efficiency min-entropy estimation based on neural network for random number generators
Lupascu et al. Acceleration techniques for fully-homomorphic encryption schemes
Sarband et al. Massive machine-type communication pilot-hopping sequence detection architectures based on non-negative least squares for grant-free random access
CN113870090A (en) Method, graphics processing apparatus, system, and medium for implementing functions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination