CN114580613A - Data processing method, device, terminal and storage medium - Google Patents

Data processing method, device, terminal and storage medium Download PDF

Info

Publication number
CN114580613A
CN114580613A
Authority
CN
China
Prior art keywords
hash
data
input data
map
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210195040.0A
Other languages
Chinese (zh)
Inventor
何传龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Eeasy Electronic Tech Co ltd
Original Assignee
Zhuhai Eeasy Electronic Tech Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Eeasy Electronic Tech Co ltd filed Critical Zhuhai Eeasy Electronic Tech Co ltd
Priority to CN202210195040.0A
Publication of CN114580613A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/04 — Architecture, e.g. interconnection topology
    • G06N3/045 — Combinations of networks
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 — Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 — Querying
    • G06F16/245 — Query processing
    • G06F16/2458 — Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2462 — Approximate or statistical queries
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/06 — Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 — Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention discloses a data processing method, a device, a terminal and a storage medium. The method comprises: inputting data and calculating the hash of the input data to obtain a fingerprint of the input data; querying, according to the hash, whether a convolution result for the same input data exists in the data container hash_map — if so, directly returning the convolution result, otherwise entering the next step; calculating the convolution of the input data; and storing the hash and the convolution result of the input data into the data container hash_map in key-value form, with the hash value as the key and the convolution result as the value, then returning the convolution result. When similar data is input, or an intermediate convolutional layer of the network produces similar data, and a result for that data has already been computed, no recomputation is needed: the previously computed result is fetched from the recorded history, reducing the amount of calculation, improving processing speed, and lowering power consumption.

Description

Data processing method, device, terminal and storage medium
Technical Field
The present invention relates to data processing technologies, and in particular, to a data processing method, an apparatus, a terminal, and a storage medium.
Background
A Convolutional Neural Network (CNN) mainly consists of convolutional layers, which perform convolution operations on input data; convolution is realized chiefly by multiplication and addition. On a Central Processing Unit (CPU), a multiply instruction takes multiple clock cycles, so the more complex the convolutional neural network and the more convolutional layers it has, the more multiplications inference requires, which means longer execution time.
To accelerate the inference speed of convolutional neural networks, various technical schemes have been proposed: 1. implementing convolution in a dedicated hardware circuit, such as an NPU (Neural-network Processing Unit); 2. reducing the number of multiplications to some extent through algorithms such as the Winograd algorithm. Hardware implementations depend on the hardware being present: although they have great advantages in performance and power consumption, they are inflexible and cannot be used on devices without the related hardware. Algorithms that reduce the complexity of convolution are relatively flexible to apply, but their performance and power consumption depend heavily on the implementation; the Winograd algorithm, for example, reduces the number of multiplications but requires a comparatively large number of additions.
Disclosure of Invention
In order to solve at least one technical problem in the prior art, the present invention provides a data processing method, an apparatus, a terminal and a storage medium.
To this end, the technical scheme of the invention is as follows:
in a first aspect, the present invention provides a data processing method, including:
inputting data and calculating the hash of the input data to obtain a fingerprint of the input data;
querying, according to the hash of the input data, whether a convolution result for the same input data exists in the data container hash_map; if so, directly returning the convolution result, otherwise entering the next step;
calculating the convolution of the input data; and
storing the hash and the convolution result of the input data into the data container hash_map in key-value form, with the hash value as the key and the convolution result as the value, and then returning the convolution result.
In a second aspect, the present invention provides a data processing apparatus comprising:
the input module is used for inputting data;
a hash calculation module, used for calculating the hash of the input data to obtain a fingerprint of the input data;
a data container hash_map, which stores convolution results, each corresponding to a hash value;
a query module, used for querying whether a convolution result for the same input data exists in the data container hash_map; if such a result is found it is returned directly, otherwise the convolution calculation module is triggered;
a convolution calculation module, used for calculating the convolution of the input data; and
a storage module, used for storing the hash and the convolution result of the input data into the data container hash_map in key-value form, with the hash value as the key and the convolution result as the value, and then returning the convolution result.
In a third aspect, the present invention provides a data processing terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method as described above when executing the computer program.
In a fourth aspect, the invention provides a computer-readable storage medium in which a computer program is stored; when executed by a processor, the computer program realizes the steps of the method described above.
Compared with the prior art, the invention has the beneficial effects that:
the scheme of obtaining fingerprint information from input data of the convolutional neural network and inquiring historical calculation results according to the fingerprint information is used for reducing repeated calculation and improving the processing speed of the convolutional neural network; meanwhile, the hash of the input data is calculated, and the processing speed and the utilization rate of the IP are improved.
Drawings
FIG. 1 is a flow chart of the process of convolving data;
FIG. 2 is a flow chart of neural network data processing;
FIG. 3 is a schematic diagram of the data processing apparatus;
FIG. 4 is a schematic diagram of the query module;
FIG. 5 is a schematic diagram of the composition of the data processing terminal.
Detailed Description
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
If the current convolution input, in whole or in part, matches earlier input data, and the result of the earlier convolution calculation has been stored, the previous result can be looked up and returned when the same input is encountered again, reducing calculation, increasing processing speed, and lowering power consumption. For example, if the neural network continuously processes images captured by a camera pointed in a fixed direction, the captured content is largely the same, so the data at each stage of processing is close or identical and intermediate calculation results can be reused. The specific implementation is described below.
Referring to fig. 1, the present embodiment provides a data processing method applied to a convolutional layer, which includes the following steps:
establishing, during the first run, a data container that maps hash values to convolution results (a hash algorithm maps a binary value of arbitrary length to a smaller, fixed-length binary value, called the hash value);
inputting data and calculating the hash of the input data to obtain a fingerprint of the input data;
querying, according to the hash of the input data, whether a convolution result for the same input data exists in the data container hash_map; if so, directly returning the convolution result, otherwise entering the next step;
calculating the convolution of the input data; and
storing the hash and the convolution result of the input data into the data container hash_map in key-value form, with the hash value as the key and the convolution result as the value, and then returning the convolution result.
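The steps above can be sketched in C++. The sketch below is illustrative only: it models the patent's hash_map with `std::unordered_map` (its modern C++ STL counterpart), uses `std::hash` over the raw input bytes as the fingerprint, and stands in a toy 1-D convolution for the convolutional layer — none of these specific choices are fixed by the patent.

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

using Tensor = std::vector<float>;

// Hypothetical fingerprint: hash the raw bytes of the input tensor.
static std::uint64_t Fingerprint(const Tensor& x) {
    std::string bytes(reinterpret_cast<const char*>(x.data()),
                      x.size() * sizeof(float));
    return std::hash<std::string>{}(bytes);
}

// Toy "valid" 1-D convolution standing in for the convolutional layer.
static Tensor Convolve(const Tensor& x, const Tensor& k) {
    Tensor y(x.size() - k.size() + 1, 0.0f);
    for (std::size_t i = 0; i < y.size(); ++i)
        for (std::size_t j = 0; j < k.size(); ++j)
            y[i] += x[i + j] * k[j];
    return y;
}

class CachedConvLayer {
public:
    explicit CachedConvLayer(Tensor kernel) : kernel_(std::move(kernel)) {}

    // The four method steps: hash, query, compute on miss, store and return.
    const Tensor& Forward(const Tensor& x) {
        const std::uint64_t h = Fingerprint(x);        // fingerprint of input
        auto it = cache_.find(h);                      // query the hash_map
        if (it != cache_.end()) return it->second;     // hit: return directly
        return cache_.emplace(h, Convolve(x, kernel_)) // miss: compute, then
                     .first->second;                   // store key=h, value=result
    }

    std::size_t CacheSize() const { return cache_.size(); }

private:
    Tensor kernel_;
    std::unordered_map<std::uint64_t, Tensor> cache_;
};
```

Repeating an input hits the cache instead of recomputing the convolution, which is the entire point of the scheme.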
Therefore, through the steps above, fingerprint information is obtained from the input data of the convolutional neural network and historical calculation results are queried by fingerprint, reducing repeated calculation and improving the processing speed of the convolutional neural network; at the same time, calculating the hash of the input data improves processing speed and the utilization of the IP.
For neural networks, the method implemented is shown in fig. 2:
when the neural network runs for the first time, establishing a data container mapping the hash value of the network's input data to the recognition result;
before inference, calculating the hash of the input data to obtain its fingerprint;
querying the data container, according to the hash, for a recognition result of the same input data; if one is found it is returned directly, otherwise the next step is entered;
calculating the recognition result of the input data, which may use the convolutional-layer method introduced above; and
storing the hash and the recognition result of the input data into the data container in key-value form, with the hash value as the key and the recognition result as the value, and then returning the recognition result.
The following describes a detailed implementation of the present solution by taking a convolutional neural network for identifying an input image as an example.
Define the hash table of neural network N as T, where T is implemented with the hash_map of the C++ STL; the key is the hash value of the network input, and the value is the network output result corresponding to that hash.
The hash tables of the convolutional layers of neural network N are T_L1, T_L2, …, T_Ln; they are likewise implemented with the hash_map of the C++ STL, where the key is the hash value of the convolutional layer's input data and the value is the convolutional-layer output data corresponding to that hash value. Whether a convolutional layer is given a hash table may be decided by its input size, so that convolutional layers with a small amount of calculation are not added to this scheme; for example, only convolutional layers whose input is larger than W × H are given a hash table, and the other convolutional layers are not.
The processing of a convolutional layer whose input is larger than W × H × C is as follows:
Create the hash_map container of the current convolutional layer; the hash_map may be empty, or historical data may be loaded from a file. When data is input, the following processing steps are performed.
Step 1: scale the input data to dimensions 9 × 8 × C (width 9, height 8);
Step 2: convert the input data to a single channel by summing the values of each channel and averaging them, for example:
d_ij = (x_ij(1) + x_ij(2) + … + x_ij(C)) / C, where x_ij(c) is the value of channel c at position (i, j).
Data with dimensions 9 × 8 × 1 are obtained at this point:
[the 9 × 8 single-channel matrix d, with elements d00 … d78]
Step 3: compute the difference between adjacent elements in each row, i.e. [d00 d01 … d08] becomes [d00−d01 d01−d02 … d07−d08] after processing; if a difference is greater than 0, take the value 1, otherwise 0, finally obtaining matrix data of dimension 8 × 8 containing only 0s and 1s:
[the resulting 8 × 8 binary matrix]
Step 4: each element of the step-3 result is 0 or 1; treat each row of the matrix as one byte, convert it to hexadecimal, and the 8 rows of data yield a hash value h of length 8;
Step 5: query the hash_map with h; if a result is found, pass it directly to the next layer as the output of the current convolutional layer, otherwise enter the next step. The query comparison method is: 1) let h' be a hash value selected from the hash_map, and compare h and h' bit by bit; 2) count the number of differing bits; if the count is smaller than a set threshold D, a result is considered found, otherwise compare the next hash value in the hash_map.
Step 6: calculate the convolution of the input data;
Step 7: store the convolution result and h into the hash_map, with h as the key and the convolution result as the value;
Step 8: output the convolution result to the next layer.
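Steps 1–4 above amount to a difference-hash computation. A minimal sketch, assuming the scaling of step 1 and the channel averaging of step 2 have already produced the 9 × 8 single-channel grid; the bit order within each byte is an illustrative choice, since the patent does not fix it:

```cpp
#include <array>
#include <cstdint>
#include <cstdio>
#include <string>

// 8 rows of 9 values each: the 9 x 8 single-channel grid from steps 1-2.
using Grid = std::array<std::array<float, 9>, 8>;

// Steps 3-4: one byte per row, where bit j is 1 iff d[i][j] - d[i][j+1] > 0.
static std::array<std::uint8_t, 8> RowDiffHash(const Grid& d) {
    std::array<std::uint8_t, 8> h{};
    for (int i = 0; i < 8; ++i)
        for (int j = 0; j < 8; ++j)
            if (d[i][j] - d[i][j + 1] > 0.0f)
                h[i] |= static_cast<std::uint8_t>(1u << (7 - j));
    return h;
}

// Render the length-8 hash in hexadecimal, as step 4 describes.
static std::string ToHex(const std::array<std::uint8_t, 8>& h) {
    std::string s;
    char buf[3];
    for (std::uint8_t b : h) {
        std::snprintf(buf, sizeof buf, "%02x", b);
        s += buf;
    }
    return s;
}
```

A row with strictly decreasing values (every adjacent difference positive) maps to the byte 0xff; a strictly increasing row maps to 0x00.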
When the program exits, the hash_map can be saved to a file for use the next time the program runs.
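The save-to-file behavior just described might be sketched as follows; the binary layout (hash, length, values triples) and the file path are assumptions for illustration, since the patent does not specify a format.

```cpp
#include <cstdint>
#include <fstream>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

using Cache = std::unordered_map<std::uint64_t, std::vector<float>>;

// Write each (hash, result) pair as: 8-byte hash, 8-byte length, raw floats.
static void SaveCache(const Cache& c, const std::string& path) {
    std::ofstream out(path, std::ios::binary);
    for (const auto& [h, v] : c) {
        std::uint64_t n = v.size();
        out.write(reinterpret_cast<const char*>(&h), sizeof h);
        out.write(reinterpret_cast<const char*>(&n), sizeof n);
        out.write(reinterpret_cast<const char*>(v.data()),
                  static_cast<std::streamsize>(n * sizeof(float)));
    }
}

// Read the same triples back until the file is exhausted.
static Cache LoadCache(const std::string& path) {
    Cache c;
    std::ifstream in(path, std::ios::binary);
    std::uint64_t h = 0, n = 0;
    while (in.read(reinterpret_cast<char*>(&h), sizeof h) &&
           in.read(reinterpret_cast<char*>(&n), sizeof n)) {
        std::vector<float> v(n);
        in.read(reinterpret_cast<char*>(v.data()),
                static_cast<std::streamsize>(n * sizeof(float)));
        c.emplace(h, std::move(v));
    }
    return c;
}
```

On the next run, `LoadCache` restores the container so earlier convolution results are immediately available.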
The neural network process is as follows:
Create the hash_map container of the current neural network; the hash_map may be empty, or historical data may be loaded from a file. Assuming the input is an RGB image of W × H × 3, the following processing steps are performed when data is input.
Step 1: scale the input image to size 9 × 8;
Step 2: convert the 9 × 8 image to grayscale, using the same method as above;
Step 3: compute the difference between adjacent elements in each row of the grayscale image: [d00 d01 … d08] becomes [d00−d01 d01−d02 … d07−d08] after processing; if a difference is greater than 0, take 1, otherwise 0, finally yielding data of dimension 8 × 8;
Step 4: treat each row as one byte, convert it to hexadecimal, and the 8 rows of data yield a hash value h of length 8;
Step 5: query the hash_map with h; if a result is found, output it directly as the result of the current neural network, otherwise enter the next step; the query comparison method is the same as above;
Step 6: feed the data into the neural network and let the network compute the recognition result; the network may use the convolutional-layer method described above, i.e. the layers of the network that meet the convolution-processing requirement can be processed with steps 1-8 above;
Step 7: store the recognition result and h into the hash_map, with h as the key and the recognition result as the value;
Step 8: return the recognition result.
When the program exits, the hash_map can be saved to a file for the neural network's use the next time the program runs.
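The bit-wise comparison used in step 5 of both processes above can be sketched as follows; the threshold value of 5 used in the usage example is an arbitrary illustrative choice for D, which the patent leaves as a configurable parameter.

```cpp
#include <array>
#include <bitset>
#include <cstddef>
#include <cstdint>
#include <vector>

using Hash8 = std::array<std::uint8_t, 8>;

// 1) Compare h and h' bit by bit: popcount of the XOR of each byte.
static int BitDifference(const Hash8& a, const Hash8& b) {
    int n = 0;
    for (int i = 0; i < 8; ++i)
        n += static_cast<int>(std::bitset<8>(a[i] ^ b[i]).count());
    return n;
}

// 2) Scan the stored hashes; a result is considered found when the number
// of differing bits is smaller than threshold D. Returns the index of the
// first match, or -1 when no stored hash is close enough.
static int FindSimilar(const std::vector<Hash8>& stored, const Hash8& h, int D) {
    for (std::size_t i = 0; i < stored.size(); ++i)
        if (BitDifference(stored[i], h) < D) return static_cast<int>(i);
    return -1;
}
```

Because the match is approximate rather than exact, near-identical inputs (such as consecutive frames from a fixed camera) can reuse a cached result even when their hashes differ in a few bits.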
In summary, compared with the prior art, the invention has the following technical advantages:
When similar data is input, or an intermediate convolutional layer of the network produces similar data, and a result for that data has already been computed, no recomputation is needed: the previously computed result only has to be fetched from the recorded history, reducing the amount of calculation, improving processing speed, and lowering power consumption.
Example 2:
referring to fig. 3, the data processing apparatus provided in the present embodiment includes:
an input module 31 for inputting data;
a hash calculation module 32, configured to calculate the hash of the input data to obtain a fingerprint of the input data;
a data container hash_map 33, which stores convolution results, each corresponding to a hash value;
a query module 34, configured to query whether a convolution result for the same input data exists in the data container hash_map; if one is found it is returned directly, otherwise the convolution calculation module is triggered;
a convolution calculation module 35, configured to calculate the convolution of the input data; and
a storage module 36, which stores the hash and the convolution result of the input data into the data container hash_map in key-value form, with the hash value as the key and the convolution result as the value, and then returns the convolution result.
As above, obtaining fingerprint information from the input data of the convolutional neural network and querying historical calculation results by fingerprint reduces repeated calculation and improves the processing speed of the convolutional neural network; at the same time, calculating the hash of the input data improves processing speed and the utilization of the IP.
In one embodiment, the data container hash_map 33 is empty or is loaded with historical data from a file.
In an embodiment, as shown in fig. 4, the query module 34 includes:
a comparing unit 341, configured to compare h and h' bit by bit, where h is the hash value of the input data and h' is a hash value selected from the hash_map; and
a statistics unit 342, configured to count the number of differing bits; if the count is smaller than the set threshold D, a result is considered found, otherwise the next hash value in the hash_map is compared.
Example 3:
referring to fig. 5, the data processing terminal provided in this embodiment includes a processor 51, a memory 52, and a computer program 53, such as a data processing program, stored in the memory 52 and capable of running on the processor 51. The processor 51 implements the steps of embodiment 1 described above, such as the steps shown in fig. 1, when executing the computer program 53. Alternatively, the processor 51 implements the functions of the modules/units in the above-described embodiment 2 when executing the computer program 53.
Illustratively, the computer program 53 may be partitioned into one or more modules/units, which are stored in the memory 52 and executed by the processor 51 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 53 in the data processing terminal.
The data processing terminal can be a desktop computer, a notebook computer, a palmtop computer, a cloud server or other computing equipment. The data processing terminal may include, but is not limited to, a processor 51 and a memory 52. It will be appreciated by those skilled in the art that fig. 5 is only an example of a data processing terminal and does not constitute a limitation of it; a terminal may include more or fewer components than those shown, combine certain components, or use different components — for example, the data processing terminal may also include input/output devices, network access devices, buses, etc.
The processor 51 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 52 may be an internal memory element of the data processing terminal, such as a hard disk or a memory of the data processing terminal. The memory 52 may also be an external storage device of the data processing terminal, such as a plug-in hard disk, a Smart Memory Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the data processing terminal. Further, the memory 52 may also include both an internal storage unit and an external storage device of the data processing terminal. The memory 52 is used for storing the computer programs and other programs and data required by the data processing terminal. The memory 52 may also be used to temporarily store data that has been output or is to be output.
Example 4:
the present embodiment provides a computer-readable storage medium, which stores a computer program that, when executed by a processor, implements the steps of the method of embodiment 1.
The computer-readable medium can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection with one or more wires (an electronic device), a portable computer diskette (a magnetic device), a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber device, and a portable Compact Disc Read-Only Memory (CD-ROM). The computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured — for instance by optical scanning of the paper or other medium — then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
The above embodiments only illustrate the technical concept and features of the present invention; their purpose is to enable those skilled in the art to understand and implement the invention, not to limit its scope of protection. All equivalent changes or modifications made in accordance with the spirit of the present invention are intended to be covered by its scope of protection.

Claims (9)

1. A method of data processing, comprising:
inputting data and calculating the hash of the input data to obtain a fingerprint of the input data;
querying, according to the hash of the input data, whether a convolution result for the same input data exists in the data container hash_map; if so, directly returning the convolution result, otherwise entering the next step;
calculating the convolution of the input data; and
storing the hash and the convolution result of the input data into the data container hash_map in key-value form, with the hash value as the key and the convolution result as the value, and then returning the convolution result.
2. The data processing method of claim 1, further comprising:
establishing, during the first run, a data container hash_map corresponding to hash values and convolution results.
3. The data processing method of claim 2, wherein the data container hash_map is empty or is loaded with historical data from a file.
4. The data processing method according to claim 1, wherein the query method for querying in the data container hash_map is:
comparing h and h' bit by bit, where h is the hash value of the input data and h' is a hash value selected from the hash_map; and
counting the number of differing bits; if the count is smaller than a set threshold D, a result is considered found, otherwise the next hash value in the hash_map is compared.
5. A data processing apparatus, characterized by comprising:
an input module, used for inputting data;
a hash calculation module, used for calculating the hash of the input data to obtain a fingerprint of the input data;
a data container hash_map, which stores convolution results, each corresponding to a hash value;
a query module, used for querying whether a convolution result for the same input data exists in the data container hash_map; if such a result is found it is returned directly, otherwise the convolution calculation module is triggered;
a convolution calculation module, used for calculating the convolution of the input data; and
a storage module, used for storing the hash and the convolution result of the input data into the data container hash_map in key-value form, with the hash value as the key and the convolution result as the value, and then returning the convolution result.
6. The data processing apparatus according to claim 5, wherein the data container hash_map is empty or is loaded with historical data from a file.
7. The data processing apparatus of claim 5, wherein the query module comprises:
a comparing unit, used for comparing h and h' bit by bit, where h is the hash value of the input data and h' is a hash value selected from the hash_map; and
a counting unit, used for counting the number of differing bits; if the count is smaller than a set threshold D, a result is considered found, otherwise the next hash value in the hash_map is compared.
8. A data processing terminal comprising a memory, a processor and a computer program stored in said memory and executable on said processor, characterized in that said processor implements the steps of the method according to any of claims 1 to 4 when executing said computer program.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 4.
CN202210195040.0A 2022-03-01 2022-03-01 Data processing method, device, terminal and storage medium Pending CN114580613A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210195040.0A CN114580613A (en) 2022-03-01 2022-03-01 Data processing method, device, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210195040.0A CN114580613A (en) 2022-03-01 2022-03-01 Data processing method, device, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN114580613A true CN114580613A (en) 2022-06-03

Family

ID=81776431

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210195040.0A Pending CN114580613A (en) 2022-03-01 2022-03-01 Data processing method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN114580613A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116205274A (en) * 2023-04-27 2023-06-02 苏州浪潮智能科技有限公司 Control method, device, equipment and storage medium of impulse neural network

Similar Documents

Publication Publication Date Title
EP3657398A1 (en) Weight quantization method for a neural network and accelerating device therefor
US20170365306A1 (en) Data Processing Method and Apparatus
US11615607B2 (en) Convolution calculation method, convolution calculation apparatus, and terminal device
CN111338695A (en) Data processing method based on pipeline technology and related product
CN114580613A (en) Data processing method, device, terminal and storage medium
US20230254145A1 (en) System and method to improve efficiency in multiplicationladder-based cryptographic operations
CN116188942A (en) Image convolution method, device, equipment and storage medium
CN112506950A (en) Data aggregation processing method, computing node, computing cluster and storage medium
CN111553847B (en) Image processing method and device
CN114138231B (en) Method, circuit and SOC for executing matrix multiplication operation
US20230273826A1 (en) Neural network scheduling method and apparatus, computer device, and readable storage medium
US20230119749A1 (en) Large-precision homomorphic comparison using bootstrapping
CN113918541B (en) Preheating data processing method and device and computer readable storage medium
US20200026998A1 (en) Information processing apparatus for convolution operations in layers of convolutional neural network
US20210224632A1 (en) Methods, devices, chips, electronic apparatuses, and storage media for processing data
CN111258733B (en) Embedded OS task scheduling method and device, terminal equipment and storage medium
CN113705784A (en) Neural network weight coding method based on matrix sharing and hardware system
CN113591936A (en) Vehicle attitude estimation method, terminal device and storage medium
CN112804446A (en) Big data processing method and device based on cloud platform big data
CN111105044B (en) Discrete feature processing method and device, computer equipment and storage medium
CN113269302A (en) Winograd processing method and system for 2D and 3D convolutional neural networks
CN114579046B (en) Cloud storage similar data detection method and system
CN117456562B (en) Attitude estimation method and device
CN111201559B (en) Replacement device, replacement method, and recording medium
CN115278245A (en) Context adaptive arithmetic coding and decoding method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination