CN110147880A - Neural network data processing structure, method, system, and related apparatus - Google Patents

Neural network data processing structure, method, system, and related apparatus Download PDF

Info

Publication number
CN110147880A
CN110147880A
Authority
CN
China
Prior art keywords
neural network
network data
array
data
data processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910429293.8A
Other languages
Chinese (zh)
Inventor
董刚
赵雅倩
张新
杨宏斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Wave Intelligent Technology Co Ltd
Original Assignee
Suzhou Wave Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Wave Intelligent Technology Co Ltd filed Critical Suzhou Wave Intelligent Technology Co Ltd
Priority to CN201910429293.8A priority Critical patent/CN110147880A/en
Publication of CN110147880A publication Critical patent/CN110147880A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Neurology (AREA)
  • Image Analysis (AREA)

Abstract

This application discloses a neural network data processing structure, method, and system, as well as an electronic device and a computer-readable storage medium. The structure includes: a three-dimensional storage array for pre-storing the neural network data and outputting target data in parallel through multiple output channels; and a computing array, connected to the three-dimensional storage array, for performing convolution calculation on the target data. In the neural network data processing structure provided by this application, the three-dimensional storage array pre-stores the neural network data; when the computing array needs to perform convolution operations on the neural network data, the data can be output in parallel through multiple output channels, so that a large amount of calculation data can be provided to the computing array in a short time, improving the parallelism of neural network data processing and accelerating the speed of convolution calculation.

Description

Neural network data processing structure, method, system, and related apparatus
Technical field
This application relates to the field of neural network technology, and more specifically to a neural network data processing structure, method, and system, as well as an electronic device and a computer-readable storage medium.
Background art
Current research on deep learning mainly takes the CNN as its research object. Because processing scenarios differ, the performance requirements on a CNN also differ, so many network structures have been developed. However, the basic composition of a CNN is fixed: an input layer, convolutional layers, activation layers, pooling layers, and fully connected layers. The convolutional layers account for the largest share of the computation; their main function is to complete the convolution operation between images (features) and neurons (filters).
Different CNN structures process data of different lengths, and even within the same CNN the data length handled by each layer varies. In a CNN, input data volume = input image width × input image height × number of input image channels; output data volume = output image width × output image height × number of output image channels; total number of convolution calculations = output image width × output image height × number of input image channels × number of output image channels. For a common CNN structure, the input and output data volumes are therefore very large. For example, one layer of ResNet50 can have 512 input channels and 512 output channels; multiplied by the image size, the number of data bytes can reach the millions. The rate of convolution calculation is an important indicator of CNN performance, which requires that a large amount of calculation data be supplied to the computing array in a short time.
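The data-volume formulas above can be sketched in software (an illustrative example only; the function name, the chosen layer size, and one byte per element are assumptions, not taken from this application):

```python
# Data-volume arithmetic per the formulas in the description:
# input volume, output volume, and total convolution count for one layer.

def layer_stats(w_in, h_in, c_in, w_out, h_out, c_out, bytes_per_elem=1):
    """Return (input bytes, output bytes, convolution count) for one layer."""
    input_volume = w_in * h_in * c_in
    output_volume = w_out * h_out * c_out
    conv_count = w_out * h_out * c_in * c_out
    return input_volume * bytes_per_elem, output_volume * bytes_per_elem, conv_count

# Example: a late ResNet50 layer with 512 input and 512 output channels
# on an assumed 7x7 feature map -- the convolution count alone is in the millions.
in_bytes, out_bytes, convs = layer_stats(7, 7, 512, 7, 7, 512)
print(in_bytes, out_bytes, convs)  # 25088 25088 12845056
```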
Therefore, how to provide a large amount of calculation data to the computing array in a short time is a technical problem that those skilled in the art need to solve.
Summary of the invention
The purpose of this application is to provide a neural network data processing structure, method, and system, as well as an electronic device and a computer-readable storage medium, so that a large amount of calculation data can be provided to the computing array in a short time.
To achieve the above object, this application provides a neural network data processing structure, comprising:
a three-dimensional storage array for pre-storing the neural network data and outputting target data in parallel through multiple output channels; and
a computing array, connected to the three-dimensional storage array, for performing convolution calculation on the target data.
Wherein, each row of the three-dimensional storage array corresponds to a row write enable and a row read enable, each column corresponds to a column write enable and a column read enable, and each layer corresponds to a layer write enable and a layer read enable.
Wherein, the three-dimensional storage array is specifically a three-dimensional storage array with a ping-pong structure.
To achieve the above object, this application provides a neural network data processing method, comprising:
determining data to be processed, and determining all storage units corresponding to the data to be processed in a three-dimensional storage array; and
outputting the data in each storage unit in parallel through multiple output channels to a computing array, so that the computing array performs convolution calculation on the data to be processed.
Wherein, outputting the data in each storage unit in parallel through multiple output channels to the computing array comprises:
controlling the row read enable of the row, the column read enable of the column, and the layer read enable of the layer in which each storage unit is located, so as to output the data in each storage unit in parallel through multiple output channels to the computing array.
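The read-enable gating described above can be sketched as follows (a hypothetical software analogue of the hardware scheme; the class and field names are assumptions, and the third dimension is called a layer here):

```python
# A storage unit is readable only when the row, column, and layer
# read enables at its coordinates are all asserted at the same time.

class ThreeDStorageArray:
    def __init__(self, rows, cols, layers):
        self.data = [[[0] * layers for _ in range(cols)] for _ in range(rows)]
        self.row_rd = [False] * rows      # row read enables
        self.col_rd = [False] * cols      # column read enables
        self.layer_rd = [False] * layers  # layer read enables

    def read(self, r, c, k):
        if self.row_rd[r] and self.col_rd[c] and self.layer_rd[k]:
            return self.data[r][c][k]
        return None  # enables not all asserted: no output

arr = ThreeDStorageArray(4, 4, 2)
arr.data[1][2][0] = 42
arr.row_rd[1] = arr.col_rd[2] = arr.layer_rd[0] = True
print(arr.read(1, 2, 0))  # 42
print(arr.read(0, 0, 0))  # None (enables not asserted)
```

The same gating pattern, with write enables in place of read enables, governs writing data into a storage unit.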
Wherein, outputting the data in each storage unit in parallel through multiple output channels to the computing array comprises:
determining configuration parameters, and outputting the data in each storage unit in parallel through multiple output channels to the computing array according to the configuration parameters; wherein the configuration parameters include at least the number of output channels and the relationship between the output channels.
Wherein, the method further comprises:
determining the input width of the three-dimensional storage array according to the data interface width of an external storage space; and
storing the neural network data in the external storage space into the three-dimensional storage array according to the input width.
To achieve the above object, this application provides a neural network data processing system, comprising:
a determining module for determining data to be processed, and determining all storage units corresponding to the data to be processed in a three-dimensional storage array; and
an output module for outputting the data in each storage unit in parallel through multiple output channels to a computing array, so that the computing array performs convolution calculation on the data to be processed.
To achieve the above object, this application provides an electronic device, comprising:
a memory for storing a computer program; and
a processor for implementing the steps of the above neural network data processing method when executing the computer program.
To achieve the above object, this application provides a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the above neural network data processing method.
It can be seen from the above scheme that this application provides a neural network data processing structure, comprising: a three-dimensional storage array for pre-storing the neural network data and outputting target data in parallel through multiple output channels; and a computing array, connected to the three-dimensional storage array, for performing convolution calculation on the target data.
In the neural network data processing structure provided by this application, the three-dimensional storage array pre-stores the neural network data. When the computing array needs to perform convolution operations on the neural network data, the data can be output in parallel through multiple output channels, providing a large amount of calculation data to the computing array in a short time, improving the parallelism of neural network data processing, and accelerating the speed of convolution calculation. This application also discloses a neural network data processing system, an electronic device, and a computer-readable storage medium, which can likewise achieve the above technical effects.
It should be understood that the above general description and the following detailed description are merely exemplary and do not limit this application.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of this application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort. The drawings are provided for a further understanding of this disclosure and constitute a part of the specification; together with the following detailed description, they serve to explain this disclosure but do not limit it. In the drawings:
Fig. 1 is a block diagram of a neural network data processing structure according to an exemplary embodiment;
Fig. 2 is a block diagram of another neural network data processing structure according to an exemplary embodiment;
Fig. 3 is a flowchart of a neural network data processing method according to an exemplary embodiment;
Fig. 4 is a flowchart of another neural network data processing method according to an exemplary embodiment;
Fig. 5 is a block diagram of a neural network data processing system according to an exemplary embodiment;
Fig. 6 is a block diagram of an electronic device according to an exemplary embodiment.
Detailed description of the embodiments
The technical solutions in the embodiments of this application are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only a part of the embodiments of this application, not all of them. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of this application.
The embodiment of this application discloses a neural network data processing structure capable of providing a large amount of calculation data to the computing array in a short time.
Referring to Fig. 1, a block diagram of a neural network data processing structure according to an exemplary embodiment, as shown in Fig. 1, the structure comprises:
a three-dimensional storage array 100 for pre-storing the neural network data and outputting target data in parallel through multiple output channels; and
a computing array 200, connected to the three-dimensional storage array 100, for performing convolution calculation on the target data.
In this embodiment, the three-dimensional storage array 100 includes multiple storage units for pre-storing the neural network data. The width of the input data can be controlled by configuration parameters according to the data interface width of an external storage space such as a DDR (Double Data Rate) random access memory. Under the control of the configuration parameters, the three-dimensional storage array 100 can read the data in any storage unit without regard to how the data is stored or to the bit width of the data port. The output data can be combined as needed and output in parallel through multiple output channels to the computing array 200; the number of output channels, the data length, the beat, and the correlation between the output channels can all be controlled by the configuration parameters. Under different configuration parameters, a variety of data output modes can be realized, which facilitates dynamic adjustment of the CNN (Convolutional Neural Network) structure, extends the concrete functions of the CNN, and enriches its implementations.
It should be noted that the three-dimensional storage array 100 can run on an FPGA (Field Programmable Gate Array) accelerator board. Since the speed and hardware resource cost of the standard components inside the FPGA are controllable, the needed neural network data can be provided to the computing array, further improving the computational efficiency of the computing array.
Preferably, each row of the three-dimensional storage array corresponds to a row write enable and a row read enable, each column corresponds to a column write enable and a column read enable, and each layer corresponds to a layer write enable and a layer read enable. In a specific implementation, when the row read enable of the row, the column read enable of the column, and the layer read enable of the layer in which a storage unit is located are all asserted, data is read from that storage unit and output in parallel through multiple output channels to the computing array. Likewise, when the row write enable of the row, the column write enable of the column, and the layer write enable of the layer in which a storage unit is located are all asserted, data is written to that storage unit.
On the basis of the above three-dimensional storage array, in order to further improve data throughput, a ping-pong strategy can be adopted, as shown in Fig. 2; that is, the three-dimensional storage array is specifically a three-dimensional storage array with a ping-pong structure. For the ping-pong structure, the data input order can also be controlled according to the configuration parameters.
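The ping-pong strategy can be sketched as follows (a hypothetical software analogue of the hardware scheme; the class and method names are assumptions): while the computing side reads one buffer, the memory side fills the other, and the roles swap each cycle, so neither side waits for the other.

```python
# Two buffers alternate roles: one is being written while the
# other is being read, and swap() toggles which is which.

class PingPongBuffer:
    def __init__(self, size):
        self.buffers = [[0] * size, [0] * size]
        self.write_idx = 0  # index of the buffer currently being filled

    def write(self, data):
        self.buffers[self.write_idx][:len(data)] = data

    def swap(self):
        self.write_idx ^= 1  # toggle the roles of the two buffers

    def read(self):
        # The read side always sees the buffer NOT being written.
        return self.buffers[self.write_idx ^ 1]

pp = PingPongBuffer(4)
pp.write([1, 2, 3, 4])
pp.swap()
print(pp.read())  # [1, 2, 3, 4]
```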
In the neural network data processing structure provided by the embodiment of this application, the three-dimensional storage array pre-stores the neural network data. When the computing array needs to perform convolution operations on the neural network data, the data can be output in parallel through multiple output channels, providing a large amount of calculation data to the computing array in a short time, improving the parallelism of neural network data processing, and accelerating the speed of convolution calculation.
The embodiment of this application further discloses a neural network data processing method, specifically:
Referring to Fig. 3, a flowchart of a neural network data processing method according to an exemplary embodiment, as shown in Fig. 3, the method comprises:
S101: determining data to be processed, and determining all storage units corresponding to the data to be processed in a three-dimensional storage array;
The executing subject of this embodiment may be a processor for neural network data processing, the purpose being to provide a large amount of neural network data to the computing array in a short time. In this step, first determine all the storage units in the three-dimensional storage array corresponding to the neural network data needed by the computing array (i.e., the data to be processed), that is, the coordinate positions of these storage units in the three-dimensional storage array.
S102: outputting the data in each storage unit in parallel through multiple output channels to the computing array, so that the computing array performs convolution calculation on the data to be processed.
In this step, according to the coordinate positions of the storage units determined in the previous step, the data in each storage unit is output in parallel through multiple output channels to the computing array. The number of output channels, the data length, the beat, and the correlation between the output channels can be controlled by configuration parameters. That is, this step may include: determining configuration parameters, and outputting the data in each storage unit in parallel through multiple output channels to the computing array according to the configuration parameters; wherein the configuration parameters include at least the number of output channels and the relationship between the output channels.
In a specific implementation, the row read enable of the row, the column read enable of the column, and the layer read enable of the layer in which each storage unit is located are controlled so as to output the data in each storage unit in parallel through multiple output channels to the computing array.
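The configuration-parameter-driven output of step S102 can be sketched as follows (a hypothetical illustration; the record fields and the coordinate encoding of the channel relationship are assumptions):

```python
# A configuration record selects how many output channels are active
# and which storage units each channel drains per beat.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class OutputConfig:
    num_channels: int
    # For each channel, the (row, col, layer) coordinates it outputs;
    # this encodes the "relationship between output channels".
    channel_units: List[List[Tuple[int, int, int]]]

def parallel_output(array3d, cfg: OutputConfig):
    """Gather the data of every active channel, one list per channel."""
    return [
        [array3d[r][c][k] for (r, c, k) in units]
        for units in cfg.channel_units[: cfg.num_channels]
    ]

# Two channels each reading two units of a tiny 2x2x2 array.
mem = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
cfg = OutputConfig(2, [[(0, 0, 0), (0, 1, 0)], [(1, 0, 0), (1, 1, 0)]])
print(parallel_output(mem, cfg))  # [[1, 3], [5, 7]]
```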
In the neural network data processing method provided by the embodiment of this application, the three-dimensional storage array pre-stores the neural network data. When the computing array needs to perform convolution operations on the neural network data, the data can be output in parallel through multiple output channels, providing a large amount of calculation data to the computing array in a short time, improving the parallelism of neural network data processing, and accelerating the speed of convolution calculation.
The process of writing the neural network data into the three-dimensional storage array is introduced below, specifically:
Referring to Fig. 4, a flowchart of another neural network data processing method according to an exemplary embodiment, as shown in Fig. 4, the method comprises:
S201: determining the input width of the three-dimensional storage array according to the data interface width of an external storage space;
In this step, the input width of the data can be controlled by configuration parameters according to the data interface width of the external storage space.
S202: storing the neural network data in the external storage space into the three-dimensional storage array according to the input width.
In this step, the neural network data is written into the three-dimensional storage array according to the input width determined in the previous step. The data may be written in the order of the storage units in the three-dimensional storage array, or specific storage units may be designated for the neural network data; this is not specifically limited here.
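Steps S201 and S202 can be sketched as follows (a hypothetical illustration; the 64-bit interface width and the function name are assumptions): the interface width of the external storage space fixes how many bytes enter the array per transfer.

```python
# Split a flat byte stream into words of the external interface width;
# each word is then written into one storage unit in order (S202).

def store_by_width(external_data, interface_bits=64):
    """Return the per-transfer words for a given interface width (S201)."""
    word_bytes = interface_bits // 8
    return [
        external_data[i : i + word_bytes]
        for i in range(0, len(external_data), word_bytes)
    ]

data = bytes(range(16))
units = store_by_width(data, interface_bits=64)
print(len(units), units[0])  # 2 b'\x00\x01\x02\x03\x04\x05\x06\x07'
```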
A neural network data processing system provided by an embodiment of this application is introduced below; the neural network data processing system described below and the neural network data processing method described above can be referred to correspondingly.
Referring to Fig. 5, a block diagram of a neural network data processing system according to an exemplary embodiment, as shown in Fig. 5, the system comprises:
a determining module 501 for determining data to be processed, and determining all storage units corresponding to the data to be processed in a three-dimensional storage array; and
an output module 502 for outputting the data in each storage unit in parallel through multiple output channels to a computing array, so that the computing array performs convolution calculation on the data to be processed.
In the neural network data processing system provided by the embodiment of this application, the three-dimensional storage array pre-stores the neural network data. When the computing array needs to perform convolution operations on the neural network data, the data can be output in parallel through multiple output channels, providing a large amount of calculation data to the computing array in a short time, improving the parallelism of neural network data processing, and accelerating the speed of convolution calculation.
On the basis of the above embodiment, as a preferred implementation, the output module 502 is specifically a module that controls the row read enable of the row, the column read enable of the column, and the layer read enable of the layer in which each storage unit is located, so as to output the data in each storage unit in parallel through multiple output channels to the computing array.
On the basis of the above embodiment, as a preferred implementation, the output module 502 is specifically a module that determines configuration parameters and outputs the data in each storage unit in parallel through multiple output channels to the computing array according to the configuration parameters; wherein the configuration parameters include at least the number of output channels and the relationship between the output channels.
On the basis of the above embodiment, as a preferred implementation, the system further includes:
an input width determining module for determining the input width of the three-dimensional storage array according to the data interface width of an external storage space; and
a storage module for storing the neural network data in the external storage space into the three-dimensional storage array according to the input width.
Regarding the system in the above embodiment, the specific manner in which each module performs its operations has been described in detail in the embodiment of the related method and will not be elaborated here.
This application also provides an electronic device. Referring to Fig. 6, a block diagram of an electronic device 600 provided by an embodiment of this application: as shown in Fig. 6, the device may include a processor 11 and a memory 12. The electronic device 600 may also include one or more of a multimedia component 13, an input/output (I/O) interface 14, and a communication component 15.
The processor 11 is used to control the overall operation of the electronic device 600 to complete all or part of the steps of the above neural network data processing method. The memory 12 is used to store various types of data to support operation on the electronic device 600; these data may include, for example, instructions for any application or method operating on the electronic device 600 and application-related data such as contact data, sent and received messages, pictures, audio, and video. The memory 12 can be realized by any type of volatile or non-volatile storage device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. The multimedia component 13 may include a screen and an audio component. The screen may be, for example, a touch screen; the audio component is used to output and/or input audio signals. For example, the audio component may include a microphone for receiving external audio signals; the received audio signals may be further stored in the memory 12 or sent through the communication component 15. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 14 provides an interface between the processor 11 and other interface modules, such as a keyboard, a mouse, or buttons; these buttons may be virtual buttons or physical buttons. The communication component 15 is used for wired or wireless communication between the electronic device 600 and other devices. Wireless communication includes, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them; accordingly, the communication component 15 may include a Wi-Fi module, a Bluetooth module, and an NFC module.
In an exemplary embodiment, the electronic device 600 may be realized by one or more Application Specific Integrated Circuits (ASIC), Digital Signal Processors (DSP), Digital Signal Processing Devices (DSPD), Programmable Logic Devices (PLD), Field Programmable Gate Arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components, for executing the above neural network data processing method.
In a further exemplary embodiment, a computer-readable storage medium including program instructions is also provided; the steps of the above neural network data processing method are realized when the program instructions are executed by a processor. For example, the computer-readable storage medium may be the above memory 12 including program instructions, and the program instructions can be executed by the processor 11 of the electronic device 600 to complete the above neural network data processing method.
Each embodiment in the specification is described in a progressive manner; each embodiment highlights its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to each other. Since the system disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively simple, and the relevant parts can be found in the description of the method. It should be pointed out that, for those of ordinary skill in the art, improvements and modifications can also be made to this application without departing from the principles of this application, and these improvements and modifications also fall within the protection scope of the claims of this application.
It should also be noted that, in this specification, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.

Claims (10)

1. A neural network data processing structure, characterized by comprising:
a three-dimensional storage array for pre-storing the neural network data and outputting target data in parallel through multiple output channels; and
a computing array, connected to the three-dimensional storage array, for performing convolution calculation on the target data.
2. The neural network data processing structure according to claim 1, characterized in that each row of the three-dimensional storage array corresponds to a row write enable and a row read enable, each column corresponds to a column write enable and a column read enable, and each layer corresponds to a layer write enable and a layer read enable.
3. The neural network data processing structure according to claim 1 or 2, characterized in that the three-dimensional storage array is specifically a three-dimensional storage array with a ping-pong structure.
4. A neural network data processing method, characterized by comprising:
determining data to be processed, and determining all storage units corresponding to the data to be processed in a three-dimensional storage array; and
outputting the data in each storage unit in parallel through multiple output channels to a computing array, so that the computing array performs convolution calculation on the data to be processed.
5. The neural network data processing method according to claim 4, characterized in that outputting the data in each storage unit in parallel through multiple output channels to the computing array comprises:
controlling the row read enable of the row, the column read enable of the column, and the layer read enable of the layer in which each storage unit is located, so as to output the data in each storage unit in parallel through multiple output channels to the computing array.
6. Neural Network Data processing method according to claim 4, which is characterized in that described by each storage unit In data pass through multiple output channel parallel outputs to computing array, comprising:
It determines configuration parameter, and the data in each storage unit is passed through by multiple output channels according to the configuration parameter Parallel output is to computing array;Wherein, the configuration parameter includes at least the number and each output of the output channel Relationship between channel.
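Claim 6's configuration parameters (at least the channel count and the relationship between channels) might be modelled as below; the field names and the stride-based "relationship" are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class OutputConfig:
    """Illustrative configuration parameters from claim 6: the number of
    output channels and the relationship between them, modelled here as a
    fixed address stride between adjacent channels."""
    num_channels: int
    channel_stride: int

def channel_addresses(cfg, base, units_per_channel):
    """Derive, for each output channel, the storage-unit addresses that
    channel reads, driven purely by the configuration parameters:
    channel c starts at base + c * stride and then advances by
    num_channels * stride per readout beat."""
    return [[base + c * cfg.channel_stride
             + k * cfg.num_channels * cfg.channel_stride
             for k in range(units_per_channel)]
            for c in range(cfg.num_channels)]
```

Reconfiguring the dataclass (rather than the datapath) is what lets the same memory array feed computing arrays of different widths.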
7. The neural network data processing method according to any one of claims 4 to 6, further comprising:
determining the input width of the three-dimensional memory array according to the data interface width of an external memory space;
storing the neural network data in the external memory space into the three-dimensional memory array according to the input width.
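Claim 7 ties the array's input width to the external memory's data interface width. A hedged sketch of that packing step (the bit widths and the LSB-first cell order are assumptions, not patent details):

```python
def store_to_array(external_words, interface_bits, cell_bits):
    """Split each word read over the external data interface into
    cell-sized fields and write them into the array: the array's input
    width (cells accepted per cycle) follows directly from the
    interface width, as claim 7 requires."""
    cells_per_word = interface_bits // cell_bits  # input width in cells
    mask = (1 << cell_bits) - 1
    array = []
    for word in external_words:
        # Unpack one interface word into its constituent cells, LSB first.
        array.append([(word >> (cell_bits * i)) & mask
                      for i in range(cells_per_word)])
    return array
```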
8. A neural network data processing system, comprising:
a determining module, configured to determine data to be processed and to determine all storage units in a three-dimensional memory array corresponding to the data to be processed;
an output module, configured to output the data in each storage unit in parallel through a plurality of output channels to a computing array, so that the computing array performs convolution calculation on the data to be processed.
9. An electronic device, comprising:
a memory for storing a computer program; and
a processor configured to implement, when executing the computer program, the steps of the neural network data processing method according to any one of claims 4 to 7.
10. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the neural network data processing method according to any one of claims 4 to 7.
CN201910429293.8A 2019-05-22 2019-05-22 Neural network data processing structure, method, system and related apparatus Pending CN110147880A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910429293.8A CN110147880A (en) 2019-05-22 2019-05-22 Neural network data processing structure, method, system and related apparatus

Publications (1)

Publication Number Publication Date
CN110147880A true CN110147880A (en) 2019-08-20

Family

ID=67592728

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910429293.8A Pending CN110147880A (en) Neural network data processing structure, method, system and related apparatus

Country Status (1)

Country Link
CN (1) CN110147880A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102339846A (* 2010-07-19 2012-02-01 旺宏电子股份有限公司 Semiconductor memory element having a transistor with adjustable gate resistance
CN103533378A (en) * 2013-10-09 2014-01-22 天津大学 Three-dimensional integer DCT (Discrete Cosine Transform) transformation system on basis of FPGA (Field Programmable Gate Array) and transformation method thereof
CN103578552A (en) * 2012-08-10 2014-02-12 三星电子株式会社 Nonvolatile memory device and operating method with variable memory cell state definitions
CN103681679A (en) * 2012-08-30 2014-03-26 成都海存艾匹科技有限公司 Three-dimensional offset-printed memory
US20160092122A1 (en) * 2014-09-30 2016-03-31 Sandisk Technologies Inc. Method and apparatus for wear-levelling non-volatile memory
CN106898371A (en) * 2017-02-24 2017-06-27 中国科学院上海微系统与信息技术研究所 Three-dimensional storage reading circuit and its wordline and bit-line voltage collocation method
CN107220704A (en) * 2016-03-21 2017-09-29 杭州海存信息技术有限公司 Integrated neural network processor containing three-dimensional memory array
CN107316014A (* 2016-03-07 2017-11-03 杭州海存信息技术有限公司 Memory with integrated image recognition function
CN108596331A (* 2018-04-16 2018-09-28 浙江大学 Optimization method for cellular neural network hardware architecture
CN109446996A (en) * 2018-10-31 2019-03-08 北京智慧眼科技股份有限公司 Facial recognition data processing unit and processing method based on FPGA

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Luo Yuping et al., "A Hardware-Core Solution for 3D-DCT Transform in High-Definition Television", Graphics and Images *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111091188A (en) * 2019-12-16 2020-05-01 腾讯科技(深圳)有限公司 Forward computing method and device for neural network and computer readable storage medium
CN111091188B (en) * 2019-12-16 2022-03-25 腾讯科技(深圳)有限公司 Forward computing method and device for neural network and computer readable storage medium
CN111506518A (en) * 2020-04-13 2020-08-07 湘潭大学 Data storage control method and device
CN111857999A (en) * 2020-07-10 2020-10-30 苏州浪潮智能科技有限公司 Data scheduling method, device and equipment and computer readable storage medium
CN111857999B (en) * 2020-07-10 2023-01-10 苏州浪潮智能科技有限公司 Data scheduling method, device and equipment and computer readable storage medium
CN112016522A (en) * 2020-09-25 2020-12-01 苏州浪潮智能科技有限公司 Video data processing method, system and related components
CN112016522B (en) * 2020-09-25 2022-06-07 苏州浪潮智能科技有限公司 Video data processing method, system and related components
CN112395247A (en) * 2020-11-18 2021-02-23 北京灵汐科技有限公司 Data processing method and storage and calculation integrated chip
CN112395247B (* 2020-11-18 2024-05-03 北京灵汐科技有限公司 Data processing method and storage and calculation integrated chip
CN112596684A (en) * 2021-03-08 2021-04-02 成都启英泰伦科技有限公司 Data storage method for voice deep neural network operation

Similar Documents

Publication Publication Date Title
CN110147880A (en) Neural network data processing structure, method, system and related apparatus
US20130262774A1 (en) Method and apparatus to manage object based tier
TWI554883B (en) Systems and methods for segmenting data structures in a memory system
KR20210045509A (en) Accessing data in multi-dimensional tensors using adders
Van Houdt Performance of garbage collection algorithms for flash-based solid state drives with hot/cold data
CN106201923B (en) Data reading and writing method and device
US9335947B2 (en) Inter-processor memory
CN108885596A (en) Data processing method, device, DMA controller and computer-readable storage medium
CN105573917B (en) Techniques for selecting an amount of reserved space
CN110688256B (en) Metadata power-on recovery method and device, electronic equipment and storage medium
CN109754359A (en) Pooling processing method and system applied to convolutional neural networks
CN107832151A (en) CPU resource allocation method, device and equipment
CN112906865B (en) Neural network architecture searching method and device, electronic equipment and storage medium
CN109472361A (en) Neural network optimization
CN109074335A (en) Data processing method, device, DMA controller and computer-readable storage medium
CN105528183B (en) Data storage method and storage device
CN103714010B (en) Storage device write method and storage device
US20090055616A1 (en) Maintaining reserved free space for segmented logical volumes
CN112396072B (en) Image classification acceleration method and device based on ASIC (application specific integrated circuit) and VGG16
CN109902821A (en) Data processing method, device and related components
CN104951243B (en) Storage extended method and device in virtual storage system
US8458405B2 (en) Cache bank modeling with variable access and busy times
CN114444673A (en) Artificial neural network memory system based on artificial neural network data locality
US20120079188A1 (en) Method and apparatus to allocate area to virtual volume based on object access type
CN106155910A (en) Method, device and system for implementing memory access

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2019-08-20