CN110399330A - Data cache processing method, system, device and storage medium - Google Patents

Data cache processing method, system, device and storage medium Download PDF

Info

Publication number
CN110399330A
CN110399330A
Authority
CN
China
Prior art keywords
data
matrix
minor
predetermined number
caching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201910561850.1A
Other languages
Chinese (zh)
Inventor
李拓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Wave Intelligent Technology Co Ltd
Original Assignee
Suzhou Wave Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Wave Intelligent Technology Co Ltd filed Critical Suzhou Wave Intelligent Technology Co Ltd
Priority to CN201910561850.1A priority Critical patent/CN110399330A/en
Publication of CN110399330A publication Critical patent/CN110399330A/en
Withdrawn legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 — Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 — Addressing or allocation; Relocation
    • G06F 12/0223 — User address space allocation, e.g. contiguous or non-contiguous base addressing
    • G06F 12/023 — Free address space management
    • G06F 12/0238 — Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 15/00 — Digital computers in general; Data processing equipment in general
    • G06F 15/76 — Architectures of general purpose stored program computers
    • G06F 15/78 — Architectures of general purpose stored program computers comprising a single central processing unit
    • G06F 15/7807 — System on chip, i.e. computer system on a single chip; System in package, i.e. computer system on one or more chips in a single package
    • G06F 15/781 — On-chip cache; Off-chip memory

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a data cache processing method, comprising: determining the minor matrices corresponding to data to be cached, wherein the minor matrices are a predetermined number (no fewer than two) of sub-matrices obtained by dividing an original cache matrix; caching the data to be cached into different minor matrices; and reading the data in each minor matrix in parallel. It can be seen that the application divides the original cache matrix into multiple minor matrices in advance. When caching, the data to be cached can be cached into different minor matrices; since cache operations can be performed on different minor matrices simultaneously, caching data in parallel into multiple minor matrices reduces the time occupied by caching. When reading, read operations can be performed on different minor matrices simultaneously, so the data in each minor matrix can be read in parallel; compared with serial reading, parallel reading reduces the time consumed by reading.

Description

Data cache processing method, system, device and storage medium
Technical field
The present invention relates to chip cache technology, and more specifically to a data cache processing method, system, device and computer-readable storage medium.
Background
In chip design, if the time that two modules in a chip take to process data is not fixed, a flow-control mechanism must be provided so that, when the rate at which the data-sending module transmits exceeds the data-processing rate of the receiving module, the sending module pauses its transmission. Under such a flow-control mechanism, for all data between the two modules, the sending module must wait until the data it has already sent has been processed and a notification from the receiving module has been received before it sends the next data.
To improve the data transmission speed between two modules in a chip, a cache is often placed between the two module interfaces. When the receiving module needs to pause reception, the sending module can first write data into the cache; when the receiving module resumes, it can process the cached data directly, and the sending module need not wait for a notification before transmitting.
The cache between the two modules has a specific size and can logically be understood as a cache matrix. Its width is configured according to the module interface, i.e., the width of the cache matrix equals the interface width, which is the length of one datum; its depth can be designed according to the cache resources available in the chip and the required buffer size between the two modules.
In some complex chip application scenarios, much of the data has variable length while the cache width is fixed. Therefore, when caching, the data must be split according to the width, the split results cached into the cache in sequence, and, when reading, read out of the cache in the same sequence. For example, if the interface width is 8 bits and the datum to be sent is 8 bits, it is transmitted directly; if the datum is 32 bits, which exceeds the interface width, it must be split into four 8-bit pieces, sent in four transfers, and likewise read in four transfers. In this case, both caching and reading are performed serially and consume a great deal of time.
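The serial split-and-transfer behaviour of this background scheme can be sketched as follows (a minimal Python model for illustration only; the function name and parameters are assumptions, not part of the patent):

```python
import math

def serial_split(data_bits: int, interface_width: int) -> int:
    """Number of serial transfer (and read) cycles for one datum:
    the datum is split into width-sized pieces sent one after another."""
    return math.ceil(data_bits / interface_width)

# An 8-bit datum over an 8-bit interface is transmitted directly: 1 cycle.
# A 32-bit datum over an 8-bit interface is split into 4 pieces,
# cached in 4 cycles and read back in 4 cycles -- both serial.
```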
Therefore, how to increase the data cache processing speed is a problem to be solved by those skilled in the art.
Summary of the invention
The purpose of the present invention is to provide a data cache processing method, system, device and computer-readable storage medium, so as to solve the problem that caching and reading data serially consumes a great deal of time.
To achieve the above object, the embodiments of the invention provide the following technical solutions:
A data cache processing method, comprising:
determining minor matrices corresponding to data to be cached, wherein the minor matrices are a predetermined number of sub-matrices obtained by dividing an original cache matrix, the predetermined number being no fewer than two;
caching the data to be cached into different minor matrices;
reading the data in each minor matrix in parallel.
Wherein the width of each minor matrix is the width of the original cache matrix; the depth of each minor matrix is the depth of the original cache matrix divided by the predetermined number; and the predetermined number is the rounded-up result of the maximum cached data amount of the original cache matrix divided by the width.
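Under the dimension rules just stated, the predetermined number and each minor matrix's shape can be sketched as follows (hypothetical Python helper for illustration; the name and signature are assumptions):

```python
import math

def minor_matrix_dims(orig_width: int, orig_depth: int, max_buffer: int):
    """Predetermined number L = ceil(max cached data amount / width);
    each minor matrix keeps the original width and gets depth // L."""
    L = math.ceil(max_buffer / orig_width)
    return L, orig_width, orig_depth // L

# Width 8 bits, depth 4, maximum cached data amount 30:
# L = ceil(30/8) = 4, giving 4 minor matrices of width 8 and depth 1.
```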
Wherein the predetermined number is the rounded-up result divided by a preset multiple.
Wherein caching the data to be cached into different minor matrices comprises:
caching every predetermined number of rows of the data to be cached into the minor matrices in parallel, in turn, wherein a row of the data to be cached is a width-sized portion of the data to be cached.
Wherein caching every predetermined number of rows of the data to be cached into the minor matrices in parallel, in turn, comprises:
determining a row label for each row of the data to be cached;
taking the remainder of each row label divided by the predetermined number as the minor matrix label of the row corresponding to that row label;
caching each row of the data to be cached into the corresponding minor matrix according to its minor matrix label.
The application also provides a data cache processing system, comprising:
a minor matrix determining module for determining minor matrices corresponding to data to be cached, wherein the minor matrices are a predetermined number of sub-matrices obtained by dividing an original cache matrix, the predetermined number being no fewer than two;
a cache module for caching the data to be cached into different minor matrices;
a read module for reading the data in each minor matrix in parallel.
Wherein the width of each minor matrix is the width of the original cache matrix; the depth of each minor matrix is an initial depth result, the initial depth result being the depth of the original cache matrix divided by the predetermined number; the predetermined number is the rounded-up result of the maximum cached data amount of the original cache matrix divided by the width.
Wherein the predetermined number is the rounded-up result divided by a preset multiple.
The application also provides a data cache processing device, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the data cache processing method when executing the computer program.
The application also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the data cache processing method.
It can be seen that the data cache processing method provided by the present application comprises: determining minor matrices corresponding to data to be cached, wherein the minor matrices are a predetermined number (no fewer than two) of sub-matrices obtained by dividing an original cache matrix; caching the data to be cached into different minor matrices; and reading the data in each minor matrix in parallel. The application divides the original cache matrix into multiple minor matrices in advance, the total capacity of the minor matrices being equal to that of the original cache matrix. When caching, the data to be cached can be cached into different minor matrices; since cache operations can be performed on different minor matrices simultaneously, caching data in parallel into multiple minor matrices reduces the time occupied by caching, compared with caching serially into a single cache matrix. When reading, read operations can be performed on different minor matrices simultaneously, so the data in each minor matrix can be read in parallel; compared with serial reading, parallel reading reduces the time consumed by reading.
Brief description of the drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of a data cache processing method disclosed by an embodiment of the present invention;
Fig. 2 is a flowchart of a specific data cache processing method disclosed by an embodiment of the present invention;
Fig. 3 is a data caching schematic diagram disclosed by an embodiment of the present invention;
Fig. 4 is a flowchart of a specific data cache processing method disclosed by an embodiment of the present invention;
Fig. 5 is a structural schematic diagram of a data cache processing system disclosed by an embodiment of the present invention;
Fig. 6 is a structural diagram of a data cache processing device disclosed by an embodiment of the present invention.
Detailed description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The embodiments of the present invention disclose a data cache processing method, system, device and computer-readable storage medium, so as to solve the problem that caching and reading data serially consumes a great deal of time.
Referring to Fig. 1, a data cache processing method provided by an embodiment of the present invention specifically includes:
S101: determining minor matrices corresponding to data to be cached, wherein the minor matrices are a predetermined number of sub-matrices obtained by dividing an original cache matrix, the predetermined number being no fewer than two.
In this solution, the original cache matrix between the two modules is split into at least two minor matrices with the capacity unchanged.
It should be noted that cache resources in a chip are very limited; therefore, in order not to occupy more cache resources, this solution splits the original cache matrix to obtain multiple cache matrices without any increase in overall capacity.
When caching data, the data access logic first determines the minor matrices corresponding to the data to be cached.
S102: caching the data to be cached into different minor matrices.
Since in this solution there are multiple cache matrices, i.e., multiple minor matrices, available for the data to be cached, the data is no longer cached serially into a single cache matrix; instead, the data to be cached is cached into different minor matrices.
Since the data is cached into different minor matrices and cache operations can be performed on different minor matrices simultaneously, caching data in parallel into multiple minor matrices reduces the time occupied by caching, compared with caching serially into a single cache matrix.
S103: reading the data in each minor matrix in parallel.
Since the data is cached in different minor matrices and read operations can be performed on different minor matrices simultaneously, the data in each minor matrix can be read in parallel; compared with serial reading, parallel reading reduces the time consumed by reading.
It can be seen that the data cache processing method provided by this embodiment divides the original cache matrix into multiple minor matrices in advance, the total capacity of the minor matrices being equal to that of the original cache matrix. When caching, the data to be cached can be cached into different minor matrices; since cache operations can be performed on different minor matrices simultaneously, caching data in parallel into multiple minor matrices reduces the time occupied by caching, compared with caching serially into a single cache matrix. When reading, read operations can be performed on different minor matrices simultaneously, so the data in each minor matrix can be read in parallel; compared with serial reading, parallel reading reduces the time consumed by reading.
On the basis of the above embodiments, the embodiments of the present application further limit and explain the technical solution, specifically as follows:
The width of each minor matrix is the width of the original cache matrix; the depth of each minor matrix is an initial depth result, the initial depth result being the depth of the original cache matrix divided by the predetermined number; the predetermined number is the rounded-up result of the maximum cached data amount of the original cache matrix divided by the width.
In this solution, each minor matrix has the same width as the original cache matrix, i.e., still the interface width, while the depth is adjusted: the depth of each minor matrix is the depth of the original cache matrix divided by the predetermined number.
For example, if the width of the original cache matrix is 8 bits and its maximum cached data amount is 30, the predetermined number is the rounded-up result of 30/8, i.e., 4. If the depth of the original cache matrix is 4, then each minor matrix has a width of 8 bits and a depth of 4/4, i.e., 1.
On the basis of the above embodiments, the embodiments of the present application further limit and explain the technical solution, specifically as follows:
The predetermined number is the rounded-up result divided by a preset multiple.
It should be noted that if the maximum cached data amount of the original cache matrix is very large, the rounded-up result of the maximum cached data amount divided by the width may reach dozens or even more; the number of minor matrices would then be excessive, making the control logic complex and reducing overall performance. To avoid this problem, in this solution a preset multiple is determined according to actual conditions, and the value of the predetermined number is taken as the above rounded-up result divided by the preset multiple.
For example, if the width of the original cache matrix is 8 bits and its maximum cached data amount is 320, the rounded-up result of the maximum cached data amount divided by the width is 40. With a preset multiple of 10, the final predetermined number is 40/10, i.e., 4.
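The preset-multiple adjustment described above can be sketched as follows (hypothetical Python helper for illustration only; name and defaults are assumptions):

```python
import math

def predetermined_number(max_buffer: int, width: int, preset_multiple: int = 1) -> int:
    """Rounded-up result of (max cached data amount / width),
    divided by the preset multiple to cap the number of minor matrices."""
    return math.ceil(max_buffer / width) // preset_multiple

# Width 8 bits, maximum cached data amount 320: ceil(320/8) = 40;
# with a preset multiple of 10 the final predetermined number is 40/10 = 4.
```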
A specific data cache processing method provided by the embodiments of the present application is introduced below; the data cache processing method described below and the above embodiments may be cross-referenced.
Referring to Fig. 2, a specific data cache processing method provided by an embodiment of the present application specifically includes:
S201: determining minor matrices corresponding to data to be cached, wherein the minor matrices are a predetermined number of sub-matrices obtained by dividing an original cache matrix, the predetermined number being no fewer than two.
S202: caching every predetermined number of rows of the data to be cached into the minor matrices in parallel, in turn, wherein a row of the data to be cached is a width-sized portion of the data to be cached.
On the basis of the above embodiments, the depth of a minor matrix may be greater than 1; in that case, within the same matrix, waiting occurs when caching or reading adjacent data. For example, if the depth of a minor matrix is 2 and there are 8 rows of data to be cached, then when caching in sequence the 1st and 2nd rows would be cached into the same minor matrix, and the 2nd row could only be cached once the 1st row had finished caching; reading is similar. In this solution, to avoid such waiting and further increase the caching speed, every predetermined number of rows of the data to be cached can be cached into the minor matrices in parallel.
Referring to Fig. 3, there are 8 rows of data to be cached, each row being 8 bits; each minor matrix is 8 bits wide and 2 deep. In this solution, the first caching period caches rows 0, 1, 2 and 3 into the 4 minor matrices respectively, and the second caching period caches rows 4, 5, 6 and 7 into the 4 minor matrices respectively.
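The round-robin distribution of Fig. 3, in which row i goes to minor matrix i % L so that each caching period fills all L minor matrices in parallel, can be sketched as follows (hypothetical Python model; the function name is an assumption):

```python
def cache_rows(rows, num_minor):
    """Distribute data rows over minor matrices by row-number remainder."""
    minors = [[] for _ in range(num_minor)]
    for label, row in enumerate(rows):
        minors[label % num_minor].append(row)
    return minors

# 8 rows, 4 minor matrices of depth 2: the first caching period
# places rows 0-3, the second places rows 4-7, one row per matrix.
distribution = cache_rows(list(range(8)), 4)
# distribution == [[0, 4], [1, 5], [2, 6], [3, 7]]
```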
S203: reading the data in each minor matrix in parallel.
Referring to Fig. 4, in a specific embodiment, S202 specifically includes:
S301: determining a row label for each row of the data to be cached.
To cache the data rows conveniently, in this solution the row number of each data row is determined first; in Fig. 3, the row numbers of the 8 data rows are 0 to 7.
S302: taking the remainder of each row label divided by the predetermined number as the minor matrix label of the row corresponding to that row label.
In this solution, the remainder of the row number of a data row divided by the predetermined number is taken as the label of the minor matrix that will cache that row. For example, if the row number of a data row is 0 and the predetermined number is 4, the remainder is 0 % 4, i.e., 0, so the row is cached into minor matrix 0. The other data rows are computed in the same way and will not be repeated here; the final caching result is shown in Fig. 3.
It should be noted that the example of Fig. 3 is only one case; other, misaligned cases exist in practice. For example, if there are 9 data rows, the depth of minor matrix 0 can be increased by 1 while the other minor matrices remain unchanged.
S303: caching each row of the data to be cached into the corresponding minor matrix according to its minor matrix label.
It should be noted that access operations on the original cache matrix are all performed through a single controller logic. In this solution, the access controller is logically divided into two layers. The first-layer controller is responsible for data exchange with the interface: when storing data, it distributes the data to the different minor matrices according to the above embodiments; when fetching data, it sends data-fetch requests to one or more minor matrices according to the above embodiments. The second layer gives each minor matrix an independent controller whose function is essentially the same as the controller of the original cache matrix: according to the received access requests, it operates on the cache it controls.
When storing data, the two-layer controller logic brings a certain additional loss, but because its complexity is not high it can still complete within the same clock cycle as the traditional single-cache-matrix scheme, and the resulting performance loss is negligible. When reading data, if one interface width of data is read, the situation is similar to storing; if multiple interface widths of data are read, the effect of reading the minor matrices in parallel is realized.
For example, assume the interface width is 8 bits and there are 4 minor matrices. If the data to be read is 32 bits, then in this solution these 32 bits have 8 bits stored in each minor matrix; through the scheduling of the controller logic, the 32 bits of data can be obtained within 1 cache read cycle, whereas the traditional single-cache-matrix scheme needs 4 cache read cycles.
If the data to be read is 40 bits, one minor matrix holds 2 data rows, so the whole read takes 2 cache read cycles, whereas the traditional approach needs 5.
In short, the longer the data, the more obvious the advantage of the present application in improving read performance. Assume the interface width is N, the data length is X, and the number of minor matrices is L. Without considering the overhead of controller-logic scheduling (in general no more than 1 cache read cycle), the performance improvement ratio depends on alignment: it is ⌈X/N⌉ / ⌈⌈X/N⌉ / L⌉, i.e., up to L times in the fully aligned case, and at minimum about ⌈X/N⌉ / ⌈⌈X/N⌉ / L⌉ times in the misaligned case.
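The read-cycle comparison of the two preceding examples can be sketched as follows (hypothetical Python model; the cycle formulas are reconstructed from those examples, not quoted from the patent):

```python
import math

def serial_read_cycles(data_bits: int, width: int) -> int:
    """Traditional single cache matrix: one width-sized row per read cycle."""
    return math.ceil(data_bits / width)

def parallel_read_cycles(data_bits: int, width: int, num_minor: int) -> int:
    """L minor matrices read in parallel: up to L rows per read cycle."""
    rows = math.ceil(data_bits / width)
    return math.ceil(rows / num_minor)

# 32-bit read, 8-bit width, 4 minor matrices: 4 cycles vs 1 (speedup 4 = L).
# 40-bit read: 5 cycles vs 2 (speedup 2.5, the misaligned case).
```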
A data cache processing system provided by the embodiments of the present application is introduced below; the data cache processing system described below and any of the above embodiments may be cross-referenced.
Referring to Fig. 5, a data cache processing system provided by an embodiment of the present application specifically includes:
a minor matrix determining module 401 for determining minor matrices corresponding to data to be cached, wherein the minor matrices are a predetermined number of sub-matrices obtained by dividing an original cache matrix, the predetermined number being no fewer than two;
a cache module 402 for caching the data to be cached into different minor matrices;
a read module 403 for reading the data in each minor matrix in parallel.
Optionally, the width of each minor matrix is the width of the original cache matrix; the depth of each minor matrix is an initial depth result, the initial depth result being the depth of the original cache matrix divided by the predetermined number; the predetermined number is the rounded-up result of the maximum cached data amount of the original cache matrix divided by the width.
Optionally, the predetermined number is the rounded-up result divided by a preset multiple.
Optionally, the cache module 402 is specifically used to cache every predetermined number of rows of the data to be cached into the minor matrices in parallel, in turn, wherein a row of the data to be cached is a width-sized portion of the data to be cached.
Optionally, the cache module 402 includes:
a row label determining unit for determining a row label for each row of the data to be cached;
a minor matrix label determining unit for taking the remainder of each row label divided by the predetermined number as the minor matrix label of the row corresponding to that row label;
a cache unit for caching each row of the data to be cached into the corresponding minor matrix according to its minor matrix label.
The data cache processing system of this embodiment is used to implement the aforementioned data cache processing method, so for its specific implementation see the method embodiments above. For example, the minor matrix determining module 401, the cache module 402 and the read module 403 are respectively used to implement steps S101, S102 and S103 of the above data cache processing method; thus, for the specific implementation, refer to the description of the corresponding embodiments, which will not be repeated here.
A data cache processing device provided by the embodiments of the present application is introduced below; the data cache processing device described below and any of the above embodiments may be cross-referenced.
Referring to Fig. 6, a data cache processing device provided by an embodiment of the present application specifically includes:
a memory 100 for storing a computer program;
a processor 200 for implementing the steps of any of the above data cache processing methods when executing the computer program.
Specifically, the memory 100 includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and computer-readable instructions, and the internal memory provides an environment for the operation of the operating system and the computer-readable instructions in the non-volatile storage medium.
Further, the data cache processing device in this embodiment may also include:
an input interface 300 for obtaining externally imported computer programs and saving the obtained computer programs into the memory 100; it can also be used to obtain various instructions and parameters transmitted by external terminal devices and transmit them to the processor 200, so that the processor 200 performs the corresponding processing using the above instructions and parameters. In this embodiment, the input interface 300 may specifically include, but is not limited to, a USB interface, a serial interface, a voice input interface, a fingerprint input interface, a hard disk reading interface, etc.;
an output interface 400 for outputting the various data generated by the processor 200 to the terminal devices connected to it, so that other terminal devices connected to the output interface 400 can obtain the various data generated by the processor 200. In this embodiment, the output interface 400 may specifically include, but is not limited to, a USB interface, a serial interface, etc.;
a communication unit 500 for establishing remote connections between the data cache processing device and other nodes, so as to receive transactions and synchronize block data;
a keyboard 600 for obtaining, in real time, the various parameter data or instructions that the user inputs by tapping the keys;
a display 700 for displaying, in real time, information related to the data caching process, so that the user can learn the current data caching situation in a timely manner;
a mouse 800, which can be used to assist the user in inputting data and to simplify the user's operations.
The present invention also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps provided by the above embodiments can be implemented. The storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to each other.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein can be realized in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. a kind of data buffer storage processing method characterized by comprising
It determines to data cached corresponding minor matrix;Wherein, wherein the minor matrix is to be divided to obtain by former caching matrix Predetermined number minor matrix, the predetermined number is no less than two;
It will be described to data cached caching to different minor matrixs;
The data in each minor matrix are read parallel.
2. the method according to claim 1, wherein the width of the minor matrix is the width of the former caching matrix Degree;The depth of the minor matrix is depth/predetermined number of the former caching matrix;The predetermined number is described former slow Deposit matrix largest buffered data volume/width result that rounds up.
3. according to the method described in claim 2, it is characterized in that, the predetermined number is the result that rounds up/default Multiple.
4. The method according to claim 2, characterized in that caching the data to be cached into different sub-matrices comprises:
sequentially caching, in parallel, every predetermined number of rows of the data to be cached into the respective sub-matrices; wherein a row of the data to be cached corresponds to the width of the data to be cached.
5. The method according to claim 4, characterized in that sequentially caching, in parallel, every predetermined number of rows of the data to be cached into the respective sub-matrices comprises:
determining a data row label for each row of the data to be cached;
taking the remainder of each data row label divided by the predetermined number as the sub-matrix label of the row corresponding to that data row label; and
caching each row of the data to be cached into its corresponding sub-matrix according to the sub-matrix label of that row.
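The row-distribution scheme of claims 4 and 5 can be sketched as follows (an illustrative model only, not part of the claims; the function names `distribute_rows` and `read_parallel` are assumptions). Each row's label modulo the predetermined number selects its sub-matrix, so every group of consecutive rows is spread across all sub-matrices and the sub-matrices can later be read in parallel.

```python
def distribute_rows(rows, predetermined_number):
    """Assign each row of the data to be cached to a sub-matrix.

    The sub-matrix label is the data row label modulo the predetermined
    number (claim 5), so every `predetermined_number` consecutive rows
    are spread across all sub-matrices (claim 4).
    """
    sub_matrices = [[] for _ in range(predetermined_number)]
    for row_label, row in enumerate(rows):
        sub_matrices[row_label % predetermined_number].append(row)
    return sub_matrices


def read_parallel(sub_matrices):
    """Read one row from each sub-matrix per step (the parallel read of
    claim 1, serialized here for illustration); the modulo distribution
    makes this interleaving reproduce the original row order."""
    out = []
    for step in range(max(len(m) for m in sub_matrices)):
        for m in sub_matrices:
            if step < len(m):
                out.append(m[step])
    return out
```

For example, distributing seven rows across three sub-matrices places rows 0, 3, 6 in the first sub-matrix, and interleaved reading recovers the rows in their original order.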
6. A data cache processing system, characterized by comprising:
a sub-matrix determining module, configured to determine the sub-matrix corresponding to data to be cached; wherein the sub-matrices are a predetermined number of sub-matrices obtained by dividing an original cache matrix, and the predetermined number is no less than two;
a caching module, configured to cache the data to be cached into different sub-matrices; and
a reading module, configured to read the data in each sub-matrix in parallel.
7. The system according to claim 6, characterized in that the width of the sub-matrix is the width of the original cache matrix; the depth of the sub-matrix is a depth initial result, the depth initial result being the depth of the original cache matrix divided by the predetermined number; and the predetermined number is the rounded-up result of the maximum buffered data volume of the original cache matrix divided by its width.
8. The system according to claim 7, characterized in that the predetermined number is the rounded-up result divided by a preset multiple.
9. A data cache processing apparatus, characterized by comprising:
a memory, configured to store a computer program; and
a processor, configured to implement the steps of the data cache processing method according to any one of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the data cache processing method according to any one of claims 1 to 5 are implemented.
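The sub-matrix sizing of claims 2, 3, and 7 can be worked through numerically as follows (an illustrative sketch only; the function name `sub_matrix_geometry` and the lower clamp to two are assumptions, the clamp reflecting claim 1's requirement that the predetermined number be no less than two).

```python
import math


def sub_matrix_geometry(orig_depth, orig_width, max_buffered_volume,
                        preset_multiple=1):
    """Derive the sub-matrix geometry described in claims 2 and 3:

    - predetermined number: rounded-up result of the maximum buffered
      data volume divided by the width (claim 2), optionally divided
      by a preset multiple (claim 3);
    - sub-matrix width: unchanged from the original cache matrix;
    - sub-matrix depth: original depth / predetermined number.
    """
    rounded_up = math.ceil(max_buffered_volume / orig_width)
    predetermined = max(2, rounded_up // preset_multiple)  # claim 1: >= 2
    return {
        "predetermined_number": predetermined,
        "sub_width": orig_width,
        "sub_depth": orig_depth // predetermined,
    }
```

For instance, an original cache matrix of depth 1024 and width 64 with a maximum buffered data volume of 256 yields a predetermined number of ceil(256 / 64) = 4, i.e. four sub-matrices, each of width 64 and depth 1024 / 4 = 256.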
CN201910561850.1A 2019-06-26 2019-06-26 A kind of data buffer storage processing method, system, device and storage medium Withdrawn CN110399330A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910561850.1A CN110399330A (en) 2019-06-26 2019-06-26 A kind of data buffer storage processing method, system, device and storage medium


Publications (1)

Publication Number Publication Date
CN110399330A true CN110399330A (en) 2019-11-01

Family

ID=68323486

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910561850.1A Withdrawn CN110399330A (en) 2019-06-26 2019-06-26 A kind of data buffer storage processing method, system, device and storage medium

Country Status (1)

Country Link
CN (1) CN110399330A (en)

Similar Documents

Publication Publication Date Title
CN106951388A (en) A kind of DMA data transfer method and system based on PCIe
CN102112973B (en) Conditioning unit, coherent system, coordination approach, SIC (semiconductor integrated circuit) and image processing apparatus
CN103019838B (en) Multi-DSP (Digital Signal Processor) platform based distributed type real-time multiple task operating system
CN103645994A (en) Data processing method and device
CN102065568B (en) Data descriptor-based medium access control (MAC) software and hardware interaction method and hardware realization device
CN106598889A (en) SATA (Serial Advanced Technology Attachment) master controller based on FPGA (Field Programmable Gate Array) sandwich plate
CN104699654A (en) Interconnection adapting system and method based on CHI on-chip interaction bus and QPI inter-chip interaction bus
US9588931B2 (en) Communication bridging between devices via multiple bridge elements
CN103002046A (en) Multi-system data copying remote direct memory access (RDMA) framework
CN106662895A (en) Computer device and data read-write method for computer device
CN109086168A (en) A kind of method and its system using hardware backup solid state hard disk writing rate
CN103677968A (en) Transaction processing method, transaction coordinator device and transaction participant device and system
CN116225992A (en) NVMe verification platform and method supporting virtualized simulation equipment
KR101994929B1 (en) Method for operating collective communication and collective communication system using the same
CN103885900B (en) Data access processing method, PCIe device and user equipment
CN114399035A (en) Method for transferring data, direct memory access device and computer system
CN105718396A (en) I<2>C bus device with big data master device transmission function and communication method thereof
CN102708079B (en) Be applied to the method and system of the control data transmission of microcontroller
CN110069565A (en) A kind of method and device of distributed data base batch data processing
CN110399330A (en) A kind of data buffer storage processing method, system, device and storage medium
CN115994115A (en) Chip control method, chip set and electronic equipment
CN111443898A (en) Method for designing flow program control software based on priority queue and finite-state machine
CN101488119B (en) Address interpretation method, apparatus and single-board
CN109062843A (en) A kind of date storage method and system based on iic bus
CN105117353A (en) FPGA with general data interaction module and information processing system using same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20191101