CN108170373B - Data caching method and device and data transmission system - Google Patents

Data caching method and device and data transmission system Download PDF

Info

Publication number
CN108170373B
Authority
CN
China
Prior art keywords
data
cache module
cache
module
volume
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711377201.3A
Other languages
Chinese (zh)
Other versions
CN108170373A (en)
Inventor
Hai Ming
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unisound Intelligent Technology Co Ltd
Original Assignee
Unisound Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unisound Intelligent Technology Co Ltd filed Critical Unisound Intelligent Technology Co Ltd
Priority to CN201711377201.3A priority Critical patent/CN108170373B/en
Publication of CN108170373A publication Critical patent/CN108170373A/en
Application granted granted Critical
Publication of CN108170373B publication Critical patent/CN108170373B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061 Improving I/O performance
    • G06F3/0611 Improving I/O performance in relation to response time
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656 Data buffering arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671 In-line storage system
    • G06F3/0683 Plurality of storage devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The disclosure relates to a data caching method, a data caching device and a data transmission system. The data caching device comprises a first cache module, a second cache module and a third cache module which are sequentially connected, wherein the first cache module is connected with a random access memory, and the third cache module is connected with a data processor. The technical scheme relies on the data locality principle to avoid the waiting problem caused by the bandwidth mismatch between the random access memory and the data processors, and, while preserving data locality, improves the efficiency of data reading and writing among the data processors and between the data processors and the random access memory by exploiting parallel operation.

Description

Data caching method and device and data transmission system
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular to a data caching method and device and a data transmission system.
Background
Over roughly the past decade of computer system development, computing capability has improved much faster than memory bandwidth, so the gap between memory performance and computing performance keeps widening. This is a challenge that deep neural network accelerators typically face. A high-bandwidth local cache that exploits the data locality principle can mitigate the mismatch between memory bandwidth and computing capability to a certain extent.
Disclosure of Invention
Embodiments of the present disclosure provide a data caching method, a data caching device and a data transmission system. The technical scheme is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a data caching device, comprising: a first cache module, a second cache module and a third cache module which are sequentially connected, wherein the first cache module is connected with a random access memory, and the third cache module is connected with a data processor;
the first cache module acquires and caches first data of a first data volume from the random access memory at each clock; when the data cached in the first cache module reaches a second data volume, it sends second data of the second data volume to the second cache module, wherein the first data volume is equal to the second data volume multiplied by a first bandwidth proportion, and the first bandwidth proportion is the proportion of bandwidth occupied by the data exchanged between the first cache module and the second cache module;
the second cache module caches the second data and sends the second data to the third cache module;
the third cache module caches the second data and sends third data of a third data volume to the data processor at each clock, wherein the second data volume is n times the third data volume, and n is an integer greater than 1.
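The relation among the three data volumes can be illustrated with a short sketch (illustrative only: the concrete figures, a first bandwidth proportion of 1/8 and n equal to 4, are the examples used later in this description, not requirements of the scheme):

```python
# Illustrative check of the data-volume relations stated above. The
# concrete numbers (first bandwidth proportion 1/8, n = 4) follow the
# examples given later in the description; the scheme does not fix them.
from fractions import Fraction

first_bandwidth_proportion = Fraction(1, 8)  # bandwidth share on the RAM side
second_data_volume = 512                     # bits per burst between cache modules
n = 4                                        # an integer greater than 1

third_data_volume = second_data_volume // n  # bits per clock toward the processor
first_data_volume = int(second_data_volume * first_bandwidth_proportion)  # bits per clock from RAM

assert second_data_volume == n * third_data_volume
print(first_data_volume, second_data_volume, third_data_volume)  # 64 512 128
```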
Optionally, the third cache module receives and caches fourth data of the third data volume sent by the data processor at each clock; when the data cached by the third cache module reaches the second data volume, it sends fifth data of the second data volume to the second cache module;
the second cache module sends the fifth data to the first cache module;
the first cache module sends sixth data of the first data volume to the random access memory at each clock.
Optionally, the second bandwidth proportion occupied by the data exchanged between the third cache module and the second cache module is 1 minus the first bandwidth proportion.
Optionally, the first cache module and the third cache module follow a first-in first-out read-write rule.
Optionally, the data caching device comprises at least two first cache modules and at least two third cache modules.
Optionally, when the data caching device is connected with at least two data processors, the third cache module connected with a first data processor receives and caches seventh data of the third data volume sent by the first data processor, and, when the data cached by that third cache module reaches the second data volume, sends eighth data of the second data volume to the second cache module;
and the second cache module sends the eighth data to a second data processor through the third cache module connected with the second data processor.
According to a second aspect of the embodiments of the present disclosure, there is provided a data caching method, applied to a data caching device, the data caching device comprising: a first cache module, a second cache module and a third cache module which are sequentially connected, wherein the first cache module is connected with a random access memory, and the third cache module is connected with a data processor;
the method comprises the following steps:
at each clock, the first cache module acquires and caches first data of a first data volume from a random access memory;
when the data cached in the first cache module reaches a second data volume, sending second data of the second data volume to the second cache module, wherein the first data volume is equal to the second data volume multiplied by a first bandwidth proportion, the first bandwidth proportion being the proportion of bandwidth occupied by the data exchanged between the first cache module and the second cache module;
the second cache module caches the second data and sends the second data to the third cache module;
the third cache module caches the second data;
at each clock, the third cache module sends third data of a third data volume to the data processor, where the second data volume is n times the third data volume, and n is an integer greater than 1.
Optionally, the method further includes:
at each clock, the third cache module receives and caches fourth data of a third data volume sent by the data processor;
when the data cached by the third cache module reaches a second data volume, sending fifth data of the second data volume to the second cache module;
the second cache module sends the fifth data to the first cache module;
the first cache module caches the fifth data;
at each clock, the first cache module sends sixth data of the first data volume to the random access memory.
According to a third aspect of the embodiments of the present disclosure, there is provided a data transmission system including: the system comprises a random access memory, a data cache device and a data processor which are connected in sequence;
the data caching device comprises: a first cache module, a second cache module and a third cache module which are sequentially connected, wherein the first cache module is connected with the random access memory, and the third cache module is connected with the data processor;
the first cache module acquires and caches first data of a first data volume from the random access memory at each clock; when the data cached in the first cache module reaches a second data volume, it sends second data of the second data volume to the second cache module, wherein the first data volume is equal to the second data volume multiplied by a first bandwidth proportion, the first bandwidth proportion being the proportion of bandwidth occupied by the data exchanged between the first cache module and the second cache module;
the second cache module caches the second data and sends the second data to the third cache module;
the third cache module caches the second data and sends third data of a third data volume to the data processor at each clock, wherein the second data volume is n times the third data volume, and n is an integer greater than 1.
Optionally, the data caching device is connected with at least two data processors, and the data volume sent by the third cache module is different for different data processors.
The technical scheme provided by the embodiments of the present disclosure can have the following beneficial effects: by relying on the data locality principle, the waiting problem caused by the bandwidth mismatch between the random access memory and the data processors is avoided, and, while preserving data locality, the efficiency of data reading and writing among multiple data processors and between those processors and the random access memory is improved by exploiting parallel operation. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a block diagram illustrating a data caching system in accordance with an exemplary embodiment.
FIG. 2a is a block diagram illustrating a data caching system, according to another example embodiment.
FIG. 2b is a block diagram illustrating a data caching system, according to another example embodiment.
FIG. 3 is a flow diagram illustrating a data caching method according to an example embodiment.
FIG. 4 is a flow chart illustrating a data caching method according to another exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
FIG. 1 is a block diagram illustrating a data caching system according to an exemplary embodiment. As shown in FIG. 1, the data caching device 10 comprises: a first cache module 11, a second cache module 12 and a third cache module 13 which are sequentially connected, wherein the first cache module 11 is connected with a random access memory 20, and the third cache module 13 is connected with a data processor 30.
The first cache module 11 acquires and caches first data of a first data volume from the random access memory 20 at each clock; when the data cached in the first cache module 11 reaches a second data volume, it sends second data of the second data volume to the second cache module 12, where the first data volume is equal to the second data volume multiplied by a first bandwidth proportion, and the first bandwidth proportion is the proportion of bandwidth occupied by the data exchanged between the first cache module 11 and the second cache module 12. For example, the first bandwidth proportion is 1/8, the first data volume is 64 bits, and the second data volume is 512 bits.
The second cache module 12 caches the second data and sends the second data to the third cache module 13.
The third cache module 13 caches the second data and sends third data of a third data volume to the data processor 30 at each clock, where the second data volume is n times the third data volume, and n is an integer greater than 1. For example, if n is 4, the third data volume is 128 bits.
The second cache module 12 is a level-two cache (L2 CACHE) and follows a data read/write rule based on bandwidth allocation. For example, if the first bandwidth proportion is 1/8, then 1/8 of the level-two cache's bandwidth is used for reading and writing data from and to the random access memory (RAM) 20, and the remaining 7/8 of the bandwidth is used for exchanging data with the data processor.
For example, the first cache module 11 acquires 64 bits of data from the RAM 20 every clock and, after accumulating 512 bits of data, sends the 512 bits of second data to the second cache module 12. The second cache module 12 sends the 512 bits of second data to the third cache module 13. Since the data processor 30 reads and writes 128 bits per clock, the third cache module 13 sends the second data to the data processor 30 over four clocks. The second cache module 12 needs only one clock to receive data from the first cache module 11, and it can pass the received data straight to the third cache module without waiting.
In this way, the second cache module 12 never waits on a data transfer: it reads or writes data on every clock, which improves both its utilization and the overall data read/write efficiency.
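The clock accounting behind this example can be sketched in a few lines (a minimal software model under the 64/512/128-bit figures above; all names are hypothetical and the sketch is not the hardware implementation):

```python
# Clock accounting for the read path of the example: 64 bits/clock from
# the RAM into the first cache module, 512-bit bursts between cache
# modules, and 128 bits/clock from the third cache module to the data
# processor. Each burst occupies the second cache module for only one
# clock out of every eight, matching its 1/8 bandwidth share on the RAM side.
RAM_BITS_PER_CLOCK = 64     # first data volume
BURST_BITS = 512            # second data volume
PROC_BITS_PER_CLOCK = 128   # third data volume

def clocks_per_burst():
    """Clocks each stage spends moving one 512-bit burst."""
    fill_first_cache = BURST_BITS // RAM_BITS_PER_CLOCK     # 8 clocks
    occupy_second_cache = 1                                 # 1 clock per burst
    drain_to_processor = BURST_BITS // PROC_BITS_PER_CLOCK  # 4 clocks
    return fill_first_cache, occupy_second_cache, drain_to_processor

fill, occupy, drain = clocks_per_burst()
print(f"fill first cache: {fill} clocks")      # 8
print(f"second cache busy: {occupy} clock")    # 1
print(f"drain to processor: {drain} clocks")   # 4
```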
Optionally, the third cache module 13 receives and caches, at each clock, fourth data of the third data volume sent by the data processor 30; when the data cached by the third cache module 13 reaches the second data volume, it sends fifth data of the second data volume to the second cache module 12. The second cache module 12 sends the fifth data to the first cache module 11. The first cache module 11 sends sixth data of the first data volume to the random access memory 20 at each clock.
In this embodiment, the data processor 30 reads and writes 128 bits per clock. The third cache module 13 accumulates the data received from the data processor 30 until it reaches 512 bits and then forwards it to the second cache module 12, which sends the 512 bits of data to the first cache module 11. Since the RAM can read or write only 64 bits per clock, the first cache module 11 sends the 512 bits of data to the RAM over 8 clocks.
This again keeps the second cache module 12 from waiting on data transfers, so the second cache module 12 reads or writes data on every clock, which further improves its utilization and the data read/write efficiency.
In this embodiment, relying on the data locality principle avoids the waiting problem caused by the bandwidth mismatch between the random access memory and the data processor, and, while preserving data locality, exploiting parallel operation improves the efficiency of data reading and writing among multiple data processors and between those processors and the random access memory.
Optionally, the second bandwidth proportion occupied by the data exchanged between the third cache module 13 and the second cache module 12 is 1 minus the first bandwidth proportion. For example, if the first bandwidth proportion is 1/8, the second bandwidth proportion is 7/8.
Optionally, the data caching device comprises at least two first cache modules and at least two third cache modules. FIG. 2a is a block diagram illustrating a data caching system according to another exemplary embodiment. As shown in FIG. 2a, the data caching device 10 comprises two first cache modules 11a and 11b and four third cache modules 13a, 13b, 13c and 13d.
The first cache module 11a acquires and caches first data of the first data volume from the random access memory 20 at each clock; when the data cached in the first cache module 11a reaches the second data volume, it sends second data of the second data volume to the second cache module 12.
The second cache module 12 caches the second data and sends the second data to the third cache modules 13a and 13c.
The third cache modules 13a and 13c cache the second data and send third data of the third data volume to the data processor 30 at each clock.
Optionally, the third cache modules 13b and 13d receive and cache, at each clock, fourth data of the third data volume sent by the data processor 30; when the data cached by the third cache modules 13b and 13d reaches the second data volume, they send fifth data of the second data volume to the second cache module 12. The second cache module 12 sends the fifth data to the first cache module 11b. The first cache module 11b sends sixth data of the first data volume to the random access memory 20 at each clock.
Optionally, the first cache module and the third cache module follow a first-in first-out (FIFO) read-write rule.
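A minimal software model of such a FIFO buffer might look as follows (an illustrative sketch; the patent does not prescribe an implementation, and the class name and word-based bookkeeping are assumptions):

```python
# Minimal FIFO model for a first or third cache module: data leaves in
# exactly the order it arrived, and the module reports when a full
# burst (the second data volume) has accumulated.
from collections import deque

class FifoCacheModule:
    def __init__(self, burst_bits, word_bits):
        self.burst_bits = burst_bits  # e.g. 512, the second data volume
        self.word_bits = word_bits    # e.g. 64 or 128, bits moved per clock
        self.words = deque()

    def push(self, word):
        """Accept one word per clock, preserving arrival order."""
        self.words.append(word)

    def burst_ready(self):
        """True once a full burst has accumulated."""
        return len(self.words) * self.word_bits >= self.burst_bits

    def pop_burst(self):
        """Release the oldest words first (first in, first out)."""
        count = self.burst_bits // self.word_bits
        return [self.words.popleft() for _ in range(count)]

module = FifoCacheModule(burst_bits=512, word_bits=64)
for word in range(8):
    module.push(word)
assert module.burst_ready() and module.pop_burst() == list(range(8))
```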
FIG. 2b is a block diagram illustrating a data caching system according to another exemplary embodiment. As shown in FIG. 2b, optionally, when the data caching device 10 is connected with at least two data processors 30a and 30b, the third cache module 13b connected with the first data processor 30a receives and caches seventh data of the third data volume sent by the first data processor 30a, and when the data cached by the third cache module 13b reaches the second data volume, sends eighth data of the second data volume to the second cache module 12. The second cache module 12 sends the eighth data to the second data processor 30b through the third cache module 13c connected with the second data processor 30b.
In this way, different data processors can exchange data directly through the second cache module without sending the data back to the RAM, which removes the waiting problem caused by the bandwidth mismatch between the data processors and the second cache module and improves the efficiency of data reading and writing among the data processors.
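The cross-processor path of FIG. 2b can be sketched in the same spirit (illustrative only; the function and variable names are hypothetical):

```python
# Sketch of the exchange in FIG. 2b: words written by processor 30a
# accumulate in cache module 13b; each complete 512-bit burst is handed
# to the shared second cache module 12 and forwarded through cache
# module 13c to processor 30b, with no round trip through the RAM.
def exchange_between_processors(words_from_30a, burst_bits=512, word_bits=128):
    pending = []            # cache module 13b accumulating a burst
    delivered_to_30b = []   # words that reached processor 30b via 12 and 13c
    for word in words_from_30a:
        pending.append(word)
        if len(pending) * word_bits == burst_bits:  # burst complete
            delivered_to_30b.extend(pending)        # 12 forwards the burst to 13c
            pending = []
    return delivered_to_30b

print(exchange_between_processors(list(range(8))))  # two full bursts delivered
```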
According to a second aspect of the embodiments of the present disclosure, there is provided a data caching method applied to a data caching device, the data caching device comprising: a first cache module, a second cache module and a third cache module which are sequentially connected, wherein the first cache module is connected with a random access memory, and the third cache module is connected with a data processor.
FIG. 3 is a flow chart illustrating a data caching method according to an exemplary embodiment. As shown in FIG. 3, the method comprises the following steps:
step S31, at each clock, the first cache module acquires and caches first data of a first data volume from the random access memory;
step S32, when the data cached in the first cache module reaches a second data volume, sending second data of the second data volume to the second cache module, where the first data volume is equal to the second data volume multiplied by a first bandwidth proportion, the first bandwidth proportion being the proportion of bandwidth occupied by the data exchanged between the first cache module and the second cache module;
step S33, the second cache module caches the second data, and sends the second data to the third cache module;
step S34, the third cache module caches the second data;
step S35, at each clock, the third cache module sends third data of a third data volume to the data processor, where the second data volume is n times the third data volume, and n is an integer greater than 1.
For example, the first bandwidth proportion is 1/8, the first data volume is 64 bits, and the second data volume is 512 bits; with n equal to 4, the third data volume is 128 bits. The first cache module 11 acquires 64 bits of data from the RAM 20 every clock and, after accumulating 512 bits of data, sends the 512 bits of second data to the second cache module 12. The second cache module 12 sends the 512 bits of second data to the third cache module 13. Since the data processor 30 reads and writes 128 bits per clock, the third cache module 13 sends the second data to the data processor 30 over four clocks. The second cache module 12 needs only one clock to receive data from the first cache module 11, and it can pass the received data straight to the third cache module without waiting.
In this way, the second cache module 12 never waits on a data transfer: it reads or writes data on every clock, which improves both its utilization and the data read/write efficiency.
FIG. 4 is a flow chart illustrating a data caching method according to another exemplary embodiment. As shown in FIG. 4, the method further comprises:
step S41, at each clock, the third cache module receives and caches fourth data of a third data volume sent by the data processor;
step S42, when the data cached by the third cache module reaches the second data size, sending fifth data of the second data size to the second cache module;
step S43, the second cache module sends the fifth data to the first cache module;
step S44, the first cache module caches the fifth data;
step S45, at each clock, the first cache module sends sixth data of the first data volume to the random access memory.
In this embodiment, the data processor 30 reads and writes 128 bits per clock. The third cache module 13 accumulates the data received from the data processor 30 until it reaches 512 bits and then forwards it to the second cache module 12, which sends the 512 bits of data to the first cache module 11. Since the RAM can read or write only 64 bits per clock, the first cache module 11 sends the 512 bits of data to the RAM over 8 clocks.
This keeps the second cache module from waiting on data transfers, so it reads or writes data on every clock, further improving its utilization and the data read/write efficiency.
FIG. 1 is also a block diagram illustrating a data transmission system according to an exemplary embodiment. As shown in FIG. 1, according to a third aspect of the embodiments of the present disclosure, there is provided a data transmission system, comprising: a random access memory 20, a data caching device 10 and a data processor 30 which are connected in sequence;
the data cache device 10 includes: the cache comprises a first cache module 11, a second cache module 12 and a third cache module 13 which are sequentially connected, wherein the first cache module 11 is connected with a random access memory 20, and the third cache module 13 is connected with a data processor 30;
the first cache module 11 acquires and caches first data of a first data volume from the random access memory 20 at each clock; when the data cached in the first cache module 11 reaches a second data volume, it sends second data of the second data volume to the second cache module 12, wherein the first data volume is equal to the second data volume multiplied by a first bandwidth proportion, the first bandwidth proportion being the proportion of bandwidth occupied by the data exchanged between the first cache module and the second cache module;
the second cache module 12 caches the second data and sends the second data to the third cache module 13;
the third cache module 13 caches the second data and sends third data of a third data volume to the data processor 30 at each clock, where the second data volume is n times the third data volume, and n is an integer greater than 1.
Optionally, the data caching device 10 is connected with at least two data processors 30, and the data volume sent by the third cache module differs across data processors 30. For example, as shown in FIG. 2b, if the data processor 30a and the data processor 30b have different bandwidths, the data volume that the data processor 30a exchanges with the third cache modules 13a and 13b per clock may differ from the data volume that the data processor 30b exchanges with the third cache modules 13c and 13d per clock.
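For instance, the per-clock volumes for two processors of different bandwidths might be configured as follows (the widths shown are hypothetical; the scheme only requires that they may differ):

```python
# Hypothetical per-processor port widths: each third cache module may use
# a different third data volume, matched to its processor's bandwidth.
PORT_BITS_PER_CLOCK = {
    "processor_30a": 128,  # served by third cache modules 13a and 13b
    "processor_30b": 256,  # served by third cache modules 13c and 13d
}
```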
In addition, data can be exchanged among the plurality of data processors through the second cache module 12 without returning the data to the RAM.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A data caching apparatus, comprising: a first cache module, a second cache module and a third cache module which are sequentially connected, wherein the first cache module is connected with a random access memory, and the third cache module is connected with a data processor;
the first cache module acquires and caches first data of a first data volume from the random access memory at each clock; when the data cached in the first cache module reaches a second data volume, it sends second data of the second data volume to the second cache module, wherein the first data volume is equal to the second data volume multiplied by a first bandwidth proportion, and the first bandwidth proportion is the proportion of bandwidth occupied by the data exchanged between the first cache module and the second cache module;
the second cache module caches the second data and sends the second data to the third cache module;
the third cache module caches the second data and sends third data of a third data volume to the data processor at each clock, wherein the second data volume is n times the third data volume, and n is an integer greater than 1.
2. The apparatus of claim 1,
the third cache module receives and caches fourth data of a third data volume sent by the data processor at each clock; when the data cached by the third cache module reaches a second data volume, sending fifth data of the second data volume to the second cache module;
the second cache module sends the fifth data to the first cache module;
the first cache module sends sixth data of the first data volume to the random access memory at each clock.
3. The apparatus of claim 1, wherein the second bandwidth proportion occupied by the data exchanged between the third cache module and the second cache module is 1 minus the first bandwidth proportion.
4. The apparatus of claim 3, wherein the first cache module and the third cache module follow a first-in first-out read-write rule.
5. The apparatus according to any one of claims 1-4, wherein the data caching apparatus comprises at least two first cache modules and at least two third cache modules.
6. The apparatus according to claim 5, wherein when the data caching apparatus is connected to at least two data processors, a third caching module connected to a first data processor receives and caches seventh data of a third data amount sent by the first data processor, and when the data cached by the third caching module reaches a second data amount, sends eighth data of the second data amount to the second caching module;
and the second cache module sends the eighth data to the second data processor through a third cache module connected with the second data processor.
7. A data caching method, applied to a data caching device, the data caching device comprising: a first cache module, a second cache module and a third cache module which are sequentially connected, wherein the first cache module is connected with a random access memory, and the third cache module is connected with a data processor;
the method comprises the following steps:
at each clock, the first cache module acquires and caches first data of a first data volume from a random access memory;
when the data cached in the first cache module reaches a second data volume, sending second data of the second data volume to the second cache module, wherein the first data volume is equal to the second data volume multiplied by a first bandwidth proportion, and the first bandwidth proportion is the proportion of bandwidth occupied by the data exchanged between the first cache module and the second cache module;
the second cache module caches the second data and sends the second data to the third cache module;
the third cache module caches the second data;
at each clock, the third cache module sends third data of a third data volume to the data processor, wherein the second data volume is n times the third data volume, and n is an integer greater than 1.
8. The method of claim 7, further comprising:
at each clock, the third cache module receives and caches fourth data of a third data volume sent by the data processor;
when the data cached by the third cache module reaches a second data volume, sending fifth data of the second data volume to the second cache module;
the second cache module sends the fifth data to the first cache module;
the first cache module caches the fifth data;
at each clock, the first cache module sends sixth data of the first data volume to the random access memory.
9. A data transmission system, comprising: the system comprises a random access memory, a data cache device and a data processor which are connected in sequence;
the data caching device comprises: a first cache module, a second cache module and a third cache module which are sequentially connected, wherein the first cache module is connected with a random access memory, and the third cache module is connected with a data processor;
the first cache module acquires and caches first data of a first data volume from the random access memory at each clock; when the data cached in the first cache module reaches a second data volume, it sends second data of the second data volume to the second cache module, wherein the first data volume is equal to the second data volume multiplied by a first bandwidth proportion, and the first bandwidth proportion is the proportion of bandwidth occupied by the data exchanged between the first cache module and the second cache module;
the second cache module caches the second data and sends the second data to the third cache module;
the third cache module caches the second data and sends third data of a third data volume to the data processor at each clock, wherein the second data volume is n times the third data volume, and n is an integer greater than 1.
10. The system of claim 9, wherein the data caching device is connected with at least two data processors, and the data volume sent by the third cache module is different for different data processors.
CN201711377201.3A 2017-12-19 2017-12-19 Data caching method and device and data transmission system Active CN108170373B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711377201.3A CN108170373B (en) 2017-12-19 2017-12-19 Data caching method and device and data transmission system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711377201.3A CN108170373B (en) 2017-12-19 2017-12-19 Data caching method and device and data transmission system

Publications (2)

Publication Number Publication Date
CN108170373A CN108170373A (en) 2018-06-15
CN108170373B 2021-01-05

Family

ID=62522493

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711377201.3A Active CN108170373B (en) 2017-12-19 2017-12-19 Data caching method and device and data transmission system

Country Status (1)

Country Link
CN (1) CN108170373B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3968054A4 (en) * 2019-06-20 2022-05-11 Huawei Technologies Co., Ltd. Radar system
CN113259247B (en) * 2020-02-11 2022-11-25 华为技术有限公司 Cache device in network equipment and data management method in cache device
CN112214425B (en) * 2020-08-24 2022-07-15 Oppo广东移动通信有限公司 Data transmission method, data transmission device, computer equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103080910A (en) * 2010-09-09 2013-05-01 日本电气株式会社 Storage system
CN103546394A (en) * 2013-10-25 2014-01-29 杭州华三通信技术有限公司 Communication device
CN103795653A (en) * 2012-10-30 2014-05-14 江西南昌供电公司 Data caching method, device and optical network unit

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7194582B1 (en) * 2003-05-30 2007-03-20 Mips Technologies, Inc. Microprocessor with improved data stream prefetching
US10169240B2 (en) * 2016-04-08 2019-01-01 Qualcomm Incorporated Reducing memory access bandwidth based on prediction of memory request size

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103080910A (en) * 2010-09-09 2013-05-01 日本电气株式会社 Storage system
CN103795653A (en) * 2012-10-30 2014-05-14 江西南昌供电公司 Data caching method, device and optical network unit
CN103546394A (en) * 2013-10-25 2014-01-29 杭州华三通信技术有限公司 Communication device

Also Published As

Publication number Publication date
CN108170373A (en) 2018-06-15

Similar Documents

Publication Publication Date Title
CN108170373B (en) Data caching method and device and data transmission system
US10116746B2 (en) Data storage method and network interface card
US8037221B2 (en) Dynamic allocation of DMA buffers in input/output adaptors
CN110419034B (en) Data access method and device
US7469309B1 (en) Peer-to-peer data transfer method and apparatus with request limits
US11822811B2 (en) Method, electronic device and computer program product for processing data
US20130251006A1 (en) Data packet flow control across an asynchronous clock domain boundary
CN107025184B (en) Data management method and device
CN109478171B (en) Improving throughput in openfabics environment
CN109697034A (en) A kind of method for writing data, device, electronic equipment and storage medium
US7822040B2 (en) Method for increasing network transmission efficiency by increasing a data updating rate of a memory
CN104991883A (en) Sending and receiving apparatuses with chip interconnection and sending and receiving method and system
CN106462506B (en) Method, system, and medium for controlled buffer injection of incoming data
CN110740138A (en) Data transmission method and device
CN111859225B (en) Program file access method, apparatus, computing device and medium
CN109992560B (en) Communication method and communication system
KR20160109733A (en) Storage apparatus and method for processing a plurality of client data
KR102335798B1 (en) Storage apparatus and method for processing a plurality of client data
EP3234786B1 (en) Scalable synchronization mechanism for distributed memory
US20240103766A1 (en) Method, electronic device, and computer progam product for asynchronously accessing data
CN111213130A (en) Performance improvements for decentralized location based deduplication
US20240103767A1 (en) Method, electronic device, and computer program product for synchronously accessing data
CN114840458B (en) Read-write module, system on chip and electronic equipment
CN108563605A (en) A kind of method, equipment and the computer storage media of adjustment electronic reader refresh rate
WO2023102682A1 (en) Communication apparatus and message transmission method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 101, 1st floor, building 1, Xisanqi building materials City, Haidian District, Beijing 100096

Applicant after: Yunzhisheng Intelligent Technology Co.,Ltd.

Address before: 100191 a503, 5th floor, Mudan science and technology building, No.2 Huayuan Road, Haidian District, Beijing

Applicant before: BEIJING UNISOUND INFORMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant