CN108287667A - Method and device for accessing data - Google Patents

Method and device for accessing data

Info

Publication number
CN108287667A
CN108287667A (application CN201810069191.5A)
Authority
CN
China
Prior art keywords
data
cache
target data
overflow
main cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810069191.5A
Other languages
Chinese (zh)
Inventor
杨瑞君
祝可
高波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Institute of Technology
Original Assignee
Shanghai Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Institute of Technology
Priority to CN201810069191.5A
Publication of CN108287667A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656Data buffering arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0683Plurality of storage devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

This application provides a method of accessing data, comprising: receiving a data access request from a user; concurrently accessing a main cache and N overflow caches according to the data access request, where the access speed of the main cache exceeds that of the overflow caches, the overflow caches store data that overflows from the main cache, and N is a positive integer; when the target data pointed to by the data access request is present in the main cache, reading the target data from the main cache; when the target data is absent from the main cache but present in the i-th of the N overflow caches, reading the target data from the i-th overflow cache; and when the target data is present in neither the main cache nor the N overflow caches, reading the target data from main memory. The application thus provides a method of accessing data that can improve the efficiency of data reads.

Description

Method and device for accessing data
Technical field
This application relates to the field of computing, and more particularly to a method and device for accessing data.
Background technology
As processor clock speeds increase and main memories grow larger, accessing main memory may incur long delay cycles. Cache levels can be introduced to reduce the delay and performance bottlenecks caused by frequent main-memory accesses. A cache, implemented as one or more small, fast associative memories, reduces the average time needed to access main memory. When the processor reads from or writes to a location in main memory, it first checks whether a copy of the data is present in the cache; if so, the processor is directed to the cache rather than to the slower main memory.
Unfortunately, caches are usually small and can hold only a small subset of the data in main memory. During operation the cache is prone to frequent misses and offers limited flexibility, which adversely affects latency and processor performance.
How to reduce cache misses and improve the flexibility and availability of the cache, so as to increase processor performance, is therefore a pressing problem.
Summary of the invention
The application provides a method of accessing data that can improve the efficiency of data reads.
In a first aspect, a method of accessing data is provided, comprising: receiving a data access request from a user; concurrently accessing a main cache and N overflow caches according to the data access request, wherein the access speed of the main cache exceeds that of the overflow caches, the overflow caches store data that overflows from the main cache, and N is a positive integer; when the target data pointed to by the data access request is present in the main cache, reading the target data from the main cache; when the target data is absent from the main cache but present in the i-th of the N overflow caches, reading the target data from the i-th overflow cache; and when the target data is present in neither the main cache nor the N overflow caches, reading the target data from main memory.
With reference to the first aspect, in a first possible implementation of the first aspect, before the concurrent access to the main cache and the N overflow caches according to the data access request, the method further comprises: when the target data pointed to by the data access request is junk data, determining that the target data is unsafe data; and when the target data pointed to by the data access request is not junk data, determining that the target data is safe data; wherein, when the target data is judged to be safe data, the access to the target data proceeds according to the data access request, and otherwise the target data is not accessed.
With reference to the first aspect and the implementations above, in a second possible implementation of the first aspect, before the concurrent access to the main cache and the N overflow caches according to the data access request, the method further comprises: judging whether the size of the target data exceeds a threshold, and when it does, splitting the target data into M data packets so that data access can be performed on the M packets one by one, wherein M is a positive integer greater than 1.
With reference to the first aspect and the implementations above, in a third possible implementation of the first aspect, each of the N overflow caches has its own cache number, and the (i+1)-th overflow cache among the N stores the data that overflows from the i-th overflow cache.
In a second aspect, a device for accessing data is provided, comprising: a receiving unit, for receiving a data access request from a user; and a processing unit, for concurrently accessing a main cache and N overflow caches according to the data access request, wherein the access speed of the main cache exceeds that of the overflow caches, the overflow caches store data that overflows from the main cache, and N is a positive integer. The processing unit is further configured to read the target data from the main cache when the target data pointed to by the data access request is present in the main cache; to read the target data from the i-th overflow cache when the target data is absent from the main cache but present in the i-th of the N overflow caches; and to read the target data from main memory when the target data is present in neither the main cache nor the N overflow caches.
With reference to the second aspect, in a first possible implementation of the second aspect, the processing unit is further configured to: when the target data pointed to by the data access request is junk data, determine that the target data is unsafe data; and when the target data pointed to by the data access request is not junk data, determine that the target data is safe data; wherein, when the target data is judged to be safe data, the access to the target data proceeds according to the data access request, and otherwise the target data is not accessed.
With reference to the second aspect and the implementations above, in a second possible implementation of the second aspect, the processing unit is further configured to: judge whether the size of the target data exceeds a threshold, and when it does, split the target data into M data packets so that data access can be performed on the M packets one by one, wherein M is a positive integer greater than 1.
With reference to the second aspect and the implementations above, in a third possible implementation of the second aspect, each of the N overflow caches has its own cache number, and the (i+1)-th overflow cache among the N stores the data that overflows from the i-th overflow cache.
In this embodiment, multiple overflow caches and one main cache are provided. When data is accessed, its safety and then its size are checked first, preventing oversized data from degrading server performance. The data request is then dispatched in parallel to the main cache and the overflow caches, achieving concurrent access. Compared with the prior art, this reduces cache misses and improves the flexibility and availability of the cache.
Description of the drawings
Fig. 1 shows a schematic flowchart of the method of an embodiment of the application.
Fig. 2 shows a schematic block diagram of the device of an embodiment of the application.
Fig. 3 shows a schematic block diagram of the device of another embodiment of the application.
Detailed description of the embodiments
The technical solutions of the application are described below with reference to the drawings.
Fig. 1 shows a schematic flowchart of the method of an embodiment of the application. The method may be executed by a server. As shown in Fig. 1, the method 100 of accessing data comprises:
Step 110: receive a data access request from a user.
Step 120: concurrently access a main cache and N overflow caches according to the data access request, where the access speed of the main cache exceeds that of the overflow caches, the overflow caches store data that overflows from the main cache, and N is a positive integer.
Step 130: when the target data pointed to by the data access request is present in the main cache, read the target data from the main cache;
when the target data is absent from the main cache but present in the i-th of the N overflow caches, read the target data from the i-th overflow cache;
when the target data is present in neither the main cache nor the N overflow caches, read the target data from main memory.
That is, when a data access request is received, the main cache and the N overflow caches are accessed simultaneously. Because the main cache is faster than the overflow caches, if the target data is found in the main cache first, it is read from the main cache; if the target data is not found in the main cache but is found in one of the overflow caches, it is read from that overflow cache.
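The lookup order just described can be sketched as follows. This is a minimal illustration only; the names `read_target`, `main_cache`, `overflow_caches`, and `main_memory` are assumptions for the sketch, not identifiers from the patent, and plain dictionaries stand in for the hardware caches:

```python
def read_target(key, main_cache, overflow_caches, main_memory):
    """Resolve a data access request against the main cache first,
    then the numbered overflow caches in order, then main memory."""
    if key in main_cache:              # fastest path: main-cache hit
        return main_cache[key]
    for cache in overflow_caches:      # overflow caches 1..N, in number order
        if key in cache:
            return cache[key]
    return main_memory[key]            # slowest path: fall back to main memory
```

In a real implementation the main-cache probe and the overflow-cache probes would run concurrently, with the fastest hit winning; the sequential loop above only fixes the priority order of the results.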
In this embodiment, concurrent access to data is achieved by providing multiple overflow caches and one main cache. Compared with the prior art, this reduces cache misses and improves the flexibility and availability of the cache.
Optionally, as an embodiment of the application, before the concurrent access to the main cache and the N overflow caches according to the data access request, the method further comprises: when the target data pointed to by the data access request is junk data, determining that the target data is unsafe data; and when the target data pointed to by the data access request is not junk data, determining that the target data is safe data; wherein, when the target data is judged to be safe data, the access to the target data proceeds according to the data access request, and otherwise the target data is not accessed.
That is, the data safety check consists of judging whether the requested data is junk data.
Optionally, as an embodiment of the application, before the concurrent access to the main cache and the N overflow caches according to the data access request, the method further comprises: judging whether the size of the target data exceeds a threshold, and when it does, splitting the target data into M data packets so that data access can be performed on the M packets one by one, wherein M is a positive integer greater than 1.
That is, when the volume of data to be accessed exceeds a preset range, the data is divided and then accessed in sequence; when the volume is less than or equal to the preset range, the requests are ordered by the time at which the accessed end received them and the data is accessed in turn.
The preset range for the data volume depends on the practical application scenario and needs to be determined from performance test data; the application does not limit it. Likewise, the accessed data is divided into M packets whose sizes are set according to the actual situation; the application does not limit them either.
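The threshold check and split into M packets can be sketched as below. The function name, the byte-string interface, and the concrete `threshold` and `packet_size` values are illustrative assumptions; as the patent notes, the real values come from performance testing:

```python
def split_request(data: bytes, threshold: int, packet_size: int):
    """If the request exceeds the threshold, split it into M packets of at
    most packet_size bytes so they can be accessed one by one; otherwise
    return it unchanged as a single packet."""
    if len(data) <= threshold:
        return [data]
    return [data[i:i + packet_size] for i in range(0, len(data), packet_size)]
```

For example, a 10-byte request with `threshold=4` and `packet_size=3` yields M = 4 packets whose concatenation reproduces the original data.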
Optionally, as an embodiment of the application, each of the N overflow caches has its own cache number, and the (i+1)-th overflow cache stores the data that overflows from the i-th overflow cache.
In this embodiment, multiple overflow caches and one main cache are provided. When data is accessed, its safety and then its size are checked first, preventing oversized data from degrading server performance. The data request is then dispatched in parallel to the main cache and the overflow caches, achieving concurrent access. Compared with the prior art, this reduces cache misses and improves the flexibility and availability of the cache.
Fig. 2 shows a schematic block diagram of the device of an embodiment of the application.
As shown in Fig. 2, the device for concurrently accessing a main cache and overflow caches provided in this embodiment comprises:
a main cache 6, for caching data from main memory;
one or more overflow caches 7, each overflow cache 7 carrying a sequential label, where each overflow cache 7 stores the data that overflows from the previous one, and the first overflow cache 7 stores the data that overflows from the main cache 6;
an access-data generating unit 1, which generates request data according to user demand;
a data-safety judging unit 2, for judging whether the data to be accessed is safe;
an access-data-volume determining unit 3, for judging whether the volume of data to be accessed exceeds a preset range; if it does, the requests are ordered by the time at which the accessed end received them, the first access data is fetched, and the remaining data enters a waiting area, after which synchronized access proceeds;
a data access unit 4, which synchronously issues a concurrent-access instruction for the request data to the main cache 6 and the overflow caches 7;
a data determining unit 5, for determining whether the main cache 6 stores the requested data, determining whether any of the overflow caches 7 store the requested data, and accessing main memory 8 when neither the main cache 6 nor the overflow caches 7 store the requested data.
Further, the data-safety judging unit 2 works by judging whether the request data is junk data.
Further, the preset range against which the access-data-volume determining unit 3 judges the volume of data to be accessed is determined according to the maximum access volume the server can bear.
Further, every overflow cache 7 is assigned a serial number; when the overflow caches 7 are accessed in parallel, the number of each overflow cache 7 is read first, and the caches are then accessed synchronously in that order.
With the device shown in Fig. 2, the access method of this embodiment for concurrently accessing the main cache and the overflow caches comprises the following steps:
S1: generate request data according to user demand; send the request data to the data-safety judging unit 2, which performs a safety check on the data to be accessed; once the data is judged safe, proceed to the next step;
S2: the access-data-volume determining unit 3 judges the size of the data volume; when the volume exceeds the preset range, the data is divided and then accessed in sequence, proceeding to step S3; when the volume is less than or equal to the preset range, the requests are ordered by the time at which the accessed end received them, the first access data is fetched, the remaining data enters the waiting area, and the method proceeds to step S3;
S3: synchronously issue a concurrent-access instruction for the request data to the main cache 6 and the overflow caches 7, and proceed to the next step;
S4: determine whether the main cache 6 stores the requested data; determine whether any of the overflow caches 7 store the requested data; and when neither the main cache 6 nor the overflow caches 7 store the requested data, access main memory 8. After the access, delete the already-accessed data from the access queue and judge whether all data in the queue have been accessed; if so, exit; if not, fetch the next access data and return to step S3.
Further, the preset range against which the accessed data volume is judged can be determined according to the maximum access volume the server can bear.
Further, the data safety check is performed by judging whether the request data is junk data.
Further, in step S3, when the overflow caches 7 are accessed in parallel, the number of each overflow cache 7 is read first, and the caches are then accessed synchronously in that order.
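The S1 to S4 flow can be sketched end to end as follows. Everything here is an illustrative assumption: the request is modelled as a list of keys, the safety check as a caller-supplied predicate, the size split as fixed-size batches, and the concurrent dispatch of S3 as a sequential priority scan:

```python
def access(keys, is_safe, main_cache, overflow_caches, main_memory, batch=2):
    """S1: reject unsafe requests; S2: split a large request into batches
    served in turn; S3/S4: resolve each key against the main cache, the
    numbered overflow caches in order, then main memory."""
    if not all(is_safe(k) for k in keys):   # S1: safety gate on the request
        return None                         # unsafe request is not served
    results = []
    for i in range(0, len(keys), batch):    # S2: sequential batches
        for k in keys[i:i + batch]:         # S3: dispatch each lookup
            for store in [main_cache, *overflow_caches, main_memory]:
                if k in store:              # S4: first (fastest) hit wins
                    results.append(store[k])
                    break
    return results
```

A request containing a key the safety check flags is rejected outright, matching the "otherwise, no access" clause; a safe request is resolved batch by batch.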
Fig. 3 shows a schematic block diagram of the device of another embodiment of the application. As shown in Fig. 3, the device 300 comprises:
a receiving unit 310, for receiving a data access request from a user;
a processing unit 320, for concurrently accessing a main cache and N overflow caches according to the data access request, where the access speed of the main cache exceeds that of the overflow caches, the overflow caches store data that overflows from the main cache, and N is a positive integer;
the processing unit 320 being further configured to read the target data from the main cache when the target data pointed to by the data access request is present in the main cache;
the processing unit 320 being further configured to read the target data from the i-th overflow cache when the target data is absent from the main cache but present in the i-th of the N overflow caches;
the processing unit 320 being further configured to read the target data from main memory when the target data is present in neither the main cache nor the N overflow caches.
Optionally, as an embodiment of the application, the processing unit 320 is further configured to: when the target data pointed to by the data access request is junk data, determine that the target data is unsafe data; and when the target data pointed to by the data access request is not junk data, determine that the target data is safe data; wherein, when the target data is judged to be safe data, the access to the target data proceeds according to the data access request, and otherwise the target data is not accessed.
Optionally, as an embodiment of the application, the processing unit 320 is further configured to: judge whether the size of the target data exceeds a threshold, and when it does, split the target data into M data packets so that data access can be performed on the M packets one by one, wherein M is a positive integer greater than 1.
Optionally, as an embodiment of the application, each of the N overflow caches has its own cache number, and the (i+1)-th overflow cache stores the data that overflows from the i-th overflow cache.
It should be understood that the embodiments shown in Fig. 2 and Fig. 3 can achieve the aforementioned advantages; for brevity, the details are not repeated here.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, devices, and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the device embodiments described above are merely exemplary; the division into units is only a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or of other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, or in essence the part that contributes over the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a second device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are merely specific embodiments of the present invention, and the protection scope of the present invention is not limited thereto. Any changes or substitutions that can readily be conceived by those familiar with the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A method of accessing data, characterized by comprising:
receiving a data access request from a user;
concurrently accessing a main cache and N overflow caches according to the data access request, wherein the access speed of the main cache exceeds that of the overflow caches, the overflow caches store data that overflows from the main cache, and N is a positive integer;
when the target data pointed to by the data access request is present in the main cache, reading the target data from the main cache;
when the target data is absent from the main cache but present in the i-th of the N overflow caches, reading the target data from the i-th overflow cache;
when the target data is present in neither the main cache nor the N overflow caches, reading the target data from main memory.
2. The method according to claim 1, characterized in that before the concurrent access to the main cache and the N overflow caches according to the data access request, the method further comprises:
when the target data pointed to by the data access request is junk data, determining that the target data is unsafe data;
when the target data pointed to by the data access request is not junk data, determining that the target data is safe data; wherein, when the target data is judged to be safe data, the access to the target data proceeds according to the data access request, and otherwise the target data is not accessed.
3. The method according to claim 1 or 2, characterized in that before the concurrent access to the main cache and the N overflow caches according to the data access request, the method further comprises:
judging whether the size of the target data exceeds a threshold, and when it does, splitting the target data into M data packets so that data access can be performed on the M packets one by one, wherein M is a positive integer greater than 1.
4. The method according to any one of claims 1 to 3, characterized in that each of the N overflow caches has its own cache number, and the (i+1)-th overflow cache among the N stores the data that overflows from the i-th overflow cache.
5. A device for accessing data, characterized by comprising:
a receiving unit, for receiving a data access request from a user;
a processing unit, for concurrently accessing a main cache and N overflow caches according to the data access request, wherein the access speed of the main cache exceeds that of the overflow caches, the overflow caches store data that overflows from the main cache, and N is a positive integer;
the processing unit being further configured to read the target data from the main cache when the target data pointed to by the data access request is present in the main cache;
the processing unit being further configured to read the target data from the i-th overflow cache when the target data is absent from the main cache but present in the i-th of the N overflow caches;
the processing unit being further configured to read the target data from main memory when the target data is present in neither the main cache nor the N overflow caches.
6. The device according to claim 5, characterized in that the processing unit is further configured to:
when the target data pointed to by the data access request is junk data, determine that the target data is unsafe data;
when the target data pointed to by the data access request is not junk data, determine that the target data is safe data; wherein, when the target data is judged to be safe data, the access to the target data proceeds according to the data access request, and otherwise the target data is not accessed.
7. The device according to claim 5 or 6, characterized in that the processing unit is further configured to:
judge whether the size of the target data exceeds a threshold, and when it does, split the target data into M data packets so that data access can be performed on the M packets one by one, wherein M is a positive integer greater than 1.
8. The device according to any one of claims 5 to 7, characterized in that each of the N overflow caches has its own cache number, and the (i+1)-th overflow cache among the N stores the data that overflows from the i-th overflow cache.
CN201810069191.5A 2018-01-24 2018-01-24 A kind of method and its device accessing data Pending CN108287667A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810069191.5A CN108287667A (en) 2018-01-24 2018-01-24 A kind of method and its device accessing data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810069191.5A CN108287667A (en) 2018-01-24 2018-01-24 A kind of method and its device accessing data

Publications (1)

Publication Number Publication Date
CN108287667A true CN108287667A (en) 2018-07-17

Family

ID=62835684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810069191.5A Pending CN108287667A (en) 2018-01-24 2018-01-24 A kind of method and its device accessing data

Country Status (1)

Country Link
CN (1) CN108287667A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040205300A1 (en) * 2003-04-14 2004-10-14 Bearden Brian S. Method of detecting sequential workloads to increase host read throughput
CN101788891A (en) * 2010-03-15 2010-07-28 江苏大学 Quick and safe storage method based on disk and safe disk
CN102298641A (en) * 2011-09-14 2011-12-28 清华大学 Method for uniformly storing files and structured data based on key value bank
CN104169892A (en) * 2012-03-28 2014-11-26 华为技术有限公司 Concurrently accessed set associative overflow cache
CN106776893A (en) * 2016-11-30 2017-05-31 浪潮通信信息系统有限公司 A kind of data output method and device

Similar Documents

Publication Publication Date Title
RU2597520C2 (en) Memory controller and method of operating such memory controller
US20160132541A1 (en) Efficient implementations for mapreduce systems
US20200136971A1 (en) Hash-table lookup with controlled latency
CN105677580A (en) Method and device for accessing cache
CN105677236B (en) A kind of storage device and its method for storing data
WO2016141735A1 (en) Cache data determination method and device
US10152434B2 (en) Efficient arbitration for memory accesses
CN107341114A (en) A kind of method of directory management, Node Controller and system
CN107133183B (en) Cache data access method and system based on TCMU virtual block device
CN104965793B (en) A kind of cloud storage data node device
US5761716A (en) Rate based memory replacement mechanism for replacing cache entries when the cache is full
CN109656730A (en) A kind of method and apparatus of access cache
CN105183398B (en) A kind of storage device, electronic equipment and data processing method
US8135911B2 (en) Managing a region cache
CN108650306A (en) A kind of game video caching method, device and computer storage media
CN108228476A (en) A kind of data capture method and device
EP3274844B1 (en) Hierarchical cost based caching for online media
CN108287667A (en) A kind of method and its device accessing data
CN116107635A (en) Command distributor, command distribution method, scheduler, chip, board card and device
EP3580661B1 (en) Data processing
CN109582233A (en) A kind of caching method and device of data
CN109992198B (en) Data transmission method of neural network and related product
US7421536B2 (en) Access control method, disk control unit and storage apparatus
CN103838679B (en) A kind of method for caching and processing and device
CN110362769A (en) A kind of data processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180717