CN104461932B - Directory cache management method for big data application - Google Patents
- Publication number
- CN104461932B CN104461932B CN201410611086.1A CN201410611086A CN104461932B CN 104461932 B CN104461932 B CN 104461932B CN 201410611086 A CN201410611086 A CN 201410611086A CN 104461932 B CN104461932 B CN 104461932B
- Authority
- CN
- China
- Prior art keywords
- data
- shared
- caching
- directory
- final stage
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
The invention discloses a directory cache management method for big data applications, belonging to the field of directory cache management. In the method, a shared flag bit and a data-block pointer are added to each block of the last-level shared cache: the shared flag bit distinguishes private data from shared data, the data-block pointer tracks the location of private data in the private caches, and the directory cache maintains coherence only for shared data. Based on the last-level shared cache and the directory cache, data are classified as private or shared. Private data occupy no directory-cache space, and their coherence is maintained through the private caches; shared data occupy directory-cache space, and their coherence is maintained by the directory cache. The method reduces directory-cache conflicts and replacements, shortens the access latency of private data, and improves the performance of multi-core processor systems.
Description
Technical field
The present invention relates to directory cache management methods, and in particular to a directory cache management method for big data applications.
Background technology
With the rapid development of fields such as online shopping, search, the Internet of Things, and data mining, the volume of data that data centers must process is growing sharply. Scaling data centers out (Scale-Out) is simple to operate and low in cost, and is becoming the mainstream direction of future data-center development. Compared with traditional applications, however, Scale-Out applications have three distinctive characteristics. First, their data sets are large: a Scale-Out application often needs to process gigabytes of data or more, far exceeding the on-chip cache capacity of current processors. Second, sharing is limited: data sharing does occur, but it is concentrated in instruction sharing and in communication and cooperation between tasks, and the shared data set is only on the order of a few megabytes. Third, they exhibit locality: data within a small range are reused frequently.
Traditional cache coherence protocols were designed for traditional applications and are inefficient for Scale-Out applications, for two main reasons. First, directory thrashing: the directory replacement policy lacks information about data accesses in the private caches, so heavy contention for the limited directory capacity evicts entries whose data are still actively used in the private caches. Second, high access latency: private data require no coherence maintenance at all, yet in traditional designs they must still compete for the directory cache, and directory-cache replacements incur long delays, severely degrading system performance.
Summary of the invention
The technical task of the present invention is to provide a directory cache management method for big data applications that reduces directory-cache conflicts and replacements, reduces the access latency of private data, and improves the performance of multi-core processor systems.
The technical task of the present invention is achieved in the following manner:
A directory cache management method for big data applications adds a shared flag bit and a data-block pointer to the last-level shared cache (Last-Level Cache, LLC). The shared flag bit distinguishes whether data are private or shared; the data-block pointer tracks the location of private data in the private caches; and the directory cache (Directory Cache, DC) maintains the coherence of shared data. Based on the LLC and the directory cache, data are divided into private data and shared data. Private data occupy no directory-cache space, and their coherence is maintained through the private caches; shared data occupy directory-cache space, and their coherence is maintained by the directory cache.
The shared flag bit and the data-block pointer are managed as follows. The shared flag bit indicates whether the data are in the shared state on chip. When a cache block is in the invalid state, the flag bit is inactive. When a cache block is in the valid state, a newly established block defaults to private; the data-block pointer then indexes the block's copy in the private caches, and if no copy exists in any private cache, the data-block pointer holds the null index (NULL).
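As an illustrative sketch only (the field and method names below are hypothetical; the patent specifies the roles of the shared flag bit and the data-block pointer but no concrete layout), the LLC entry described above could be modeled like this:

```python
from dataclasses import dataclass
from typing import Optional

NULL = None  # the "null index": no copy in any private cache

@dataclass
class LLCEntry:
    tag: int
    valid: bool = False
    shared: bool = False          # shared flag bit: False = private data
    owner: Optional[int] = NULL   # data-block pointer: core holding the private copy

    def establish(self, requester: int) -> None:
        """A newly established valid block defaults to private, owned by the requester."""
        self.valid = True
        self.shared = False
        self.owner = requester

entry = LLCEntry(tag=0x2A)
entry.establish(requester=3)
assert entry.valid and not entry.shared and entry.owner == 3
```

When the block later becomes shared, the method sets `shared = True` and resets the pointer to the null index, since the directory cache's shared vector then tracks the copies instead.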
The directory cache maintains the coherence of shared data. Specifically, the directory cache is a set-associative cache in which each cache block contains a tag (TAG) and a shared vector (Shared-Vector); the shared vector records the locations of shared data in the private caches.
Based on the LLC and the directory cache, data are divided into private data and shared data as follows. According to hits and misses in the LLC and in the directory cache, data blocks fall into the following four classes:
(1) LLC-Hit and DC-Hit: the DC hit shows the block is shared data in the shared state, and its shared flag bit in the LLC is also in the shared state;
(2) LLC-Hit and DC-Miss: the DC miss shows the block is private data in the private state; its shared flag bit in the LLC is in the private state, but the data-block pointer may be invalid, since the block is allowed to have no copy in the private caches closer to the processor;
(3) LLC-Miss and DC-Hit: the DC hit shows the block is shared data in the shared state, and the LLC miss indicates that the LLC is organized non-inclusively (Non-Inclusive) or exclusively (Exclusive);
(4) LLC-Miss and DC-Miss: the block has no copy on chip and a cache entry must be newly established; newly established cached data default to private.
Of these four classes, the two shared-state classes, (1) and (3), occupy directory-cache space; the other two, (2) and (4), are in the private state and occupy LLC space but no directory-cache space.
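The four-way classification above can be sketched as a small lookup function (a hypothetical illustration; the class names are not from the patent):

```python
def classify(llc_hit: bool, dc_hit: bool) -> str:
    """Map LLC/DC hit status to the four block classes of the method."""
    if llc_hit and dc_hit:
        return "shared"               # class (1): shared data, occupies the directory cache
    if llc_hit and not dc_hit:
        return "private"              # class (2): private data, no directory entry
    if not llc_hit and dc_hit:
        return "shared-noninclusive"  # class (3): shared; LLC is non-inclusive/exclusive
    return "new-private"              # class (4): no on-chip copy; filled as private

assert classify(True, True) == "shared"
assert classify(False, False) == "new-private"
```

Only the two classes returned from DC hits ever consume directory-cache entries, which is the source of the method's reduction in directory conflicts.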
Private data occupy no directory-cache space and have their coherence maintained through the private caches; shared data occupy directory-cache space and have their coherence maintained by the directory cache. Specifically:
(1) Every data access first accesses the LLC. On an LLC miss the directory cache is accessed; if the directory cache also misses, the current request is for new data. The data are fetched from memory and filled into the LLC, marked there as private data, and the data-block pointer is set to the requesting processor core; the data are returned to the upper-level private cache, and the block occupies no directory-cache space.
(2) On an LLC miss the directory cache must be accessed; if the directory cache hits, the current request is for shared data. For a write operation, invalidation messages are first sent for all shared copies and the entry is evicted from the directory cache; the data are then read from memory into the LLC, returned to the upper-level private cache, and marked in the LLC as private data, with the data-block pointer set to the requesting processor core. For a read operation, the corresponding bit in the directory cache's shared vector is set; the data are read from memory into the LLC, returned to the upper-level private cache, and marked in the LLC as shared data, with the data-block pointer set to the null index (NULL).
(3) If the LLC hits and the block is marked private, but the requesting processor core differs from the core the data-block pointer designates (the request cannot have been issued by that same core): for a write request, coherence is first maintained according to the coherence protocol (dirty data are written back; clean data are invalidated), then the data-block pointer is updated and the data are written to the LLC; for a read request, the block's state is changed to shared, the data-block pointer is set to the null index (NULL), and a shared-vector record is allocated in the directory cache.
(4) If the LLC hits and the block is marked shared: for a write operation, the directory cache is accessed first and all private-cache copies are invalidated according to the shared vector; the block is then re-marked as private, the data-block pointer is set to the requesting processor core, the data are written to the LLC, and the data are returned to the upper-level private cache. For a read operation, the directory cache is accessed, the shared vector is updated, and the data are returned.
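Step (4)'s write path — invalidate all sharers via the shared vector, then downgrade the block to private — can be sketched as follows (a simplified model under assumed names; the sharer set and core identifiers are hypothetical):

```python
def write_shared_block(entry: dict, sharers: set, requester: int) -> set:
    """Write hit on a shared LLC block: invalidate every private-cache copy
    recorded in the shared vector, release the directory entry, and re-mark
    the block as private to the requesting core."""
    invalidated = set(sharers)   # invalidation messages go to every sharer
    sharers.clear()              # the directory-cache record is released
    entry["shared"] = False      # block downgraded to the private state
    entry["owner"] = requester   # data-block pointer -> requesting core
    return invalidated

entry = {"shared": True, "owner": None}
gone = write_shared_block(entry, {0, 1, 2}, requester=1)
assert gone == {0, 1, 2}
assert entry == {"shared": False, "owner": 1}
```

The key design point the sketch illustrates is that after the write, the block no longer consumes directory-cache space: coherence for it is again tracked purely by the pointer in the LLC.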
The directory cache management method for big data applications of the present invention has the following advantages: it reduces directory-cache conflicts and replacements, reduces the access latency of private data, and improves the performance of multi-core processor systems. It therefore has good prospects for practical application.
Specific embodiments
The directory cache management method for big data applications of the present invention is described in detail below with reference to specific embodiments.
Embodiment 1:
In the directory cache management method for big data applications of the present invention, a shared flag bit and a data-block pointer are added to the last-level shared cache (Last-Level Cache, LLC). The shared flag bit distinguishes whether data are private or shared; the data-block pointer tracks the location of private data in the private caches; and the directory cache (Directory Cache, DC) maintains the coherence of shared data. Based on the LLC and the directory cache, data are divided into private data and shared data. Private data occupy no directory-cache space, and their coherence is maintained through the private caches; shared data occupy directory-cache space, and their coherence is maintained by the directory cache.
The shared flag bit and the data-block pointer are managed as follows. The shared flag bit indicates whether the data are in the shared state on chip. When a cache block is in the invalid state, the flag bit is inactive. When a cache block is in the valid state, a newly established block defaults to private; the data-block pointer then indexes the block's copy in the private caches, and if no copy exists in any private cache, the data-block pointer holds the null index.
The directory cache maintains the coherence of shared data. Specifically, the directory cache is a set-associative cache in which each cache block contains a tag (TAG) and a shared vector (Shared-Vector); the shared vector records the locations of shared data in the private caches.
Based on the LLC and the directory cache, data are divided into private data and shared data as follows. According to hits and misses in the LLC and in the directory cache, data blocks fall into the following four classes:
(1) LLC-Hit and DC-Hit: the DC hit shows the block is shared data in the shared state, and its shared flag bit in the LLC is also in the shared state;
(2) LLC-Hit and DC-Miss: the DC miss shows the block is private data in the private state; its shared flag bit in the LLC is in the private state, but the data-block pointer may be invalid, since the block is allowed to have no copy in the private caches closer to the processor;
(3) LLC-Miss and DC-Hit: the DC hit shows the block is shared data in the shared state, and the LLC miss indicates that the LLC is organized non-inclusively (Non-Inclusive) or exclusively (Exclusive);
(4) LLC-Miss and DC-Miss: the block has no copy on chip and a cache entry must be newly established; newly established cached data default to private.
Of these four classes, the two shared-state classes, (1) and (3), occupy directory-cache space; the other two, (2) and (4), are in the private state and occupy LLC space but no directory-cache space.
Embodiment 2:
In the directory cache management method for big data applications of the present invention, a shared flag bit and a data-block pointer are added to the last-level shared cache (Last-Level Cache, LLC). The shared flag bit distinguishes whether data are private or shared; the data-block pointer tracks the location of private data in the private caches; and the directory cache (Directory Cache, DC) maintains the coherence of shared data. Based on the LLC and the directory cache, data are divided into private data and shared data. Private data occupy no directory-cache space, and their coherence is maintained through the private caches; shared data occupy directory-cache space, and their coherence is maintained by the directory cache.
The last-level shared cache (Last-Level Cache, LLC) contains three fields: a tag (Tag), data (Data), and status bits (VUDA). The tag is compared against the access address to check whether a cache block hits; the data field caches the data; the four status bits are V (Valid: the data are valid), U (Used: the data have been accessed), D (Dirty: whether the data are dirty), and A (Allocated: the cache block has been allocated). Two fields are newly added: the data-block pointer (Pointer) and the shared flag bits (SV). The data-block pointer records which processor core holds a copy of the data; the two shared flag bits are S (Shared: the data block is in the shared state) and N (NULL: the data-block pointer holds the null index).
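The status portion of this embodiment's LLC entry could be sketched as bit flags (the bit positions are hypothetical; the patent specifies only the four VUDA bits and the two S/N flag bits, not their encoding):

```python
# Pack the VUDA status bits and the new S/N shared flag bits into one status word.
V, U, D, A = 1 << 0, 1 << 1, 1 << 2, 1 << 3   # Valid, Used, Dirty, Allocated
S, N = 1 << 4, 1 << 5                          # Shared state, Null data-block pointer

def describe(status: int) -> list:
    """Return the names of the status bits set in a status word."""
    names = [("V", V), ("U", U), ("D", D), ("A", A), ("S", S), ("N", N)]
    return [name for name, bit in names if status & bit]

status = V | A | S          # a valid, allocated block in the shared state
assert describe(status) == ["V", "A", "S"]
```

A shared block would typically carry both S and N (the pointer is null because the shared vector in the directory cache tracks the copies), while a private block carries neither.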
The shared flag bit and the data-block pointer are managed as follows. The shared flag bit indicates whether the data are in the shared state on chip. When a cache block is in the invalid state, the flag bit is inactive. When a cache block is in the valid state, a newly established block defaults to private; the data-block pointer then indexes the block's copy in the private caches, and if no copy exists in any private cache, the data-block pointer holds the null index.
The directory cache maintains the coherence of shared data. Specifically, the directory cache is a set-associative cache in which each cache block contains a tag (TAG) and a shared vector (Shared-Vector); the shared vector records the locations of shared data in the private caches.
Based on the LLC and the directory cache, data are divided into private data and shared data as follows. According to hits and misses in the LLC and in the directory cache, data blocks fall into the following four classes:
(1) LLC-Hit and DC-Hit: the DC hit shows the block is shared data in the shared state, and its shared flag bit in the LLC is also in the shared state;
(2) LLC-Hit and DC-Miss: the DC miss shows the block is private data in the private state; its shared flag bit in the LLC is in the private state, but the data-block pointer may be invalid, since the block is allowed to have no copy in the private caches closer to the processor;
(3) LLC-Miss and DC-Hit: the DC hit shows the block is shared data in the shared state, and the LLC miss indicates that the LLC is organized non-inclusively (Non-Inclusive) or exclusively (Exclusive);
(4) LLC-Miss and DC-Miss: the block has no copy on chip and a cache entry must be newly established; newly established cached data default to private.
Of these four classes, the two shared-state classes, (1) and (3), occupy directory-cache space; the other two, (2) and (4), are in the private state and occupy LLC space but no directory-cache space.
Private data occupy no directory-cache space and have their coherence maintained through the private caches; shared data occupy directory-cache space and have their coherence maintained by the directory cache. Specifically:
(1) Every data access first accesses the LLC. On an LLC miss the directory cache is accessed; if the directory cache also misses, the current request is for new data. The data are fetched from memory and filled into the LLC, marked there as private data, and the data-block pointer is set to the requesting processor core; the data are returned to the upper-level private cache, and the block occupies no directory-cache space.
(2) On an LLC miss the directory cache must be accessed; if the directory cache hits, the current request is for shared data. For a write operation, invalidation messages are first sent for all shared copies and the entry is evicted from the directory cache; the data are then read from memory into the LLC, returned to the upper-level private cache, and marked in the LLC as private data, with the data-block pointer set to the requesting processor core. For a read operation, the corresponding bit in the directory cache's shared vector is set; the data are read from memory into the LLC, returned to the upper-level private cache, and marked in the LLC as shared data, with the data-block pointer set to the null index (NULL).
(3) If the LLC hits and the block is marked private, but the requesting processor core differs from the core the data-block pointer designates (the request cannot have been issued by that same core): for a write request, coherence is first maintained according to the coherence protocol (dirty data are written back; clean data are invalidated), then the data-block pointer is updated and the data are written to the LLC; for a read request, the block's state is changed to shared, the data-block pointer is set to the null index (NULL), and a shared-vector record is allocated in the directory cache.
(4) If the LLC hits and the block is marked shared: for a write operation, the directory cache is accessed first and all private-cache copies are invalidated according to the shared vector; the block is then re-marked as private, the data-block pointer is set to the requesting processor core, the data are written to the LLC, and the data are returned to the upper-level private cache. For a read operation, the directory cache is accessed, the shared vector is updated, and the data are returned.
From the specific embodiments above, those skilled in the art can readily implement the present invention. It should be understood, however, that the present invention is not limited to the two specific embodiments described; on the basis of the disclosed embodiments, those skilled in the art may combine different technical features to arrive at different technical solutions.
Claims (5)
1. A directory cache management method for big data applications, characterized in that a shared flag bit and a data-block pointer are added to the last-level shared cache, the shared flag bit being used to distinguish whether data are private or shared, the data-block pointer being used to track the location of private data in the private caches, and the directory cache being used to maintain the coherence of shared data; data are divided into private data and shared data based on the last-level shared cache and the directory cache; private data occupy no directory-cache space and have their coherence maintained through the private caches; shared data occupy directory-cache space and have their coherence maintained by the directory cache.
2. The directory cache management method for big data applications according to claim 1, characterized in that the shared flag bit and the data-block pointer are managed as follows: the shared flag bit indicates whether the data are in the shared state on chip; when a cache block is in the invalid state, the flag bit is inactive; when a cache block is in the valid state, a newly established block defaults to private, and the data-block pointer indexes the block's copy in the private caches; if no copy exists in any private cache, the data-block pointer holds the null index.
3. The directory cache management method for big data applications according to claim 1, characterized in that the directory cache maintains the coherence of shared data, namely: the directory cache is a set-associative cache, each cache block of which contains a tag and a shared vector; the shared vector records the locations of shared data in the private caches.
4. The directory cache management method for big data applications according to claim 1, characterized in that data are divided into private data and shared data based on the last-level shared cache and the directory cache, namely:
data blocks are divided into the following four classes according to hits and misses in the last-level shared cache and the directory cache:
(1) LLC-Hit and DC-Hit: the DC hit shows the block is shared data in the shared state, and its shared flag bit in the LLC is in the shared state;
(2) LLC-Hit and DC-Miss: the DC miss shows the block is private data in the private state, and its shared flag bit in the LLC is in the private state;
(3) LLC-Miss and DC-Hit: the DC hit shows the block is shared data in the shared state, and the LLC miss indicates that the LLC is organized non-inclusively or exclusively;
(4) LLC-Miss and DC-Miss: the block has no copy on chip and a cache entry must be newly established; newly established cached data default to private;
of these four classes, the two shared-state classes, (1) and (3), occupy directory-cache space; the other two, (2) and (4), are in the private state and occupy last-level shared-cache space but no directory-cache space.
5. The directory cache management method for big data applications according to claim 1, characterized in that private data occupy no directory-cache space and have their coherence maintained through the private caches, and shared data occupy directory-cache space and have their coherence maintained by the directory cache, namely:
(1) every data access first accesses the last-level shared cache; on a last-level shared-cache miss the directory cache is accessed; if the directory cache also misses, the current request is for new data: the data are fetched from memory and filled into the last-level shared cache, marked there as private data, the data-block pointer is set to the requesting processor core, the data are returned to the upper-level private cache, and the block occupies no directory-cache space;
(2) on a last-level shared-cache miss the directory cache must be accessed; if the directory cache hits, the current request is for shared data: for a write operation, invalidation messages are first sent for all shared copies and the entry is evicted from the directory cache, the data are read from memory into the last-level shared cache, returned to the upper-level private cache, and marked as private data, with the data-block pointer set to the requesting processor core; for a read operation, the corresponding bit in the directory cache's shared vector is set, the data are read from memory into the last-level shared cache, returned to the upper-level private cache, and marked as shared data, with the data-block pointer set to the null index;
(3) if the last-level shared cache hits and the block is marked private, but the requesting processor core differs from the core the data-block pointer designates: for a write request, coherence is first maintained according to the coherence protocol, then the data-block pointer is updated and the data are written to the last-level shared cache; for a read request, the block's state is changed to shared, the data-block pointer is set to the null index, and a shared-vector record is allocated in the directory cache;
(4) if the last-level shared cache hits and the block is marked shared: for a write operation, the directory cache is accessed first, all private-cache copies are invalidated according to the shared vector, the block is then re-marked as private, the data-block pointer is set to the requesting processor core, the data are written to the last-level shared cache, and the data are returned to the upper-level private cache; for a read operation, the directory cache is accessed, the shared vector is updated, and the data are returned.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410611086.1A CN104461932B (en) | 2014-11-04 | 2014-11-04 | Directory cache management method for big data application |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104461932A (en) | 2015-03-25 |
CN104461932B (en) | 2017-05-10 |
Family
ID=52908018
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410611086.1A Active CN104461932B (en) | 2014-11-04 | 2014-11-04 | Directory cache management method for big data application |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104461932B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106815174B (en) * | 2015-11-30 | 2019-07-30 | 大唐移动通信设备有限公司 | Data access control method and Node Controller |
CN107229593B (en) * | 2016-03-25 | 2020-02-14 | 华为技术有限公司 | Cache consistency operation method of multi-chip multi-core processor and multi-chip multi-core processor |
US10482024B2 (en) * | 2017-07-20 | 2019-11-19 | Alibaba Group Holding Limited | Private caching for thread local storage data access |
CN109726017B (en) * | 2017-10-30 | 2023-05-26 | 阿里巴巴集团控股有限公司 | Method and device for sharing cache between application programs |
CN108415854A (en) * | 2018-02-11 | 2018-08-17 | 中国神华能源股份有限公司 | Data collecting system based on shared buffer memory and method |
CN117014504B (en) * | 2023-08-11 | 2024-04-16 | 北京市合芯数字科技有限公司 | Data transmission method, device, equipment, medium and product |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103049392A (en) * | 2012-10-17 | 2013-04-17 | 华为技术有限公司 | Method and device for achieving cache catalogue |
CN103279428A (en) * | 2013-05-08 | 2013-09-04 | 中国人民解放军国防科学技术大学 | Explicit multi-core Cache consistency active management method facing flow application |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8812793B2 (en) * | 2006-06-19 | 2014-08-19 | International Business Machines Corporation | Silent invalid state transition handling in an SMP environment |
- 2014-11-04: application CN201410611086.1A filed in China (CN); granted as patent CN104461932B, status Active
Also Published As
Publication number | Publication date |
---|---|
CN104461932A (en) | 2015-03-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104461932B (en) | Directory cache management method for big data application | |
CN106462495B (en) | Memory Controller and processor-based system and method | |
CN106462494B (en) | Memory controllers employing memory capacity compression, and related processor-based systems and methods | |
CN105550155B (en) | Snoop filter for a multiprocessor system and related snoop filtering method | |
CN102331993B (en) | Data migration method of distributed database and distributed database migration system | |
CN102981963B (en) | Implementation method of a flash translation layer (FTL) for a solid-state disk | |
US20200117368A1 (en) | Method for achieving data copying in ftl of solid state drive, system and solid state drive | |
CN102541983B (en) | Method for synchronously caching by multiple clients in distributed file system | |
CN105335098A (en) | Storage-class memory based method for improving performance of log file system | |
CN103150136B (en) | Implementation method of least recently used (LRU) policy in solid state drive (SSD)-based high-capacity cache | |
CN101510176B (en) | Control method for a general-purpose operating system accessing the CPU level-2 cache | |
CN106201335B (en) | Storage system | |
CN104166634A (en) | Management method of mapping table caches in solid-state disk system | |
CN103246616A (en) | Global shared cache replacement method for realizing long-short cycle access frequency | |
CN107784121A (en) | Small-write optimization method for a log file system based on nonvolatile memory | |
CN110321301A (en) | A data processing method and device | |
CN105095113B (en) | A cache management method and system | |
CN107832013A (en) | A method for managing a solid-state disk mapping table | |
CN102682110A (en) | High-performance cache design method oriented to massive spatial information | |
CN110968269A (en) | SCM and SSD-based key value storage system and read-write request processing method | |
CN101714065A (en) | Method for managing mapping information of flash controller | |
CN106055679A (en) | Multi-level cache sensitive indexing method | |
CN111580754B (en) | Write-friendly flash memory solid-state disk cache management method | |
CN106909323A (en) | Page caching method for a DRAM/PRAM hybrid main memory architecture, and hybrid main memory architecture system | |
CN103729309B (en) | A directory Cache coherence method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||