CN105302497B - Cache management method and system - Google Patents

Cache management method and system

Info

Publication number
CN105302497B
CN105302497B (application CN201510822300.2A)
Authority
CN
China
Prior art keywords
caching
read
bandwidth
shared
write process
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510822300.2A
Other languages
Chinese (zh)
Other versions
CN105302497A (en)
Inventor
魏盟 (Wei Meng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur Beijing Electronic Information Industry Co Ltd
Original Assignee
Inspur Beijing Electronic Information Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Beijing Electronic Information Industry Co Ltd
Priority to CN201510822300.2A
Publication of CN105302497A
Application granted
Publication of CN105302497B
Legal status: Active
Anticipated expiration


Landscapes

  • Memory System Of A Hierarchy Structure (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a cache management method and system. An operation identifier (ID) is assigned to each user operation, and the cache bandwidth occupied by each operation ID is calculated separately. When the preset bandwidth threshold is 0, the read-write processes corresponding to the operation IDs are granted cache-use priority in descending order of occupied cache bandwidth. When the preset bandwidth threshold is greater than 0, the read-write processes whose occupied bandwidth is greater than or equal to the threshold are suspended, while those below the threshold are granted cache-use priority in descending order of occupied cache bandwidth. Because cache-use priority follows each thread's read/write rate, the read-write delay of a single file cannot drag down the performance of the whole service, and the cache can be managed efficiently.

Description

Cache management method and system
Technical field
The present invention relates to the field of distributed storage, and in particular to a cache management method and system.
Background technique
With the rapid development of network applications, the volume of network data keeps growing, and PB-scale mass storage is becoming increasingly important. Traditional local-disk storage can no longer meet the capacity, reliability, performance, and horizontal-scaling requirements of modern applications; network-based distributed storage systems have therefore attracted wide attention as a way to meet these new demands.
Take a typical distributed file system as an example. After the underlying object store (OSD) is mounted on a client node, it is exported through the standard Network File System (NFS) or Common Internet File System (CIFS) protocol for end users to connect to and use. When a user issues a read or write request for a file, the data first reaches the client node and is placed in the client cache; file slicing and consistent hashing locate the node holding each slice object of the file, the read or write is performed, and the return code or file data is fed back to the front-end user. Throughout this process the client caching mechanism plays a key role. On reads it starts read-ahead, placing a range of data beyond the target position into the cache in advance so that sequential reads get faster; on writes it assembles small blocks into large blocks and flushes them in a uniform pattern, improving performance. Under mixed read-write workloads, or when multiple files are operated on simultaneously, cache scheduling becomes a problem: a traditional distributed file system can only introduce a cache in the simplest way, cannot adapt, and is inefficient in practice, so the read-write delay of a single file may degrade the performance of the whole service.
Summary of the invention
In view of this, the main purpose of the present invention is to provide a cache management method and system that can manage the cache efficiently.
To achieve the above object, the present invention provides a cache management method, comprising:
responding to each user's read-write operation instruction, and assigning an operation identifier (ID) to each user operation;
separately calculating the cache bandwidth occupied by the operation corresponding to each operation ID;
judging whether a preset bandwidth threshold is 0;
when the preset bandwidth threshold is 0, granting the read-write processes corresponding to the operation IDs cache-use priority in descending order of occupied cache bandwidth;
when the preset bandwidth threshold is greater than 0, judging whether the cache bandwidth occupied by each operation ID is less than the preset bandwidth threshold, suspending the read-write processes of the operation IDs whose occupied bandwidth is greater than or equal to the threshold, and granting the read-write processes of the operation IDs whose occupied bandwidth is less than the threshold cache-use priority in descending order of occupied bandwidth.
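The two-branch dispatch described above can be sketched in Python; the function name and data shapes are illustrative assumptions, not taken from the patent:

```python
def schedule(bandwidth_by_op, threshold):
    """Order operations by cache priority and pick the ones to suspend.

    bandwidth_by_op: operation ID -> cache bandwidth currently occupied.
    threshold: the preset bandwidth threshold; 0 means 'no limit'.
    """
    if threshold == 0:
        # No limit: nothing is suspended; cache-use priority simply
        # follows occupied bandwidth, largest first.
        order = sorted(bandwidth_by_op, key=bandwidth_by_op.get, reverse=True)
        return order, []
    # Limit set: operations at or above the threshold are suspended;
    # the rest get cache-use priority in descending bandwidth order.
    suspended = sorted(op for op, bw in bandwidth_by_op.items()
                       if bw >= threshold)
    active = {op: bw for op, bw in bandwidth_by_op.items() if bw < threshold}
    order = sorted(active, key=active.get, reverse=True)
    return order, suspended
```

For example, with occupied bandwidths {0: 120, 1: 40, 2: 75} and a threshold of 100, operation 0 is suspended while operations 2 and 1 keep cache priority, in that order.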
Preferably, after suspending the read-write processes of the operation IDs whose occupied bandwidth is greater than or equal to the preset bandwidth threshold, the method further comprises:
when the occupied bandwidth of a suspended read-write process recovers to below the preset bandwidth threshold, waking it up and adding it to the read-write processes that have not yet been allocated cache.
Preferably, granting the read-write processes corresponding to the operation IDs cache-use priority in descending order of occupied cache bandwidth comprises:
among the read-write processes not yet allocated cache, allocating cache to the read-write process of the operation ID currently occupying the most cache bandwidth, and allocating to the other read-write processes, in descending order of occupied bandwidth, cache of the preset allocation threshold.
Preferably, granting the read-write processes of the operation IDs whose occupied bandwidth is less than the preset bandwidth threshold cache-use priority in descending order of occupied bandwidth comprises:
among the read-write processes whose occupied bandwidth is below the preset bandwidth threshold and which have not yet been allocated cache, allocating cache to the read-write process of the operation ID currently occupying the most cache bandwidth, and allocating to the other read-write processes, in descending order of occupied bandwidth, cache of the preset allocation threshold.
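A minimal sketch of this allocation order, assuming (an interpretation the text suggests but does not state numerically) that the hottest process receives whatever cache remains after every other process gets the preset allocation threshold:

```python
def allocate(active_bw, total_cache, quota_b):
    """Allocate cache in descending-bandwidth order.

    The operation occupying the most bandwidth gets the bulk of the cache;
    every other operation is held to the preset allocation threshold
    quota_b. The exact size of the top share is an assumed interpretation.
    """
    order = sorted(active_bw, key=active_bw.get, reverse=True)
    alloc = {op: quota_b for op in order[1:]}   # others: fixed quota each
    if order:
        # hottest operation: everything the fixed quotas leave behind
        alloc[order[0]] = total_cache - quota_b * (len(order) - 1)
    return order, alloc
```

With bandwidths {1: 50, 2: 80, 3: 10}, 100 units of cache, and a quota of 10, operation 2 gets 80 units while operations 1 and 3 get the 10-unit quota each.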
Preferably, after granting cache-use priority in descending order of occupied cache bandwidth, the method further comprises:
performing the reads and writes with the read-write processes that have been allocated cache.
The present invention also provides a cache management system, comprising:
an operation-ID assignment module, configured to respond to each user's read-write operation instruction and assign an operation ID to each user operation;
a bandwidth calculation module, configured to separately calculate the cache bandwidth occupied by the operation corresponding to each operation ID;
a cache allocation module, configured to judge whether a preset bandwidth threshold is 0; when the threshold is 0, to grant the read-write processes corresponding to the operation IDs cache-use priority in descending order of occupied cache bandwidth; and when the threshold is greater than 0, to judge whether the cache bandwidth occupied by each operation ID is less than the threshold, to suspend the read-write processes of the operation IDs whose occupied bandwidth is greater than or equal to the threshold, and to grant the read-write processes of the operation IDs whose occupied bandwidth is less than the threshold cache-use priority in descending order of occupied bandwidth.
Preferably, the cache management system further comprises:
a read-write operation module, configured to perform the reads and writes with the read-write processes that have been allocated cache.
With the cache management method and system provided by the invention, an operation ID is assigned to each user operation and the cache bandwidth occupied by each operation ID is calculated separately. When the preset bandwidth threshold is 0, the read-write processes corresponding to the operation IDs are granted cache-use priority in descending order of occupied cache bandwidth; when the threshold is greater than 0, the read-write processes whose occupied bandwidth is greater than or equal to the threshold are suspended, while those below the threshold are granted cache-use priority in descending order of occupied bandwidth. Because cache-use priority follows each thread's read/write rate, the read-write delay of a single file cannot drag down the performance of the whole service, and the cache can be managed efficiently.
Brief description of the drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an embodiment of the cache management method of the present invention;
Fig. 2 is a structural diagram of the distributed file system of the present invention;
Fig. 3 is a detailed flowchart of an embodiment of the cache management method of the present invention;
Fig. 4 is a structural diagram of an embodiment of the cache management system of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The present invention provides a cache management method. Fig. 1 shows the flowchart of an embodiment of the method, which comprises:
Step S101: responding to each user's read-write operation instruction and assigning an operation ID to each user operation.
For example, with two users, user 1 is assigned ID 1 and user 2 is assigned ID 2.
Step S102: separately calculating the cache bandwidth occupied by the operation corresponding to each operation ID.
The bandwidth occupied by the read-write processes with ID 1 and ID 2 is calculated separately.
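The embodiment later describes the monitor as periodically recording the volume of data each operation loads into the cache and deriving its occupied bandwidth. A sketch of that sampling, with all names invented for illustration:

```python
import time

class BandwidthMeter:
    """Derives each operation's occupied cache bandwidth from the bytes it
    loads into the cache over a timed interval -- an illustrative sketch;
    the patent only says the monitor records data volume on a timer."""

    def __init__(self):
        self._bytes = {}              # op_id -> bytes since last sample
        self.bandwidth = {}           # op_id -> last computed bytes/second
        self._last = time.monotonic()

    def record(self, op_id, nbytes):
        """Called whenever op_id loads nbytes into the cache."""
        self._bytes[op_id] = self._bytes.get(op_id, 0) + nbytes

    def sample(self, elapsed=None):
        """Compute bandwidth since the last sample and reset the counters."""
        now = time.monotonic()
        if elapsed is None:           # allow injecting elapsed for testing
            elapsed = max(now - self._last, 1e-9)
        self.bandwidth = {op: n / elapsed for op, n in self._bytes.items()}
        self._bytes = {op: 0 for op in self._bytes}
        self._last = now
        return dict(self.bandwidth)
```

The scheduler would call `sample()` on its timer and feed the result into the threshold comparison of steps S103 through S105.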
Step S103: judging whether the preset bandwidth threshold is 0; if yes, proceeding to step S104, otherwise to step S105.
Step S104: when the preset bandwidth threshold is 0, granting the read-write processes corresponding to the operation IDs cache-use priority in descending order of occupied cache bandwidth.
Granting cache-use priority in descending order of occupied cache bandwidth specifically means: among the read-write processes not yet allocated cache, allocating cache to the read-write process of the operation ID currently occupying the most cache bandwidth, and allocating to the other read-write processes, in descending order of occupied bandwidth, cache of the preset allocation threshold.
Step S105: when the preset bandwidth threshold is greater than 0, judging whether the cache bandwidth occupied by each operation ID is less than the threshold, suspending the read-write processes of the operation IDs whose occupied bandwidth is greater than or equal to the threshold, and granting the read-write processes of the operation IDs whose occupied bandwidth is less than the threshold cache-use priority in descending order of occupied bandwidth.
When the occupied bandwidth of a suspended read-write process recovers to below the preset bandwidth threshold, it is woken up and added to the read-write processes not yet allocated cache. Granting the read-write processes below the threshold cache-use priority in descending order of occupied bandwidth specifically means: among the read-write processes whose occupied bandwidth is below the threshold and which have not yet been allocated cache, allocating cache to the read-write process of the operation ID currently occupying the most cache bandwidth, and allocating to the other read-write processes, in descending order of occupied bandwidth, cache of the preset allocation threshold.
In this embodiment, the client issues read-write instructions, cache monitoring is performed, and the read-write operations are then executed; the conceptual structure is shown in Fig. 2.
Suppose the preset bandwidth threshold is a. When the first end user initiates a read-write operation on file A, the cache monitoring module receives the message, records the operation as operation ID 0, and from then on periodically records the volume of data that operation 0 loads into the cache, computing its occupied cache bandwidth x. Combined with the user-defined stream-bandwidth value a, read from the system configuration file at startup, the module rate-limits the operation: if a is 0, the operation is allowed to use the whole cache without restriction; if a is greater than 0, then whenever x is not less than a, a pending signal is sent to the read-write thread of the client host process so that it immediately goes to sleep, and it is woken up to continue only after the occupied cache bandwidth has fallen back below a. In addition, the stream-bandwidth value a can be changed online while reads and writes are in progress; the cache monitoring module picks up the new value immediately and reschedules according to the current operation IDs, so that the setting takes effect at once, which is convenient for administrators.
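The suspend/wake behaviour around the threshold a can be modelled with a per-operation gate. This is a hedged sketch: the class and method names are invented, and a real implementation would signal the client host process's thread rather than use an in-process event as here.

```python
import threading

class Throttle:
    """Gate for one operation's read-write thread, mirroring the described
    policy: a == 0 means unrestricted; otherwise the thread sleeps while
    occupied bandwidth x is at or above a, and resumes once x drops below a."""

    def __init__(self, limit_a):
        self.limit_a = limit_a
        self._gate = threading.Event()
        self._gate.set()              # gate open: thread may proceed

    def on_bandwidth(self, x):
        """Called by the cache monitor after each bandwidth sample."""
        if self.limit_a == 0 or x < self.limit_a:
            self._gate.set()          # wake (or keep running) the thread
        else:
            self._gate.clear()        # suspend the read-write thread

    def suspended(self):
        return not self._gate.is_set()

    def wait_turn(self, timeout=None):
        """Called by the read-write thread before each cache access."""
        return self._gate.wait(timeout)
```

Changing `limit_a` online, as the paragraph describes for the a value, takes effect at the next `on_bandwidth` call.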
When a second end user initiates a read-write operation on file B, the cache monitoring module records it by incrementing the operation ID to 1, for subsequent scheduling. If a is greater than 0 at this point, the regulation strategy is the same as above; but if a is 0, the two operations will compete for the cache. In that case the cache monitoring module compares the computed occupied bandwidth y of operation 1 with the x of operation 0 to distinguish how "hot" the two operations are and assign priority: the hotter one gets preferential use of the cache, while the cooler one is held to a preset threshold b, again enforced through the regulation strategy above. As the number of read-write threads grows, the workload of the cache monitoring module grows accordingly, but the overall regulation strategy stays the same.
Fig. 3 is the overall flowchart of this embodiment: after cache-use priority has been granted in descending order of occupied cache bandwidth, the read-write processes that have been allocated cache perform the reads and writes.
With the cache management method provided by this embodiment, an operation ID is assigned to each user operation and the cache bandwidth occupied by each operation ID is calculated separately. When the preset bandwidth threshold is 0, the read-write processes corresponding to the operation IDs are granted cache-use priority in descending order of occupied cache bandwidth; when the threshold is greater than 0, the read-write processes whose occupied bandwidth is greater than or equal to the threshold are suspended, while those below the threshold are granted cache-use priority in descending order of occupied bandwidth. Because cache-use priority follows each thread's read/write rate, the read-write delay of a single file cannot drag down the performance of the whole service, and the cache can be managed efficiently.
The present invention also provides a cache management system. Fig. 4 shows the structural diagram of an embodiment of the system, which comprises:
an operation-ID assignment module 101, configured to respond to each user's read-write operation instruction and assign an operation ID to each user operation;
a bandwidth calculation module 102, connected to the operation-ID assignment module 101 and configured to separately calculate the cache bandwidth occupied by the operation corresponding to each operation ID;
a cache allocation module 103, connected to the bandwidth calculation module 102 and configured to judge whether the preset bandwidth threshold is 0; when the threshold is 0, to grant the read-write processes corresponding to the operation IDs cache-use priority in descending order of occupied cache bandwidth; and when the threshold is greater than 0, to judge whether the cache bandwidth occupied by each operation ID is less than the threshold, to suspend the read-write processes of the operation IDs whose occupied bandwidth is greater than or equal to the threshold, and to grant the read-write processes of the operation IDs whose occupied bandwidth is less than the threshold cache-use priority in descending order of occupied bandwidth.
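The three modules can be wired together as follows; this is a hypothetical sketch of the claimed structure, with every name invented for illustration:

```python
class CacheManager:
    """ID assignment -> bandwidth calculation -> cache allocation, wired in
    the order the system embodiment describes (illustrative sketch)."""

    def __init__(self, threshold):
        self.threshold = threshold    # the preset bandwidth threshold
        self._next_id = 0
        self.bandwidth = {}           # op_id -> occupied cache bandwidth

    def assign_id(self):
        """Operation-ID assignment module: one ID per user operation."""
        op, self._next_id = self._next_id, self._next_id + 1
        return op

    def report(self, op_id, bw):
        """Bandwidth-calculation module feeds its results in here."""
        self.bandwidth[op_id] = bw

    def dispatch(self):
        """Cache-allocation module: the same two-branch policy as the
        method claims -- suspend at/above the threshold, otherwise order
        by descending occupied bandwidth."""
        if self.threshold == 0:
            active, suspended = dict(self.bandwidth), []
        else:
            suspended = sorted(o for o, b in self.bandwidth.items()
                               if b >= self.threshold)
            active = {o: b for o, b in self.bandwidth.items()
                      if b < self.threshold}
        return sorted(active, key=active.get, reverse=True), suspended
```

A read-write operation module (claim 7) would then consume the returned priority order and perform the actual I/O.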
With the cache management system provided by this embodiment, an operation ID is assigned to each user operation and the cache bandwidth occupied by each operation ID is calculated separately. When the preset bandwidth threshold is 0, the read-write processes corresponding to the operation IDs are granted cache-use priority in descending order of occupied cache bandwidth; when the threshold is greater than 0, the read-write processes whose occupied bandwidth is greater than or equal to the threshold are suspended, while those below the threshold are granted cache-use priority in descending order of occupied bandwidth. Because cache-use priority follows each thread's read/write rate, the read-write delay of a single file cannot drag down the performance of the whole service, and the cache can be managed efficiently.
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts among the embodiments can be referred to one another. Since the system embodiment is basically similar to the method embodiment, its description is relatively brief; for the relevant details, see the description of the method embodiment.
Finally, it should be noted that in this document the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes the element.
The method and system provided by the present invention have been described in detail above. Specific examples are used herein to explain the principles and embodiments of the invention, and the above description of the embodiments is intended only to help understand the method of the present invention and its core idea. Meanwhile, those of ordinary skill in the art may, according to the idea of the present invention, make changes to the specific embodiments and the application scope. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (7)

1. A cache management method, characterized by comprising:
responding to each user's read-write operation instruction, and assigning an operation identifier (ID) to each user operation;
separately calculating the cache bandwidth occupied by the operation corresponding to each operation ID;
judging whether a preset bandwidth threshold is 0;
when the preset bandwidth threshold is 0, granting the read-write processes corresponding to the operation IDs cache-use priority in descending order of occupied cache bandwidth;
when the preset bandwidth threshold is greater than 0, judging whether the cache bandwidth occupied by each operation ID is less than the preset bandwidth threshold, suspending the read-write processes of the operation IDs whose occupied bandwidth is greater than or equal to the preset bandwidth threshold, and granting the read-write processes of the operation IDs whose occupied bandwidth is less than the preset bandwidth threshold cache-use priority in descending order of occupied bandwidth.
2. The cache management method according to claim 1, characterized in that, after suspending the read-write processes of the operation IDs whose occupied bandwidth is greater than or equal to the preset bandwidth threshold, the method further comprises:
when the occupied bandwidth of a suspended read-write process recovers to below the preset bandwidth threshold, waking it up and adding it to the read-write processes that have not yet been allocated cache.
3. The cache management method according to claim 2, characterized in that granting the read-write processes corresponding to the operation IDs cache-use priority in descending order of occupied cache bandwidth comprises:
among the read-write processes not yet allocated cache, allocating cache to the read-write process of the operation ID currently occupying the most cache bandwidth, and allocating to the other read-write processes, in descending order of occupied bandwidth, cache of the preset allocation threshold.
4. The cache management method according to claim 2, characterized in that granting the read-write processes of the operation IDs whose occupied bandwidth is less than the preset bandwidth threshold cache-use priority in descending order of occupied bandwidth comprises:
among the read-write processes whose occupied bandwidth is below the preset bandwidth threshold and which have not yet been allocated cache, allocating cache to the read-write process of the operation ID currently occupying the most cache bandwidth, and allocating to the other read-write processes, in descending order of occupied bandwidth, cache of the preset allocation threshold.
5. The cache management method according to claim 1, characterized in that, after granting cache-use priority in descending order of occupied cache bandwidth, the method further comprises:
performing the reads and writes with the read-write processes that have been allocated cache.
6. A cache management system, characterized by comprising:
an operation-ID assignment module, configured to respond to each user's read-write operation instruction and assign an operation ID to each user operation;
a bandwidth calculation module, configured to separately calculate the cache bandwidth occupied by the operation corresponding to each operation ID;
a cache allocation module, configured to judge whether a preset bandwidth threshold is 0; when the preset bandwidth threshold is 0, to grant the read-write processes corresponding to the operation IDs cache-use priority in descending order of occupied cache bandwidth; and when the preset bandwidth threshold is greater than 0, to judge whether the cache bandwidth occupied by each operation ID is less than the preset bandwidth threshold, to suspend the read-write processes of the operation IDs whose occupied bandwidth is greater than or equal to the preset bandwidth threshold, and to grant the read-write processes of the operation IDs whose occupied bandwidth is less than the preset bandwidth threshold cache-use priority in descending order of occupied bandwidth.
7. The cache management system according to claim 6, characterized by further comprising:
a read-write operation module, configured to perform the reads and writes with the read-write processes that have been allocated cache.
CN201510822300.2A 2015-11-24 2015-11-24 Cache management method and system Active CN105302497B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510822300.2A CN105302497B (en) 2015-11-24 2015-11-24 Cache management method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510822300.2A CN105302497B (en) 2015-11-24 2015-11-24 Cache management method and system

Publications (2)

Publication Number Publication Date
CN105302497A CN105302497A (en) 2016-02-03
CN105302497B true CN105302497B (en) 2019-09-24

Family

ID=55199810

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510822300.2A Active CN105302497B (en) 2015-11-24 2015-11-24 Cache management method and system

Country Status (1)

Country Link
CN (1) CN105302497B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105893118B (en) * 2016-03-30 2019-11-26 联想(北京)有限公司 A kind of information processing method and electronic equipment
CN106302733A (en) * 2016-08-16 2017-01-04 浪潮(北京)电子信息产业有限公司 A kind of distributed type assemblies merges implementation method and the device of NFS protocol
CN107171869A (en) * 2017-07-11 2017-09-15 郑州云海信息技术有限公司 A kind of method and system that bandwidth control is carried out to CEPH file system
CN107682280A (en) * 2017-09-22 2018-02-09 郑州云海信息技术有限公司 The method, apparatus and equipment of QOS flows control based on NFS
CN107704213B (en) * 2017-11-02 2021-08-31 郑州云海信息技术有限公司 Automatic service quality management method and device for storage array
CN109062514B (en) * 2018-08-16 2021-08-31 郑州云海信息技术有限公司 Bandwidth control method and device based on namespace and storage medium
CN110955512B (en) * 2018-09-27 2023-05-30 阿里巴巴集团控股有限公司 Cache processing method, device, storage medium, processor and computing equipment
CN110286949A (en) * 2019-06-27 2019-09-27 深圳市网心科技有限公司 Process based on the read-write of physical host storage device hangs up method and relevant device
CN111414245B (en) * 2020-03-26 2023-06-13 北京小米移动软件有限公司 Method, device and medium for controlling flash memory read-write rate
CN112162695A (en) * 2020-09-09 2021-01-01 Oppo(重庆)智能科技有限公司 Data caching method and device, electronic equipment and storage medium
CN113064553B (en) * 2021-04-02 2023-02-17 重庆紫光华山智安科技有限公司 Data storage method, device, equipment and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6535957B1 (en) * 1999-11-09 2003-03-18 International Business Machines Corporation System bus read data transfers with bus utilization based data ordering
WO2012045265A1 (en) * 2010-10-08 2012-04-12 电信科学技术研究院 Buffer space allocation method and device
CN102523154A (en) * 2011-12-08 2012-06-27 华为技术有限公司 Ethernet data processing method and system and optical transport network processing chip
CN102611605A (en) * 2011-01-20 2012-07-25 华为技术有限公司 Scheduling method, device and system of data exchange network
CN103368867A (en) * 2012-03-26 2013-10-23 国际商业机器公司 Method and system for a cached object to communicate with a secondary site over a network
CN104301933A (en) * 2014-10-17 2015-01-21 PLA University of Science and Technology Method for calculating and distributing bandwidth in a wireless ad hoc network

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2270791B (en) * 1992-09-21 1996-07-17 Grass Valley Group Disk-based digital video recorder
CN101122886B (en) * 2007-09-03 2010-06-09 杭州华三通信技术有限公司 Method and device for allocating cache space, and cache controller
CN101706827B (en) * 2009-08-28 2011-09-21 四川虹微技术有限公司 Method for caching file of embedded browser
CN104714898B (en) * 2013-12-16 2018-08-21 深圳市国微电子有限公司 A Cache allocation method and device

Also Published As

Publication number Publication date
CN105302497A (en) 2016-02-03

Similar Documents

Publication Publication Date Title
CN105302497B (en) A kind of buffer memory management method and system
US10387202B2 (en) Quality of service implementation in a networked storage system with hierarchical schedulers
US11157457B2 (en) File management in thin provisioning storage environments
CN111124277B (en) Deep learning data set caching method, system, terminal and storage medium
US20170177221A1 (en) Dynamic core allocation for consistent performance in a non-preemptive scheduling environment
CN102571959B (en) System and method for downloading data
CN105338078B (en) Date storage method and device for storage system
CN104573119B (en) Towards the Hadoop distributed file system storage methods of energy-conservation in cloud computing
CN105335513B (en) A kind of distributed file system and file memory method
CN105187464B (en) Method of data synchronization, apparatus and system in a kind of distributed memory system
US20190163371A1 (en) Next generation storage controller in hybrid environments
CN105159775A (en) Load balancer based management system and management method for cloud computing data center
CN103475732A (en) Distributed file system data volume deployment method based on virtual address pool
CN110661824B (en) Flow control method of server in distributed cluster and storage medium
US20220179585A1 (en) Management of Idle Time Compute Tasks in Storage Systems
WO2019011262A1 (en) Method and apparatus for resource allocation
CN102982182A (en) Data storage planning method and device
CN104182487A (en) Unified storage method supporting various storage modes
CN104702702A (en) System and method for downloading data
CN108073723A (en) A kind of file in distributed type assemblies storage is from compressing method and equipment
CN109391487A (en) A kind of configuration update method and system
CN109299043A (en) Method, device, equipment and storage medium for deleting large files of distributed cluster system
CN108459821A (en) A kind of method and device of data buffer storage
CN105760391B (en) Method, data node, name node and system for dynamically redistributing data
CN109408597A (en) A kind of power grid metering big data storage system and its creation method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant