CN108628547A - Storage-system data access, processor and cache allocation management method - Google Patents
Storage-system data access, processor and cache allocation management method
- Publication number
- CN108628547A (application CN201810215973.5A)
- Authority
- CN
- China
- Prior art keywords
- processor
- caching
- access
- data access
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0656—Data buffering arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0629—Configuration or reconfiguration of storage systems
- G06F3/0631—Configuration or reconfiguration of storage systems by allocating resources to storage systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0662—Virtualisation aspects
- G06F3/0665—Virtualisation aspects at area level, e.g. provisioning of virtual or logical volumes
Abstract
The present invention relates to a storage-system data access, processor and cache allocation management method. When a business host initiates data access to the storage system, the request is directed by default to the first cache of the first processor according to the ownership of the data volume. When the performance-tuning program of the storage system finds that the occupancy of the first processor and the first cache is high, the first processor temporarily enlists the second processor and second cache resources to respond to the data access request. The method mitigates the data-access performance degradation caused by excessive occupancy of the first processor and the first cache, thereby achieving dynamic balancing of data-access response across multiple processors and caches.
Description
Technical field
The invention relates to the field of computer storage.
Background technology
A storage system is the system formed by the various storage devices of a computer that hold programs and data, the control components, and the hardware and software that manage information scheduling. A computer's main memory cannot simultaneously satisfy the requirements of fast access, large storage capacity and low cost, so a computer must contain a multi-level memory hierarchy, from slow to fast in speed and from large to small in capacity, which, together with an optimal control and scheduling algorithm and a reasonable cost, constitutes a storage system of acceptable performance. The performance of the storage system plays an increasingly important role in a computer, mainly because: 1. the von Neumann architecture is built on the stored-program concept, and memory access operations account for roughly 70% of central processing unit (CPU) time; 2. the quality of storage management and organization affects overall efficiency; 3. modern information processing, such as image processing, databases, knowledge bases, speech recognition and multimedia, places very high demands on the storage system.
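The multi-level memory hierarchy described above can be illustrated with a minimal sketch; the tier names, capacities and promotion policy here are illustrative assumptions, not part of the patent text:

```python
# Minimal sketch of a multi-level storage hierarchy: fast/small tiers are
# searched first; on a miss in the fast tiers, the item is found in a
# slower tier and promoted so that later accesses are cheaper.
class Tier:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.data = {}

def read(tiers, key):
    """Search tiers from fastest to slowest; promote on a slow-tier hit."""
    for i, tier in enumerate(tiers):
        if key in tier.data:
            value = tier.data[key]
            # Promote into every faster tier (evicting arbitrarily if full).
            for faster in tiers[:i]:
                if len(faster.data) >= faster.capacity:
                    faster.data.pop(next(iter(faster.data)))
                faster.data[key] = value
            return tier.name, value
    raise KeyError(key)

cache = Tier("cache", capacity=2)
main = Tier("main memory", capacity=8)
disk = Tier("disk", capacity=1024)
disk.data["block0"] = b"payload"

print(read([cache, main, disk], "block0"))  # first read is served from disk
print(read([cache, main, disk], "block0"))  # repeat read now hits the cache
```

The optimal control and scheduling algorithms the text mentions would replace the arbitrary eviction above with policies such as LRU.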
The services a storage system provides externally can be divided into block storage, file storage and object storage. Block storage refers to aggregating a group of disk drives under one controller in a RAID (redundant array of independent disks) set and then presenting fixed-size RAID blocks as LUN (logical unit number) volumes. File storage, also known as NAS (Network Attached Storage), is a technique that integrates distributed, independent data into a large, centrally managed data center for access by different hosts and application servers. Object storage, also referred to as object-based storage, is a general term describing methods of resolving and processing discrete units, which are called objects. Like files, objects contain data; unlike files, however, objects are not arranged in a hierarchical structure. Every object sits at the same level of a flat address space called a storage pool, and one object never belongs to, or sits below, another object.
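The contrast between a hierarchical file namespace and a flat object-storage pool can be sketched as follows; this is a simplified illustration, and the class and method names are assumptions made for this example only:

```python
# A file system addresses data through a hierarchy of directories;
# an object store addresses every object directly in one flat pool.
import uuid

class ObjectPool:
    """Flat address space: every object sits at the same level, keyed by ID."""
    def __init__(self):
        self._objects = {}

    def put(self, data, **metadata):
        oid = str(uuid.uuid4())          # flat, location-independent ID
        self._objects[oid] = (data, metadata)
        return oid

    def get(self, oid):
        return self._objects[oid]

pool = ObjectPool()
oid = pool.put(b"report bytes", content_type="application/pdf")
data, meta = pool.get(oid)

# By contrast, a file path like "/home/user/reports/q1.pdf" encodes a
# hierarchy: "q1.pdf" belongs to "reports", which belongs to "user", etc.
# In the pool above, no object belongs to another object.
```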
Summary of the invention
The invention relates to a storage-system data access, processor and cache allocation management method. When a business host initiates data access to the storage system, the request is directed by default to the first cache of the first processor according to the ownership of the data volume. When the performance-tuning program of the storage system finds that the occupancy of the first processor and the first cache is high, the first processor temporarily enlists the second processor and second cache resources to respond to the data access request. The method mitigates the data-access performance degradation caused by excessive occupancy of the first processor and the first cache, thereby achieving dynamic balancing of data-access response across multiple processors and caches.
Description of the drawings
Fig. 1 is a structural schematic diagram of the storage-system data access, processor and cache allocation management method of the present invention.
Detailed description of the embodiments
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further elaborated below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein serve only to explain the present invention and are not intended to limit it.
Referring to Fig. 1, Fig. 1 is a structural schematic diagram of the storage-system data access, processor and cache allocation management method of the present invention.
A storage-system data access, processor and cache allocation management method, characterized in that the method involves data access (10), an access link (11), a data volume (12), a performance-tuning program (13), cache a (14a), cache b (14b), processor a (15a), processor b (15b), a storage system (16) and a business host (17). In this method the storage system (16) maps the data volume (12) to the business host (17) through the access link (11) to provide data access (10) services. When the business host (17) issues a data access (10) request, the request is by default responded to by processor a (15a) and cache a (14a), since the requested data volume (12) belongs to processor a (15a) and cache a (14a); but when the performance-tuning program (13) detects that the occupancy of processor a (15a) and cache a (14a) is significantly higher than that of processor b (15b) and cache b (14b), processor a (15a) temporarily enlists the resources of processor b (15b) and cache b (14b) to respond to the data access (10).
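The ownership-based routing with occupancy-triggered borrowing can be sketched as follows; the class and function names, the occupancy margin, and the monitoring interface are illustrative assumptions, not details from the patent:

```python
class ProcessorCache:
    """A processor paired with its cache; occupancy is a 0.0-1.0 load estimate,
    assumed to be kept up to date by a performance-tuning program."""
    def __init__(self, name):
        self.name = name
        self.occupancy = 0.0

    def respond(self, volume, request):
        return f"{self.name} served {request} on {volume}"

def route_access(volume_owner, peer, request, volume="vol0", margin=0.3):
    """Default to the owning processor/cache; temporarily borrow the peer's
    resources when the owner's occupancy exceeds the peer's by `margin`."""
    if volume_owner.occupancy - peer.occupancy > margin:
        # Imbalance detected: enlist the peer processor and cache.
        return peer.respond(volume, request)
    return volume_owner.respond(volume, request)

a = ProcessorCache("processor_a/cache_a")
b = ProcessorCache("processor_b/cache_b")
a.occupancy = 0.9   # owner heavily loaded
b.occupancy = 0.2
print(route_access(a, b, "read"))   # request is offloaded to processor b
```

When the occupancies are comparable, the same call returns to the default path through the owning processor, which matches the "not significantly higher" case described below the example in the text.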
A storage-system data access, processor and cache allocation management method, characterized in that the method applies when the data volume (12) belongs to processor a (15a) and cache a (14a), and applies equally when the data volume (12) instead belongs to processor b (15b) and cache b (14b).
A storage-system data access, processor and cache allocation management method, characterized in that when the performance-tuning program (13) observes that the data volume (12) belongs to processor a (15a) and cache a (14a), and that the occupancy of processor a (15a) and cache a (14a) is not significantly higher than that of processor b (15b) and cache b (14b), the data access (10) request is responded to by processor a (15a) and cache a (14a).
A storage-system data access, processor and cache allocation management method, characterized in that internal communication links exist between processor a (15a) and processor b (15b), and between cache a (14a) and cache b (14b).
The foregoing are merely preferred embodiments of the present invention and are not intended to limit it; any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention shall fall within its scope of protection.
Claims (4)
1. A storage-system data access, processor and cache allocation management method, characterized in that the method involves data access (10), an access link (11), a data volume (12), a performance-tuning program (13), cache a (14a), cache b (14b), processor a (15a), processor b (15b), a storage system (16) and a business host (17). In this method the storage system (16) maps the data volume (12) to the business host (17) through the access link (11) to provide data access (10) services. When the business host (17) issues a data access (10) request, the request is by default responded to by processor a (15a) and cache a (14a), since the requested data volume (12) belongs to processor a (15a) and cache a (14a); but when the performance-tuning program (13) detects that the occupancy of processor a (15a) and cache a (14a) is significantly higher than that of processor b (15b) and cache b (14b), processor a (15a) temporarily enlists the resources of processor b (15b) and cache b (14b) to respond to the data access (10).
2. The storage-system data access, processor and cache allocation management method according to claim 1, characterized in that the method applies when the data volume (12) belongs to processor a (15a) and cache a (14a), and applies equally when the data volume (12) instead belongs to processor b (15b) and cache b (14b).
3. The storage-system data access, processor and cache allocation management method according to claim 1, characterized in that when the performance-tuning program (13) observes that the data volume (12) belongs to processor a (15a) and cache a (14a), and that the occupancy of processor a (15a) and cache a (14a) is not significantly higher than that of processor b (15b) and cache b (14b), the data access (10) request is responded to by processor a (15a) and cache a (14a).
4. The storage-system data access, processor and cache allocation management method according to claim 1, characterized in that internal communication links exist between processor a (15a) and processor b (15b), and between cache a (14a) and cache b (14b).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810215973.5A CN108628547A (en) | 2018-03-16 | 2018-03-16 | Storage-system data access, processor and cache allocation management method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810215973.5A CN108628547A (en) | 2018-03-16 | 2018-03-16 | Storage-system data access, processor and cache allocation management method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108628547A true CN108628547A (en) | 2018-10-09 |
Family
ID=63706256
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810215973.5A (Pending) CN108628547A (en) | Storage-system data access, processor and cache allocation management method | 2018-03-16 | 2018-03-16 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108628547A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101866321A (en) * | 2010-06-13 | 2010-10-20 | 北京北大众志微系统科技有限责任公司 | Adjustment method and system for cache management strategy |
CN106326143A (en) * | 2015-06-18 | 2017-01-11 | 华为技术有限公司 | Cache distribution, data access and data sending method, processor and system |
US20170038999A1 (en) * | 2015-08-05 | 2017-02-09 | Qualcomm Incorporated | System and method for flush power aware low power mode control in a portable computing device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017050014A1 (en) | Data storage processing method and device | |
US20160132541A1 (en) | Efficient implementations for mapreduce systems | |
CN104317736B (en) | A kind of distributed file system multi-level buffer implementation method | |
CN108900626B (en) | Data storage method, device and system in cloud environment | |
CN106534308B (en) | Method and device for solving data block access hot spot in distributed storage system | |
WO2011000260A1 (en) | Method, apparatus and network system for managing memory resources in cluster system | |
US10802972B2 (en) | Distributed memory object apparatus and method enabling memory-speed data access for memory and storage semantics | |
CN1602480A (en) | Managing storage resources attached to a data network | |
US11762770B2 (en) | Cache memory management | |
US20220083281A1 (en) | Reading and writing of distributed block storage system | |
US20220066928A1 (en) | Pooled memory controller for thin-provisioning disaggregated memory | |
CN115129621A (en) | Memory management method, device, medium and memory management module | |
US11157191B2 (en) | Intra-device notational data movement system | |
US10802748B2 (en) | Cost-effective deployments of a PMEM-based DMO system | |
CN102375789A (en) | Non-buffer zero-copy method of universal network card and zero-copy system | |
US20210149718A1 (en) | Weighted resource cost matrix scheduler | |
CN108628547A (en) | Storage-system data access, processor and cache allocation management method | |
US8838902B2 (en) | Cache layer optimizations for virtualized environments | |
WO2024021470A1 (en) | Cross-region data scheduling method and apparatus, device, and storage medium | |
CN106326143A (en) | Cache distribution, data access and data sending method, processor and system | |
US11169720B1 (en) | System and method for creating on-demand virtual filesystem having virtual burst buffers created on the fly | |
CN107357532A (en) | A kind of new cache pre-reading implementation method of new cluster-based storage | |
CN114518962A (en) | Memory management method and device | |
US7831776B2 (en) | Dynamic allocation of home coherency engine tracker resources in link based computing system | |
US20200110818A1 (en) | Mapping first identifier to second identifier |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20181009 |