CN105589664A - Virtual storage high-speed transmission method - Google Patents

Virtual storage high-speed transmission method Download PDF

Info

Publication number
CN105589664A
CN105589664A (application CN201511006361.8A; granted as CN105589664B)
Authority
CN
China
Prior art keywords
data
actual location
storage host
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201511006361.8A
Other languages
Chinese (zh)
Other versions
CN105589664B (en)
Inventor
张捷
牟俊
许成林
杨剑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SICHUAN ZHONGDIAN VENUS INFORMATION TECHNOLOGY Co Ltd
Original Assignee
SICHUAN ZHONGDIAN VENUS INFORMATION TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SICHUAN ZHONGDIAN VENUS INFORMATION TECHNOLOGY Co Ltd filed Critical SICHUAN ZHONGDIAN VENUS INFORMATION TECHNOLOGY Co Ltd
Priority to CN201511006361.8A priority Critical patent/CN105589664B/en
Publication of CN105589664A publication Critical patent/CN105589664A/en
Application granted granted Critical
Publication of CN105589664B publication Critical patent/CN105589664B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061 Improving I/O performance
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0662 Virtualisation aspects
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention discloses a virtual storage high-speed transmission method in which a storage cluster is constructed and high-speed transmission of data is implemented through a multi-level caching mechanism. The method comprises: addressing by the client to obtain the storage host in the storage cluster where the data actually reside: the client sends volume name information to any storage host in the storage cluster; that storage host parses the volume name information, obtains from it the distribution information of its storage volumes, and returns the distribution information to the client; the client then locates the storage host where the data actually reside. The client reads/writes data from the cache of the storage host where the data actually reside, realizing data interaction; during data interaction, written data are first written to a faster hard disk, which is then synchronized with an ordinary disk. By upgrading the conventional data-transmission caching mechanism and performing high-speed data transmission with multiple caching mechanisms, the transmission speed can be improved to a large extent.

Description

Virtual storage high-speed transmission method
Technical field
The present invention relates to technical fields such as GlusterFS, InfiniBand, and Python; specifically, it is a virtual storage high-speed transmission method.
Background technology
At present, distributed virtualized storage technologies mainly include Ceph, GlusterFS, Swift, and the like, but the storage solutions differ. Ceph's transmission solution can separate hot and cold data, and real-time interactive data are all obtained from the cache; GlusterFS has a caching mechanism, but it is not a multi-level caching mechanism, so its improvement of storage transmission performance is not obvious.
However, the existing technology has the following shortcoming:
Virtual storage (GlusterFS) is a storage cluster built on Linux; the existing solution adopts a single caching mechanism, which brings only limited performance improvement.
Summary of the invention
The object of the present invention is to provide a virtual storage high-speed transmission method that solves the drawback of limited performance improvement when the prior art adopts a single caching mechanism. By upgrading the existing data-transmission caching mechanism and adopting multiple caching mechanisms for high-speed data transmission, the transmission speed can be improved to a large extent.
The present invention is achieved through the following technical solution: a virtual storage high-speed transmission method which, by building a storage cluster, uses a multi-level caching mechanism to realize high-speed transmission of data, comprising the following steps:
1) The client performs addressing to obtain the storage host in the storage cluster where the data actually reside;
2) After step 1), the client reads/writes data from the cache of the storage host where the data actually reside, realizing data interaction.
Further, to better implement the present invention, a multi-level cache mechanism is adopted to improve storage I/O performance, specifically as follows: during data interaction, written data are first written to a faster hard disk, and the faster hard disk is then synchronized with the ordinary disk.
Further, to better implement the present invention, the client can look up the distributed storage through a distributed elastic hash algorithm, specifically as follows: step 1) comprises the following steps:
1-1) The client sends volume name information to any storage host in the storage cluster;
1-2) After receiving the volume name information, the storage host parses it, obtains from it the distribution information of the storage volumes of this storage host, and returns the distribution information of the storage volumes to the client;
1-3) After obtaining the distribution information of the storage volumes, the client locates the storage host where the data actually reside.
Further, to better implement the present invention, a write-cache mechanism can be used to realize data writing and thereby improve performance, specifically as follows: step 2) comprises the following steps:
2-1) The client writes data into the cache of the storage host where the data actually reside, comprising the following concrete steps:
2-1-1) After receiving the returned location information of the storage host where the data actually reside, the client sends a write I/O operation to that storage host;
2-1-2) When the virtual storage engine in that storage host receives the write I/O operation sent by the client, it buffers the write I/O operation in the cache of the faster hard disk, completing the write I/O operation on this node.
Further, to better implement the present invention, the following arrangement is adopted: the virtual storage engine in the storage host where the data actually reside also periodically writes the batch of write-I/O data buffered in the faster hard disk's cache to the ordinary hard disk.
Further, to better implement the present invention, a read-cache mechanism can be used to realize data reading and thereby improve performance, specifically as follows: step 2) further comprises the following steps:
2-2) The client reads data from the cache of the storage host where the data actually reside, comprising the following concrete steps:
2-2-1) The virtual storage engine of the storage host where the data actually reside receives the read I/O operation sent by the client;
2-2-2) The virtual storage engine searches the in-memory "read cache" of that storage host for the required I/O data; if present, it returns them directly and at the same time moves the required I/O data to the head of the LRU queue of the in-memory "read cache"; otherwise it executes step 2-2-3);
2-2-3) It searches the "read cache" of the faster hard disk in that storage host for the required I/O data; if present, it returns them directly and increments their hot-access factor; otherwise it executes step 2-2-4);
2-2-4) It searches the "write cache" of the faster hard disk in that storage host for the required I/O data; if present, it returns them directly and increments their hot-access factor, and if the hot-access factor of the required I/O data reaches the threshold, it also buffers the required I/O data in the "read cache" of the faster hard disk in that storage host; if not present, it executes step 2-2-5);
2-2-5) It finds the required I/O data on the ordinary hard disk in that storage host and returns them, incrementing their hot-access factor; if the hot-access factor reaches the threshold, the required I/O data are buffered in the "read cache" of the faster hard disk in that storage host.
Further, to better implement the present invention and further improve data access speed, the following arrangement is adopted: the faster hard disk is an SSD.
Further, to better implement the present invention, the following arrangement is adopted: the network transmission between the client and the storage cluster is implemented with InfiniBand using the RDMA protocol.
Compared with the prior art, the present invention has the following advantages and beneficial effects:
(1) The present invention solves the drawback of limited performance improvement when the prior art adopts a single caching mechanism; by upgrading the existing data-transmission caching mechanism and adopting multiple caching mechanisms for high-speed data transmission, the transmission speed can be improved to a large extent.
(2) The present invention can realize fast transmission for virtual storage.
(3) The present invention features fast transmission speed, lossless data transmission, data interaction without bottlenecks, and high transmission reliability.
Brief description of the drawings
Fig. 1 is a schematic diagram of the client writing data into the cache of the storage host where the data actually reside.
Fig. 2 is a schematic diagram of the client reading data from the cache of the storage host where the data actually reside.
Detailed description of the invention
The present invention is described in further detail below in conjunction with embodiments, but the embodiments of the present invention are not limited thereto.
Virtual storage engine: modified on the basis of GlusterFS, adding a multi-level caching mechanism for data transmission.
Storage pool: a description of the storage cluster and an expression of storage specifications, convenient for display and system processing.
RDMA (Remote Direct Memory Access): a technology that addresses the latency produced by server-side data processing during network transmission. RDMA imports data directly into a computer's memory area over the network, moving data quickly from one system into the memory of a remote system without any involvement of the operating system, so it consumes little of the computer's processing capacity. It eliminates external memory copies and context-switch operations, freeing memory bandwidth and CPU cycles to improve application system performance.
RPC (Remote Procedure Call): a protocol for requesting a service from a program on a remote computer over the network without needing to understand the underlying network technology. The RPC protocol assumes the existence of some transport protocol, such as TCP or UDP, to carry the information data between communicating programs.
Elastic hash algorithm: an elastic hash algorithm maps a binary value of arbitrary length to a shorter binary value of fixed length; this small binary value is called a hash value. A hash value is a unique and extremely compact numeric representation of a piece of data. If a section of plaintext is hashed and even one letter of it is changed, the subsequent hash will produce a different value. Using an elastic hash algorithm to locate file positions requires no metadata index, eliminates single points of failure and disk I/O bottlenecks, and scales linearly.
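The hash-based file location described above can be sketched in a few lines of Python. The hash function, brick list, and file names below are illustrative assumptions for this sketch, not the algorithm actually used by GlusterFS:

```python
import hashlib

def locate_brick(path: str, bricks: list) -> str:
    """Map a file path to a brick (storage location) by hashing the path
    itself, so no metadata index or lookup server is needed."""
    digest = hashlib.md5(path.encode()).hexdigest()  # fixed-length hash value
    return bricks[int(digest, 16) % len(bricks)]     # deterministic placement

bricks = ["host1:/brick", "host2:/brick", "host3:/brick"]
# Every client computes the same location independently, and changing even
# one letter of the path yields a completely different hash value.
a = locate_brick("/vol/report.txt", bricks)
b = locate_brick("/vol/report.txt", bricks)
assert a == b and a in bricks
```

Because placement is a pure function of the name, any node can resolve a file's location locally, which is what eliminates the metadata-server single point of failure mentioned above.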
LRU (Least Recently Used): a page-replacement algorithm for memory management. Data blocks (memory blocks) that are in memory but have not been used recently are marked LRU; based on this, the operating system decides which data to move out of memory to free space for loading other data.
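As a rough illustration (not the operating system's actual implementation), an LRU cache can be built on an ordered dictionary: each hit moves a block to the head of the queue, and the block at the tail is evicted when space is needed:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: the least recently used block is evicted first."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.blocks = OrderedDict()  # most recently used kept last

    def get(self, key):
        if key not in self.blocks:
            return None
        self.blocks.move_to_end(key)         # hit: move to head of LRU queue
        return self.blocks[key]

    def put(self, key, value):
        if key in self.blocks:
            self.blocks.move_to_end(key)
        self.blocks[key] = value
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("blk1", b"a")
cache.put("blk2", b"b")
cache.get("blk1")        # blk1 becomes most recently used
cache.put("blk3", b"c")  # capacity exceeded: blk2 is evicted
assert cache.get("blk2") is None and cache.get("blk1") == b"a"
```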
Client: an abbreviation for the user side; it can represent a user, a certain service, or a certain system.
Cache: a buffer memory.
Virtual storage (GlusterFS): a storage cluster built on Linux.
The InfiniBand architecture is a "switched fabric" technology supporting multiple concurrent links, in which each link can run at 2.5 Gbps. With a single link this architecture delivers 500 MB/s; with four links, 2 GB/s; and with twelve links, up to 6 GB/s.
InfiniBand technology is not intended for general network connectivity; its main design purpose is to solve connectivity problems on the server side. Accordingly, InfiniBand is applied to communication between servers (for example replication and distributed work), between servers and storage devices (such as SANs and direct-attached storage), and between servers and networks (such as LANs, WANs, and the Internet).
Unlike the I/O subsystems of current computers, InfiniBand is a fully functional network communication system. The InfiniBand trade organization calls this new bus structure an I/O network and likens it to a switch, because a given message finds the path to its destination address by means of control information. InfiniBand uses the 128-bit address space of IPv6, so it can support a nearly unlimited number of devices.
When data are transmitted over InfiniBand, they are transmitted as packets, which are combined into messages. A message's mode of operation may be an RDMA read or write, a message received over a channel, or a multicast transmission. As mainframe users are familiar with, all data transfers begin and end at a channel adapter. Each processor (for example a PC or a data-center server) has a host channel adapter, and each peripheral device has a target channel adapter. Exchanging information through these adapters guarantees that information is transmitted reliably and effectively under a given quality-of-service level.
Embodiment 1:
A virtual storage high-speed transmission method which, by building a storage cluster, uses a multi-level caching mechanism to realize high-speed transmission of data, comprising the following steps:
1) The client performs addressing to obtain the storage host in the storage cluster where the data actually reside;
2) After step 1), the client reads/writes data from the cache of the storage host where the data actually reside, realizing data interaction.
Embodiment 2:
This embodiment is a further optimization on the basis of the above embodiment. Further, to better implement the present invention, a multi-level cache mechanism is adopted to improve storage I/O performance, specifically as follows: during data interaction, written data are first written to a faster hard disk, and the faster hard disk is then synchronized with the ordinary disk.
Embodiment 3:
This embodiment is a further optimization on the basis of any of the above embodiments. Further, to better implement the present invention, the client can look up the distributed storage through a distributed elastic hash algorithm, specifically as follows: step 1) comprises the following steps:
1-1) The client sends volume name information to any storage host in the storage cluster;
1-2) After receiving the volume name information, the storage host parses it, obtains from it the distribution information of the storage volumes of this storage host, and returns the distribution information of the storage volumes to the client;
1-3) After obtaining the distribution information of the storage volumes, the client locates the storage host where the data actually reside.
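Steps 1-1) to 1-3) amount to a small request/response exchange. The sketch below illustrates that flow under assumed message formats; the actual volume name information and distribution information formats are internal to the storage engine and are not specified here:

```python
# Hypothetical sketch of the addressing exchange in steps 1-1) to 1-3).
# The table layout and host addresses are illustrative assumptions.

CLUSTER = {  # volume distribution table held by every storage host
    "vol1": {"host": "192.168.0.11", "brick": "/data/brick1"},
    "vol2": {"host": "192.168.0.12", "brick": "/data/brick2"},
}

def storage_host_parse(volume_name: str) -> dict:
    """Step 1-2): any storage host parses the volume name information and
    returns the distribution information of the storage volume."""
    return CLUSTER[volume_name]

def client_locate(volume_name: str) -> str:
    """Steps 1-1) and 1-3): the client sends the volume name to an
    arbitrary host, then locates the host where the data actually reside."""
    distribution = storage_host_parse(volume_name)  # 1-1): send and receive
    return distribution["host"]                     # 1-3): locate real host

host = client_locate("vol1")
assert host == "192.168.0.11"
```

The point of the exchange is that any host can answer, because every host holds the same distribution information; the client needs only one round trip before talking directly to the host that owns the data.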
Embodiment 4:
This embodiment is a further optimization on the basis of any of the above embodiments. Further, to better implement the present invention, a write-cache mechanism can be used to realize data writing and thereby improve performance, specifically as follows: step 2) comprises the following steps:
2-1) The client writes data into the cache of the storage host where the data actually reside, comprising the following concrete steps:
2-1-1) After receiving the returned location information of the storage host where the data actually reside, the client sends a write I/O operation to that storage host;
2-1-2) When the virtual storage engine in that storage host receives the write I/O operation sent by the client, it buffers the write I/O operation in the cache of the faster hard disk, completing the write I/O operation on this node.
Embodiment 5:
This embodiment is a further optimization on the basis of the above embodiment. Further, to better implement the present invention, hot and cold data can be separated so that only data being read and written in real time interact with the faster hard disk, specifically as follows: the virtual storage engine in the storage host where the data actually reside also periodically writes the batch of write-I/O data buffered in the faster hard disk's cache to the ordinary hard disk. Periodically writing this buffered data to the ordinary hard disk separates hot and cold data, so that only data being read and written in real time interact with the faster SSD, improving the write performance of the storage; in this way a single faster hard disk suffices. Without the multi-level caching mechanism, replacing all ordinary disks with faster hard disks (SSDs) would improve performance but incur an enormous hardware cost; by adopting the multi-level caching mechanism, the present invention therefore effectively saves hardware cost.
The period is preferably set to 100 ms: a flush is triggered every 100 ms, and whenever the cached capacity reaches 80% of the faster hard disk's (SSD's) total capacity, a continued write to the ordinary hard disk is also triggered; that is, the batch of write-I/O data buffered in the faster hard disk's cache is written to the ordinary hard disk.
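The write path and its periodic flush can be sketched as follows, using plain dictionaries as stand-ins for the SSD cache and the ordinary disk and an explicit call in place of the 100 ms timer; the real engine is a GlusterFS modification, so this is only an illustration of the rule:

```python
class WriteCache:
    """Sketch of steps 2-1-1)/2-1-2) plus the periodic flush: writes land
    in the faster disk's cache first and are flushed to the ordinary disk
    in batches, on the 100 ms tick or when the cache reaches 80% full."""
    def __init__(self, ssd_capacity: int):
        self.ssd_cache = {}      # stand-in for the faster (SSD) cache
        self.hdd = {}            # stand-in for the ordinary hard disk
        self.ssd_capacity = ssd_capacity

    def write_io(self, key, data):
        self.ssd_cache[key] = data             # 2-1-2): buffer in SSD cache
        if len(self.ssd_cache) >= 0.8 * self.ssd_capacity:
            self.flush()                       # 80% threshold: flush early

    def flush(self):
        """Triggered every 100 ms in the described scheme."""
        self.hdd.update(self.ssd_cache)        # batch write to ordinary disk
        self.ssd_cache.clear()

wc = WriteCache(ssd_capacity=10)
for i in range(8):                  # the 8th write reaches 80% of capacity
    wc.write_io("blk%d" % i, b"x")
assert wc.ssd_cache == {} and len(wc.hdd) == 8  # early flush happened
wc.write_io("blk8", b"y")
wc.flush()                          # periodic 100 ms tick
assert "blk8" in wc.hdd
```

The client's write completes as soon as the data sit in the fast cache; only the batched flush touches the slow disk, which is where the write-performance gain comes from.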
Embodiment 6:
This embodiment is a further optimization on the basis of any of the above embodiments. Further, to better implement the present invention, a read-cache mechanism can be used to realize data reading and thereby improve performance, specifically as follows: step 2) further comprises the following steps:
2-2) The client reads data from the cache of the storage host where the data actually reside, comprising the following concrete steps:
2-2-1) The virtual storage engine of the storage host where the data actually reside receives the read I/O operation sent by the client;
2-2-2) The virtual storage engine searches the in-memory "read cache" of that storage host for the required I/O data; if present, it returns them directly and at the same time moves the required I/O data to the head of the LRU queue of the in-memory "read cache"; otherwise it executes step 2-2-3);
2-2-3) It searches the "read cache" of the faster hard disk in that storage host for the required I/O data; if present, it returns them directly and increments their hot-access factor; otherwise it executes step 2-2-4);
2-2-4) It searches the "write cache" of the faster hard disk in that storage host for the required I/O data; if present, it returns them directly and increments their hot-access factor, and if the hot-access factor of the required I/O data reaches the threshold, it also buffers the required I/O data in the "read cache" of the faster hard disk in that storage host; if not present, it executes step 2-2-5);
2-2-5) It finds the required I/O data on the ordinary hard disk in that storage host and returns them, incrementing their hot-access factor; if the hot-access factor reaches the threshold, the required I/O data are buffered in the "read cache" of the faster hard disk in that storage host.
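The four-level lookup in steps 2-2-2) to 2-2-5) can be summarized as: memory read cache, then SSD read cache, then SSD write cache, then HDD, with data promoted into the SSD read cache once its hot-access factor reaches a threshold. The following is a simplified sketch under those assumptions (the real caches live on distinct devices, not Python dicts, and the in-memory LRU bookkeeping is omitted):

```python
HOT_THRESHOLD = 3  # illustrative hot-access threshold

class ReadPath:
    """Sketch of steps 2-2-2) to 2-2-5): memory read cache -> SSD read
    cache -> SSD write cache -> HDD, with hot-data promotion."""
    def __init__(self, hdd: dict):
        self.mem_read_cache = {}   # level 1: in-memory "read cache"
        self.ssd_read_cache = {}   # level 2: SSD "read cache"
        self.ssd_write_cache = {}  # level 3: SSD "write cache"
        self.hdd = hdd             # level 4: ordinary hard disk
        self.hot = {}              # hot-access factor per block

    def read_io(self, key):
        if key in self.mem_read_cache:          # 2-2-2): memory hit
            return self.mem_read_cache[key]
        self.hot[key] = self.hot.get(key, 0) + 1
        for cache in (self.ssd_read_cache, self.ssd_write_cache):
            if key in cache:                    # 2-2-3) / 2-2-4): SSD hit
                if self.hot[key] >= HOT_THRESHOLD:
                    self.ssd_read_cache[key] = cache[key]
                return cache[key]
        data = self.hdd[key]                    # 2-2-5): read ordinary disk
        if self.hot[key] >= HOT_THRESHOLD:      # promote hot data to SSD
            self.ssd_read_cache[key] = data
        return data

rp = ReadPath(hdd={"blk0": b"cold"})
for _ in range(3):                  # the third read reaches the threshold
    assert rp.read_io("blk0") == b"cold"
assert "blk0" in rp.ssd_read_cache  # hot data now served from the SSD cache
```

Each miss falls through to a slower level, so cold data cost one HDD read while repeatedly accessed data migrate upward and are served from fast media.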
Embodiment 7:
This embodiment is a further optimization on the basis of any of the above embodiments. Further, to better implement the present invention and further improve data access speed, the following arrangement is adopted: the faster hard disk is an SSD.
Embodiment 8:
This embodiment is a further optimization on the basis of any of the above embodiments. Further, to better implement the present invention, the following arrangement is adopted: the network transmission between the client and the storage cluster is implemented with InfiniBand using the RDMA protocol.
Embodiment 9:
This embodiment is a further optimization on the basis of any of the above embodiments. As shown in Fig. 1 and Fig. 2, the virtual storage high-speed transmission method builds a storage cluster and uses a multi-level caching mechanism to realize high-speed transmission of data, comprising the following steps:
1) The client looks up the distributed storage through the addressing mechanism of a distributed elastic hash algorithm, obtains the storage host in the storage cluster where the data actually reside, and locates it, obtaining the located storage host. This comprises the following concrete steps:
1-1) The client sends volume name information to any storage host in the storage cluster;
1-2) After receiving the volume name information, the storage host parses it, obtains from it the distribution information of the storage volumes of this storage host, and returns the distribution information of the storage volumes to the client;
1-3) After obtaining the distribution information of the storage volumes, the client locates the storage host where the data actually reside, obtaining the located storage host.
2) After step 1), the client reads/writes data from the cache of the located storage host, realizing data interaction, comprising the following concrete steps:
2-1) The client writes data into the cache of the located storage host, comprising the following concrete steps:
2-1-1) After receiving the returned location information of the located storage host, the client sends a write I/O operation to the located storage host;
2-1-2) When the virtual storage engine in the located storage host receives the write I/O operation sent by the client: first, it buffers the write I/O operation in the SSD's cache, completing the write I/O operation on this node; second, the virtual storage engine in the located storage host also periodically writes the batch of write-I/O data buffered in the SSD's cache to the ordinary hard disk (HDD).
Periodically writing the buffered write-I/O data from the faster hard disk's cache to the ordinary hard disk separates hot and cold data, so that only data being read and written in real time interact with the faster SSD, improving the write performance of the storage; in this way a single faster hard disk suffices. Without the multi-level caching mechanism, replacing all ordinary disks with faster hard disks (SSDs) would improve performance but incur an enormous hardware cost; by adopting the multi-level caching mechanism, the present invention therefore effectively saves hardware cost.
The period is preferably set to 100 ms: a flush is triggered every 100 ms, and whenever the cached capacity reaches 80% of the faster hard disk's (SSD's) total capacity, a continued write to the ordinary hard disk is also triggered; that is, the batch of write-I/O data buffered in the faster hard disk's cache is written to the ordinary hard disk.
2-2) The client reads data from the cache of the located storage host. Reading from the cache is layered: the first layer is the in-memory cache, which caches data using the LRU mechanism; the second layer is the SSD cache, which uses a hot-read mechanism: the system counts each read of a piece of data and accumulates its hot-access factor, and when the hot-access factor reaches a threshold the system automatically buffers the required I/O data in the SSD while moving long-unaccessed data out of the SSD. This comprises the following concrete steps:
2-2-1) The virtual storage engine of the located storage host receives the read I/O operation sent by the client;
2-2-2) The virtual storage engine of the located storage host searches the in-memory "read cache" of the located storage host for the required I/O data (step 1); if present, it returns them directly and at the same time moves the required I/O data to the head of the LRU queue of the in-memory "read cache"; otherwise it executes step 2-2-3);
2-2-3) It searches the SSD "read cache" in the located storage host for the required I/O data (step 2); if present, it returns them directly and increments their hot-access factor; otherwise it executes step 2-2-4);
2-2-4) It searches the SSD "write cache" in the located storage host for the required I/O data (step 3); if present, it returns them directly and increments their hot-access factor, and if the hot-access factor of the required I/O data reaches the threshold, it also buffers the required I/O data in the SSD "read cache" of the located storage host; if not present, it executes step 2-2-5);
2-2-5) It finds the required I/O data on the ordinary hard disk (HDD) in the located storage host and returns them (step 4), incrementing their hot-access factor; if the hot-access factor reaches the threshold, the required I/O data are buffered in the SSD "read cache" of the located storage host.
During data interaction, written data are first written to the faster hard disk, and the faster hard disk is then synchronized with the ordinary disk.
The network transmission used for data interaction between the client and the storage cluster is InfiniBand with the RDMA protocol, specifically 56 Gbps FDR InfiniBand, which has the following characteristics: ultra-high-speed interconnection between nodes; a standard, mature multi-stage fat-tree network topology with smooth capacity expansion; an approximately non-blocking communication network, so data exchange has no bottleneck; nanosecond-level communication latency, so computing and storage information is delivered in time; lossless network QoS (quality of service), so data are transmitted without loss; and active/standby multi-plane port communication, which improves transmission reliability. QoS (Quality of Service) means that a network can use various underlying technologies to provide better service capability for designated network traffic; it is a kind of network security mechanism and a technology for solving problems such as network latency and congestion.
The above are merely preferred embodiments of the present invention and do not limit the present invention in any form; any simple modification or equivalent variation of the above embodiments made according to the technical essence of the present invention falls within the protection scope of the present invention.

Claims (9)

1. A virtual storage high-speed transmission method, which achieves high-speed data transmission by building a storage cluster and using a multi-level cache mechanism, characterized in that it comprises the following steps:
1) the client performs addressing to obtain the storage host at the actual location of the data in the storage cluster;
2) after step 1), the client reads/writes data from the cache of the storage host at the actual location of the data, realizing data interaction.
2. The virtual storage high-speed transmission method according to claim 1, characterized in that: during data interaction, written data are first written to the faster disk, and the faster disk is then synchronized with the ordinary disk.
3. The virtual storage high-speed transmission method according to claim 2, characterized in that said step 1) comprises the following steps:
1-1) the client sends file label information to an arbitrary storage host in said storage cluster;
1-2) after receiving the file label information, said storage host parses it, obtains therefrom the distribution information of the storage volume of this storage host, and then returns the distribution information of said storage volume to the client;
1-3) after obtaining the distribution information of said storage volume, the client locates the storage host at the actual location of said data.
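The addressing flow of steps 1-1) to 1-3) can be sketched as follows. The hash-based placement rule, the host list, and all function names are assumptions for illustration only; the patent does not specify how the storage-volume distribution information is computed:

```python
import hashlib

HOSTS = ["host-a", "host-b", "host-c"]  # hypothetical cluster members

def parse_file_label(label: str) -> dict:
    """Step 1-2: an arbitrary host parses the file label and derives the
    storage-volume distribution info (here: a hash-based placement rule)."""
    digest = int(hashlib.md5(label.encode()).hexdigest(), 16)
    return {"label": label, "volume": digest % len(HOSTS)}

def locate(label: str) -> str:
    """Steps 1-1 and 1-3: the client sends the file label, receives the
    distribution info, and resolves the host actually holding the data."""
    info = parse_file_label(label)      # returned by the arbitrary host
    return HOSTS[info["volume"]]        # storage host at the data's actual location
```

Because the placement rule is deterministic, every client resolves the same label to the same storage host without consulting a central directory.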
4. The virtual storage high-speed transmission method according to claim 1, 2 or 3, characterized in that said step 2) comprises the following steps:
2-1) the client writes data to the cache of the storage host at the actual location of the data, comprising the following concrete steps:
2-1-1) after receiving the returned location information of the storage host at the actual location of said data, the client sends a write IO operation to the storage host at the actual location of the data;
2-1-2) when the virtual storage engine in the storage host at the actual location of said data receives the write IO operation sent by the client, it buffers the write IO operation in the cache of the faster disk, completing the write IO operation on this node.
5. The virtual storage high-speed transmission method according to claim 4, characterized in that: the virtual storage engine in the storage host at the actual location of said data also periodically writes the batched write-IO data buffered in the cache of the faster disk to the ordinary hard disk.
6. The virtual storage high-speed transmission method according to claim 4, characterized in that said step 2) further comprises the following steps:
2-2) the client reads data from the cache of the storage host at the actual location of the data, comprising the following concrete steps:
2-2-1) the virtual storage engine of the storage host at the actual location of said data receives the read IO operation sent by the client;
2-2-2) the virtual storage engine of the storage host at the actual location of said data searches the memory "read cache" of that storage host for the required IO data; if found, the data are returned directly, and at the same time the required IO data are moved to the LRU head of the memory "read cache" of the storage host at the actual location of the data; otherwise, step 2-2-3) is executed;
2-2-3) the faster-disk "read cache" of the storage host at the actual location of the data is searched for the required IO data; if found, the data are returned directly and the hotspot access factor of the required IO data is incremented at the same time; otherwise, step 2-2-4) is executed;
2-2-4) the faster-disk "write cache" of the storage host at the actual location of the data is searched for the required IO data; if found, the data are returned directly and the hotspot access factor of the required IO data is incremented at the same time; if the hotspot access factor of the required IO data reaches the threshold, the required IO data are also buffered in the faster-disk "read cache" of the storage host at the actual location of the data; if not found, step 2-2-5) is executed;
2-2-5) the required IO data are found on the ordinary hard disk of the storage host at the actual location of the data and returned, and the hotspot access factor of the required IO data is incremented at the same time; if the hotspot access factor reaches the threshold, the required IO data are buffered in the faster-disk "read cache" of the storage host at the actual location of the data.
7. The virtual storage high-speed transmission method according to claim 1, 2, 3 or 5, characterized in that said step 2) further comprises the following steps:
2-2) the client reads data from the cache of the storage host at the actual location of the data, comprising the following concrete steps:
2-2-1) the virtual storage engine of the storage host at the actual location of said data receives the read IO operation sent by the client;
2-2-2) the virtual storage engine of the storage host at the actual location of said data searches the memory "read cache" of that storage host for the required IO data; if found, the data are returned directly, and at the same time the required IO data are moved to the LRU head of the memory "read cache" of the storage host at the actual location of the data; otherwise, step 2-2-3) is executed;
2-2-3) the faster-disk "read cache" of the storage host at the actual location of the data is searched for the required IO data; if found, the data are returned directly and the hotspot access factor of the required IO data is incremented at the same time; otherwise, step 2-2-4) is executed;
2-2-4) the faster-disk "write cache" of the storage host at the actual location of the data is searched for the required IO data; if found, the data are returned directly and the hotspot access factor of the required IO data is incremented at the same time; if the hotspot access factor of the required IO data reaches the threshold, the required IO data are also buffered in the faster-disk "read cache" of the storage host at the actual location of the data; if not found, step 2-2-5) is executed;
2-2-5) the required IO data are found on the ordinary hard disk and returned, and the hotspot access factor of the required IO data is incremented at the same time; if the hotspot access factor reaches the threshold, the required IO data are buffered in the faster-disk "read cache" of the storage host at the actual location of the data.
8. The virtual storage high-speed transmission method according to claim 1, 2, 3, 5 or 6, characterized in that: said faster disk is an SSD.
9. The virtual storage high-speed transmission method according to claim 1, 2, 3, 5 or 6, characterized in that: network transmission between said client and the storage cluster is realized using InfiniBand with the RDMA protocol.
CN201511006361.8A 2015-12-29 2015-12-29 Virtual memory high speed transmission method Active CN105589664B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201511006361.8A CN105589664B (en) 2015-12-29 2015-12-29 Virtual memory high speed transmission method


Publications (2)

Publication Number Publication Date
CN105589664A true CN105589664A (en) 2016-05-18
CN105589664B CN105589664B (en) 2018-07-31

Family

ID=55929282

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201511006361.8A Active CN105589664B (en) 2015-12-29 2015-12-29 Virtual memory high speed transmission method

Country Status (1)

Country Link
CN (1) CN105589664B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106657356A (en) * 2016-12-29 2017-05-10 郑州云海信息技术有限公司 Data writing method and device for cloud storage system, and cloud storage system
CN107168657A (en) * 2017-06-15 2017-09-15 深圳市云舒网络技术有限公司 Virtual disk hierarchical cache design method based on distributed block storage
CN108009008A (en) * 2016-10-28 2018-05-08 北京市商汤科技开发有限公司 Data processing method and system, electronic equipment
CN109324759A (en) * 2018-09-17 2019-02-12 山东浪潮云投信息科技有限公司 Processing terminal of a big data platform and methods of reading and writing data
CN109656467A (en) * 2017-10-11 2019-04-19 阿里巴巴集团控股有限公司 Data transmission system of a cloud network, data interaction method and device, and electronic equipment
CN110191194A (en) * 2019-06-13 2019-08-30 华中科技大学 Distributed file system data transmission method and system based on an RDMA network
US10809937B2 (en) 2019-02-25 2020-10-20 International Business Machines Corporation Increasing the speed of data migration
CN112817887A (en) * 2021-02-24 2021-05-18 上海交通大学 Far memory access optimization method and system under separated combined architecture

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6557075B1 (en) * 1999-08-31 2003-04-29 Andrew Maher Maximizing throughput in a pairwise-redundant storage system
CN102291466A (en) * 2011-09-05 2011-12-21 浪潮电子信息产业股份有限公司 Method for optimizing cluster storage network resource configuration
CN103092532A (en) * 2013-01-21 2013-05-08 浪潮(北京)电子信息产业有限公司 Storage method of cluster storage system
CN103167036A (en) * 2013-01-28 2013-06-19 浙江大学 Raster data access method based on distributed multi-stage cache system


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108009008A (en) * 2016-10-28 2018-05-08 北京市商汤科技开发有限公司 Data processing method and system, electronic equipment
CN106657356A (en) * 2016-12-29 2017-05-10 郑州云海信息技术有限公司 Data writing method and device for cloud storage system, and cloud storage system
CN107168657A (en) * 2017-06-15 2017-09-15 深圳市云舒网络技术有限公司 Virtual disk hierarchical cache design method based on distributed block storage
CN107168657B (en) * 2017-06-15 2020-05-26 深圳市云舒网络技术有限公司 Virtual disk hierarchical cache design method based on distributed block storage
CN109656467A (en) * 2017-10-11 2019-04-19 阿里巴巴集团控股有限公司 Data transmission system of a cloud network, data interaction method and device, and electronic equipment
CN109656467B (en) * 2017-10-11 2021-12-03 阿里巴巴集团控股有限公司 Data transmission system of cloud network, data interaction method and device and electronic equipment
CN109324759A (en) * 2018-09-17 2019-02-12 山东浪潮云投信息科技有限公司 Processing terminal of a big data platform and methods of reading and writing data
US10809937B2 (en) 2019-02-25 2020-10-20 International Business Machines Corporation Increasing the speed of data migration
CN110191194A (en) * 2019-06-13 2019-08-30 华中科技大学 Distributed file system data transmission method and system based on an RDMA network
CN110191194B (en) * 2019-06-13 2020-07-03 华中科技大学 RDMA (remote direct memory Access) network-based distributed file system data transmission method and system
CN112817887A (en) * 2021-02-24 2021-05-18 上海交通大学 Far memory access optimization method and system under separated combined architecture

Also Published As

Publication number Publication date
CN105589664B (en) 2018-07-31

Similar Documents

Publication Publication Date Title
CN105589664A (en) Virtual storage high-speed transmission method
US11467975B2 (en) Data processing method and NVMe storage device
CN103647807B (en) A kind of method for caching information, device and communication equipment
US9304928B2 (en) Systems and methods for adaptive prefetching
US8090790B2 (en) Method and system for splicing remote direct memory access (RDMA) transactions in an RDMA-aware system
US9244980B1 (en) Strategies for pushing out database blocks from cache
US20150127691A1 (en) Efficient implementations for mapreduce systems
CN101388824B (en) File reading method and system under sliced memory mode in cluster system
US20120102245A1 (en) Unified i/o adapter
CN103870312B (en) Establish the method and device that virtual machine shares memory buffers
US20190087352A1 (en) Method and system transmitting data between storage devices over peer-to-peer (p2p) connections of pci-express
CN114201421B (en) Data stream processing method, storage control node and readable storage medium
KR102238525B1 (en) High bandwidth peer-to-peer switched key-value caching
CN107229415A (en) A kind of data write method, data read method and relevant device, system
CN103329111A (en) Data processing method, device and system based on block storage
CN101789976A (en) Embedded network storage system and method thereof
CN103905439A (en) Webpage browsing accelerating method based on home gateway
KR20160060550A (en) Page cache device and method for efficient mapping
US20170039140A1 (en) Network storage device for use in flash memory and processing method therefor
WO2023165543A1 (en) Shared cache management method and apparatus, and storage medium
US8566521B2 (en) Implementing cache offloading
US20140164553A1 (en) Host ethernet adapter frame forwarding
WO2016201998A1 (en) Cache distribution, data access and data sending methods, processors, and system
CN102868684A (en) Fiber channel target and realizing method thereof
Dalessandro et al. iSER storage target for object-based storage devices

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant