CN104991745B - A storage system data writing method and system - Google Patents

A storage system data writing method and system

Info

Publication number
CN104991745B
CN104991745B
Authority
CN
China
Prior art keywords
write
data
request
flush
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510432216.XA
Other languages
Chinese (zh)
Other versions
CN104991745A (en)
Inventor
刘友生
张书宁
卓宝特
闫永刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur Beijing Electronic Information Industry Co Ltd
Original Assignee
Inspur Beijing Electronic Information Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Beijing Electronic Information Industry Co Ltd filed Critical Inspur Beijing Electronic Information Industry Co Ltd
Priority to CN201510432216.XA priority Critical patent/CN104991745B/en
Publication of CN104991745A publication Critical patent/CN104991745A/en
Application granted granted Critical
Publication of CN104991745B publication Critical patent/CN104991745B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a storage system data writing method and system. A data receiving and protocol parsing subsystem receives a write IO request from a compute node; the write IO request comprises at least one communication message, and each communication message comprises a protocol field and a payload field. While the write IO request is being received, the protocol field data and the payload field data of each communication message are stored in different locations of a high-speed storage medium, so that the payload field data of the communication message is stored separately. After the write IO request has been received in full, it is forwarded to a data cache subsystem. Upon receiving the write IO request, the data cache subsystem returns write-completion information to the compute node and then performs the write operation for the write IO request. By means of the invention, the data write performance of a storage system can be improved on the same hardware platform.

Description

A storage system data writing method and system
Technical field
The present invention relates to the technical field of computer data storage, and in particular to a storage system data writing method and system.
Background technology
With the development of computer technology, the storage system in the field of mass big-data storage has become an independent device separated from the computing system. Fig. 1 is an architecture diagram of a computing system for big-data storage according to the present invention. As shown in Fig. 1, the entire computing system consists of three parts: a computing subsystem, a transmission subsystem and a storage subsystem. In this situation, as the computing capability of the computing subsystem increases, ever higher performance requirements are placed on the storage subsystem.
The main performance indices of a storage system are: input/output operations per second (IOPS), throughput and IO response time. IOPS (IO per Second) is the number of IO operations the storage system performs per second and is an important indicator of its IO processing capability; the larger the value, the stronger the IO processing capability. Throughput, also called transfer rate or bandwidth, is the amount of data that flows over the transfer bus of the storage system per second in actual use; it characterises the maximum data transfer capacity, matters most for sequential access or large transfers, and is generally limited by hardware factors such as the IO transfer bus bandwidth and the memory bandwidth. The IO response time, also called IO latency, is the time from the moment the computing system issues a read or write IO command until it receives the IO response; it measures how fast the storage system handles a single IO, and the smaller the value, the faster the IO processing.
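For orientation only (this relation is not stated in the patent itself), the first two indices are linked: for an average IO size S, the sustainable throughput is approximately

```latex
\text{Throughput} \;\approx\; \text{IOPS} \times S
```

so, for example, a system sustaining 10,000 IOPS at S = 4 KiB delivers roughly 40 MB/s, while the IO response time characterises the latency of each individual operation rather than the aggregate rate.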
At present, the permanent storage medium of a storage system is mainly the mechanical hard disk, and system performance is primarily limited by the performance of a single storage medium. To improve the overall performance of the whole system, the industry mostly reads and writes multiple storage media in parallel. In addition, other high-speed storage media, such as solid state drives (SSD, Solid State Drives) and random access memory (RAM, Random-Access Memory), are added between the computing subsystem and the storage media; data is temporarily held on the high-speed storage medium and is synchronised from it to the permanent storage medium when the system is idle.
Summary of the invention
In order to solve the above technical problem, the present invention provides a storage system data writing method and system, which can improve the data write performance of a storage system on the same hardware platform.
To achieve the object of the invention, the present invention provides a storage system data writing method, comprising: a data receiving and protocol parsing subsystem receives a write IO request from a compute node, the write IO request comprising at least one communication message, and the communication message comprising a protocol field and a payload field; while the write IO request is being received, the protocol field data and the payload field data of the communication message are stored in different locations of a high-speed storage medium, so that the payload field data of the communication message is stored separately; after the write IO request has been received in full, the write IO request is forwarded to a data cache subsystem; after the data cache subsystem receives the write IO request, it returns write-completion information to the compute node and performs the write operation for the write IO request.
Further, storing the protocol field data and the payload field data of the write IO request in the high-speed storage medium while the write IO request is being received specifically comprises: while the write IO request is being received, storing the protocol field data of each communication message in a protocol data area of the high-speed storage medium, and storing the payload field data of the communication message in an IO data area of the high-speed storage medium.
Further, forwarding the write IO request to the data cache subsystem after the write IO request has been received in full specifically comprises: after the write IO request has been received in full, passing the write IO request, together with the storage location of its data in the high-speed storage medium, to an IO request queue of the data cache subsystem.
Further, performing the write operation for the write IO request specifically comprises: after the IO request queue of the data cache subsystem receives the write IO request, waking up a cache pool of the data cache subsystem; the cache pool obtains the write IO request from the IO request queue and, according to the data storage location, writes the payload field data corresponding to the write IO request into the cache pool.
Further, the method further comprises: a dirty-data flush subsystem flushes the data in the data cache subsystem and writes the flushed data into a permanent storage medium.
Further, flushing the data in the data cache subsystem by the dirty-data flush subsystem specifically comprises: determining a flush mode according to the type of the permanent storage medium, and the dirty-data flush subsystem flushing the data in the cache pool of the data cache subsystem using the determined flush mode.
Further, the flush modes include sequential bulk flushing, fixed-block-size-aligned flushing and concurrent scattered flushing; if the permanent storage medium is an ordinary hard disk, the determined flush mode is sequential bulk flushing; if the permanent storage medium is a RAID array composed of hard disks, the determined flush mode is flushing with the data aligned to the fixed block size given by the stripe of the array; if the permanent storage medium is a distributed storage pool, the determined flush mode is concurrent scattered flushing.
The present invention also provides a storage system, comprising: a data receiving and protocol parsing subsystem, for receiving a write IO request from a compute node, the write IO request comprising at least one communication message, and the communication message comprising a protocol field and a payload field; while the write IO request is being received, storing the protocol field data and the payload field data of the communication message in different locations of a high-speed storage medium, so that the payload field data of the communication message is stored separately; and, after the write IO request has been received in full, forwarding the write IO request to a data cache subsystem; and the data cache subsystem, for returning write-completion information to the compute node after receiving the write IO request, and performing the write operation for the write IO request.
Further, the data receiving and protocol parsing subsystem is specifically configured to: receive the write IO request from the compute node; while the write IO request is being received, store the protocol field data of each communication message in a protocol data area of the high-speed storage medium and store the payload field data of the communication message in an IO data area of the high-speed storage medium; and, after the write IO request has been received in full, pass the write IO request, together with the storage location of its data in the high-speed storage medium, to an IO request queue of the data cache subsystem.
Further, the data cache subsystem is specifically configured to: after the IO request queue receives the write IO request, return write-completion information to the compute node and wake up a cache pool of the data cache subsystem; the cache pool obtains the write IO request from the IO request queue and, according to the data storage location, writes the payload field data corresponding to the write IO request into the cache pool.
Further, the storage system further comprises: a dirty-data flush subsystem, for determining a flush mode according to the type of the permanent storage medium, flushing the data in the cache pool of the data cache subsystem using the determined flush mode, and writing the flushed data into a permanent storage medium.
Further, the flush modes include sequential bulk flushing, fixed-block-size-aligned flushing and concurrent scattered flushing; if the permanent storage medium is an ordinary hard disk, the determined flush mode is sequential bulk flushing; if the permanent storage medium is a RAID array composed of hard disks, the determined flush mode is flushing with the data aligned to the fixed block size given by the stripe of the array; if the permanent storage medium is a distributed storage pool, the determined flush mode is concurrent scattered flushing.
Compared with the prior art, the present invention parses the protocol data in advance while a write IO is being processed, which avoids the memory copies otherwise generated during protocol parsing and thereby effectively reduces the memory load and the processor load of the system. Flushing the dirty data also effectively reduces the processor load of the system; the direct effect of the reduced memory load is an improvement of the storage system throughput, and the result of the reduced processor load is that the processor has more capacity for handling write IO requests, which improves the IO processing capability of the system and the IOPS of the storage system. In addition, returning write-IO completion early shortens the processing path of a write IO and thereby reduces the IO response time of the write operation. Based on the computing system of existing big-data storage and on the same hardware platform, the present invention can effectively reduce the system load, reduce IO latency, reduce the memory load and processor load of the system, and increase the IO processing capability of the system.
Other features and advantages of the present invention will be set forth in the following description, and in part will become apparent from the description or be understood by implementing the present invention. The objects and other advantages of the present invention can be realised and obtained by the structures particularly pointed out in the description, the claims and the accompanying drawings.
Description of the drawings
The accompanying drawings are provided for a further understanding of the technical solution of the present invention and constitute a part of the specification; together with the embodiments of the application they serve to explain the technical solution of the present invention and do not limit it.
Fig. 1 is an architecture diagram of a computing system for big-data storage according to the present invention.
Fig. 2 is a structural diagram of a storage system in an embodiment of the present invention.
Fig. 3 is a flow diagram of a storage system data writing method in an embodiment of the present invention.
Fig. 4 is a schematic diagram of the reception and processing of data messages in the storage system data writing process of the present invention.
Specific embodiment
To make the objects, technical solutions and advantages of the present invention clearer, embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be noted that, provided there is no conflict, the embodiments in the application and the features in the embodiments may be combined with one another.
The steps shown in the flowchart of the accompanying drawings may be performed in a computer system such as a set of computer-executable instructions. Moreover, although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from the one given here.
Based on the computing system of existing big-data storage and on the same hardware platform, the present invention can effectively reduce the system load, reduce IO latency, reduce the memory load and processor load of the system, and increase the IO processing capability of the system.
Fig. 2 is a structural diagram of a storage system in an embodiment of the present invention. As shown in Fig. 2, the storage system of the present invention comprises: a data receiving and protocol parsing subsystem, a data cache subsystem and a dirty-data flush subsystem, wherein:
the data receiving and protocol parsing subsystem is responsible for communicating with the compute node, receiving the write IO requests of the compute node and feeding the write IO request processing results back to the compute node.
In the present invention, to reduce the RAM load of the system, the received data is parsed while it is being received. Specifically, the protocol data in each communication message and the write IO data are stored separately, with the write IO data stored on its own in an IO data area; after the complete write request data has been received, the data receiving and protocol parsing subsystem passes the write request and the corresponding data location to the data cache subsystem.
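A minimal sketch of this receive-time separation is given below; the class and method names are invented for illustration and are not taken from the patent. The point is that the protocol field and the payload field of each communication message land in two different regions of the high-speed storage medium, and only the payload's storage location is carried forward.

```python
# Hypothetical sketch: store the protocol field and the payload field of each
# communication message in different areas of the high-speed storage medium.
class HighSpeedMedium:
    def __init__(self):
        self.protocol_area = bytearray()   # protocol data area
        self.io_data_area = bytearray()    # IO data area, holds payload field data only

    def receive_message(self, protocol_bytes: bytes, payload_bytes: bytes) -> int:
        """Store the two fields separately; return the payload's storage location
        so the data cache subsystem can find the data without reparsing."""
        self.protocol_area += protocol_bytes
        payload_pos = len(self.io_data_area)
        self.io_data_area += payload_bytes
        return payload_pos


medium = HighSpeedMedium()
pos = medium.receive_message(b"WRITE lba=4096 len=13", b"payload-bytes")
print(pos)   # data storage location passed to the data cache subsystem with the request
```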
The data cache subsystem comprises an IO request queue and a data cache pool.
Specifically, the IO request queue is responsible for receiving the write IO requests passed over from the data receiving and protocol parsing subsystem. To shorten the IO path, the IO request queue returns a write-completion to the upper layer immediately after receiving the data of a write IO, and only then starts the IO processing of the background cache pool: the cache pool takes the write IO request out of the IO request queue and merges the data to be written into the data cache. The management algorithm of the data cache is known to those skilled in the art and is therefore not repeated here.
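The following sketch (invented names, building on the HighSpeedMedium sketch above) illustrates the early-acknowledge path: the IO request queue completes the write towards the compute node as soon as the request is enqueued, and a background cache-pool worker merges the payload into the cache afterwards.

```python
# Hypothetical sketch of the IO request queue and cache pool described above.
import queue
import threading

class DataCacheSubsystem:
    def __init__(self, medium):
        self.medium = medium
        self.requests = queue.Queue()      # IO request queue
        self.cache_pool = {}               # lba -> payload bytes (data cache)
        threading.Thread(target=self._drain, daemon=True).start()

    def submit(self, lba, length, payload_pos, ack):
        self.requests.put((lba, length, payload_pos))
        ack()                              # return "write complete" to the compute node now

    def _drain(self):
        while True:                        # cache pool woken whenever the queue has work
            lba, length, pos = self.requests.get()
            data = bytes(self.medium.io_data_area[pos:pos + length])
            self.cache_pool[lba] = data    # merge the payload into the data cache


medium = HighSpeedMedium()                 # from the previous sketch
cache = DataCacheSubsystem(medium)
pos = medium.receive_message(b"WRITE lba=0 len=13", b"payload-bytes")
cache.submit(lba=0, length=13, payload_pos=pos, ack=lambda: print("write complete"))
```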
The dirty-data flush subsystem writes data from the cache pool into the permanent storage medium.
Specifically, to achieve the best flush performance and reduce the system load caused by flushing dirty data, the data flush module selects the optimal flush scheme according to the type of the permanent storage medium. In the present invention, the flush schemes include sequential bulk flushing, fixed-block-size-aligned flushing and concurrent scattered flushing: if the permanent storage medium at the back end is an ordinary hard disk, sequential bulk flushing is used; if the permanent storage medium is a RAID array composed of hard disks, the data is flushed aligned to the stripe size of the array; if the permanent storage medium is a distributed storage pool, a scattered parallel mode is used for flushing.
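A sketch of this selection step follows, with invented names; the mapping from medium type to flush mode simply restates the three cases listed above.

```python
# Hypothetical sketch: the dirty-data flush subsystem picks a flush mode from the
# type of the permanent storage medium at the back end.
def choose_flush_mode(medium_type: str, stripe_size: int = 0) -> str:
    if medium_type == "hdd":                  # ordinary hard disk
        return "sequential-bulk"
    if medium_type == "raid":                 # RAID array built from hard disks
        return f"stripe-aligned-{stripe_size}B"
    if medium_type == "distributed-pool":     # distributed storage pool
        return "concurrent-scattered"
    raise ValueError(f"unknown permanent storage medium type: {medium_type}")


print(choose_flush_mode("raid", stripe_size=256 * 1024))   # stripe-aligned-262144B
```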
The data write IO request of the present invention is initiated by a compute node in the computing system and is received and processed by the storage system. Based on the above storage system, as shown in Fig. 3, the storage system data writing method of the present invention comprises:
Step 301: the data receiving and protocol parsing subsystem receives a write IO request from a compute node; the write IO request comprises at least one communication message, and each communication message comprises a protocol field and a payload field. While the write IO request is being received, the protocol field data and the payload field data of each communication message are stored in different locations of the high-speed storage medium, so that the payload field data of the communication message is stored separately. After the write IO request has been received in full, it is forwarded to the data cache subsystem.
In this step, the data receiving and protocol parsing subsystem receives the write IO request of the compute node; as shown in Fig. 4, the request comprises one or more communication messages, each comprising a protocol field and a payload field. While the write IO request is being received, the data receiving and protocol parsing subsystem parses the received request and stores the protocol field data and the payload field data of each communication message separately: specifically, it stores the protocol field data of a communication message in the protocol data area of the high-speed storage medium, and the payload field data of the communication message in the IO data area of the high-speed storage medium.
After the complete write IO request has been received, the data receiving and protocol parsing subsystem forwards it to the IO request queue of the data cache subsystem, and passes the storage location of the data corresponding to the write IO request in the high-speed storage medium to the data cache subsystem.
Step 302: after the data cache subsystem receives the write IO request, it returns write-completion information to the compute node and performs the write operation for the write IO request.
In this step, after the IO request queue of the data cache subsystem receives the write IO request, it returns write-completion information to the compute node and wakes up the cache pool of the data cache subsystem; the cache pool obtains the write IO request from the IO request queue and, according to the data storage location, writes the payload field data corresponding to the write IO request into the cache pool. The management algorithm of the data cache is known to those skilled in the art and is therefore not repeated here.
Step 303: the dirty-data flush subsystem flushes the data in the data cache subsystem and writes the flushed data into the permanent storage medium.
In this step, the dirty-data flush subsystem flushes the data in the cache pool of the data cache subsystem and writes the flushed data into the permanent storage medium. This reduces the load of the storage system, so that the storage system has more processing capacity available for handling write IO requests.
To achieve the best flush performance and reduce the system load caused by flushing dirty data, the data flush module selects the optimal flush scheme according to the type of the permanent storage medium. In a specific embodiment of the present invention, the flush schemes include sequential bulk flushing, fixed-block-size-aligned flushing and concurrent scattered flushing. For example, if the permanent storage medium at the back end is an ordinary hard disk, sequential bulk flushing is used; if the permanent storage medium is a RAID array composed of hard disks, the data is flushed aligned to the stripe size of the array; if the permanent storage medium is a distributed storage pool, a scattered parallel mode is used for flushing.
From the above description of the system it can be seen that the present invention parses the protocol data in advance while a write IO is being processed, which avoids the memory copies otherwise generated during protocol parsing and thereby effectively reduces the memory load and the processor load of the system. Flushing the dirty data also effectively reduces the processor load of the system; the direct effect of the reduced memory load is an improvement of the storage system throughput, and the result of the reduced processor load is that the processor has more capacity for handling write IO requests, which improves the IO processing capability of the system and the IOPS of the storage system. In addition, returning write-IO completion early shortens the processing path of a write IO and thereby reduces the IO response time of the write operation.
Although the embodiments disclosed herein are as above, the content described is only an implementation adopted for ease of understanding the present invention and is not intended to limit it. Any person skilled in the art to which the present invention pertains may make modifications and variations in the form and details of implementation without departing from the spirit and scope disclosed by the present invention, but the scope of patent protection of the present invention shall still be subject to the scope defined by the appended claims.

Claims (9)

1. A storage system data writing method, characterised by comprising:
a data receiving and protocol parsing subsystem receives a write IO request from a compute node, the write IO request comprising at least one communication message, and the communication message comprising a protocol field and a payload field; while the write IO request is being received, the protocol field data in the communication message is stored in a protocol data area of a high-speed storage medium, and the payload field data in the communication message is stored in an IO data area of the high-speed storage medium, so that the payload field data of the communication message is stored separately; after the write IO request has been received in full, the write IO request is forwarded to a data cache subsystem;
after the data cache subsystem receives the write IO request, it returns write-completion information to the compute node and performs the write operation for the write IO request;
wherein forwarding the write IO request to the data cache subsystem after the write IO request has been received in full specifically comprises:
after the write IO request has been received in full, passing the write IO request, together with the storage location information of its data in the IO data area of the high-speed storage medium, to an IO request queue of the data cache subsystem.
2. The storage system data writing method according to claim 1, characterised in that performing the write operation for the write IO request specifically comprises:
after the IO request queue of the data cache subsystem receives the write IO request, waking up a cache pool of the data cache subsystem; the cache pool obtains the write IO request from the IO request queue and, according to the data storage location, writes the payload field data of the write IO request into the cache pool.
3. The storage system data writing method according to claim 2, characterised in that the method further comprises: a dirty-data flush subsystem flushes the data in the data cache subsystem and writes the flushed data into a permanent storage medium.
4. The storage system data writing method according to claim 3, characterised in that flushing the data in the data cache subsystem by the dirty-data flush subsystem specifically comprises:
determining a flush mode according to the type of the permanent storage medium, and the dirty-data flush subsystem flushing the data in the cache pool of the data cache subsystem using the determined flush mode.
5. The storage system data writing method according to claim 4, characterised in that the flush modes include sequential bulk flushing, fixed-block-size-aligned flushing and concurrent scattered flushing;
if the permanent storage medium is an ordinary hard disk, the determined flush mode is sequential bulk flushing; if the permanent storage medium is a RAID array composed of hard disks, the determined flush mode is flushing with the data aligned to the fixed block size given by the stripe of the array; if the permanent storage medium is a distributed storage pool, the determined flush mode is concurrent scattered flushing.
6. A storage system, characterised by comprising:
a data receiving and protocol parsing subsystem, for receiving a write IO request from a compute node, the write IO request comprising at least one communication message, and the communication message comprising a protocol field and a payload field; while the write IO request is being received, storing the protocol field data in the communication message in a protocol data area of a high-speed storage medium and the payload field data in the communication message in an IO data area of the high-speed storage medium, so that the payload field data of the communication message is stored separately; and, after the write IO request has been received in full, forwarding the write IO request to a data cache subsystem;
a data cache subsystem, for returning write-completion information to the compute node after receiving the write IO request, and performing the write operation for the write IO request;
wherein the data receiving and protocol parsing subsystem is specifically configured to:
after the write IO request has been received in full, pass the write IO request, together with the storage location of its data in the high-speed storage medium, to an IO request queue of the data cache subsystem.
7. The storage system according to claim 6, characterised in that the data cache subsystem is specifically configured to: after the IO request queue receives the write IO request, return write-completion information to the compute node and wake up a cache pool of the data cache subsystem; the cache pool obtains the write IO request from the IO request queue and, according to the data storage location, writes the payload field data corresponding to the write IO request into the cache pool.
8. The storage system according to claim 7, characterised in that the storage system further comprises: a dirty-data flush subsystem, for determining a flush mode according to the type of the permanent storage medium, flushing the data in the cache pool of the data cache subsystem using the determined flush mode, and writing the flushed data into a permanent storage medium.
9. The storage system according to claim 8, characterised in that the flush modes include sequential bulk flushing, fixed-block-size-aligned flushing and concurrent scattered flushing;
if the permanent storage medium is an ordinary hard disk, the determined flush mode is sequential bulk flushing; if the permanent storage medium is a RAID array composed of hard disks, the determined flush mode is flushing with the data aligned to the fixed block size given by the stripe of the array; if the permanent storage medium is a distributed storage pool, the determined flush mode is concurrent scattered flushing.
CN201510432216.XA 2015-07-21 2015-07-21 A storage system data writing method and system Active CN104991745B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510432216.XA CN104991745B (en) 2015-07-21 2015-07-21 A storage system data writing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510432216.XA CN104991745B (en) 2015-07-21 2015-07-21 A storage system data writing method and system

Publications (2)

Publication Number Publication Date
CN104991745A CN104991745A (en) 2015-10-21
CN104991745B true CN104991745B (en) 2018-06-01

Family

ID=54303561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510432216.XA Active CN104991745B (en) 2015-07-21 2015-07-21 A storage system data writing method and system

Country Status (1)

Country Link
CN (1) CN104991745B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107506146A (en) * 2017-08-29 2017-12-22 郑州云海信息技术有限公司 A kind of data-storage system
CN112214166B (en) 2017-09-05 2022-05-24 华为技术有限公司 Method and apparatus for transmitting data processing requests
CN109992212B (en) * 2019-04-10 2020-03-27 苏州浪潮智能科技有限公司 Data writing method and data reading method
CN114281762B (en) * 2022-03-02 2022-06-03 苏州浪潮智能科技有限公司 Log storage acceleration method, device, equipment and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1870593A (en) * 2006-04-12 2006-11-29 杭州华为三康技术有限公司 Method and device of read-write buffer storage location based on field programable logical array
CN102782661A (en) * 2012-05-18 2012-11-14 华为技术有限公司 Data storage system and method
CN103064929A (en) * 2012-12-24 2013-04-24 创新科存储技术(深圳)有限公司 Method for server writing data in network file system
CN103970688A (en) * 2013-02-04 2014-08-06 Lsi公司 Method and system for reducing write latency in a data storage system
CN104461936A (en) * 2014-11-28 2015-03-25 华为技术有限公司 Cached data disk brushing method and device
CN104536704A (en) * 2015-01-12 2015-04-22 浪潮(北京)电子信息产业有限公司 Dual-controller communication method, transmitting end controller and receiving end controller

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070050569A1 (en) * 2005-09-01 2007-03-01 Nils Haustein Data management system and method


Also Published As

Publication number Publication date
CN104991745A (en) 2015-10-21

Similar Documents

Publication Publication Date Title
CN103970688B (en) Shorten the method and system that the stand-by period is write in data-storage system
CN108363670B (en) Data transmission method, device, equipment and system
CN104991745B (en) A storage system data writing method and system
WO2015081757A1 (en) Cold and hot data identification threshold calculation method, apparatus and system
CN108701004A (en) A kind of system of data processing, method and corresponding intrument
CN113485636B (en) Data access method, device and system
US9411519B2 (en) Implementing enhanced performance flash memory devices
US20120324160A1 (en) Method for data access, message receiving parser and system
CN102609215B (en) Data processing method and device
US20180121388A1 (en) Symmetric block sparse matrix-vector multiplication
EP2927779B1 (en) Disk writing method for disk arrays and disk writing device for disk arrays
CN104407933A (en) Data backup method and device
CN107766270A (en) Digital independent management method and device for PCIe device
CN103198001B (en) Storage system capable of self-testing peripheral component interface express (PCIE) interface and test method
CN105681222A (en) Method and apparatus for data receiving and caching, and communication system
CN103002046A (en) Multi-system data copying remote direct memory access (RDMA) framework
CN105260332A (en) Method and system for orderly storing CPLD data packets
US20120042125A1 (en) Systems and Methods for Efficient Sequential Logging on Caching-Enabled Storage Devices
US20160283379A1 (en) Cache flushing utilizing linked lists
CN109753225B (en) Data storage method and equipment
Zhou et al. Optimization design of high-speed data acquisition system based on DMA double cache mechanism
US20070180180A1 (en) Storage system, and storage control method
CN110489353A (en) A kind of raising solid state hard disk bandwidth reading performance method and device
CN104778015B (en) A kind of performance of disk arrays optimization method and system
WO2013184855A1 (en) Memory with bank-conflict-resolution (bcr) module including cache

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant