CN108427537A - Distributed storage system and its file write optimization method and client processing method

Distributed storage system and its file write optimization method and client processing method

Info

Publication number
CN108427537A
Authority
CN
China
Prior art keywords
request
file
written
write
write request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810032346.8A
Other languages
Chinese (zh)
Inventor
颜新波
钱明
丁晓杰
曹敬涛
徐启亮
韩明轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Kai Xiang Mdt Infotech Ltd
Original Assignee
Shanghai Kai Xiang Mdt Infotech Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Kai Xiang Mdt Infotech Ltd filed Critical Shanghai Kai Xiang Mdt Infotech Ltd
Priority to CN201810032346.8A priority Critical patent/CN108427537A/en
Publication of CN108427537A publication Critical patent/CN108427537A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0602: Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0604: Improving or facilitating administration, e.g. storage management
    • G06F 3/0628: Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0629: Configuration or reconfiguration of storage systems
    • G06F 3/0668: Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/067: Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a file write optimization method and system for a distributed storage system, which improve data write efficiency in the distributed storage system at low cost, reduce write latency, and improve write performance. The technical solution is as follows: when data is written to the distributed storage system, a write-success message is returned as soon as the data has been written into the cache, and there is no need to return the write-success message only after the data has been written to disk; the time for writing the data from the cache into the local disks of the back-end machines is saved, improving file write efficiency.

Description

Distributed storage system and its file write optimization method and client processing method
Technical field
The present invention relates to data storage technology in the computer field, and more particularly to a file write optimization method and system for a distributed storage system.
Background technology
With the rapid development of computer technology, the amount of data of all kinds keeps growing, from gigabytes at first, to terabytes, to petabytes today, and even to exabytes in the future. As data volumes grow, so does the demand for storage capacity, and traditional file systems have long been unable to meet the large-capacity, high-availability, and high-performance requirements of current applications, so distributed storage systems have received widespread attention.
A distributed storage system stores data dispersed across multiple independent devices. Traditional network storage systems use a centralized storage server to store all data, and the storage server becomes the bottleneck of system performance and the focal point of reliability and security, so they cannot meet the needs of large-scale storage applications. A distributed network storage system uses a scalable system architecture, shares the storage load across multiple storage servers, and uses location servers to locate stored information; this not only improves the reliability, availability, and access efficiency of the system, but also makes it easy to scale.
As shown in Figure 1, to write a file in a distributed storage system, the following steps are needed:
1. the data content is first written to the client;
2. the client determines, according to a certain algorithm, which server the data should be stored on;
3. the data is sent from the client to the back-end server that was found and stored there;
4. after the data has been successfully written to the disk of the back-end server, a write-success message is returned.
As can be seen from the above write flow, a write-success message can only be returned after the data has been written to disk. The wait is long, because writing to disk is slow; this limits the speed at which the distributed storage system writes data and degrades the user experience.
Since the write performance of current distributed storage systems is insufficient for massive data, the quality of service of distributed storage systems is seriously affected.
Summary of the invention
A brief summary of one or more aspects is given below to provide a basic understanding of those aspects. This summary is not an extensive overview of all contemplated aspects, and is intended neither to identify key or critical elements of all aspects nor to delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description given later.
The object of the present invention is to solve the above problems by providing a file write optimization method and system for a distributed storage system, which improve data write efficiency in the distributed storage system at low cost, reduce write latency, and improve write performance.
The technical solution of the present invention is as follows. The present invention discloses a file write optimization method for a distributed storage system, including:
the client receives a file write request, writes the file into a cache, and returns a write-success message;
in the background, the client looks up the server in the distributed storage system that the file to be written corresponds to, and receives the returned lookup result;
the client then writes the file that was written into the cache to the disk of the server of the distributed storage system that was found.
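Purely as an illustration (not the patent's implementation), the cache-first write path of the three steps above could be structured as in the following Python sketch; the class and method names (WriteOptimizingClient, locate_server, write_to_disk) are hypothetical.

```python
import threading

class WriteOptimizingClient:
    """Sketch of the cache-first write path: acknowledge the write as soon as
    the data is in the client cache, then locate the target server and write
    the cached data to its disk in the background."""

    def __init__(self, cluster):
        self.cluster = cluster   # hypothetical handle to the distributed storage cluster
        self.cache = {}          # file name -> bytes currently held in the client cache

    def write(self, filename, data):
        # Write the file into the cache and return success immediately.
        self.cache[filename] = data
        threading.Thread(target=self._background_flush, args=(filename,)).start()
        return "write success"   # returned before any disk I/O has happened

    def _background_flush(self, filename):
        # Look up which back-end server the file corresponds to.
        server = self.cluster.locate_server(filename)          # hypothetical lookup call
        # Write the cached file to the disk of the server that was found.
        server.write_to_disk(filename, self.cache.pop(filename))
```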
The present invention further discloses a distributed storage system, including at least one client and multiple servers, wherein a cache is installed in the client and a disk is provided in each server; the client writes the file in a received file write request into the cache and returns a write-success message, then, in the background, looks up the server in the distributed storage system that the file to be written corresponds to and, according to the lookup result, writes the file that was written into the cache to the disk of the corresponding server of the distributed storage system that was found.
The present invention further discloses a client processing method for optimizing file writes to a distributed storage system, characterized in that the architecture of the distributed storage system is as described above, and the client processing method includes:
Step 1: the client receives a file write request, maintains a write request size accumulator, and adds the data size of this write request to the accumulator;
Step 2: judge whether the file to be written in the write request already exists in the distributed storage system, and if not, create the file first;
Step 3: put this write request into the request queue;
Step 4: for the request queue, judge whether each request is a write request; if it is a write request, add the data size of this write request to the accumulated write request data size in the current request queue; if it is not a write request, mark the current request and flush all requests that precede the current request in the request queue, wherein the flush operation removes all requests that precede the current request from the request queue to indicate that those requests have been completed;
Step 5: process each request in the request queue, marking out the requests that are not write requests; merge adjacent pairs of write requests, where the merged data size does not exceed the size of a cache page in the cache;
Step 6: judge whether the request queue is empty or whether the first request is a write request; if so, go to step 7, otherwise go to step 8;
Step 7: when there are other, non-write requests in the request queue, or write requests that are not contiguous, mark these requests and add them to the flush queue; or, when the accumulated write request data size in the request queue from step 4 exceeds the set size of the window in the cache, add the requests in the window to the flush queue;
Step 8: return a write-success message to the user;
Step 9: judge whether the current write request from step 1 is the first write request; if so, apply for the flush window space for the file;
Step 10: calculate the number of cache pages and the size of the cached data;
Step 11: if the number of windows exceeds a set number, or the data size of the write request exceeds a set value, or there is no next request, flush the file in the cache to the disk of the server;
Step 12: resume the other, non-write requests, to prevent operations in the request queue from being blocked.
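A minimal sketch of how steps 1 to 8 above could be organized on the client side is given below. It is illustrative only: the class and helper names (ClientWritePath, Request, exists, create) and the chosen window size are assumptions, and the merging of step 5 and the disk flush of step 11 are left out for brevity.

```python
from collections import deque

WINDOW_SIZE = 64 * 1024 * 1024   # assumed window size within the 512 KB to 1 GB range

class Request:
    def __init__(self, filename, data=b"", is_write=True):
        self.filename = filename
        self.data = data
        self.is_write = is_write
        self.marked = False

class ClientWritePath:
    def __init__(self, fs):
        self.fs = fs                 # hypothetical handle to the distributed storage system
        self.queue = deque()         # request queue (step 3)
        self.accumulator = 0         # write request size accumulator (step 1)
        self.flush_queue = []        # requests staged for flushing (step 7)

    def submit(self, req):
        if req.is_write:
            # Step 1: add the data size of this write request to the accumulator.
            self.accumulator += len(req.data)
            # Step 2: create the file if it does not yet exist.
            if not self.fs.exists(req.filename):
                self.fs.create(req.filename)
        # Step 3: put the request into the request queue.
        self.queue.append(req)
        # Step 4: a non-write request is marked and everything before it is flushed.
        if not req.is_write:
            req.marked = True
            self._flush_before(req)
        # Step 7: move a full window of accumulated writes into the flush queue.
        if self.accumulator > WINDOW_SIZE:
            self.flush_queue.extend(r for r in self.queue if r.is_write)
            self.queue = deque(r for r in self.queue if not r.is_write)
            self.accumulator = 0
        # Step 8: acknowledge the write without waiting for any disk I/O.
        return "write success"

    def _flush_before(self, req):
        # Remove every request that precedes `req` from the queue, which marks
        # those earlier operations as completed (the flush of step 4).
        while self.queue and self.queue[0] is not req:
            self.queue.popleft()
```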
In one embodiment of the client processing method for optimizing file writes to the distributed storage system according to the present invention, the configurable range of the window size is 512 KB to 1 GB.
In one embodiment of the client processing method for optimizing file writes to the distributed storage system according to the present invention, journaling is used to prevent data in the write cache from being lost on power failure.
Compared with the prior art, the present invention has the following beneficial effects: with the write optimization method of the present invention, a write-success message is returned as soon as the data has been written into the cache when writing to the distributed storage system, and there is no need to wait until the data has been written to disk before returning the write-success message; the time for writing the data from the cache into the local disks of the back-end machines is saved, which improves file write efficiency. The improvement of the present invention is transparent to upper layers: it is carried out at the storage system level, requires no change to any logic of upper-layer applications, and does not affect the use of storage system analysis and scanning tools. At the same time, none of the optimizations lengthen the processing flow or add complexity.
Description of the drawings
The above features and advantages of the present invention will be better understood after reading the detailed description of the embodiments of the disclosure in conjunction with the following drawings. In the drawings, the components are not necessarily drawn to scale, and components with similar related characteristics or features may carry the same or similar reference numerals.
Fig. 1 shows a flow diagram of the existing file write method of a distributed storage system.
Fig. 2 shows a schematic diagram of an embodiment of the distributed storage system of the present invention.
Fig. 3 shows a flowchart of an embodiment of the file write optimization method of the distributed storage system of the present invention.
Fig. 4 shows a flowchart of an embodiment of the client processing method of the present invention for optimizing file writes to a distributed storage system.
Detailed description of the embodiments
The present invention is described in detail below in conjunction with the drawings and specific embodiments. Note that the aspects described below in conjunction with the drawings and specific embodiments are merely exemplary and should not be understood as limiting the protection scope of the present invention in any way.
Fig. 2 shows the principle of an embodiment of the distributed storage system of the present invention. As shown in Fig. 2, the distributed storage system of this embodiment includes at least one client (one client is illustrated) and multiple servers (two servers are illustrated). A cache is installed in the client, and a disk is provided in each server. The client first writes the file in the received file write request into the cache. The client then returns a write-success message to the user. After returning the write-success message, the client carries out the following operations in the background: it looks up the server in the distributed storage system that the file to be written corresponds to, and then, according to the lookup result, writes the file that was written into the cache to the disk of the corresponding server of the distributed storage system that was found.
Fig. 3 shows the flow of an embodiment of the file write optimization method of the distributed storage system of the present invention. Referring to Fig. 3, the implementation steps of the method of this embodiment are as follows.
Step S1: the client receives a file write request, writes the file into the cache, and returns a write-success message.
If power is suddenly lost while data is being written into the cache, the content still in the cache will be lost, which makes for a very poor user experience. In an embodiment, journaling may be used to prevent such loss: while the cache is being written, some record data is synchronously written into a journal, including information such as the file name, the data offset, and the position in the file being written. When power is restored, the journal file is read automatically and the corresponding operations are performed according to its records.
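For illustration, a journal record carrying the file name, offset, and length of a cache write could be kept as in the sketch below; the record format, journal path, and function names are assumptions, not part of the patent's specification.

```python
import json
import os

JOURNAL_PATH = "/var/log/write_cache.journal"   # hypothetical journal location

def journal_append(filename, offset, length):
    """Synchronously record one cache write so it can be replayed if power
    is lost before the data reaches the back-end disk."""
    record = {"file": filename, "offset": offset, "length": length}
    with open(JOURNAL_PATH, "a") as journal:
        journal.write(json.dumps(record) + "\n")
        journal.flush()
        os.fsync(journal.fileno())   # make sure the record itself survives power loss

def replay_journal(apply_record):
    """On restart, read the journal and hand each record to the caller for replay."""
    if not os.path.exists(JOURNAL_PATH):
        return
    with open(JOURNAL_PATH) as journal:
        for line in journal:
            apply_record(json.loads(line))
```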
Step S2: in the background, the client looks up the server in the distributed storage system that the file to be written corresponds to, and receives the returned lookup result.
Step S3: the client then writes the file that was written into the cache to the disk of the server of the distributed storage system that was found.
Steps S2 and S3 are carried out by the client in the background.
Fig. 4 shows the flow of an embodiment of the client processing method of the present invention for optimizing file writes to a distributed storage system. Referring to Fig. 4, the implementation steps of the client processing method of this embodiment are as follows.
Step S101: the client receives a write request, maintains a write request size accumulator, and adds the data size of this write request to the accumulator.
Step S102: judge whether the file to be written in the write request exists in the distributed storage system; if it does, go to S104, otherwise go to S103.
Step S103: create the file.
Step S104: judge whether file write optimization is disabled. If it is disabled, go to step S118; if not, execute step S105.
When the file is empty or this request disables file write optimization, the disk write operation is carried out directly. If optimization is not disabled, the following steps are carried out. Of course, this step is not a necessary step; it is an optional way of offering file write optimization. For example, certain restrictive conditions can be set to decide whether to apply the file write optimization mode.
Step S105: put this request into the request queue.
Step S106: for the request queue, judge whether each request is a write request; if it is, go to step S108, otherwise go to step S107.
Step S107: mark the request, and flush all requests in the request queue that precede it. The flush operation removes all requests that precede the current request from the request queue, to indicate that those requests have been completed.
Step S108: add the data size of this write request to the current aggregate size. Here the aggregate size refers to the accumulated data size of the write requests in the current request queue.
Step S109: process each request in the request queue, marking out the requests that are not write requests; merge adjacent pairs of write requests, with the merged size not exceeding the cache page size of the cache.
Because a distributed storage system handles many write operations at the same time, merging the requests in the request queue so that each request approaches the cache page size as closely as possible can effectively save cache space, reduce cache flush operations, and improve the overall performance of data writes.
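The merging of step S109 might look roughly like the sketch below. This is an assumption about the data layout (a list of request objects carrying a data payload); a real implementation would also check that the merged writes target the same file and contiguous offsets.

```python
def merge_adjacent_writes(requests, cache_page_size):
    """Merge neighbouring write requests so that each merged request stays
    as close as possible to, but never above, one cache page."""
    merged = []
    for req in requests:
        if (merged and merged[-1].is_write and req.is_write
                and len(merged[-1].data) + len(req.data) <= cache_page_size):
            merged[-1].data += req.data   # fold this write into its neighbour
        else:
            merged.append(req)
    return merged
```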
Step S110: if the number of other requests is 0, that is, the request queue is empty or the first request is a write request, carry out the operations below; if it is not 0, go to step S113.
Step S111: when there are other requests (non-write requests) in the request queue, or write requests that are not contiguous, mark these requests and flush them all.
Step S112: put the requests marked in step S111 into the flush queue; when the accumulated data size of the write requests in the request queue exceeds the set size of the window in the cache, put the requests in the window into the flush queue.
The configurable range of the window size is 512 KB to 1 GB, and the specific size can be configured as needed so that users get a better experience. The memory of a typical device is limited, so when setting the maximum window size take care that it does not exceed the memory size of the device; the minimum window size should not be set too small either. Preferably, an optimal value is determined from test data.
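As an illustration only, a configured window size could be validated against the 512 KB to 1 GB range and the device memory as follows; the function name and the memory argument are assumptions.

```python
MIN_WINDOW = 512 * 1024           # 512 KB lower bound
MAX_WINDOW = 1024 * 1024 * 1024   # 1 GB upper bound

def choose_window_size(requested, device_memory):
    """Clamp the requested window size into the allowed range and never
    let it exceed the memory available on the device."""
    size = max(MIN_WINDOW, min(requested, MAX_WINDOW))
    return min(size, device_memory)
```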
Step S113: return a write-success message to the user.
Step S114: judge whether the current write request is the first request; if it is, go to step S115, otherwise go to step S116.
Step S115: apply for the flush window space.
Step S116: calculate the number of cache pages and the size of the cached data.
Step S117: flush when the number of windows exceeds a set number (default: 8), or the data size of the write request exceeds a set value (default: 1 MB), or there is no next request.
Step S118: flush the file in the cache to the disk of the server.
Step S119: resume the other requests (non-write requests), to prevent operations in the request queue from being blocked.
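For illustration only, the following sketch shows one way the flush trigger of steps S116 to S118 could be checked in code. The default thresholds (8 windows, 1 MB) come from step S117; the function and parameter names are hypothetical.

```python
MAX_WINDOWS = 8                  # default window-count threshold from step S117
MAX_REQUEST_SIZE = 1024 * 1024   # default per-request size threshold (1 MB)

def maybe_flush(cache, server, window_count, request_size, has_next_request):
    """Flush the cached file data to the server's disk when any of the
    step S117 triggers fires; return True if a flush happened."""
    if (window_count > MAX_WINDOWS
            or request_size > MAX_REQUEST_SIZE
            or not has_next_request):
        for filename, data in cache.items():
            server.write_to_disk(filename, data)   # step S118: cache -> server disk
        cache.clear()
        # step S119 (resuming held non-write requests) would follow here
        return True
    return False
```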
Although the methods above are illustrated and described as a series of actions for simplicity of explanation, it should be understood and appreciated that these methods are not limited by the order of the actions, because in accordance with one or more embodiments some actions may occur in different orders and/or concurrently with other actions from those depicted and described herein, or not shown and described herein but appreciable by those skilled in the art.
Those skilled in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software as a computer program product, the functions may be stored on or transmitted over a computer-readable medium as one or more instructions or code. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Any connection is also properly termed a computer-readable medium. For example, if the software is transmitted from a web site, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (5)

1. A file write optimization method for a distributed storage system, characterized by including:
the client receiving a file write request, writing the file into a cache, and returning a write-success message;
the client, in the background, looking up the server in the distributed storage system that the file to be written corresponds to, and receiving the returned lookup result;
the client then writing the file that was written into the cache to the disk of the server of the distributed storage system that was found.
2. A distributed storage system, characterized by including at least one client and multiple servers, wherein a cache is installed in the client and a disk is provided in each server; the client writes the file in a received file write request into the cache and returns a write-success message, then, in the background, looks up the server in the distributed storage system that the file to be written corresponds to and, according to the lookup result, writes the file that was written into the cache to the disk of the corresponding server of the distributed storage system that was found.
3. A client processing method for optimizing file writes to a distributed storage system, characterized in that the architecture of the distributed storage system is as claimed in claim 2, and the client processing method includes:
Step 1: the client receives a file write request, maintains a write request size accumulator, and adds the data size of this write request to the accumulator;
Step 2: judge whether the file to be written in the write request already exists in the distributed storage system, and if not, create the file first;
Step 3: put this write request into the request queue;
Step 4: for the request queue, judge whether each request is a write request; if it is a write request, add the data size of this write request to the accumulated write request data size in the current request queue; if it is not a write request, mark the current request and flush all requests that precede the current request in the request queue, wherein the flush operation removes all requests that precede the current request from the request queue to indicate that those requests have been completed;
Step 5: process each request in the request queue, marking out the requests that are not write requests; merge adjacent pairs of write requests, where the merged data size does not exceed the size of a cache page in the cache;
Step 6: judge whether the request queue is empty or whether the first request is a write request; if so, go to step 7, otherwise go to step 8;
Step 7: when there are other, non-write requests in the request queue, or write requests that are not contiguous, mark these requests and add them to the flush queue; or, when the accumulated write request data size in the request queue from step 4 exceeds the set size of the window in the cache, add the requests in the window to the flush queue;
Step 8: return a write-success message to the user;
Step 9: judge whether the current write request from step 1 is the first write request; if so, apply for the flush window space for the file;
Step 10: calculate the number of cache pages and the size of the cached data;
Step 11: if the number of windows exceeds a set number, or the data size of the write request exceeds a set value, or there is no next request, flush the file in the cache to the disk of the server;
Step 12: resume the other, non-write requests, to prevent operations in the request queue from being blocked.
4. The client processing method for optimizing file writes to a distributed storage system according to claim 3, characterized in that the configurable range of the window size is 512 KB to 1 GB.
5. The client processing method for optimizing file writes to a distributed storage system according to claim 3, characterized in that journaling is used to prevent data in the write cache from being lost on power failure.
CN201810032346.8A 2018-01-12 2018-01-12 Distributed storage system and its file write optimization method and client processing method Pending CN108427537A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810032346.8A CN108427537A (en) 2018-01-12 2018-01-12 Distributed storage system and its file write optimization method and client processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810032346.8A CN108427537A (en) 2018-01-12 2018-01-12 Distributed storage system and its file write optimization method and client processing method

Publications (1)

Publication Number Publication Date
CN108427537A true CN108427537A (en) 2018-08-21

Family

ID=63155891

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810032346.8A Pending CN108427537A (en) 2018-01-12 2018-01-12 Distributed storage system and its file write optimization method and client processing method

Country Status (1)

Country Link
CN (1) CN108427537A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105426127A (en) * 2015-11-13 2016-03-23 浪潮(北京)电子信息产业有限公司 File storage method and apparatus for distributed cluster system
CN105549905A (en) * 2015-12-09 2016-05-04 上海理工大学 Method for multiple virtual machines to access distributed object storage system
CN107329708A (en) * 2017-07-04 2017-11-07 郑州云海信息技术有限公司 A kind of distributed memory system realizes data cached method and system

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109981741A (en) * 2019-02-26 2019-07-05 启迪云计算有限公司 A kind of maintaining method of distributed memory system
CN110059514A (en) * 2019-04-18 2019-07-26 珠海美佳音科技有限公司 Method for writing data, NFC label, NFC device and storage medium
CN110059514B (en) * 2019-04-18 2021-06-08 珠海美佳音科技有限公司 Data writing method, NFC tag, NFC device and storage medium
CN110928489A (en) * 2019-10-28 2020-03-27 成都华为技术有限公司 Data writing method and device and storage node
CN110928489B (en) * 2019-10-28 2022-09-09 成都华为技术有限公司 Data writing method and device and storage node
CN111142795A (en) * 2019-12-20 2020-05-12 浪潮电子信息产业股份有限公司 Control method, control device and control equipment for write operation of distributed storage system
CN111208946A (en) * 2020-01-06 2020-05-29 北京同有飞骥科技股份有限公司 Data persistence method and system supporting KB-level small file concurrent IO
CN114598714A (en) * 2022-03-11 2022-06-07 上海凯翔信息科技有限公司 Data storage system based on cloud NAS
CN114598714B (en) * 2022-03-11 2023-08-18 上海凯翔信息科技有限公司 Cloud NAS-based data storage system
CN115858421A (en) * 2023-03-01 2023-03-28 浪潮电子信息产业股份有限公司 Cache management method, device, equipment, readable storage medium and server

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180821