CN108875035A - Data storage method and related device of distributed file system - Google Patents

Data storage method and related device of distributed file system

Info

Publication number
CN108875035A
Authority
CN
China
Prior art keywords
storage node
weight parameter
node
network
storage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810665263.2A
Other languages
Chinese (zh)
Other versions
CN108875035B (en)
Inventor
毕银龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou Yunhai Information Technology Co Ltd
Original Assignee
Zhengzhou Yunhai Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou Yunhai Information Technology Co Ltd filed Critical Zhengzhou Yunhai Information Technology Co Ltd
Priority to CN201810665263.2A priority Critical patent/CN108875035B/en
Publication of CN108875035A publication Critical patent/CN108875035A/en
Application granted granted Critical
Publication of CN108875035B publication Critical patent/CN108875035B/en
Legal status: Active (granted)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0604 Improving or facilitating administration, e.g. storage management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Abstract

Embodiments of the present application disclose a data storage method and related device for a distributed file system, for reducing the time spent storing data and improving the efficiency of data storage. The distributed file system includes at least one storage node, and the method of the embodiments includes: receiving data to be stored; obtaining a first weight parameter of each storage node in the at least one storage node, where the first weight parameter is generated from the first network latency and the first remaining storage capacity of each storage node, and the larger the first network latency, the smaller the first weight parameter; determining, according to the first weight parameter, the storage node with the largest weight parameter among the storage nodes as a first storage node; and storing the data to be stored in the first storage node.

Description

Data storage method and related device of distributed file system
Technical field
The present application relates to the field of data storage, and in particular to a data storage method and related device for a distributed file system.
Background technique
In the 21st century, with the rise of the Internet, and especially the rapid development of the mobile Internet, social networks, and e-commerce, the data generated in production and daily life has grown exponentially. As data volumes grow, the required storage capacity keeps increasing, and traditional storage technologies such as single-node storage and disk arrays can no longer meet the needs of mass data storage. Clustered storage systems, including distributed file systems, have been widely adopted thanks to their inherent scalability.
A distributed file system consists of multiple storage nodes, which may be located in different places and which communicate and transfer data with one another over a network. When data needs to be written into the distributed file system, one of the many storage nodes must be chosen as the target storage node for the data.
In the prior art, when data is written, a storage node is randomly selected from the many storage nodes as the target storage node. Because the network conditions of the storage nodes differ, writing data to a node with poor network conditions is inefficient; when the randomly selected node happens to be one with poor network conditions, storing the data takes a long time and data storage efficiency is low.
Summary of the invention
Embodiments of the present application provide a data storage method and related device for a distributed file system, for reducing the time taken to store data and improving the efficiency of data storage.
In a first aspect, an embodiment of the present application provides a data storage method for a distributed file system, where the distributed file system includes at least one storage node, and the method includes:
receiving data to be stored;
obtaining a first weight parameter of each storage node in the at least one storage node, where the first weight parameter is generated from the first network latency and the first remaining storage capacity of each storage node, and the larger the first network latency, the smaller the first weight parameter;
determining the storage node with the largest first weight parameter among the storage nodes as a first storage node; and
storing the data to be stored in the first storage node.
In a second aspect, an embodiment of the present application provides a server applied to a distributed file system, where the distributed file system includes at least one storage node, and the server includes:
a receiving unit, configured to receive data to be stored;
a first acquisition unit, configured to obtain a first weight parameter of each storage node in the at least one storage node, where the first weight parameter is generated from the first network latency and the first remaining storage capacity of each storage node, and the larger the first network latency, the smaller the first weight parameter;
a determination unit, configured to determine the storage node with the largest first weight parameter among the storage nodes as a first storage node; and
a storage unit, configured to store the data to be stored received by the receiving unit in the first storage node.
In a third aspect, an embodiment of the present application further provides a server, where the server includes a processor and a memory, and the memory stores instructions for the data storage of the distributed file system described in the first aspect;
the processor is configured to execute the instructions stored in the memory and thereby perform the steps of the data storage method of the distributed file system described in the first aspect.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium that stores instructions for the data storage of a distributed file system; when run on a computer, the instructions cause the computer to perform the steps of the data storage method of the distributed file system described in the first aspect.
As can be seen from the above technical solutions, the embodiments of the present application have the following advantages:
After the data to be stored is received, the weight parameter of each storage node is obtained, where the weight parameter of each storage node is generated from that node's network latency and remaining storage capacity, and the larger the network latency, the smaller the weight parameter; the storage node with the largest weight parameter is then chosen to store the data. In this solution, the network latency of each storage node is taken into account when choosing a storage node: the larger a node's latency, that is, the worse its network conditions, the less likely it is to be selected. Nodes with poor network conditions are therefore avoided as far as possible, which reduces the time spent storing data and improves the efficiency of data storage.
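As an illustrative sketch only (not the patent's reference implementation), the selection described above can be pictured as follows. The weight formula used here, remaining capacity divided by one plus latency, is an assumption for demonstration; the application only requires the monotonic behaviour and derives the actual relation function from tests described in the detailed embodiments below.

```python
from dataclasses import dataclass

@dataclass
class StorageNode:
    name: str
    network_latency_ms: float     # first network latency
    remaining_capacity_gb: float  # first remaining storage capacity

def weight(node: StorageNode) -> float:
    # Assumed illustrative relation: the weight shrinks as latency grows and
    # grows with remaining capacity; the exact function is fitted empirically.
    return node.remaining_capacity_gb / (1.0 + node.network_latency_ms)

def choose_first_storage_node(nodes: list[StorageNode]) -> StorageNode:
    # Determine the storage node with the largest weight parameter.
    return max(nodes, key=weight)

def store(data: bytes, nodes: list[StorageNode]) -> StorageNode:
    target = choose_first_storage_node(nodes)
    # ... write `data` to `target` over the network (omitted) ...
    return target
```

Under this assumed formula, for example, a node with 500 GB free and 2 ms latency (weight about 167) would be preferred over a node with 800 GB free and 40 ms latency (weight about 20).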
Brief description of the drawings
Fig. 1 is a schematic diagram of the network architecture for the data storage method of the distributed file system provided by the embodiments of the present application;
Fig. 2 is a schematic flowchart of a data storage method of the distributed file system provided by the embodiments of the present application;
Fig. 3 is another schematic flowchart of the data storage method of the distributed file system provided by the embodiments of the present application;
Fig. 4 is a schematic structural diagram of a server provided by the embodiments of the present application;
Fig. 5 is another schematic structural diagram of a server provided by the embodiments of the present application.
Detailed description of embodiments
The embodiments of the present application provide a data storage method for a distributed file system, for reducing the time spent storing data and improving the efficiency of data storage. The embodiments also provide a corresponding server and computer-readable storage medium. They are described in detail below.
The terms "first", "second", "third", "fourth" and the like (if present) in the description, the claims, and the above drawings are used to distinguish similar objects and are not intended to describe a particular order or sequence. It should be understood that data used in this way are interchangeable in appropriate circumstances, so that the embodiments described herein can be implemented in an order other than the one illustrated or described here.
As shown in Fig. 1, the distributed file system in the present application includes a client 10, a server 20, and storage nodes 30. Although only one client, one server, and four storage nodes are shown in Fig. 1, it should be understood that Fig. 1 is only intended to aid understanding of the solution; there may be multiple clients 10, servers 20, and storage nodes 30, and their specific numbers should be set flexibly according to actual needs, which is not limited here.
In the embodiments of the present application, the client 10 is configured to receive data entered by a user and send it to the server 20. The server 20 is configured to manage the storage nodes 30 and receive the data sent by the client 10, and is further configured to store the data in the storage nodes 30 according to certain rules. The client 10, the server 20, and the storage nodes 30 are connected and communicate over a network.
In the embodiments of the present application, the client 10 may be a computer, a tablet computer, a PDA (Personal Digital Assistant), a mobile phone, an in-vehicle computer, a television, or another device with a communication module, which is not limited here.
The server 20 may be a single server, a server cluster consisting of several servers, or a cloud computing service center, and can store and process data, which is not specifically limited here.
A storage node 30 may be a metadata storage node or a user data storage node; the server 20 stores metadata to be written in metadata storage nodes and stores user data to be written in user data storage nodes.
The network may be a wireless network connection or a mobile network connection. Wireless network connections include, but are not limited to, Wireless Fidelity (WiFi), Bluetooth, and the like; mobile network connections include, but are not limited to, the Global System for Mobile communications (GSM), Code Division Multiple Access (CDMA), and the like.
The data storage method of the distributed file system in the present application is described in detail below. Referring to Fig. 2, an embodiment of the data storage method of a distributed file system provided by the present application includes the following steps.
201. The server receives data to be stored.
In this embodiment, when a user writes data to be stored through the client, the server receives the data to be stored and determines the data that needs to be written to a storage node. The data to be written may be metadata, user data, or another type of data, which is not specifically limited here.
Here, metadata is information describing the attributes of user data, and supports functions such as indicating the storage location, searching historical data, and recording user data.
202. The server obtains a first weight parameter of each storage node in the at least one storage node.
In this embodiment, the first weight parameter of each storage node is a parameter preset by the server; the server generates it from the first network latency and the first remaining storage capacity of each storage node, and the larger the first network latency, the smaller the first weight parameter. After receiving the data to be stored, the server reads the first weight parameter of each storage node from the previously set data.
In this embodiment, the network latency is the overall delay incurred in transporting the data to be stored from the server to a storage node.
In this embodiment, the relation function between the weight parameter and the network latency and remaining storage capacity is obtained from tests in advance. As an example, the test hardware environment is as follows: the processor is a fourth-generation Intel Core i7-4790, the memory is 16 GB, the hard disk is a Seagate ST1000DM003-1ER162, the network card is a Realtek RTL8168/8111/811, the operating system is Ubuntu 14.04, and the metadata storage system is XFS; the information of the nodes is, respectively, Client (client IP 192.168.1.30), OSD.0 (OSD/Monitor, IP 192.168.1.1), OSD.1 (OSD, IP 192.168.1.2), and OSD.2 (OSD, IP 192.168.1.4). The network latency of a target storage node among the above four storage nodes is adjusted over the range 0 ms to 50 ms in steps of 5 ms; after the network latency is set, the weight parameter of the target storage node is adjusted over the range 0 to 1 in steps of 0.1 while the weight parameters of the other three storage nodes are kept at 1. At each weight-parameter setting, 2048 random files of 2 MB each (4 GB in total) are written to the four storage nodes, and the time t required to store the 4 GB of random files is measured. By comparing the completion times t of the four storage nodes under different network latencies and different weight parameters, the relation function between the weight parameter and the network latency of the target storage node under no-load conditions is fitted. The remaining storage capacity of the above four storage nodes is then adjusted and the test process is repeated, so that the relation function between the weight parameter and the network latency and remaining storage capacity of the target storage node under different remaining storage capacities can be fitted. It should be understood that the examples of the test hardware environment, network latency parameters, and weight parameters given here are only intended to aid understanding of the solution; the specific settings of the test environment parameters should be chosen flexibly according to the actual situation, which is not limited here.
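A minimal sketch of how such a relation function might be fitted from the sweep described above. The simulated `measure_store_time` stand-in and the least-squares polynomial model are assumptions for illustration only; the patent states only that completion times are compared to fit the relation.

```python
import numpy as np

def measure_store_time(latency_ms: float, weight: float) -> float:
    # Stand-in for the real benchmark (set the target node's latency and weight,
    # write 2048 random 2 MB files, i.e. 4 GB, and time the run). Simulated here
    # so the sketch runs end to end; the shape of this curve is an assumption.
    return 60.0 + 4.0 * latency_ms * weight + 10.0 * (1.0 - weight)

def fit_weight_vs_latency() -> np.ndarray:
    latencies = np.arange(0.0, 51.0, 5.0)  # 0 ms to 50 ms in 5 ms steps
    weights = np.arange(0.0, 1.01, 0.1)    # 0 to 1 in 0.1 steps
    best_weights = []
    for d in latencies:
        times = [measure_store_time(d, w) for w in weights]
        # Keep the weight that gave the shortest completion time at this latency.
        best_weights.append(weights[int(np.argmin(times))])
    # Assumed model: least-squares polynomial fit of best weight versus latency;
    # repeating the sweep at different remaining capacities would extend the fit
    # to a function of both variables.
    return np.polyfit(latencies, np.array(best_weights), deg=2)

if __name__ == "__main__":
    print(fit_weight_vs_latency())  # coefficients of the fitted relation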
203. The server determines the storage node with the largest weight parameter among the storage nodes as the first storage node.
In this embodiment, after obtaining the first weight parameter of each storage node, the server compares the first weight parameters of the storage nodes, selects the storage node with the largest weight parameter, and determines it as the first storage node.
204. The server stores the data to be stored in the first storage node.
In this embodiment, after the data to be stored is received, the weight parameter of each storage node is obtained, where the weight parameter of each storage node is generated from that node's network latency and remaining storage capacity, and the larger the network latency, the smaller the weight parameter; the storage node with the largest weight parameter is chosen to store the data. In this solution, the network latency of each storage node is taken into account when choosing a storage node: the larger a node's latency, that is, the worse its network conditions, the less likely it is to be selected. Nodes with poor network conditions are therefore avoided as far as possible, which reduces the time spent storing data and improves the efficiency of data storage.
Based on the embodiment described above with reference to Fig. 2, another embodiment of the data storage method of a distributed file system provided by the embodiments of the present application is shown in Fig. 3 and includes the following steps.
301. The server receives data to be stored.
In this embodiment, step 301 is similar to step 201 in the embodiment shown in Fig. 2, and details are not repeated here.
302. The server obtains a second network latency and a second remaining storage capacity of each storage node.
In this embodiment, after receiving the data to be stored, the server obtains the second network latency and the second remaining storage capacity of each storage node, where the second network latency is the current network latency of each storage node and the second remaining storage capacity is the current remaining storage capacity of each storage node.
303. The server determines whether the second network latency of any storage node exceeds a first preset threshold. If the second network latency of no storage node exceeds the first preset threshold, step 304 is performed; if the second network latency of any storage node exceeds the first preset threshold, step 305 is performed.
In this embodiment, the first preset threshold is preconfigured in the server. After obtaining the second network latency of each storage node, the server traverses the second network latencies of the storage nodes to determine whether the network latency of any storage node exceeds the first preset threshold.
In this embodiment, "any storage node" refers to any one of the storage nodes.
The first preset threshold is a value preset in the server, and it varies with the hardware environment of the storage nodes. As an example, the first preset threshold may be 5 ms, 15 ms, 25 ms, or another value. It should be understood that these examples of the first preset threshold are only given to demonstrate the feasibility of the solution; the specific setting of the first preset threshold is not limited here.
304. The server obtains the first weight parameter of each storage node.
In this embodiment, when the server determines that the second network latency of no storage node exceeds the first preset threshold, the server obtains the first weight parameter of each storage node, where the first weight parameter is generated from the first network latency and the first remaining storage capacity of each storage node, and the larger the first network latency, the smaller the first weight parameter.
Here, the first network latency may be the network latency of each storage node when the server set the weight parameters for the first time, or the network latency of each storage node at the last time the network latency of any storage node exceeded the first preset threshold, or the network latency in another situation, which is not specifically limited here.
305. The server generates a second weight parameter of each storage node according to the second network latency and the second remaining storage capacity.
In this embodiment, when the server determines that the second network latency of any storage node exceeds the first preset threshold, the server generates the second weight parameter of each storage node from the obtained second network latency and second remaining storage capacity of that node, and the larger the second network latency, the smaller the second weight parameter.
306. The server updates the first weight parameter of each storage node to the second weight parameter.
In this embodiment, because the weight parameter of each storage node does not change in real time, after generating the second weight parameter of each storage node, the server updates the first weight parameter of each storage node to the second weight parameter.
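A minimal sketch of steps 302 to 306, under the same illustrative assumptions as the earlier sketch; `FIRST_PRESET_THRESHOLD_MS`, the probe helpers, and the weight formula are hypothetical names and values, not the patent's.

```python
import random

FIRST_PRESET_THRESHOLD_MS = 15.0  # hypothetical; the text gives 5, 15 or 25 ms as examples

def probe_latency_ms(node) -> float:
    # Stand-in for measuring a node's current (second) network latency.
    return random.uniform(0.0, 60.0)

def probe_remaining_gb(node) -> float:
    # Stand-in for querying a node's current (second) remaining storage capacity.
    return random.uniform(100.0, 1000.0)

def refresh_weights(nodes, current_weights: dict) -> dict:
    """Steps 302 to 306: regenerate the weights only when some node's current
    latency exceeds the first preset threshold; otherwise keep the first weights."""
    latency = {n.name: probe_latency_ms(n) for n in nodes}    # step 302
    capacity = {n.name: probe_remaining_gb(n) for n in nodes}
    if all(d <= FIRST_PRESET_THRESHOLD_MS for d in latency.values()):
        return current_weights                                # step 304: keep first weights
    second_weights = {                                        # step 305: larger latency, smaller weight
        n.name: capacity[n.name] / (1.0 + latency[n.name]) for n in nodes
    }
    current_weights.update(second_weights)                    # step 306: overwrite first weights
    return current_weights
```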
307. The server determines whether there is a second storage node whose second network latency exceeds a second preset threshold. If there is a second storage node exceeding the second preset threshold, step 308 is performed; if there is no second storage node exceeding the second preset threshold, step 309 is performed.
In this embodiment, the second preset threshold is preconfigured in the server. After obtaining the second network latency of each storage node, the server traverses the second network latencies of the storage nodes to determine whether the network latency of any storage node exceeds the second preset threshold.
Here, the second preset threshold is a value preset in the server, and its setting also varies with the hardware environment. In general, the second preset threshold is greater than the first preset threshold and may be no less than 50 ms; as an example, the second preset threshold may be 50 ms, 55 ms, 60 ms, or another value, which is not specifically limited here.
308. The server updates the weight parameter of the second storage node to 0.
In this embodiment, steps 303, 305, and 306 are optional. If steps 303, 305, and 306 are not performed, step 307 may be performed after step 304; in that case, when the server determines that there is a second storage node exceeding the second preset threshold, it updates the weight parameter of the second storage node to 0 on the basis that the weight parameter of each storage node is the first weight parameter, thereby obtaining an updated third weight parameter of each storage node.
If steps 303, 305, and 306 are performed, then when the server determines that there is a second storage node exceeding the second preset threshold, it updates the weight parameter of the second storage node to 0 on the basis that the weight parameter of each storage node is the second weight parameter, thereby obtaining an updated fourth weight parameter of each storage node.
It should be understood that if steps 303, 305, and 306 are performed, there is no fixed order between steps 303, 305, and 306 and steps 307 to 308: steps 303, 305, and 306 may be performed first and then steps 307 to 308; steps 307 to 308 may be performed first and then steps 303, 305, and 306; or steps 303, 305, and 306 and steps 307 to 308 may be performed simultaneously.
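A minimal sketch of steps 307 and 308; `SECOND_PRESET_THRESHOLD_MS` is a hypothetical value (the text only suggests a value no less than 50 ms and larger than the first threshold).

```python
SECOND_PRESET_THRESHOLD_MS = 50.0  # hypothetical; 50, 55 or 60 ms in the examples above

def zero_out_slow_nodes(weights: dict, latency_ms: dict) -> dict:
    # Steps 307 and 308: a node whose current latency exceeds the second preset
    # threshold gets weight 0, so it cannot be chosen for this write.
    return {
        name: (0.0 if latency_ms[name] > SECOND_PRESET_THRESHOLD_MS else w)
        for name, w in weights.items()
    }
```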
309. The server determines the storage node with the largest weight parameter among the storage nodes as the first storage node.
In this embodiment, steps 302 to 303, steps 305 to 306, and steps 307 to 308 are optional. If steps 302 to 303, steps 305 to 306, and steps 307 to 308 are not performed, step 309 is similar to step 203 in the embodiment shown in Fig. 2, and details are not repeated here.
If steps 302 to 303 and steps 305 to 306 are performed but steps 307 to 308 are not, the server determines the storage node with the largest weight parameter as the first storage node according to the generated second weight parameters; that is, after obtaining the second weight parameter of each storage node, the server compares the second weight parameters, selects the storage node with the largest weight parameter, and determines it as the first storage node.
If step 302 and steps 307 to 308 are performed but steps 303, 305, and 306 are not, the server determines the storage node with the largest weight parameter as the first storage node according to the updated third weight parameters; that is, after obtaining the third weight parameter of each storage node, the server compares the third weight parameters, selects the storage node with the largest weight parameter, and determines it as the first storage node.
If steps 302 to 303, steps 305 to 306, and steps 307 to 308 are all performed, the server determines the storage node with the largest weight parameter as the first storage node according to the updated fourth weight parameters; that is, after obtaining the fourth weight parameter of each storage node, the server compares the fourth weight parameters, selects the storage node with the largest weight parameter, and determines it as the first storage node.
310. The server stores the data to be stored in the first storage node.
In this embodiment, step 310 is similar to step 204 in the embodiment shown in Fig. 2, and details are not repeated here.
In this embodiment, after the data to be stored is received, the weight parameter of each storage node is obtained, where the weight parameter of each storage node is generated from that node's network latency and remaining storage capacity, and the larger the network latency, the smaller the weight parameter; the storage node with the largest weight parameter is chosen to store the data. In this solution, the network latency of each storage node is taken into account when choosing a storage node: the larger a node's latency, that is, the worse its network conditions, the less likely it is to be selected. Nodes with poor network conditions are therefore avoided as far as possible, which reduces the time spent storing data and improves the efficiency of data storage.
Further, the server determines whether the current network latency of each storage node exceeds the first preset threshold, and obtains the first weight parameter of each storage node only when the network latency of no storage node exceeds the first preset threshold, so that the weight parameter of each storage node is linked to its current network latency, which increases the flexibility of the solution and its adaptability to network conditions.
Further, the server determines whether the current network latency of each storage node exceeds the first preset threshold, and when the network latency of any storage node exceeds the first preset threshold, it generates the second weight parameter of each storage node from that node's current network latency and current remaining storage capacity, and determines the storage node for storing the data according to the second weight parameters. This both avoids updating the weight parameters of the storage nodes too frequently and ensures that the weight parameters can be adjusted according to the current network latency, which increases the feasibility of the solution.
Further, when the network latency of a storage node exceeds the second preset threshold, the weight parameter of that storage node is updated to 0, which avoids the excessively long storage times caused by storing data on a node with very poor network conditions and further improves the efficiency of data storage.
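Tying the pieces together, the overall flow of Fig. 3 (steps 301 to 310) can be sketched as below, reusing the hypothetical helpers from the earlier sketches; it illustrates the described flow and is not the patent's implementation.

```python
def store_with_latency_aware_selection(data: bytes, nodes, weights: dict) -> str:
    """Sketch of steps 301 to 310: refresh the weights if the first threshold is
    exceeded, zero out nodes over the second threshold, then write the data to
    the node with the largest remaining weight."""
    latency = {n.name: probe_latency_ms(n) for n in nodes}   # step 302 (probed again here for brevity)
    weights = refresh_weights(nodes, weights)                # steps 303 to 306
    effective = zero_out_slow_nodes(weights, latency)        # steps 307 and 308
    target = max(effective, key=effective.get)               # step 309: largest weight parameter
    # step 310: write `data` to the node named `target` over the network (omitted)
    return target
```

Keeping the refresh conditional on the first threshold avoids recomputing weights on every write, while the second threshold acts as a hard cut-off for nodes whose network conditions are too poor to use at all.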
Based on the embodiments shown in Fig. 2 and Fig. 3, Fig. 4 is a schematic structural diagram of a server provided in this embodiment. The server 400 is applied to a distributed file system, the distributed file system includes at least one storage node, and the server 400 includes:
a receiving unit 401, configured to receive data to be stored;
a first acquisition unit 402, configured to obtain a first weight parameter of each storage node in the at least one storage node, where the first weight parameter is generated from the first network latency and the first remaining storage capacity of each storage node, and the larger the first network latency, the smaller the first weight parameter;
a determination unit 403, configured to determine, according to the first weight parameter, the storage node with the largest weight parameter among the storage nodes as a first storage node; and
a storage unit 404, configured to store the data to be stored received by the receiving unit 401 in the first storage node.
Further, the first network latency is the network latency of each storage node at the last time the network latency of any storage node exceeded the first preset threshold, where that storage node is included in the at least one storage node, and the server further includes:
a second acquisition unit 405, configured to obtain a second network latency and a second remaining storage capacity of each storage node, where the second network latency is the current network latency and the second remaining storage capacity is the current remaining storage capacity; and
a judging unit 406, configured to determine whether the second network latency of any storage node exceeds the first preset threshold;
where the first acquisition unit 402 is specifically configured to obtain the first weight parameter of each storage node when the judging unit determines that the second network latency of no storage node exceeds the first preset threshold.
Further, the server further includes:
a generation unit 407, configured to generate, when the judging unit 406 determines that the second network latency of any storage node exceeds the first preset threshold, a second weight parameter of each storage node according to the second network latency and the second remaining storage capacity, where the larger the second network latency, the smaller the second weight parameter; and
an updating unit 408, configured to update the first weight parameter of each storage node to the second weight parameter;
where the determination unit 403 is further configured to determine, according to the second weight parameter, the storage node with the largest second weight parameter among the storage nodes as the first storage node.
Further, the at least one storage node further includes a second storage node, where:
the judging unit 406 is further configured to determine whether there is a second storage node whose second network latency exceeds the second preset threshold;
the updating unit 408 is further configured to update the weight parameter of the second storage node to 0 when the judging unit determines that there is a second storage node exceeding the second preset threshold; and
the determination unit 403 is further configured to determine, according to the updated weight parameters, the storage node with the largest weight parameter among the storage nodes as the first storage node.
In this embodiment, after the receiving unit 401 receives the data to be stored, the first acquisition unit 402 obtains the weight parameter of each storage node, where the weight parameter of each storage node is generated from that node's network latency and remaining storage capacity, and the larger the network latency, the smaller the weight parameter; the determination unit 403 then chooses the storage node with the largest weight parameter to store the data. In this solution, the network latency of each storage node is taken into account when choosing a storage node: the larger a node's latency, that is, the worse its network conditions, the less likely it is to be selected. Nodes with poor network conditions are therefore avoided as far as possible, which reduces the time spent storing data and improves the efficiency of data storage.
An embodiment of the present application also provides a server. Referring to Fig. 5, the server 500 may vary considerably in configuration or performance, and may include one or more processors 501 and a memory 502 (for example, one or more mass storage devices). The memory 502 may be transient storage or persistent storage. The program stored in the memory 502 may include one or more modules (not shown), and each module may include a series of instruction operations on the server. Further, the processor 501 may be configured to communicate with the memory 502 and execute, on the server 500, the series of instruction operations in the memory 502.
The server 500 may also include one or more input/output units 503, one or more power supplies 504, and one or more wired or wireless network interfaces 505.
In some embodiments of the invention, the processor 501, the memory 502, the input/output unit 503, the power supply 504, and the wired or wireless network interface 505 may be connected by a bus or in another manner; in Fig. 5, connection by a bus is taken as an example.
The memory stores instructions for the data storage of the distributed file system executed by the server described in the embodiments shown in Fig. 2 and Fig. 3;
the processor is configured to execute the instructions stored in the memory and thereby perform the steps of the data storage method of the distributed file system described in the embodiments shown in Fig. 2 and Fig. 3.
An embodiment of the present application also provides a computer-readable storage medium that stores instructions for the data storage of a distributed file system; when run on a computer, the instructions cause the computer to perform the steps of the data storage method of the distributed file system described in the embodiments shown in Fig. 2 and Fig. 3.
It is clear to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the systems, devices, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, device, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into units is only a logical functional division, and there may be other divisions in actual implementation. For instance, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods of the embodiments of the application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above embodiments are only intended to illustrate the technical solution of the application, not to limit it. Although the application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements for some of the technical features, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the application.

Claims (10)

1. A data storage method for a distributed file system, wherein the distributed file system includes at least one storage node, and the method comprises:
receiving data to be stored;
obtaining a first weight parameter of each storage node in the at least one storage node, wherein the first weight parameter is generated from the first network latency and the first remaining storage capacity of each storage node, and the larger the first network latency, the smaller the first weight parameter;
determining, according to the first weight parameter, the storage node with the largest weight parameter among the storage nodes as a first storage node; and
storing the data to be stored in the first storage node.
2. The method according to claim 1, wherein the first network latency is the network latency of each storage node at the last time the network latency of any storage node exceeded a first preset threshold, that storage node being included in the at least one storage node, and the method further comprises:
obtaining a second network latency and a second remaining storage capacity of each storage node, wherein the second network latency is the current network latency and the second remaining storage capacity is the current remaining storage capacity; and
determining whether the second network latency of any storage node exceeds the first preset threshold;
wherein the obtaining the first weight parameter of each storage node in the at least one storage node comprises:
if the second network latency of no storage node exceeds the first preset threshold, obtaining the first weight parameter of each storage node.
3. The method according to claim 2, wherein the method further comprises:
if the second network latency of any storage node exceeds the first preset threshold, generating a second weight parameter of each storage node according to the second network latency and the second remaining storage capacity, wherein the larger the second network latency, the smaller the second weight parameter; and
updating the first weight parameter of each storage node to the second weight parameter;
wherein the determining the storage node with the largest weight parameter among the storage nodes as the first storage node comprises:
determining, according to the generated second weight parameter, the storage node with the largest weight parameter among the storage nodes as the first storage node.
4. The method according to claim 2, wherein the at least one storage node further includes a second storage node, and the method further comprises:
determining whether there is a second storage node whose second network latency exceeds a second preset threshold; and
if there is a second storage node exceeding the second preset threshold, updating the weight parameter of the second storage node to 0.
5. The method according to claim 4, wherein the second preset threshold is not less than 50 milliseconds.
6. The method according to any one of claims 2 to 5, wherein the first preset threshold is 5 milliseconds, 15 milliseconds, or 25 milliseconds.
7. A server, applied to a distributed file system, wherein the distributed file system includes at least one storage node, and the server comprises:
a receiving unit, configured to receive data to be stored;
a first acquisition unit, configured to obtain a first weight parameter of each storage node in the at least one storage node, wherein the first weight parameter is generated from the first network latency and the first remaining storage capacity of each storage node, and the larger the first network latency, the smaller the first weight parameter;
a determination unit, configured to determine, according to the first weight parameter, the storage node with the largest weight parameter among the storage nodes as a first storage node; and
a storage unit, configured to store the data to be stored received by the receiving unit in the first storage node.
8. The server according to claim 7, wherein the first network latency is the network latency of each storage node at the last time the network latency of any storage node exceeded a first preset threshold, that storage node being included in the at least one storage node, and the server further comprises:
a second acquisition unit, configured to obtain a second network latency and a second remaining storage capacity of each storage node, wherein the second network latency is the current network latency and the second remaining storage capacity is the current remaining storage capacity; and
a judging unit, configured to determine whether the second network latency of any storage node exceeds the first preset threshold;
wherein the first acquisition unit is specifically configured to obtain the first weight parameter of each storage node when the judging unit determines that the second network latency of no storage node exceeds the first preset threshold.
9. A server, comprising a processor and a memory, wherein the memory stores instructions for the data storage of the distributed file system according to any one of claims 1 to 6; and
the processor is configured to execute the instructions stored in the memory and thereby perform the steps of the data storage method of the distributed file system according to any one of claims 1 to 6.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores instructions for the data storage of a distributed file system, and when run on a computer, the instructions cause the computer to perform the steps of the data storage method of the distributed file system according to any one of claims 1 to 6.
CN201810665263.2A 2018-06-25 2018-06-25 Data storage method of distributed file system and related equipment Active CN108875035B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810665263.2A CN108875035B (en) 2018-06-25 2018-06-25 Data storage method of distributed file system and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810665263.2A CN108875035B (en) 2018-06-25 2018-06-25 Data storage method of distributed file system and related equipment

Publications (2)

Publication Number Publication Date
CN108875035A true CN108875035A (en) 2018-11-23
CN108875035B CN108875035B (en) 2022-02-18

Family

ID=64294660

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810665263.2A Active CN108875035B (en) 2018-06-25 2018-06-25 Data storage method of distributed file system and related equipment

Country Status (1)

Country Link
CN (1) CN108875035B (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102667772A (en) * 2010-03-01 2012-09-12 株式会社日立制作所 File level hierarchical storage management system, method, and apparatus
CN102082830A (en) * 2011-01-18 2011-06-01 浙江大学 Unstable network-oriented distributed file storage method based on quality perception
CN104142871A (en) * 2013-05-10 2014-11-12 中国电信股份有限公司 Data backup method and device and distributed file system
CN104468670A (en) * 2013-09-23 2015-03-25 深圳市腾讯计算机系统有限公司 Method and device for processing management data, distributed disaster tolerance method and distributed disaster tolerance system
CN104780092A (en) * 2014-01-13 2015-07-15 阿里巴巴集团控股有限公司 File transmission method and device as well as server system
CN103731505A (en) * 2014-01-17 2014-04-16 中国联合网络通信集团有限公司 Data distributed storage method and system
CN105025053A (en) * 2014-04-24 2015-11-04 苏宁云商集团股份有限公司 Distributed file upload method based on cloud storage technology and system
CN104023088A (en) * 2014-06-28 2014-09-03 山东大学 Storage server selection method applied to distributed file system
CN105471985A (en) * 2015-11-23 2016-04-06 北京农业信息技术研究中心 Load balance method, cloud platform computing method and cloud platform
US20170286008A1 (en) * 2016-03-30 2017-10-05 Advanced Institutes Of Convergence Technology Smart storage platform apparatus and method for efficient storage and real-time analysis of big data
CN107451138A (en) * 2016-05-30 2017-12-08 中兴通讯股份有限公司 A kind of distributed file system storage method and system
CN107766346A (en) * 2016-08-15 2018-03-06 中国联合网络通信集团有限公司 Distributed file system file access method and device
CN107888634A (en) * 2016-09-29 2018-04-06 北京金山云网络技术有限公司 The data request method and device of a kind of distributed memory system
CN107241418A (en) * 2017-06-13 2017-10-10 腾讯科技(深圳)有限公司 A kind of load-balancing method, device, equipment and computer-readable recording medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110636122A (en) * 2019-09-11 2019-12-31 中移(杭州)信息技术有限公司 Distributed storage method, server, system, electronic device, and storage medium
CN112101836A (en) * 2020-03-06 2020-12-18 蒋梅 Big data storage node dynamic management system and corresponding terminal
CN112101836B (en) * 2020-03-06 2021-07-09 江苏小梦科技有限公司 Big data storage node dynamic management system and corresponding terminal
CN115865989A (en) * 2023-02-21 2023-03-28 中国市政工程西南设计研究总院有限公司 Wide area network configuration method for efficient and safe interconnection of information of enterprise headquarters and branch offices
CN115865989B (en) * 2023-02-21 2023-05-12 中国市政工程西南设计研究总院有限公司 Wide area network configuration method for high-efficiency and safe interconnection of enterprise headquarter and branch office information

Also Published As

Publication number Publication date
CN108875035B (en) 2022-02-18

Similar Documents

Publication Publication Date Title
US10412021B2 (en) Optimizing placement of virtual machines
US20140379712A1 (en) Data stream management systems
CN103067297B (en) A kind of dynamic load balancing method based on resource consumption prediction and device
CN106161610A (en) A kind of method and system of distributed storage
CN103019853A (en) Method and device for dispatching job task
CN108431796A (en) Distributed resource management system and method
CN106130972B (en) resource access control method and device
CN108875035A (en) The date storage method and relevant device of distributed file system
CN108920153A (en) A kind of Docker container dynamic dispatching method based on load estimation
CN105975345B (en) A kind of video requency frame data dynamic equalization memory management method based on distributed memory
CN106815254A (en) A kind of data processing method and device
CN104679594A (en) Middleware distributed calculating method
CN109981702A (en) A kind of file memory method and system
CN103631933A (en) Distributed duplication elimination system-oriented data routing method
CN113138860A (en) Message queue management method and device
CN109818809A (en) Interactive voice response system and its data processing method and phone customer service system
CN107562803B (en) Data supply system and method and terminal
US20220300323A1 (en) Job Scheduling Method and Job Scheduling Apparatus
CN113448685A (en) Pod scheduling method and system based on Kubernetes
CN107544848B (en) Cluster expansion method, apparatus, electronic equipment and storage medium
CN116089477B (en) Distributed training method and system
CN116880928A (en) Model deployment method, device, equipment and storage medium
CN112019577B (en) Exclusive cloud storage implementation method and device, computing equipment and computer storage medium
CN110188140A (en) Data pull method, apparatus, storage medium and computer equipment
CN114978913B (en) Cross-domain deployment method and system for service function chains based on cut chains

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant