CN113407493A - Running method, data read/write method, apparatus, electronic device and medium - Google Patents

Running method, data read/write method, apparatus, electronic device and medium

Info

Publication number
CN113407493A
CN113407493A (application CN202110677770.XA)
Authority
CN
China
Prior art keywords
node
slot
nodes
waiting
agent
Prior art date
Legal status
Granted
Application number
CN202110677770.XA
Other languages
Chinese (zh)
Other versions
CN113407493B (en)
Inventor
陈墨 (Chen Mo)
Current Assignee
Beijing Kingsoft Cloud Network Technology Co Ltd
Original Assignee
Beijing Kingsoft Cloud Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Kingsoft Cloud Network Technology Co Ltd
Priority to CN202110677770.XA
Publication of CN113407493A
Application granted
Publication of CN113407493B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 — File systems; File servers
    • G06F16/13 — File access structures, e.g. distributed indices
    • G06F16/17 — Details of further file system functions
    • G06F16/172 — Caching, prefetching or hoarding of files
    • G06F16/18 — File system types
    • G06F16/182 — Distributed file systems
    • G06F16/1824 — Distributed file systems implemented using Network-attached Storage [NAS] architecture
    • G06F16/183 — Provision of network file services by network file servers, e.g. by using NFS, CIFS

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the present disclosure provide a running method, a data read/write method, an apparatus, an electronic device, and a computer-readable medium. One embodiment of the method comprises: in response to detecting that a child node of any of a plurality of pre-created slot nodes has changed, obtaining the numbers of the child nodes under a pre-created waiting-agent node; in response to determining that the position of the current waiting caching agent's number in the ordering of those numbers satisfies a preset condition, determining whether an empty slot node exists among the plurality of slot nodes; and in response to an empty slot node existing, creating a child node under the empty slot node and writing the service address of the current waiting caching agent into it. This embodiment enables multiple waiting caching agents to join in an orderly manner.

Description

Running method, data read/write method, apparatus, electronic device and medium
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to a running method, a data read/write method, an apparatus, an electronic device, and a computer-readable medium.
Background
Distributed storage systems typically support access by multiple clients. Before accessing data, a client must first obtain the corresponding metadata. To guarantee data consistency across clients, however, no individual client may cache metadata locally. Each client therefore has to fetch metadata from the distributed storage system before every read or write, which overloads the storage service, lengthens request times, and increases read/write latency.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. It is not intended to identify key or essential features of the claimed subject matter, nor to limit its scope. Some embodiments of the present disclosure propose a running method, a data read/write method, apparatuses, electronic devices, and computer-readable media to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a running method for a waiting caching agent, the method comprising: in response to detecting that a child node of any of a plurality of pre-created slot nodes has changed, obtaining the numbers of the child nodes under a pre-created waiting-agent node, where each child node under the waiting-agent node corresponds to one of a plurality of waiting caching agents, and a child node of a slot node corresponds to one running caching agent; in response to determining that the position of the current waiting caching agent's number in the ordering of those numbers satisfies a preset condition, determining whether an empty slot node exists among the plurality of slot nodes; and in response to an empty slot node existing, creating a child node under the empty slot node and writing the service address of the current waiting caching agent into the empty slot node.
In a second aspect, some embodiments of the present disclosure provide a data read/write method for a client, comprising: determining, from read/write parameters, the slot node number corresponding to the data to be read or written; determining the service address of the running caching agent corresponding to that slot node number from the number and a pre-established mapping, where the mapping represents the correspondence between slot node numbers and service addresses; sending a metadata acquisition request to the service address to obtain metadata; and reading or writing the data according to the obtained metadata.
In a third aspect, some embodiments of the present disclosure provide a running apparatus for a waiting caching agent, the apparatus comprising: an obtaining unit configured to, in response to detecting that a child node of any of a plurality of pre-created slot nodes has changed, obtain the numbers of the child nodes under a pre-created waiting-agent node, where each child node under the waiting-agent node corresponds to one of a plurality of waiting caching agents and a child node of a slot node corresponds to one running caching agent; a determining unit configured to, in response to determining that the position of the current waiting caching agent's number in the ordering of those numbers satisfies a preset condition, determine whether an empty slot node exists among the plurality of slot nodes; and a creating unit configured to, in response to an empty slot node existing, create a child node under the empty slot node and write the service address of the current waiting caching agent into the empty slot node.
In a fourth aspect, some embodiments of the present disclosure provide a data read/write apparatus for a client, comprising: a node-number determining unit configured to determine, from read/write parameters, the slot node number corresponding to the data to be read or written; a service-address determining unit configured to determine the service address of the running caching agent corresponding to that slot node number from the number and a pre-established mapping, where the mapping represents the correspondence between slot node numbers and service addresses; a sending unit configured to send a metadata acquisition request to the service address to obtain metadata; and a read/write unit configured to read or write the data according to the obtained metadata.
In a fifth aspect, some embodiments of the present disclosure provide an electronic device comprising: one or more processors; and a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the method described in any implementation of the first or second aspect.
In a sixth aspect, some embodiments of the present disclosure provide a computer-readable medium having a computer program stored thereon which, when executed by a processor, implements the method described in any implementation of the first or second aspect.
The above embodiments of the present disclosure have the following advantages: multiple waiting caching agents can join in an orderly manner, avoiding disorderly competition among them. Moreover, when a running caching agent fails, its corresponding child node disappears; the running method provided by the present disclosure detects this change, automatically creates a child node under the empty slot node, and writes the service address of one waiting caching agent into it. The waiting agent thus replaces the failed agent and continues to provide proxy service externally, so the failure of a single agent does not reduce the availability of the system.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements are not necessarily drawn to scale.
FIG. 1 is an architectural diagram of an exemplary system in which some embodiments of the present disclosure may be applied;
FIG. 2 is a flow diagram of some embodiments of a running method for a waiting caching agent according to the present disclosure;
FIG. 3 is a flow diagram of further embodiments of a running method for a waiting caching agent according to the present disclosure;
FIG. 4 is a flow diagram of some embodiments of a data read/write method for a client according to the present disclosure;
FIG. 5 is a schematic block diagram of some embodiments of a running apparatus for a waiting caching agent according to the present disclosure;
FIG. 6 is a schematic block diagram of some embodiments of a data read/write apparatus for a client according to the present disclosure;
FIG. 7 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the invention are shown in the drawings. The embodiments in the present disclosure, and the features of those embodiments, may be combined with one another in the absence of conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are illustrative rather than restrictive; those skilled in the art will understand them to mean "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 to which the running method or the data read/write method of some embodiments of the present disclosure may be applied.
As shown in Fig. 1, the system architecture 100 may include clients 101 (e.g., clients 1, 2, 3), running caching agents 103 (e.g., running caching agents 1, 2, 3), waiting caching agents 104, and a distributed storage system 105. The running caching agents 103, taken as a whole, provide consistent metadata to multiple mutually independent clients. A running caching agent 103 caches the metadata 102 it obtains from the distributed storage system 105, relieving request pressure on the back-end system. After obtaining metadata 102, a client 101 can read and write data in the distributed storage system 105 according to that metadata.
In practice, when a running caching agent (for example, running caching agent 1) fails, a waiting caching agent 104 can take over and provide the proxy service in its place, so failed caching agents are switched out automatically and the availability of the system as a whole is preserved.
The clients 101, the running caching agents 103, the waiting caching agents 104, and the distributed storage system 105 may each be hardware or software. As hardware, they may be various electronic devices including, but not limited to, smartphones, tablets, laptop computers, desktop computers, and the like. As software, they may be installed in the electronic devices listed above and implemented, for example, as multiple pieces of software or software modules providing a distributed service, or as a single piece of software or software module. No specific limitation is made here.
It should be noted that the running method provided by the embodiments of the present disclosure may be performed by a waiting caching agent 104; accordingly, the running apparatus may be provided in the waiting caching agent 104. The data read/write method provided by the embodiments of the present disclosure may be executed by the client 101; accordingly, the data read/write apparatus may be provided in the client 101.
It should be understood that the numbers of clients, running caching agents, and waiting caching agents in Fig. 1 are merely illustrative; there may be any number of each, as required by the implementation.
With continued reference to Fig. 2, a flow 200 of some embodiments of a running method for a waiting caching agent according to the present disclosure is shown. The running method for a waiting caching agent comprises the following steps:
step 201, in response to monitoring that the child nodes of the pre-created slot nodes change, acquiring the numbers of the child nodes included in the pre-created waiting agent node, where each child node under the waiting agent node corresponds to one waiting caching agent of the multiple waiting caching agents, and the child node of each slot node corresponds to one running caching agent.
In some embodiments, the running method of the present disclosure may be implemented with an open-source distributed coordination service such as ZooKeeper. ZooKeeper stores data in a tree of data nodes called ZNodes. On this basis, as an example, a plurality of slot nodes and one waiting-agent node may be created in ZooKeeper in advance. A slot node serves as a container for a running caching agent: when a running caching agent occupies the slot, the slot node has a child node corresponding to that agent. The waiting-agent node (waiting proxies) manages the waiting caching agents: it may have multiple child nodes, each corresponding to one of the waiting caching agents, and each child node has a unique number. The number may be assigned explicitly or generated automatically; optionally, ZooKeeper can be used to give each child node an ordered number.
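The ZNode layout described above can be sketched with a toy in-memory tree. This is an illustration only: the path names ("/slots", "/waiting_proxies") and the `Tree` class are hypothetical stand-ins for a real ZooKeeper namespace, not structures named in the patent.

```python
class Tree:
    """A toy stand-in for a ZooKeeper namespace: maps path -> value."""
    def __init__(self):
        self.nodes = {}
        self.seq = 0

    def create(self, path, value=b"", sequential=False):
        if sequential:                      # mimic ZooKeeper sequential nodes
            path = f"{path}{self.seq:010d}"
            self.seq += 1
        self.nodes[path] = value
        return path

    def children(self, path):
        prefix = path.rstrip("/") + "/"
        return sorted(p[len(prefix):] for p in self.nodes
                      if p.startswith(prefix) and "/" not in p[len(prefix):])

tree = Tree()
# Pre-create three slot nodes; all empty, i.e. no running caching agent yet.
for i in range(3):
    tree.create(f"/slots/slot{i}")
# Each waiting caching agent registers a sequentially numbered child node.
tree.create("/waiting_proxies/agent-", sequential=True)
tree.create("/waiting_proxies/agent-", sequential=True)
print(tree.children("/waiting_proxies"))  # ordered numbers: agent-0000000000, agent-0000000001
```

In real ZooKeeper the same effect comes from creating child nodes with the sequence flag, which appends a monotonically increasing counter to the node name.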
In some embodiments, a child node of a slot node corresponds to one running caching agent, that is, a caching agent that is currently serving. Optionally, a running caching agent caches the metadata it reads from the distributed storage system and, in response to receiving a metadata request from a client, looks the metadata up in its local cache or in the distributed storage system and returns it to the client. In these implementations, efficient metadata caching by multiple running caching agents relieves the service pressure on the distributed storage system.
In some embodiments, each waiting caching agent may listen for child-change (ChildrenChange) events under the slot nodes to learn of changes to their child nodes. Specifically, when a running caching agent fails, its corresponding child node is automatically deleted, and every waiting caching agent then observes that a child node under a slot node has changed.
In response to detecting that a child node of any of the pre-created slot nodes has changed, the execution subject of the running method (each waiting caching agent) may obtain the numbers of the child nodes under the pre-created waiting-agent node.
In some optional implementations of some embodiments, the number of each child node is generated as follows: in response to a target waiting caching agent starting up, a child node corresponding to it is created under the waiting-agent node and a number is generated for that child node, where the target waiting caching agent may be any of the waiting caching agents. In these implementations, the unique number can be generated automatically by an open-source distributed coordination service such as ZooKeeper, without human intervention.
In some optional implementations of some embodiments, the plurality of slot nodes is created as follows: determine the slot count stored in a pre-established slot-count node, and create that number of slot nodes, each of which is a persistent node.
Step 202: in response to determining that the position of the current waiting caching agent's number in the ordering of the child-node numbers satisfies a preset condition, determine whether an empty slot node exists among the plurality of slot nodes.
In some embodiments, the execution subject may determine whether the position of the current waiting caching agent's child-node number in the ordering of all the child-node numbers satisfies a preset condition, where at any moment only one of the waiting caching agents can satisfy it. As an example, the preset condition may be that the number sorts first (head of the queue) or last (tail of the queue); equivalently, satisfying the condition may mean that the current waiting caching agent's number is the smallest among the child-node numbers.
On this basis, in response to determining that the condition is satisfied, the execution subject may traverse the slot nodes to determine whether any of them is empty, where an empty slot node is a slot node without a child node.
In some optional implementations of some embodiments, a waiting caching agent whose number does not satisfy the preset condition enters a waiting state. Thus only one waiting caching agent satisfies the condition at a time, while the rest wait and do not execute the subsequent steps, which keeps the waiting caching agents operating in order.
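The election in step 202 can be sketched as two small functions, assuming the "smallest sequential number wins" form of the preset condition given as an example above; the data structures are hypothetical illustrations.

```python
def is_elected(my_number: int, all_numbers: list) -> bool:
    """Only the waiting agent with the minimum number proceeds; others wait."""
    return my_number == min(all_numbers)

def find_empty_slot(slot_children: dict):
    """Return the number of the first slot node with no child, or None."""
    for slot_no in sorted(slot_children):
        if not slot_children[slot_no]:
            return slot_no
    return None

numbers = [3, 7, 12]              # child-node numbers under the waiting-agent node
print(is_elected(3, numbers))     # True: smallest number satisfies the condition
print(is_elected(7, numbers))     # False: this agent enters the waiting state
print(find_empty_slot({0: ["agent"], 1: [], 2: ["agent"]}))  # slot 1 is empty
```

Because every waiting agent evaluates the same deterministic condition over the same set of numbers, exactly one of them proceeds to scan for an empty slot.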
Step 203: in response to an empty slot node existing, create a child node under the empty slot node and write the service address of the current waiting caching agent into the empty slot node.
In some embodiments, in response to an empty slot node existing, the execution subject may create a child node under it and write the service address of the current waiting caching agent (i.e., the one satisfying the preset condition) into the empty slot node. From this point the agent becomes a running caching agent and provides service in place of the failed caching agent; the failed agent has thus been switched out automatically.
In some optional implementations of some embodiments, the waiting state may be entered in response to no empty slot node existing. It will be appreciated that creating a child node under an empty slot node itself changes that slot node's children and therefore triggers the running method of the present disclosure again. Since an empty slot node generally only appears when some caching agent has failed, waiting on the run triggered by child-node creation avoids unnecessary agent switching while still switching out failed caching agents automatically.
In some optional implementations of some embodiments, creating the child node in response to the empty slot node existing comprises: in response to the empty slot node existing, creating a child node under it, writing the service address of the caching agent into the empty slot node, and deleting the child node corresponding to that caching agent under the waiting-agent node. Deleting the child node ensures that, when the running method executes again later, an already-running caching agent is not chosen again as the replacement for a failed one, avoiding conflicts.
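Steps 201-203, including the optional deletion of the agent's own waiting node, can be sketched end to end. The function below assumes the "smallest number" condition and represents the ZNode state as plain dictionaries; both are hypothetical illustrations, not the patent's storage layout.

```python
def on_slot_children_changed(my_id, waiting, slots, addresses):
    """waiting: {child_number: agent_id}; slots: {slot_no: service_addr or None}."""
    my_number = next(n for n, a in waiting.items() if a == my_id)
    if my_number != min(waiting):           # preset condition not met: wait
        return "waiting"
    empty = next((s for s in sorted(slots) if slots[s] is None), None)
    if empty is None:                        # no failed agent to replace: wait
        return "waiting"
    slots[empty] = addresses[my_id]          # create child / write service address
    del waiting[my_number]                   # delete own node under waiting-agent node
    return f"running in slot {empty}"

waiting = {5: "agentA", 9: "agentB"}
slots = {0: "10.0.0.1:7000", 1: None, 2: "10.0.0.3:7000"}   # slot 1's agent failed
addresses = {"agentA": "10.0.0.9:7000", "agentB": "10.0.0.10:7000"}
print(on_slot_children_changed("agentA", waiting, slots, addresses))  # running in slot 1
print(slots[1])        # 10.0.0.9:7000 — the replacement's service address
print(5 in waiting)    # False: agentA's waiting node was deleted
```

Note how removing `my_number` from `waiting` means a later invocation cannot elect agentA again, which is exactly the conflict the optional implementation avoids.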
In some embodiments, by conditioning on the sort position of the numbers, the running method provided by the present disclosure lets multiple waiting caching agents join in order and avoids disorderly competition among them. Moreover, when a running caching agent fails and its child node disappears, the method detects the change, automatically creates a child node under the empty slot node, and writes the service address of one waiting caching agent into it, so the failed agent is replaced and proxy service continues to be provided externally; the failure of a single agent therefore does not reduce the availability of the system.
With further reference to Fig. 3, a flow 300 of further embodiments of the running method for a waiting caching agent is illustrated. The flow 300 comprises the following steps:
step 301, in response to monitoring that a plurality of slot nodes created in advance or child nodes under the waiting proxy node change, acquiring numbers of the child nodes included under the waiting proxy node created in advance.
In some embodiments, the execution subject of the running method may listen to both kinds of nodes simultaneously: the slot nodes and the waiting-agent node. In response to detecting a child-node change under either kind, it obtains the numbers of the child nodes under the pre-created waiting-agent node.
In practice, when the program first starts, the waiting caching agents must be created first, which changes the child nodes under the waiting-agent node. Triggering the subsequent steps on changes to the waiting-agent node's children therefore allows the waiting agents to be assigned, in order, to the slot nodes (which are all empty at startup). An orderly allocation of multiple waiting caching agents is thus achieved at program startup.
Step 302: in response to determining that the position of the current waiting caching agent's child-node number in the ordering of the child-node numbers satisfies the preset condition, determine whether an empty slot node exists among the plurality of slot nodes.
Step 303: in response to an empty slot node existing, create a child node under the empty slot node and write the service address of the current waiting caching agent into the empty slot node.
In some embodiments, for the specific implementation of steps 301-303 and its technical effects, refer to the embodiment of Fig. 2; details are not repeated here.
Step 304: in response to receiving a metadata request sent by a client, where the metadata request includes a data block number, parse the data block number to obtain a slot node number.
In some embodiments, in response to receiving a metadata request sent by a client, the running caching agent may parse the data block number to obtain a slot node number. As an example, a hash algorithm may be applied to the data block number and the resulting hash value taken as the slot node number.
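A minimal sketch of that hashing step, assuming the hash is reduced modulo the slot count (the patent only says a hash of the data block number determines the slot node number; the slot count of 3 is hypothetical):

```python
NUM_SLOTS = 3  # hypothetical slot count

def slot_for_block(block_number: int, num_slots: int = NUM_SLOTS) -> int:
    """Map a data block number to a slot node number with a simple modulo hash."""
    return block_number % num_slots

print(slot_for_block(17))  # 2
print(slot_for_block(18))  # 0
```

Because every agent and client computes the same deterministic function, they all agree on which slot (and hence which running caching agent) owns a given block, which is what keeps the metadata view consistent.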
Step 305: determine whether the parsed slot node number matches the number of the slot node to which this running caching agent's child node belongs.
In some embodiments, if the parsed slot node number is the same as the number of the slot node to which this running caching agent's child node belongs, the two are considered a match; otherwise, a mismatch.
Step 306: in response to a match, query whether metadata corresponding to the metadata request exists in the local cache.
Optionally, the running caching agent may refuse service in response to a mismatch.
Step 307: in response to the metadata being present, return the queried metadata to the client.
Step 308: in response to the metadata being absent, request it from the distributed storage system and return the metadata the system returns to the client.
The running caching agent may cache the metadata returned by the distributed storage system as needed.
As can be seen from Fig. 3, compared with the embodiments corresponding to Fig. 2, the flow 300 adds the concrete flow by which a waiting caching agent, once it has become a running caching agent, provides proxy service externally. In this flow, the slot node number is obtained by parsing the data block number, so the data block is located quickly and accurately; and matching on the slot node number ensures that the data each client queries is consistent. That is, the running caching agents as a whole provide consistent metadata to multiple mutually independent clients.
With continued reference to Fig. 4, a flow 400 of some embodiments of a data read/write method for a client according to the present disclosure is shown. The data read/write method for a client comprises the following steps:
step 401, determining a slot node number corresponding to data to be read and written according to the read-write parameters.
In some embodiments, the execution subject of the data read/write method (e.g., the client in Fig. 1) may determine the slot node number corresponding to the data to be read or written from the read/write parameters. As an example, the read/write parameters include an offset (which can be used to determine the position of the data) and a data size; from the position and size the data block number can be derived, and the slot node number corresponding to that block number can then be calculated with a hash algorithm.
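Step 401 can be sketched as follows, assuming fixed-size blocks and the same modulo hash on the agent side; the 4 MiB block size and slot count of 3 are hypothetical parameters, not values from the patent.

```python
BLOCK_SIZE = 4 * 1024 * 1024   # 4 MiB blocks (assumption)
NUM_SLOTS = 3                  # hypothetical slot count

def blocks_for_request(offset: int, size: int) -> list:
    """Data block numbers covered by a read/write of `size` bytes at `offset`."""
    first = offset // BLOCK_SIZE
    last = (offset + size - 1) // BLOCK_SIZE
    return list(range(first, last + 1))

def slot_for_block(block_number: int) -> int:
    """Same hash as the agent side, so client and agent agree on the slot."""
    return block_number % NUM_SLOTS

blocks = blocks_for_request(offset=6 * 1024 * 1024, size=3 * 1024 * 1024)
print(blocks)                               # [1, 2]: the request spans two blocks
print([slot_for_block(b) for b in blocks])  # [1, 2]
```

A request that spans several blocks simply yields several slot numbers, and the client issues one metadata request per slot's service address.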
Step 402: determine the service address of the running caching agent corresponding to the slot node number from that number and a pre-established mapping, where the mapping represents the correspondence between slot node numbers and service addresses.
As an example, the pre-established mapping may be a correspondence table, in which the execution subject looks up the service address for the slot node number. A service address may comprise an IP (Internet Protocol) address and/or a port number, etc.
In some optional implementations of some embodiments, the mapping is established as follows: obtain the pre-created slot nodes, where a slot node's child node corresponds to one running caching agent; then traverse the slot nodes and, for each one, obtain the service address of the running caching agent corresponding to its child node and store the slot node's number and that service address in association.
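Building that correspondence table can be sketched in a few lines. The `slots` dictionary stands in for reading each slot node's child from the coordination service; it is a hypothetical illustration, not the patent's storage layout.

```python
def build_mapping(slots: dict) -> dict:
    """Associate each occupied slot node number with its agent's service address.

    slots: {slot_no: service_addr or None (empty slot)}
    """
    return {no: addr for no, addr in slots.items() if addr is not None}

slots = {0: "10.0.0.1:7000", 1: "10.0.0.9:7000", 2: "10.0.0.3:7000"}
mapping = build_mapping(slots)
print(mapping[1])   # 10.0.0.9:7000 — address the client sends its metadata request to
```

Rebuilding the table whenever the slot children change keeps the client pointed at whichever agent currently occupies each slot, including a replacement after a failure.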
Step 403, sending a metadata acquisition request to the service address to acquire metadata.
In some embodiments, the metadata acquisition request includes a data block number. Upon receiving the request, the running caching agent may retrieve the metadata from a local cache or from the distributed storage system and return it to the client.
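A sketch of the agent-side lookup, including the slot-matching check described in the optional implementations below; the class and method names are illustrative, and the `storage` dictionary stands in for the distributed storage system:

```python
import zlib

class RunningCachingAgent:
    def __init__(self, slot_no: int, num_slots: int, storage: dict):
        self.slot_no = slot_no      # slot node this agent's child node belongs to
        self.num_slots = num_slots
        self.storage = storage      # stand-in for the distributed storage system
        self.local_cache = {}

    def handle_metadata_request(self, block_no: int):
        # Verify the request was routed to the agent owning this slot.
        slot = zlib.crc32(str(block_no).encode("ascii")) % self.num_slots
        if slot != self.slot_no:
            return None             # mismatch: not this agent's slot
        # Prefer the local cache; fall back to distributed storage.
        if block_no in self.local_cache:
            return self.local_cache[block_no]
        meta = self.storage.get(block_no)
        if meta is not None:
            self.local_cache[block_no] = meta
        return meta
```

After the first miss, subsequent requests for the same block are served from the local cache without touching the distributed storage system.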
Step 404, reading and writing the data to be read and written according to the acquired metadata.
In some embodiments, the execution body may read and write the data to be read and written against the distributed storage system according to the metadata.
In some embodiments, the client implements data reading and writing through the above method: it first determines the slot node number according to the read-write parameters, then determines the service address through the slot node number, so that the metadata can be accurately located and retrieved. The operation method for the waiting caching agent prevents a single agent failure from reducing system availability, i.e., metadata can still be stably returned to the client. In practice, metadata describes data-related information, and the client reads and writes data according to it. Therefore, the client can perform stable data reading and writing, unaffected by the failure of a single agent.
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of an operating apparatus for a waiting caching agent, which correspond to the method embodiments shown in fig. 2 and which may be applied in various electronic devices.
As shown in fig. 5, the operating apparatus 500 for a waiting caching agent of some embodiments includes: an acquisition unit 501, a determination unit 502, and a creation unit 503. The acquisition unit 501 is configured to, in response to monitoring a change in the child nodes of a plurality of pre-created slot nodes, acquire the numbers of the child nodes included under a pre-created waiting agent node, where each child node under the waiting agent node corresponds to one of a plurality of waiting caching agents, and each child node of a slot node corresponds to one running caching agent. The determination unit 502 is configured to determine whether an empty slot node exists among the plurality of slot nodes in response to determining that the sorting position of the number of the child node corresponding to the currently waiting caching agent, among the numbers of the child nodes, satisfies a preset condition. The creation unit 503 is configured to, in response to an empty slot node existing, create a child node under the empty slot node and write the service address of the currently waiting caching agent into the empty slot node.
In an optional implementation of some embodiments, the creation unit 503 is further configured to: in response to an empty slot node existing, create a child node under the empty slot node, write the service address of the currently waiting caching agent into the empty slot node, and delete the child node corresponding to that caching agent under the waiting agent node.
In an optional implementation of some embodiments, the running caching agent is configured to cache metadata read from the distributed storage system and, in response to receiving a metadata request sent by a client, look up the metadata in the local cache or the distributed storage system and return it to the client.
In an optional implementation of some embodiments, the apparatus 500 further comprises: a parsing unit, a matching unit, a query unit, a return unit, and a request unit. The parsing unit is configured to, in response to receiving a metadata request sent by the client, where the metadata request includes a data block number, parse the data block number to obtain a slot node number. The matching unit is configured to determine whether that slot node number matches the number of the slot node to which the child node corresponding to the running caching agent belongs. The query unit is configured to, in response to a match, query whether metadata corresponding to the metadata request exists in the local cache. The return unit is configured to, in response to the metadata existing, return the queried metadata to the client. The request unit is configured to, in response to the metadata not existing, request it from the distributed storage system and return the metadata returned by the distributed storage system to the client.
In an optional implementation of some embodiments, the number of each child node is generated by: in response to a target waiting caching agent starting, creating a child node corresponding to the target waiting caching agent under the waiting agent node, and generating a number for the child node.
In an optional implementation of some embodiments, the plurality of slot nodes is created by: determining the slot count stored in a pre-established slot-count node, and creating that number of slot nodes, where the slot nodes are persistent nodes.
In an optional implementation of some embodiments, the determination unit 502 is further configured to determine whether an empty slot node exists among the plurality of slot nodes in response to determining that the number of the currently waiting caching agent is the smallest among the numbers of the child nodes.
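The promotion logic described above (the smallest-numbered waiter acts, then occupies an empty slot and leaves the waiting queue) can be sketched as follows, assuming child numbers are monotonically increasing sequence numbers, as with ZooKeeper-style sequential nodes; the data structures here are illustrative stand-ins for the coordination service's node tree:

```python
def try_occupy_slot(agent_id: str, address: str, waiting: dict, slots: dict):
    """waiting: agent_id -> child-node sequence number under the waiting agent node.
    slots: slot node number -> service address of its child node, or None if empty.
    Returns the occupied slot number, or None if this agent should keep waiting."""
    my_no = waiting[agent_id]
    # Preset condition: only the waiting agent with the smallest number may act.
    if my_no != min(waiting.values()):
        return None
    for slot_no in sorted(slots):
        if slots[slot_no] is None:       # found an empty slot node
            slots[slot_no] = address     # create child node, write service address
            del waiting[agent_id]        # delete own child under the waiting agent node
            return slot_no
    return None                          # no empty slot: stay in the waiting queue
```

Because only the smallest-numbered waiting agent may occupy a slot at any moment, concurrent waiters do not race for the same empty slot node; the others simply re-evaluate when the next child-node change is observed.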
It will be understood that the elements described in the apparatus 500 correspond to various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 500 and the units included therein, and are not described herein again.
With continuing reference to fig. 6, as an implementation of the methods illustrated in the above figures, the present disclosure provides some embodiments of a data acquisition apparatus for a client, which correspond to the method embodiments illustrated in fig. 4 and which may be applied in various electronic devices.
As shown in fig. 6, the data acquisition apparatus 600 for a client according to some embodiments includes: a node number determination unit 601, a service address determination unit 602, a sending unit 603, and a read-write unit 604. The node number determination unit 601 is configured to determine the slot node number corresponding to the data to be read and written according to the read-write parameters. The service address determination unit 602 is configured to determine the service address of the running caching agent corresponding to the slot node number according to the slot node number and a pre-established mapping, where the mapping represents the correspondence between slot node numbers and service addresses. The sending unit 603 is configured to send a metadata acquisition request to the service address to acquire metadata. The read-write unit 604 is configured to read and write the data to be read and written according to the acquired metadata.
In an optional implementation of some embodiments, the mapping is established by: acquiring the number of pre-created slot nodes, where each child node of a slot node corresponds to one running caching agent; and traversing the slot nodes, executing the following processing steps for each slot node: acquiring the service address of the running caching agent corresponding to the child node under the slot node, and storing the node number of the slot node in association with that service address.
It will be understood that the elements described in the apparatus 600 correspond to various steps in the method described with reference to fig. 4. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 600 and the units included therein, and are not described herein again.
Referring now to fig. 7, a schematic diagram of an electronic device (e.g., a caching agent or client of fig. 1) 700 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, electronic device 700 may include a processing means (e.g., central processing unit, graphics processor, etc.) 701 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)702 or a program loaded from storage 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the electronic apparatus 700 are also stored. The processing device 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Generally, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device 700 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 7 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network via communications means 709, or may be installed from storage 708, or may be installed from ROM 702. The computer program, when executed by the processing device 701, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring the number of each child node included under a pre-created waiting agent node when monitoring that the child nodes of a plurality of pre-created slot position nodes change, wherein each child node under the waiting agent node corresponds to one waiting caching agent in a plurality of waiting caching agents, and the child node of the slot position node corresponds to one running caching agent; determining whether empty slot position nodes exist in the plurality of slot position nodes or not in response to the fact that the sequencing positions of the numbers of the child nodes corresponding to the current waiting cache agent in the numbers of the child nodes meet the preset condition; responding to the existence of the empty slot position node, creating a child node under the empty slot position node, and writing the service address of the current waiting cache agent into the empty slot position node; or
Determining the number of a slot position node corresponding to the data to be read and written according to the read-write parameters; determining a service address of the running cache agent corresponding to the slot node number according to the slot node number and a pre-established mapping, wherein the mapping is used for representing the corresponding relation between the node number of the slot node and the service address; sending a metadata acquisition request to a service address to acquire metadata; and reading and writing the data to be read and written according to the acquired metadata.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, a determination unit, and a creation unit. The names of these units do not in some cases constitute a limitation to the unit itself, and for example, the acquisition unit may also be described as a "unit that acquires the numbers of the respective child nodes included under the pre-created waiting proxy node".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is only of preferred embodiments of the present disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above technical features, and also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the embodiments of the present disclosure.

Claims (14)

1. A method of operation for a waiting cache agent, the method comprising:
responding to the monitoring that the sub-nodes of a plurality of pre-created slot position nodes are changed, and acquiring the number of each sub-node included under a pre-created waiting agent node, wherein each sub-node under the waiting agent node corresponds to one waiting caching agent in a plurality of waiting caching agents, and the sub-node of the slot position node corresponds to one running caching agent;
determining whether empty slot position nodes exist in the plurality of slot position nodes or not in response to the fact that the sequencing positions of the numbers of the sub nodes corresponding to the current waiting cache agent in the numbers of the sub nodes meet the preset condition;
and responding to the existence of an empty slot position node, creating a child node under the empty slot position node, and writing the service address of the current waiting cache agent into the empty slot position node.
2. The method of claim 1, wherein the creating a child node under the empty slot node in response to the existence of an empty slot node, and writing the service address of the currently waiting caching agent into the empty slot node, comprises:
responding to the existence of an empty slot node, creating a child node under the empty slot node, writing the service address of the currently waiting caching agent into the empty slot node, and deleting the child node corresponding to that caching agent under the waiting agent node.
3. The method of claim 1, wherein the running caching agent is to cache metadata read from a distributed storage system and to look up metadata from a local cache or the distributed storage system and return to a client in response to receiving a metadata request sent by the client.
4. The method of claim 3, wherein the method further comprises:
responding to a received metadata request sent by a client, wherein the metadata request comprises a data block number, and analyzing the data block number to obtain a slot node number;
determining whether the number of the slot position node is matched with the number of the slot position node to which the child node corresponding to the running caching agent belongs;
responding to the matching, and inquiring whether metadata corresponding to the metadata request exists in a local cache or not;
in response to the presence, returning the queried metadata to the client;
in response to the absence, requesting the metadata from the distributed storage system, and returning the metadata returned by the distributed storage system to the client.
5. The method of claim 1, wherein the number of each child node is generated by:
and responding to the starting of a target waiting caching agent, creating a child node corresponding to the target waiting caching agent under the waiting caching agent node, and generating a number for the child node.
6. The method of claim 1, wherein the plurality of slot nodes are created by:
determining the slot number stored in a pre-established slot number node;
and creating slot nodes with the number of the slots, wherein the slot nodes are persistent nodes.
7. The method of claim 1, wherein the determining whether an empty slot node exists in the plurality of slot nodes in response to determining that an ordering position of a number of a currently waiting caching agent in the numbers of the respective child nodes satisfies a preset condition comprises:
and in response to determining that the number of the currently waiting caching agent is the minimum number of the numbers of the sub-nodes, determining whether an empty slot node exists in the plurality of slot nodes.
8. The method of claim 1, wherein the obtaining numbers of the sub-nodes included under the pre-created waiting agent node in response to monitoring that the sub-nodes of the pre-created slot nodes change comprises:
and acquiring the number of each sub-node included in the pre-created waiting agent node in response to monitoring that the plurality of pre-created slot nodes or the sub-nodes under the waiting agent node are changed.
9. A data read-write method for a client comprises the following steps:
determining the number of a slot position node corresponding to the data to be read and written according to the read-write parameters;
determining a service address of the running cache agent corresponding to the slot node number according to the slot node number and a pre-established mapping, wherein the mapping is used for representing the corresponding relation between the node number of the slot node and the service address;
sending a metadata acquisition request to the service address to acquire metadata, wherein the metadata acquisition request comprises a data block number;
and reading and writing the data to be read and written according to the acquired metadata.
10. The method of claim 9, wherein the mapping is established by:
acquiring the number of pre-created slot position nodes, wherein child nodes of the slot position nodes correspond to one running cache agent;
traversing each slot node, and executing the following processing steps for each slot node:
acquiring a service address of an operating cache agent corresponding to a subnode under the slot node;
and storing the association between the node number of the slot node and the service address.
11. An operating device for a waiting cache agent, comprising:
the system comprises an acquisition unit, a cache agent and a cache agent execution unit, wherein the acquisition unit is configured to respond to the monitoring that the subnodes of a plurality of slot position nodes created in advance change, and acquire the numbers of the subnodes included under the waiting proxy nodes created in advance, wherein each subnode under the waiting proxy node corresponds to one waiting cache agent in a plurality of waiting cache agents, and the subnode of the slot position node corresponds to one running cache agent;
the determining unit is configured to determine whether an empty slot position node exists in the plurality of slot position nodes in response to determining that the sequencing positions of the numbers of the sub nodes corresponding to the current waiting cache agent in the numbers of the sub nodes meet a preset condition;
a creating unit configured to create a child node under an empty slot node in response to existence of the empty slot node, and write a service address of the current waiting cache agent into the empty slot node.
12. A data acquisition apparatus for a client, comprising:
the node number determining unit is configured to determine a slot node number corresponding to data to be read and written according to the read-write parameters;
the service address determining unit is configured to determine a service address of the running cache agent corresponding to the slot node number according to the slot node number and a pre-established mapping, wherein the mapping is used for representing the corresponding relation between the node number of the slot node and the service address;
a sending unit configured to send a metadata acquisition request to the service address to acquire metadata;
and the reading and writing unit is configured to read and write the data to be read and written according to the acquired metadata.
13. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-8 or 9-10.
14. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any of claims 1-8 or 9-10.
CN202110677770.XA 2021-06-18 Operation method, data reading and writing method, device, electronic equipment and medium Active CN113407493B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110677770.XA CN113407493B (en) 2021-06-18 Operation method, data reading and writing method, device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN113407493A true CN113407493A (en) 2021-09-17
CN113407493B CN113407493B (en) 2024-06-28


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090097458A1 (en) * 2004-11-26 2009-04-16 China Mobile Communications Corporation Method and System for Agent Redirecting the Terminal Request
CN102385633A (en) * 2011-12-13 2012-03-21 方正国际软件有限公司 File management system using virtual storage directory
US20130054671A1 (en) * 2011-08-25 2013-02-28 Eric Maki System and method for optimizing name-resolution overhead in a caching network intermediary device
CN103544270A (en) * 2013-10-18 2014-01-29 南京大学镇江高新技术研究院 Data-center-oriented generalized network coding fault-tolerant storage platform and working method for same
CN105933391A (en) * 2016-04-11 2016-09-07 青岛海信传媒网络技术有限公司 Node capacity expansion method, device and system
CN106209447A (en) * 2016-07-07 2016-12-07 深圳市创梦天地科技有限公司 The fault handling method of distributed caching and device
WO2017165827A1 (en) * 2016-03-25 2017-09-28 Amazon Technologies, Inc. Low latency distributed storage service

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU Yunhan et al.: "Implementation of a Distributed Database Architecture for a Time-Sensitive Target Early-Warning System", Journal of China Academy of Electronics and Information Technology, vol. 8, no. 4, 31 August 2013 (2013-08-31), pages 428 - 432 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant